
Talk:Ricci calculus

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

This is an old revision of this page, as edited by Quondum (talk | contribs) at 14:16, 11 April 2021 (Role of connection and metric tensor: please expand/replace the bit on specific connections). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This article has not yet been rated on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
WikiProject Mathematics – this article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. It has been rated as High-priority on the project's priority scale.
WikiProject Physics – this article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of physics on Wikipedia. It has been rated as Mid-importance on the project's importance scale.

Archives

Index 1, 2


Proposal?...

Just a proposal, would it help to add the following table and explanation to Ricci calculus (Raised and lowered indices) to illustrate how sub-/superscripts and summation fit together in a way that relates "co-/contra-variance" to invariance?

Proposed table/text

This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis expressed in terms of the other shown in the first column. The barred indices refer to the final coordinate system after the transformation.

Covector, covariant vector, dual vector, 1-form:
  Basis transformation: $e^{\bar{\alpha}} = L^{\bar{\alpha}}{}_{\beta}\,e^{\beta}$
  Component transformation: $a_{\bar{\alpha}} = a_{\gamma}L^{\gamma}{}_{\bar{\alpha}}$
  Invariance: $a_{\bar{\alpha}}e^{\bar{\alpha}} = a_{\gamma}L^{\gamma}{}_{\bar{\alpha}}L^{\bar{\alpha}}{}_{\beta}e^{\beta} = a_{\gamma}\delta^{\gamma}{}_{\beta}e^{\beta} = a_{\beta}e^{\beta}$
Vector, contravariant vector:
  Basis transformation: $e_{\bar{\alpha}} = L^{\gamma}{}_{\bar{\alpha}}\,e_{\gamma}$
  Component transformation: $a^{\bar{\alpha}} = a^{\beta}L^{\bar{\alpha}}{}_{\beta}$
  Invariance: $a^{\bar{\alpha}}e_{\bar{\alpha}} = a^{\beta}L^{\bar{\alpha}}{}_{\beta}L^{\gamma}{}_{\bar{\alpha}}e_{\gamma} = a^{\beta}\delta^{\gamma}{}_{\beta}e_{\gamma} = a^{\gamma}e_{\gamma}$
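As a sanity check (my own sketch, not part of the proposed article text), the invariance column can be verified numerically for an arbitrary invertible transformation L; here the inverse matrix plays the role of $L^{\gamma}{}_{\bar{\alpha}}$:

    # Numerical check: a_alpha e^alpha is unchanged when components and basis
    # transform in opposite senses under a passive transformation L.
    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.normal(size=(3, 3))      # components L^{alpha-bar}_beta (invertible with prob. 1)
    Linv = np.linalg.inv(L)          # components L^{gamma}_{alpha-bar}

    e_up = rng.normal(size=(3, 3))   # rows: covector basis e^beta (as component arrays)
    a_lo = rng.normal(size=3)        # covariant components a_gamma

    e_up_bar = L @ e_up              # e^{alpha-bar} = L^{alpha-bar}_beta e^beta
    a_lo_bar = a_lo @ Linv           # a_{alpha-bar} = a_gamma L^gamma_{alpha-bar}

    print(np.allclose(a_lo_bar @ e_up_bar, a_lo @ e_up))   # True: a_{alpha-bar} e^{alpha-bar} = a_beta e^beta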

This is far clearer and briefer (to me at least...) than the main article Covariance and contravariance of vectors, and it fits in with the summary style of this article. It's also another example of the manipulation of indices, including the Kronecker delta.

What do others think? Just a suggestion - the article is excellent and I don't want to touch it! As always I'm not forcing this in - it's for take or leave. Thanks once again to the editors here (although maybe/maybe not F= after all...). Maschen (talk) 16:28, 15 August 2012 (UTC)

I also find that this gives a very intuitive and direct explanation for someone comfortable with symbolic algebra, and think it would be a sensible addition. I've taken the liberty of making a minor tweak to make it easier to follow (by avoiding the need to rename dummy indices upon substitution to avoid duplication). I don't think it fits with the section Raised and lowered indices unless that section is renamed; it should go in a separate (sub?)section Change of basis. I would prefer that the passive transformation not be introduced as a matrix with a non-zero determinant (especially since the determinant of a matrix and the determinant of the abstract linear transformation it represents are entirely different things and have different values). This objection is superficial and can be ignored for now: a rewording in terms of linear combinations, avoiding the terminology of matrices, should be straightforward. — Quondum 02:13, 16 August 2012 (UTC)
That's very reasonable, by all means feel free to make changes.
About the determinant: I'm sure L could represent a linear transformation which is represented by an invertible matrix, so how could they have a different determinant? Active and passive transformation is a good link to add though.
About sources: The only source that supports what I'm saying is Mathematical Methods for Physics and Engineering (Riley, Hobson, Bence, 2010), but this seems to be restricted to Cartesian tensors for most of the tensor chapter; no others to hand right now (will look for some soon...).
On the minus side for a balanced view: there is the concern it makes the article longer for not that much gain (according to the last archive, there were repeated additions and trims to make the article as short as possible with no inessential details, which this proposal may be...). Maschen (talk) 06:14, 16 August 2012 (UTC)
The expression of a general Lorentz transformation in the notation of this article does not seem excessive to me, but this proposal does unavoidably introduce explicit use of the abstract basis vectors, thus potentially expanding the scope. On this score I'll be interested in input from others.
On the determinant of an abstract quantity (tensor): for a linear transformation V → V, or a linear transformation V* → V* (i.e. any type (1,1) tensor), a basis-independent definition of the determinant would be the scalar change in n-volume it introduces, or equivalently the product of its eigenvalues. This cannot be defined for a type (2,0) or type (0,2) tensor (that is, without the use of a metric tensor). When a type (1,1) tensor is expressed in terms of a given basis (and its dual), this corresponds to the determinant of the associated matrix of components, and is a true invariant. A passive transformation, on the other hand, relates two distinct bases, and the matrix determinant is not invariant.
In this proposal, the concept of determinant is quite unnecessary; non-singularity of the basis-mapping is all that is necessary, and this is guaranteed by the fact that they are both bases. All that is needed is the two bases and their components when expressed in terms of the other basis, and the rest follows. No determinants, no inverses, no non-singularity requirement, and no mention of matrices; only of components. — Quondum 13:20, 16 August 2012 (UTC)
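To make the determinant point concrete, here is a rough numpy sketch (my own illustration, not from the discussion or any source): the component matrix of a type-(1,1) tensor keeps its determinant under a change of basis, while that of a type-(0,2) tensor does not.

    import numpy as np

    rng = np.random.default_rng(1)
    P = rng.normal(size=(3, 3))                # change-of-basis matrix (new basis in terms of old)
    T = rng.normal(size=(3, 3))                # components T^i_j of a (1,1) tensor
    g = rng.normal(size=(3, 3)); g = g + g.T   # components g_ij of a (0,2) tensor

    T_new = np.linalg.inv(P) @ T @ P           # (1,1) components: similarity transform
    g_new = P.T @ g @ P                        # (0,2) components: congruence transform

    print(np.isclose(np.linalg.det(T_new), np.linalg.det(T)))  # True: invariant
    print(np.isclose(np.linalg.det(g_new), np.linalg.det(g)))  # generically False: rescaled by det(P)^2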
I made the suggested modifications. Maschen (talk) 13:32, 16 August 2012 (UTC)
I've tweaked it slightly again. No other comments seem to be forthcoming yet... — Quondum 02:25, 17 August 2012 (UTC)
Added to lead of covariance and contravariance of vectors, see talk. Maschen (talk) 08:06, 3 September 2012 (UTC)

I can understand the removal from covariance and contravariance of vectors; it was very hasty at the time the image was added... However, it's been 3 months, no objections and one in favour. I will take the liberty of adding it as planned long ago, better here than anywhere else (in a slightly extended form)... feel free to revert. Maschen (talk) 10:15, 24 November 2012 (UTC)

While the proposed table is pretty compact and not controversial, I'm not too happy with the extended form that has been inserted. I could list a few objections:
  • It introduces a pedagogical rather than explanatory perspective, and starts to deviate from the compact style of the article
  • It assumes a holonomic basis, which the Ricci calculus does not require
  • The term "normal" (orthogonal) has no meaning in the absence of a metric tensor; Ricci calculus does not need one
  • The term "inner product" similarly does not apply (it is not the same thing as a contraction, which corresponds to the action of a covector on a vector)
  • We have kept the article to scalar components in keeping with the way it is often presented; it's not a great idea to introduce symbols for abstract objects now without explanation (the closest we've come is to use the word "basis").
Perhaps you'd like to put in the originally proposed table instead; I'd certainly be happy with that. BTW, my courses used capital gamma as standard for the components of a Lorentz transformation. Whatever the dominant convention is, I think we should use. — Quondum 12:23, 24 November 2012 (UTC)
Ok - I anticipated this would be suggested; the simpler table replaces the extended one. I've always seen L or Λ for the transformation, several books (and in my tensor course) use L, so let’s keep L. Maschen (talk) 13:02, 24 November 2012 (UTC)

Query on revert

This revert carries an edit summary that could be construed as a personal attack. The reverted edits introduced explicit notation giving the abstract tensors rather than only their components. We have stayed away from the abstract presentation in this article thus far, but only as a matter of article style. While I do not object to the revert because of this, I do not in any way agree with the edit summary, and in particular with its implication about the editor. — Quondum 10:23, 2 January 2013 (UTC)

My main objection was that he was confusing contravariant and covariant. See the details of his first edit. JRSpriggs (talk) 10:29, 2 January 2013 (UTC)
I agree that there was a minor confusion in this respect on the description of the basis and cobasis elements, which would be a reason only to correct it. (However, now that you draw attention to it, the normally confusing terminology as applied to the vectors and tensors rather than to their components is a further good reason to confine this article to components.) With summaries, please take care to stay within WP policy. — Quondum 15:16, 2 January 2013 (UTC)

Another standard notation for the derivatives?

In the section on differentiation, for the partial derivative should we not have

$\nabla_{\gamma}A_{\alpha\beta\cdots} = A_{\alpha\beta\cdots,\gamma} = \partial_{\gamma}A_{\alpha\beta\cdots} = \dfrac{\partial}{\partial x^{\gamma}}A_{\alpha\beta\cdots}$ ?

My impression is that using the nabla is preferred by some authors, is intuitive and fits in with the notation.

Similarly, for the covariant derivative, $D_{\gamma}T_{\alpha\dots} = T_{\alpha\dots;\gamma}$ seems to be notable. I notice that Penrose (in The Road to Reality) uses the nabla for the covariant derivative. Should we deal with these notations in the article? I have too little experience on what is notable here. — Quondum 12:03, 29 June 2013 (UTC)

The nabla symbol with index subscripts is definitely used (it certainly was in the bygone 3rd year SR and continuum mechanics courses), although I don't have any books using this convention to hand right now. As for covariant derivatives, Penrose may use it but not sure about this in general. Given more sources we should indicate this in the article. M∧ŜcħεИτlk 12:22, 29 June 2013 (UTC)
Browsing Google Books throws up so many variants of notation in both cases that I am left not knowing what is notable. On a side note, the covariant and related derivatives sit slightly uncomfortably with the notation inasmuch as they use the whole set of components of a tensor, not only the explicit component regarded as a function on the manifold. I'm not sure whether this is worth making mention of. — Quondum 15:51, 29 June 2013 (UTC)

The recently added notation $\frac{D}{d\lambda}A^{\alpha}$ seems to me to be too incomplete to be encyclopaedic. In particular, it omits crucial information from the notation that makes it pretty meaningless without explanatory text defining the family of curves that apply. It strikes me as made-up notation that an author (even MTW) might use by way of explaining something, not a notation that might see any use in other contexts. Does it really belong here? — Quondum 10:49, 10 August 2013 (UTC)

I also thought it was rather obscure in a way, and included it because it may be used in other GR literature, but let's remove it. M∧ŜcħεИτlk 15:32, 10 August 2013 (UTC)

Sequential Summation?

The operation referred to here as "sequential summation" doesn't make sense -- at least not as it's currently written.

Please explain how and why it's used, and why it's considered a tensor operation.

198.228.228.176 (talk) 21:37, 6 February 2014 (UTC) Collin237

I presume its use is restricted to cases when one of the two tensors involved is either symmetric or anti-symmetric. JRSpriggs (talk) 07:10, 7 February 2014 (UTC)
The section does mention the "either symmetric or antisymmetric" use, though it does not make sense to me in the symmetric case. The exclusion of summed terms is presumably merely a labour-saving contrivance and equivalent to a constant multiplier for the expression. Mentioning this use, the equivalence and an expression giving the constant factor in place of the rather vague "This is useful to prevent over-counting in some summations" would be sensible and would enhance this section's reference value somewhat. Any volunteers from those with access to the reference? —Quondum 17:27, 7 February 2014 (UTC)
Not sure what is difficult to understand in that section; nevertheless, I tried to make it clearer. Yes, it does seem to be restricted to symmetric and antisymmetric tensors. M∧ŜcħεИτlk 17:04, 26 March 2014 (UTC)
The definition given is clear enough, but AFAICT the restriction should be to only tensors that are fully antisymmetric in each set of indices that are sequentially summed over; otherwise I expect that it will not be Lorentz-invariant. In this context, where we are introducing the notation as a reference, the restrictions should be given correctly. References that only use it (i.e. they do not bother to define it other than to explain what it means in the particular case) might not provide the correct criteria, because they would have preselected the tensors. Any use that is not inherently restricted to exclusively fully antisymmetric cases (or perhaps a special basis choice?) would surprise me. If I had access to the actual references that this comes from, I could probably figure out what is appropriate, but "sequential summation" on Google Books seems to draw a complete blank. In effect, I'm saying that I expect the equation
$A_{\underset{\rightharpoondown}{P}}{}^{PQ} = k\,A_{P}{}^{PQ}$
to be satisfied for some constant k in all allowable cases. —Quondum 19:39, 27 March 2014 (UTC)
On second thought, I think that Quondum is correct. Symmetric is not good enough because having two indices (of the same kind (contravariant or covariant) in the same tensor) equal cannot be represented in an invariant way. JRSpriggs (talk) 06:24, 28 March 2014 (UTC)
I don't follow your argument, probably my misunderstanding of your choice of words; symmetry in two indices of the same type is an invariant property, and it sounds almost as though you are saying the opposite. My argument runs along the following lines: Consider the product sequential summation of two symmetric order-2 tensors, the metric tensor and its inverse, $g_{\alpha\beta}$ and $g^{\alpha\beta}$, in say 2 dimensions. This is the componentwise product summed, but only on one side of the diagonal, so with an orthogonal basis the result is zero. Change to a basis that is not orthogonal, so that the off-diagonal components become nonzero, and the result of the sequential summation becomes nonzero, and hence not invariant. If half the sum of the products of the diagonal elements were included, it would have stayed invariant. One can go through all the symmetric/antisymmetric combinations, and only the case where both tensors are antisymmetric seems to remain invariant (it is easy to show that the sequential sum is half the full sum by symmetry and the zero diagonal, and we know that the full sum is invariant). I assume that this generalizes to more indices as a full antisymmetry requirement.
The question is essentially: does any source use this sequential summation when the indices involved are not fully antisymmetric? I have no way of finding or accessing such sources without links. Without this, I would incline towards simply asserting the full antisymmetric requirement, but really we should prove its correctness. —Quondum 00:40, 29 March 2014 (UTC)
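For what it's worth, the argument above is easy to check numerically. The following sketch is my own (assuming, as in the comment, that "sequential summation" over a pair of indices means summing only terms with the first index strictly less than the second); it confirms that the prescription is basis-independent for two antisymmetric tensors but not for two symmetric ones:

    import numpy as np

    def seq_sum(A_lo, B_up):
        """Sum A_lo[a, b] * B_up[a, b] over a < b only (the 'sequential' sum)."""
        n = A_lo.shape[0]
        return sum(A_lo[a, b] * B_up[a, b] for a in range(n) for b in range(a + 1, n))

    rng = np.random.default_rng(2)
    n = 3
    P = rng.normal(size=(n, n))                    # passive transformation (matrix of the basis change)

    F = rng.normal(size=(n, n)); F = F - F.T       # antisymmetric F_ab
    G = rng.normal(size=(n, n)); G = G - G.T       # antisymmetric G^ab
    g = rng.normal(size=(n, n)); g = g + g.T       # symmetric g_ab (a "metric")
    h = np.linalg.inv(g)                           # symmetric h^ab (its inverse)

    lo = lambda A: P.T @ A @ P                                     # transform two lower indices
    up = lambda B: np.linalg.inv(P) @ B @ np.linalg.inv(P).T       # transform two upper indices

    print(np.isclose(seq_sum(F, G), seq_sum(lo(F), up(G))))              # True: invariant
    print(np.isclose(seq_sum(g, h), seq_sum(lo(g), up(h))))              # generically False: basis dependent
    print(np.isclose(seq_sum(F, G), 0.5 * np.einsum('ab,ab->', F, G)))   # True: half the full contraction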

The earliest I can tell MTW use it is in chapter 4: Electromagnetism and differential forms, box 4.1 (p. 91). It only seems to be used in the context of p-forms (which are ... antisymmetric tensors). The authors only say "the sum is over $i_1 < i_2 < i_3 < \cdots < i_n$". So Quondum is correct so far. I don't know any other sources using this notation for this purpose, and it doesn't appear in Schouten's original work either (cited and linked in the article). But this summation seems to appear in a different notation which Quondum quotes above, in another reference by T. Frankel (which I don't have, and haven't seen at the library).

Clearly, this convention of "sequential summation" exists so we shouldn't really remove it from the article. For now, let's just restrict to antisymmetric tensors. M∧ŜcħεИτlk 08:27, 29 March 2014 (UTC)

Agreed, we should keep it (with the correct qualifications). But my reasoning says that we should change the wording from "when one of the tensors is antisymmetric" to "when both of the tensors are antisymmetric". —Quondum 16:04, 29 March 2014 (UTC)
Thanks for your edits. M∧ŜcħεИτlk 08:26, 30 March 2014 (UTC)

Further index notations

Further notations appear to be introduced in this reference, specifically pp. 30–31. I don't understand German, but it appears to allow nesting of [ ], ( ) and | | on indices. My supposition is that the intention is that each of the inner nested index expressions is excluded from the higher-level symmetrization/antisymmetrization. Since this article covers a subset of exactly this type of notation, and this appears to be explicitly documented in this reference (and such exclusions make perfect sense), could someone with knowledge of German please verify my supposition so that we can include this? —Quondum 17:43, 29 March 2014 (UTC)

I never noticed that before, if we can find out the meaning it should be in the article. There is an English translation of the book by Schouten and Courant at the library (if I recall correctly), I'll check next time I go. M∧ŜcħεИτlk 08:26, 30 March 2014 (UTC)
It looks fascinating. It appears to be a detailed explanation. My interpretation of nesting is evidently incorrect. The various types of brackets evidently overlap rather than nest. The explanation seems to be saying that the indices are allocated to each (anti)symmetrization in turn, skipping anything between bars ||. Thus if A = B, then A = B. Rather convoluted. This appears to give a simple notation for the Kulkarni–Nomizu product, for example. The English version would be helpful. —Quondum 17:54, 30 March 2014 (UTC)

Braiding on an expression

Does anyone know of conventions on the braiding of the free indices in an expression in Ricci calculus? If so, this would be a useful addition to the article. The most obvious convention that might apply would be lexicographic ordering, as in Abstract index notation#Braiding, but I do not know whether this extends to this context. —Quondum 00:09, 31 March 2014 (UTC)

"in the denominator"

This edit (with edit note "I am referring to an expression where the x^{\mu} is in the denominator or x_{\mu} is in the denominator. I tried to clarify however I'm not the best at explaining. But I do think it is important enough to have.") appears to refer to a partial derivative. This is not a fraction, and has no numerator or denominator. In general the statement is also false, as the partial derivative only transforms covariantly (contravariantly) when the expression being differentiated is a scalar. This is handled under Ricci calculus#Differentiation, where I've added a mention of this special case. —Quondum 06:20, 26 August 2014 (UTC)

What about in the covariant derivative? Take the covariant derivative of a (1,0) tensor as an example.
$\nabla_{\mu}A^{\nu} = \frac{\partial A^{\nu}}{\partial x^{\mu}} + \Gamma^{\nu}_{\mu\alpha}A^{\alpha}$
The $\mu$ in $x^{\mu}$ is treated as a lower index, but in the fraction it 'appears' as an upper index.
That is what I mean. — Preceding unsigned comment added by Theoretical wormhole (talkcontribs) 2014-08-26T16:13:30‎
The covariant derivative is a bit more complicated to describe properly due to the extra term; it is probably best to let readers simply understand the behaviour from the expression, which is already there.
We could draw more attention to the apparent moving of the variance of the index in a partial derivative, but keep in mind that this is of a mnemonic nature. You probably would not have noticed this "variance switch" if it were not for the suggestive nature of the partial derivative used. Perhaps we could add to the section on the partial derivative the following:
Coordinates are typically denoted by x, but do not in general form the components of a vector. In flat spacetime and linear coordinatization, differences in coordinates, Δx, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. This is reflected by the lower index in the left of the notational equivalence $\partial_{\mu} = \tfrac{\partial}{\partial x^{\mu}}$.
Would this do what you want? —Quondum 17:04, 26 August 2014 (UTC)

Yea that sounds good to me. Would you like to add it in or should I? Theoretical wormhole (talk) 21:21, 26 August 2014 (UTC)

Done. —Quondum 21:42, 26 August 2014 (UTC)
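For readers following along, here is a small numerical sketch (my own illustration, not part of the article text) of the point just added: under a linear change of coordinates, coordinate differences transform with the matrix of the change (contravariantly) while the partial derivatives of a scalar field transform with its inverse (covariantly), so their contraction is unchanged.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(3, 3))                 # linear coordinate change: xbar^mu = A^mu_nu x^nu
    c = rng.normal(size=3)
    phi = lambda x: c @ x                       # a linear scalar field; its gradient components are just c

    dx = rng.normal(size=3)                     # a coordinate difference Delta x^nu
    dxbar = A @ dx                              # transforms like a contravariant vector

    grad_x = c                                  # d(phi)/dx^nu
    grad_xbar = np.linalg.inv(A).T @ grad_x     # d(phi)/dxbar^mu = (A^-1)^nu_mu d(phi)/dx^nu (covariant)

    print(np.isclose(grad_x @ dx, grad_xbar @ dxbar))   # True: the contracted scalar is invariant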

Connect with vector algebra concepts

The present (early 2019) state of this article does very little to go beyond presenting the subject as "a bunch of rules for operating on arrays of scalars". I think it would be useful to also provide connections with concepts from elementary linear algebra where convenient. For example (material from subsection "Upper and lower indices"):

Contravariant tensor components

An upper index (superscript) indicates contravariance of the components with respect to that index:

$A^{\alpha\beta\gamma\cdots}$

A vector $\mathbf{v}$ corresponds to a tensor with one upper index $v^{\alpha}$. The counterpart of a tensor with two upper indices (a bivector) is less commonly seen in elementary linear algebra because it gets notationally cumbersome; many authors prefer to switch to tensor index notation when they need such objects.

Covariant tensor components

A lower index (subscript) indicates covariance of the components with respect to that index:

$A_{\alpha\beta\gamma\cdots}$

A tensor with $k$ lower indices may correspond to a map that takes $k$ vectors as arguments. For example, the metric tensor $g_{\alpha\beta}$ corresponds to the dot product of vectors.

Mixed-variance tensor components

A tensor may have both upper and lower indices:

$A_{\alpha}{}^{\beta}{}_{\gamma}{}^{\delta\cdots}.$

A matrix $A$ is usually a tensor $A^{i}{}_{j}$ with one upper and one lower index; this makes matrix–vector multiplication $A\mathbf{v}$ correspond to applying a linear transformation to the vector, and matrix multiplication $AB$ correspond to a contraction $A^{i}{}_{j}B^{j}{}_{k}$ of tensor indices. There are, however, matrices that instead have two indices of the same variance: the matrix of a bilinear form naturally has two lower indices, and the R-matrix of a quasitriangular Hopf algebra naturally has two upper indices.
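As an illustration of this correspondence (my own sketch, not part of the proposed wording), numpy's einsum makes the contractions explicit:

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(3, 3))   # A^i_j
    B = rng.normal(size=(3, 3))   # B^j_k
    v = rng.normal(size=3)        # v^j

    print(np.allclose(np.einsum('ij,j->i', A, v), A @ v))     # A^i_j v^j   = matrix-vector product
    print(np.allclose(np.einsum('ij,jk->ik', A, B), A @ B))   # A^i_j B^j_k = matrix product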

Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. on the generalized Kronecker delta).

Raising and lowering indices

By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

$B^{\gamma}{}_{\beta\cdots} = g^{\gamma\alpha}A_{\alpha\beta\cdots} \quad\text{and}\quad A_{\alpha\beta\cdots} = g_{\alpha\gamma}B^{\gamma}{}_{\beta\cdots}$

The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.

Repositioning an index often corresponds to taking a transpose (or similar, such as a conjugate transpose) in matrix formalism. For example, that the dot product $\mathbf{u}\cdot\mathbf{v}$ may also be written $u^{\mathrm{T}}v$ corresponds to the fact that the two tensor expressions $g_{ij}u^{i}v^{j}$ and $u_{j}v^{j}$ are the same. A difference is that the transpose repositions all indices of a tensor, whereas raising or lowering acts on individual indices.
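A short sketch of this correspondence (again my own illustration, using the Euclidean metric so the numbers are easy to follow): $g_{ij}u^{i}v^{j}$, $u_{j}v^{j}$ and the matrix expression $u^{\mathrm{T}}v$ all evaluate to the same number, and lowering an index is just contraction with the metric.

    import numpy as np

    rng = np.random.default_rng(5)
    g = np.eye(3)                           # Euclidean metric g_ij
    u_up = rng.normal(size=3)               # u^i
    v_up = rng.normal(size=3)               # v^j

    u_lo = np.einsum('ij,i->j', g, u_up)    # u_j = g_ij u^i (lowering the index)

    print(np.isclose(np.einsum('ij,i,j->', g, u_up, v_up), u_up @ v_up))   # g_ij u^i v^j = u^T v
    print(np.isclose(u_lo @ v_up, u_up @ v_up))                            # u_j v^j, same value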

130.243.68.240 (talk) 14:26, 30 April 2019 (UTC)

More connection with meaning and other formalisms (as you do here) would be helpful, though I would be inclined to limit translation or comparison to vector calculus or matrices and focus on the concepts of vectors and multilinear algebra. You have a few minor technical errors (e.g. a tensor of degree 2 must be antisymmetric to correspond to a bivector), but this can be fixed. Do you want to try your hand at this? I might review and tinker. —Quondum 18:03, 7 May 2019 (UTC)

Coordinate basis

This article is compatible with the more general tetrad formalism, aside from Ricci calculus § Differentiation, which assumes a coordinate basis. We should be clear about the applicability, and it would be nice to make even the differentiation section general, though a suitable source would be needed. —Quondum 18:09, 7 May 2019 (UTC)

Role of connection and metric tensor

The article does not make clear that the Christoffel symbols are only defined in the context of a connection, nor that multiple metrics may induce the same connection. I'm not sure whether that belongs in the lead, but it should be somewhere prior to the reference to the Christoffel symbols and metrics. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:19, 25 October 2020 (UTC)

@Quondum: User:Quondum made a change with the description "even restriction to a pseudo-Riemannian manifold is unduly restrictive: Ricci calculus does not require a metric tensor; it merely accommodates it.", and in another change removed the footnote "While the raising and lowering of indices is dependent on the metric tensor, the covariant derivative is only dependent on the affine connection derived from it." from the lede. If the article is to be more general then there should be discussion of the facts that
  1. Raising and lowering of an index depends on the choice of metric tensor (illustrated by the sketch below).
  2. The covariant derivative depends on the choice of connection, which need not be the affine connection of a metric tensor.
  3. The exterior derivative and Lie derivative depend on neither a connection nor a metric tensor.
This affects the lede, #Raising and lowering indices and #Differentiation. Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:08, 9 April 2021 (UTC)
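A minimal numerical sketch of point 1 (my own illustration, not from the discussion): lowering the same contravariant components with two different metrics gives different covariant components.

    import numpy as np

    v_up = np.array([1.0, 2.0])
    g1 = np.diag([1.0, 1.0])          # Euclidean metric
    g2 = np.diag([-1.0, 1.0])         # a Lorentzian-signature metric

    print(np.einsum('ab,b->a', g1, v_up))   # [ 1.  2.]
    print(np.einsum('ab,b->a', g2, v_up))   # [-1.  2.]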
The points you list are valid. I do not think that properties and interdependencies of operations that can be expressed in Ricci calculus need to be detailed in the lead, however. For example, your second and third points belong in sections describing these operations, but I do not see them as belonging in the lead, not even in a footnote. The lead should be easily readable without distraction by someone who has been introduced to the topic, and subtleties should be omitted there. —Quondum 16:17, 9 April 2021 (UTC)
In my last edit, I added some text to #Differentiation, but did not rewrite the reference to Christoffel symbols to apply to an arbitrary connection. I'm having trouble coming up with an accurate, clear and concise replacement. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:31, 9 April 2021 (UTC)
According to Covariant derivative, the covariant derivative is dependent on an arbitrary (Koszul) connection, but not on a metric tensor (unless it does so through the connection). This would suggest that any reference to the metric should simply be removed (aside from alerting the reader that this is more general than the Levi-Civita connection) from the first part of §Covariant_derivative, until the metric dependence and torsion-freeness of the Levi-Civita connection are mentioned near the bottom of the section as a special case. —Quondum 00:17, 10 April 2021 (UTC)
Your recent edits look good. There is still text that assumes a metric tensor:
  1. "where $\Gamma^{\alpha}_{\beta\gamma}$ is a Christoffel symbol of the second kind."
  2. "This derivative is characterized by the product rule and applied to the metric tensor $g_{\mu\nu}$ it gives zero:"
the first implicitly via the Christoffel symbols and the second explicitly.
Maybe
  1. where $\Gamma^{\alpha}_{\beta\gamma}$ are the components of the connection. When $\Gamma^{\alpha}_{\beta\gamma}$ are the components of the metric connection of a metric tensor $g_{\mu\nu}$, then $\Gamma^{\alpha}_{\beta\gamma}$ is a Christoffel symbol of the second kind.
  2. This derivative is characterized by the product rule. When $\Gamma^{\alpha}_{\beta\gamma}$ are the components of the metric connection of a metric tensor $g_{\mu\nu}$, then the covariant derivative of that metric tensor is zero:
Is that wording good enough, or does it still need tweaking? Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:11, 10 April 2021 (UTC)
I think it makes sense to first make statements that apply to a general connection without introducing any specialization to use a metric tensor or a Levi-Civita connection. I have separated it out in this way, trying to incorporate what you gave above. The product rule of the covariant derivative should still probably be made explicit for the general case. The specialization (which I put under the subheading "Metric connection") is still very sketchy. For example, it does not mention a Levi-Civita connection, which is the dominantly used connection, which is itself a specialization of a metric connection. You might want to modify that. —Quondum 01:44, 11 April 2021 (UTC)
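As a concrete check of the metric-connection statement (my own sketch using sympy, with the round 2-sphere metric as an example; illustration only), the Christoffel symbols built from a metric do give a vanishing covariant derivative of that metric:

    import sympy as sp

    theta, phi = sp.symbols('theta phi')
    x = [theta, phi]
    g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])      # g_{mu nu} for the unit 2-sphere
    ginv = g.inv()
    n = 2

    # Christoffel symbols of the second kind: Gamma[a][b][c] = Gamma^a_{bc}
    Gamma = [[[sp.Rational(1, 2) * sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                                     - sp.diff(g[b, c], x[d])) for d in range(n))
               for c in range(n)] for b in range(n)] for a in range(n)]

    # Covariant derivative of the metric: g_{ab;c} = g_{ab,c} - Gamma^d_{ca} g_{db} - Gamma^d_{cb} g_{ad}
    nabla_g = [[[sp.simplify(sp.diff(g[a, b], x[c])
                             - sum(Gamma[d][c][a] * g[d, b] for d in range(n))
                             - sum(Gamma[d][c][b] * g[a, d] for d in range(n)))
                 for c in range(n)] for b in range(n)] for a in range(n)]

    print(all(nabla_g[a][b][c] == 0 for a in range(n) for b in range(n) for c in range(n)))  # True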
Looks good. There are a few stylistic concerns
  1. It's conventional to use consecutive Greek or Roman letters, thus $g_{\alpha\beta;\gamma}$ or $g_{\mu\nu;\xi}$, not $g_{\mu\nu;\gamma}$.
  2. I find it easier to edit with the names of the Greek letters, since the letters aren't on most keyboards.
  3. In large articles, should terms be linked at the first occurrence in each section, or only at the first occurrence in the article?
  4. In the subsection Metric connection, the reference to Christoffel symbols should be for Levi-Civita connections, i.e., metric connections with no torsion.
  5. I've been using <math>...</math> rather than {{math}} because I find LaTeX easier to read and edit than HTML+wikitext; the article appears to use both. Is there a preferred style? Shmuel (Seymour J.) Metz Username:Chatul (talk) 08:19, 11 April 2021 (UTC)
You're welcome to make changes directly – discussion can follow if need be; this is often more efficient than proposing them first unless you still need to make up your mind. Style choice is always tricky.
I tend to think that a link should be findable where a reader can easily refer back to it, rather than having to do a text search, so I incline to linking a term more than once in a large article: once per piece (e.g. section) that might be referenced. However, this is one of those style things that preferences vary on, and I have no strong feelings on this.
Using <math>...</math> versus {{math}} is all over the place on WP, and is complicated by different browsers and skins rendering things differently. I tend to try to keep the style in an article consistent, and if a style is established, to leave it as is. Inline <math>...</math> has some issues of alignment, size and wrapping that can be problematic, and {{math}} is not as neat standalone, nor is it as flexible. The style at the moment is {{math}} when inline, and <math>...</math> on standalone lines. I would get a broader consensus from several editors before changing this.
My knowledge of connections is primarily from WP. Now that I have separated the general connection from any more specific choice of connection, the latter should be edited freely. I inferred from Christoffel symbols that these apply to any metric connection and that a Levi-Civita connection is the special case defined as torsion-free (but I guess some people might reserve the term Christoffel symbols for a Levi-Civita connection); here we are using the same gamma symbols for the general connection. I would make it clear that there are distinct constraints: what constraint defines a metric connection, what constraint defines torsion-free, and that both constraints uniquely produce a Levi-Civita connection. Go ahead and edit this according to your understanding; I have a significant chance of unwittingly introducing some terminological or even a mathematical error. —Quondum 14:16, 11 April 2021 (UTC)

Exterior derivative still to be added?

The exterior derivative is a notable operator expressible in Ricci calculus, so it seems appropriate to include it. It might not be defined by this name in most texts, since pretty much every derivative can be constructed from a covariant derivative. However, derivatives that are independent of the connection should be shown independently, for example, the Lie derivative (already present). Though I have not seen this defined in a text, I expect the exterior derivative of any totally antisymmetric covariant tensor with components $X_{\alpha\cdots\gamma}$ to be $X_{[\alpha\cdots\gamma,\delta]}$ in any coordinate basis. Expressions like this occur (e.g. in Maxwell's equations), but the name "exterior derivative" is not often used. —Quondum 16:42, 9 April 2021 (UTC)

Yes, and I have seen Physics books giving Maxwell's equations as $\mathrm{d}F = 0$ and $\mathrm{d}{*}F = J$, where $F = \mathrm{d}{\overrightarrow{A}}$. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:25, 9 April 2021 (UTC)
Yes, even our article gives this. I've added a section since I found a source; however, I suspect that the sign is wrong when the tensor field being differentiated has odd degree: the source puts the index of differentiation at the start of the antisymmetrization. —Quondum 23:29, 9 April 2021 (UTC)
Adjusted now. —Quondum 23:53, 9 April 2021 (UTC)
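For anyone wanting to check the component statement above, here is a small sympy sketch (my own illustration, normalization and sign conventions aside): in a coordinate basis the exterior derivative acts as an antisymmetrized partial derivative, and applying it twice gives zero, e.g. F = dA satisfies dF = 0 componentwise for an arbitrary 1-form A.

    import sympy as sp

    coords = sp.symbols('x0:4')
    n = 4
    A = [sp.Function(f'A{i}')(*coords) for i in range(n)]        # components A_alpha of an arbitrary 1-form

    # F_{alpha beta} = A_{beta,alpha} - A_{alpha,beta}  (i.e. 2 * partial_[alpha A_beta])
    F = [[sp.diff(A[b], coords[a]) - sp.diff(A[a], coords[b]) for b in range(n)] for a in range(n)]

    # (dF)_{alpha beta gamma}: for antisymmetric F this is the cyclic sum F_{beta gamma,alpha} + ...
    dF = [[[sp.simplify(sp.diff(F[b][c], coords[a])
                        + sp.diff(F[c][a], coords[b])
                        + sp.diff(F[a][b], coords[c]))
            for c in range(n)] for b in range(n)] for a in range(n)]

    print(all(dF[a][b][c] == 0 for a in range(n) for b in range(n) for c in range(n)))  # True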