Proposal?...
Just a proposal, would it help to add the following table and explanation to Ricci calculus (Raised and lowered indices) to illustrate how sub-/super-scripts and summation fit together in a way that relates "co-/contra-variance" to invariance?
Proposed table/text
This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis set expressed in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.
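(A minimal sketch of the kind of relations such a table would collect, assuming $L$ for the components of the passive transformation and $\mathbf e$ for the basis vectors, with bars marking the transformed system:)

```latex
% Passive transformation between bases (assumed symbols: L for the
% transformation components, e for basis vectors; bars mark the new system):
\bar{\mathbf e}_{\bar\mu} = L^{\nu}{}_{\bar\mu}\,\mathbf e_{\nu}, \qquad
\mathbf e_{\nu} = L^{\bar\mu}{}_{\nu}\,\bar{\mathbf e}_{\bar\mu}, \qquad
L^{\bar\mu}{}_{\nu}\, L^{\nu}{}_{\bar\lambda} = \delta^{\bar\mu}{}_{\bar\lambda}
% Components transform oppositely to the bases, so the vector is invariant:
A^{\bar\mu} = L^{\bar\mu}{}_{\nu} A^{\nu}, \qquad
A_{\bar\mu} = L^{\nu}{}_{\bar\mu} A_{\nu}, \qquad
\mathbf A = A^{\nu}\,\mathbf e_{\nu} = A^{\bar\mu}\,\bar{\mathbf e}_{\bar\mu}
```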
This is far clearer and briefer (to me at least...) than the main article Covariance and contravariance of vectors, and it fits in with the summary style of this article. It’s also another example for the manipulation of indices, including the Kronecker delta.
What do others think? Just a suggestion - the article is excellent and I don't want to touch it! As always I'm not forcing this in - it's for take or leave. Thanks once again to the editors here (although maybe/maybe not F= after all...). Maschen (talk) 16:28, 15 August 2012 (UTC)
- I also find that this gives a very intuitive and direct explanation for someone comfortable with symbolic algebra, and think it would be a sensible addition. I've taken the liberty of making a minor tweak to facilitate easy following (by avoiding the need for renaming dummy indices upon substitution to avoid duplication). I don't think it fits with the section Raised and lowered indices unless this is renamed, and should go in a separate (sub?)section Change of basis. I would prefer that the passive transformation not be introduced as a matrix with a non-zero determinant (especially since the determinant of a matrix and the determinant of the abstract linear transformation it represents are entirely different things and have different values). This objection is superficial and can be ignored for now: a rewording in terms of linear combinations avoiding the terminology of matrices should be straightforward. — Quondum 02:13, 16 August 2012 (UTC)
- That's very reasonable, by all means feel free to make changes.
- About the determinant: I'm sure L could represent a linear transformation which is represented by an invertible matrix, so how could they have a different determinant? Active and passive transformation is a good link to add though.
- About sources: The only source that supports what I'm saying is Mathematical Methods for Physics and Engineering (Riley, Hobson, Bence, 2010), but this seems to be restricted to Cartesian tensors for most of the tensor chapter; no others to hand right now (will look for some soon...).
- On the minus side for a balanced view: there is the concern it makes the article longer for not that much gain (according to the last archive, there were repeated additions and trims to make the article as short as possible with no inessential details, which this proposal may be...). Maschen (talk) 06:14, 16 August 2012 (UTC)
- The expression of a general Lorentz transformation in the notation of this article does not seem excessive to me, but this proposal does unavoidably introduce explicit use of the abstract basis vectors, thus potentially expanding the scope. On this score I'll be interested in input from others.
- On the determinant of an abstract quantity (tensor), being a linear transformation V → V, or a linear transformation V* → V* (i.e. any type (1,1) tensor), a basis-independent definition of the determinant would be the scalar change in n-volume it introduces, or equivalently the product of its eigenvalues. This cannot be defined for a type (2,0) or type (0,2) tensor (that is, without the use of a metric tensor). When a type (1,1) tensor is expressed in terms of a given basis (and its dual), this corresponds with the determinant of the associated matrix of components, and is a true invariant. A passive transformation, on the other hand, relates two distinct bases, and the matrix determinant is not invariant.
- In this proposal, the concept of determinant is quite unnecessary; non-singularity of the basis-mapping is all that is necessary, and this is guaranteed by the fact that they are both bases. All that is needed is the two bases and their components when expressed in terms of the other basis, and the rest follows. No determinants, no inverses, no non-singularity requirement, and no mention of matrices; only of components. — Quondum 13:20, 16 August 2012 (UTC)
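(A numerical sketch of the determinant point, under an assumed arbitrary basis change P: the component matrix of a type (1,1) tensor transforms by similarity, so its determinant is invariant; a type (0,2) tensor transforms by congruence, so its determinant picks up a factor of det(P) squared.)

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))        # arbitrary invertible change of basis
Pinv = np.linalg.inv(P)

T11 = rng.normal(size=(3, 3))      # components of a type (1,1) tensor
T02 = rng.normal(size=(3, 3))      # components of a type (0,2) tensor

# Type (1,1): similarity transform, so the determinant is a true invariant.
T11_new = Pinv @ T11 @ P
print(np.isclose(np.linalg.det(T11), np.linalg.det(T11_new)))   # True

# Type (0,2): congruence transform, so the determinant scales by det(P)**2
# and is not basis-independent (no metric was used to fix this).
T02_new = P.T @ T02 @ P
print(np.isclose(np.linalg.det(T02_new),
                 np.linalg.det(T02) * np.linalg.det(P) ** 2))   # True
```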
- I made the suggested modifications. Maschen (talk) 13:32, 16 August 2012 (UTC)
- I've tweaked it slightly again. No other comments seem to be forthcoming yet... — Quondum 02:25, 17 August 2012 (UTC)
Added to lead of covariance and contravariance of vectors, see talk. Maschen (talk) 08:06, 3 September 2012 (UTC)
I can understand the removal from covariance and contravariance of vectors; very hasty at the time the image was added... However, it's been 3 months, no objections and one in favour. I will take the liberty of adding it as planned long ago, better here than anywhere else (in a slightly extended form)... feel free to revert. Maschen (talk) 10:15, 24 November 2012 (UTC)
- While the proposed table is pretty compact and not controversial, I'm not too happy with the extended form that has been inserted. I could list a few objections:
- It introduces a pedagogical rather than explanatory perspective, and starts to deviate from the compact style of the article
- It assumes a holonomic basis, which the Ricci calculus does not require
- The term "normal" (orthogonal) has no meaning in the absence of a metric tensor; Ricci calculus does not need one
- The term "inner product" similarly does not apply (it is not the same thing as a contraction, which corresponds to the action of a covector on a vector)
- We have kept the article to scalar components in keeping with the way it is often presented; it's not a great idea to introduce symbols for abstract objects now without explanation (the closest we've come is to use the word "basis").
- Perhaps you'd like to put in the originally proposed table instead; I'd certainly be happy with that. BTW, my courses used capital lambda as standard for the components of a Lorentz transformation. Whatever the dominant convention is, I think we should use it. — Quondum 12:23, 24 November 2012 (UTC)
- Ok - I anticipated this would be suggested; the simpler table replaces the extended one. I've always seen L or Λ for the transformation, several books (and in my tensor course) use L, so let’s keep L. Maschen (talk) 13:02, 24 November 2012 (UTC)
Query on revert
This revert carries an edit summary that could be construed as a personal attack. The reverted edits introduced explicit notation giving the abstract tensors rather than only their components. We have stayed away from the abstract presentation in this article thus far, but only as a matter of article style. While I do not object to the revert because of this, I do not in any way agree with the edit summary, and in particular with its inference about the editor. — Quondum 10:23, 2 January 2013 (UTC)
- My main objection was that he was confusing contravariant and covariant. See the details of his first edit. JRSpriggs (talk) 10:29, 2 January 2013 (UTC)
- I agree that there was a minor confusion in this respect on the description of the basis and cobasis elements, which would be a reason only to correct it. (However, now that you draw attention to it, the normally confusing terminology as applied to the vectors and tensors rather than to their components is a further good reason to confine this article to components.) With summaries, please take care to stay within WP policy. — Quondum 15:16, 2 January 2013 (UTC)
Another standard notation for the derivatives?
In the section on differentiation, for the partial derivative should we not have
- $\nabla_\mu \equiv \partial_\mu \equiv \dfrac{\partial}{\partial x^\mu}$ ?
My impression is that using the nabla is preferred by some authors, is intuitive and fits into the notation.
Similarly, for the covariant derivative, $\nabla_\mu$ seems to be notable. I notice that Penrose (in The Road to Reality) uses the nabla for the covariant derivative. Should we deal with these notations in the article? I have too little experience on what is notable here. — Quondum 12:03, 29 June 2013 (UTC)
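(For reference, a sketch of the common variants under discussion; which symbol denotes which operator varies by author, which is the ambiguity at issue:)

```latex
% Partial derivative: comma, del, or (in some flat-space texts) nabla:
A^{\alpha}{}_{,\mu} \equiv \partial_\mu A^{\alpha} \equiv \frac{\partial A^{\alpha}}{\partial x^{\mu}}
% Covariant derivative: semicolon or nabla (Penrose's choice):
A^{\alpha}{}_{;\mu} \equiv \nabla_\mu A^{\alpha} = \partial_\mu A^{\alpha} + \Gamma^{\alpha}{}_{\mu\lambda} A^{\lambda}
```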
- The nabla symbol with index subscripts is definitely used (it certainly was in the bygone 3rd year SR and continuum mechanics courses), although I don't have any books using this convention to hand right now. As for covariant derivatives, Penrose may use it but not sure about this in general. Given more sources we should indicate this in the article. M∧ŜcħεИτlk 12:22, 29 June 2013 (UTC)
- Browsing Google books throws up so many variants of notation in both cases that I am left not knowing what is notable. On a side note, the covariant and related derivatives sit slightly uncomfortably with the notation inasmuch as they use the whole set of components of a tensor, not only the explicit component regarded as a function on the manifold. I'm not sure whether this is worth making mention of. — Quondum 15:51, 29 June 2013 (UTC)
The recently added notation seems to me to be too incomplete to be encyclopaedic. In particular, it omits crucial information from the notation that makes it pretty meaningless without explanatory text defining the family of curves that apply. It strikes me as made-up notation that an author (even MTW) might use by way of explaining something, not a notation that might see any use in other contexts. Does it really belong here? — Quondum 10:49, 10 August 2013 (UTC)
- I also thought it was rather obscure in a way, and included it because it may be used in other GR literature, but let's remove it. M∧ŜcħεИτlk 15:32, 10 August 2013 (UTC)
Sequential Summation?
The operation referred to here as "sequential summation" doesn't make sense -- at least not as it's currently written.
Please explain how and why it's used, and why it's considered a tensor operation.
198.228.228.176 (talk) 21:37, 6 February 2014 (UTC) Collin237
- I presume its use is restricted to cases when one of the two tensors involved is either symmetric or anti-symmetric. JRSpriggs (talk) 07:10, 7 February 2014 (UTC)
- The section does mention the "either symmetric or antisymmetric" use, though it does not make sense to me in the symmetric case. The exclusion of summed terms is presumably merely a labour-saving contrivance and equivalent to a constant multiplier for the expression. Mentioning this use, the equivalence and an expression giving the constant factor in place of the rather vague "This is useful to prevent over-counting in some summations" would be sensible and would enhance this section's reference value somewhat. Any volunteers from those with access to the reference? —Quondum 17:27, 7 February 2014 (UTC)
- Not sure what is difficult to understand in that section, nevertheless I tried to make it clearer. Yes, it does seem to be restricted to symmetric and antisymmetric tensors. M∧ŜcħεИτlk 17:04, 26 March 2014 (UTC)
- The definition given is clear enough, but AFAICT the restriction should be to only tensors that are fully antisymmetric in each set of indices that are sequentially summed over, otherwise I expect that it will not be Lorentz-invariant. In this context, where we are introducing the notation as a reference, the restrictions should be given correctly. References that only use it (i.e. they do not bother to define it other than to explain what it means in the particular case) might not provide the correct criteria, because they would have preselected the tensors. Any use that is not inherently restricted to exclusively fully antisymmetric cases (or perhaps a special basis choice?) would surprise me. If I had access to the actual references that this comes from, I could probably figure out what is appropriate, but "sequential summation" on Google books seems to draw a complete blank. In effect, I'm saying that I expect the equation
- $A_{|\alpha\beta|} B^{\alpha\beta} = k\, A_{\alpha\beta} B^{\alpha\beta}$
- to be satisfied for some constant k in all allowable cases. —Quondum 19:39, 27 March 2014 (UTC)
- On second thought, I think that Quondum is correct. Symmetric is not good enough because having two indices (of the same kind (contravariant or covariant) in the same tensor) equal cannot be represented in an invariant way. JRSpriggs (talk) 06:24, 28 March 2014 (UTC)
- I don't follow your argument, probably my misunderstanding your choice of words; symmetry in two indices of the same type is an invariant property, and it sounds almost as though you are saying the opposite. My argument runs along the following lines: Consider the sequential summation of the product of two symmetric order-2 tensors, the metric tensor and its inverse, $g_{|\alpha\beta|} g^{\alpha\beta}$, in say 2 dimensions. This is the componentwise product summed, but only on one side of the diagonal, so with an orthogonal basis, the result is zero. Change to a basis that is not orthogonal, so that the off-diagonal components become nonzero, and the result of the sequential summation would become nonzero, and hence not invariant. If half the sum of the products of the diagonal elements was included, it would have stayed invariant. One can go through all the symmetric/antisymmetric combinations, and only the case where both tensors are antisymmetric seems to remain invariant (it is easy to show that the sequential sum is half the full sum by symmetry and the zero diagonal, and we know that the full sum is invariant). I assume that this generalizes to more indices as a full antisymmetry requirement.
- The question is essentially: does any source use this sequential summation when the indices involved are not fully antisymmetric? I have no way of finding or accessing such sources without links. Without this, I would incline towards simply asserting the full antisymmetric requirement, but really we should prove its correctness. —Quondum 00:40, 29 March 2014 (UTC)
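(A numerical spot-check of the argument above, as a sketch with assumed example components: under a change of basis in 2 dimensions the sequential sum of $g_{|\alpha\beta|} g^{\alpha\beta}$ changes, while for two antisymmetric tensors it stays at exactly half the full contraction.)

```python
import numpy as np

rng = np.random.default_rng(1)

def seq_sum(A, B):
    """Sequential summation: sum of A[i,j]*B[i,j] over i < j only."""
    n = A.shape[0]
    return sum(A[i, j] * B[i, j] for i in range(n) for j in range(i + 1, n))

P = rng.normal(size=(2, 2))                # arbitrary invertible basis change
g = np.eye(2)                              # metric in an orthonormal basis
g_new = P.T @ g @ P                        # (0,2) components in the new basis
ginv_new = np.linalg.inv(g_new)            # (2,0) components in the new basis

# Symmetric case: zero in the orthonormal basis, nonzero after the change.
print(seq_sum(g, np.linalg.inv(g)), seq_sum(g_new, ginv_new))

# Antisymmetric case: the sequential sum is always half the full contraction.
F = np.array([[0.0, 1.5], [-1.5, 0.0]])   # antisymmetric (0,2) components
G = np.array([[0.0, -0.7], [0.7, 0.0]])   # antisymmetric (2,0) components
print(np.isclose(seq_sum(F, G), 0.5 * np.einsum('ab,ab->', F, G)))  # True
```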
As far as I can tell, the earliest MTW use it is in chapter 4: Electromagnetism and differential forms, box 4.1 (p. 91). It only seems to be used in the context of p-forms (which are ... antisymmetric tensors). The authors only say "the sum is over $i_1 < i_2 < i_3 < \cdots < i_n$". So Quondum is correct so far. I don't know any other sources using this notation for this purpose, and it doesn't appear in Schouten's original work either (cited and linked in the article). But this summation seems to appear in a different notation which Quondum quotes above, in another reference by T. Frankel (which I don't have, and haven't seen it at the library).
Clearly, this convention of "sequential summation" exists, so we shouldn't really remove it from the article. For now, let's just restrict to antisymmetric tensors. M∧ŜcħεИτlk 08:27, 29 March 2014 (UTC)
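(For context, a sketch of the MTW-style usage on p-forms: the restricted sum over strictly increasing indices absorbs the 1/p! of the unrestricted sum; for a 2-form:)

```latex
% A 2-form expanded two ways; the restricted ("sequential") sum over
% mu < nu avoids double-counting the antisymmetric components:
\mathbf F = \tfrac{1}{2}\, F_{\mu\nu}\, \mathrm dx^{\mu} \wedge \mathrm dx^{\nu}
          = \sum_{\mu < \nu} F_{\mu\nu}\, \mathrm dx^{\mu} \wedge \mathrm dx^{\nu}
```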
- Agreed, we should keep it (with the correct qualifications). But my reasoning says that we should change the wording from "when one of the tensors is antisymmetric" to "when both of the tensors are antisymmetric". —Quondum 16:04, 29 March 2014 (UTC)
- Thanks for your edits. M∧ŜcħεИτlk 08:26, 30 March 2014 (UTC)
Further index notations
Further notations appear to be introduced in this reference, specifically pp. 30–31. I don't understand German, but it appears to allow nesting of [ ], ( ) and | | on indices. My supposition is that the intention is that each of the inner nested index expressions is excluded from the higher-level symmetrization/antisymmetrization. Since this article covers a subset of exactly this type of notation, and this appears to be explicitly documented in this reference (and such exclusions make perfect sense), could someone with knowledge of German please verify my supposition so that we can include this? —Quondum 17:43, 29 March 2014 (UTC)
- I never noticed that before, if we can find out the meaning it should be in the article. There is an English translation of the book by Schouten and Courant at the library (if I recall correctly), I'll check next time I go. M∧ŜcħεИτlk 08:26, 30 March 2014 (UTC)
- It looks fascinating. It appears to be a detailed explanation. My interpretation of nesting is evidently incorrect. The various types of brackets evidently overlap rather than nest. The explanation seems to be saying that the indices are allocated to each (anti)symmetrization in turn, skipping anything between bars ||. Thus if A = B, then A = B. Rather convoluted. This appears to give a simple notation for the Kulkarni–Nomizu product, for example. The English version would be helpful. —Quondum 17:54, 30 March 2014 (UTC)
Braiding on an expression
Does anyone know of conventions on the braiding of the free indices of an expression in Ricci calculus? If so, this would be a useful addition to the article. The most obvious convention that might apply would be lexicographic ordering, as in Abstract index notation#Braiding, but I do not know whether this extends to this context. —Quondum 00:09, 31 March 2014 (UTC)
"in the denominator"
This edit (with edit note "I am referring to an expression where the x^{\mu} is in the denominator or x_{\mu} is in the denominator. I tried to clarify however I'm not the best at explaining. But I do think it is important enough to have.") appears to refer to a partial derivative. This is not a fraction, and has no numerator or denominator. In general the statement is also false, as the partial derivative only transforms covariantly (contravariantly) when the expression being differentiated is a scalar. This is handled under Ricci calculus#Differentiation, where I've added a mention of this special case. —Quondum 06:20, 26 August 2014 (UTC)
- What about in the covariant derivative? Take the covariant derivative of a (1,0) tensor as an example:
- $\nabla_\mu A^\nu = \dfrac{\partial A^\nu}{\partial x^\mu} + \Gamma^\nu{}_{\mu\lambda} A^\lambda$
- The $\mu$ in $\nabla_\mu$ is treated as a lower index, but in the fraction $\partial/\partial x^\mu$ it 'appears' as an upper index.
- That is what I mean. — Preceding unsigned comment added by Theoretical wormhole (talk • contribs) 2014-08-26T16:13:30
- The covariant derivative is a bit more complicated to describe properly due to the extra term; it is probably best to let readers simply understand the behaviour from the expression, which is already there.
- We could draw more attention to the apparent moving of the variance of the index in a partial derivative, but keep in mind that this is of a mnemonic nature. You probably would not have noticed this "variance switch" if it were not for the suggestive nature of the partial derivative used. Perhaps we could add to the section on the partial derivative the following:
- Coordinates are typically denoted by $x^\mu$, but do not in general form the components of a vector. In flat spacetime and linear coordinatization, differences in coordinates, $\Delta x^\mu$, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. This is reflected by the lower index on the left of the notational equivalence
- $\partial_\mu \equiv \dfrac{\partial}{\partial x^\mu}$
- Would this do what you want? —Quondum 17:04, 26 August 2014 (UTC)
Yea that sounds good to me. Would you like to add it in or should I? Theoretical wormhole (talk) 21:21, 26 August 2014 (UTC)
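(For the record, the covariance behind the proposed wording follows from the chain rule; a minimal sketch:)

```latex
% Under a coordinate change x -> xbar, the partial derivative picks up the
% inverse Jacobian, i.e. it transforms covariantly, matching the lower
% index in the shorthand  \partial_\mu = \partial/\partial x^\mu :
\frac{\partial}{\partial \bar x^{\bar\mu}}
  = \frac{\partial x^{\nu}}{\partial \bar x^{\bar\mu}}\,
    \frac{\partial}{\partial x^{\nu}},
\qquad\text{compare}\qquad
A_{\bar\mu} = \frac{\partial x^{\nu}}{\partial \bar x^{\bar\mu}}\, A_{\nu}
```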
Connect with vector algebra concepts
The present (early 2019) state of this article does very little to go beyond the subject as "a bunch of rules for operating on arrays of scalars". I think it would be useful to also provide connections with concepts from elementary linear algebra where convenient. For example (material from subsection "Upper and lower indices"):
Contravariant tensor components
An upper index (superscript) indicates contravariance of the components with respect to that index:
$A^{\alpha\beta\gamma\cdots}$
A vector $\mathbf v$ corresponds to a tensor with one upper index $v^\alpha$. The counterpart of a tensor with two upper indices (a bivector) is less commonly seen in elementary linear algebra because it gets notationally cumbersome; many authors prefer to switch to tensor index notation when they need such objects.
Covariant tensor components
A lower index (subscript) indicates covariance of the components with respect to that index:
$A_{\alpha\beta\gamma\cdots}$
A tensor with lower indices may correspond to a map that takes vectors as arguments. For example, the metric tensor corresponds to the dot product of vectors.
Mixed-variance tensor components
A tensor may have both upper and lower indices:
$A^{\alpha}{}_{\beta}{}^{\gamma}{}_{\delta\cdots}$
A matrix is usually a tensor with one upper and one lower index — this makes matrix–vector multiplication correspond to applying a linear transformation to the vector, and makes matrix multiplication correspond to a contraction of tensor indices — but there are matrices which rather have two indices of the same variance: the matrix of a bilinear form naturally has two lower indices, and the R-matrix of a quasitriangular Hopf algebra naturally has two upper indices.
Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. on the generalized Kronecker delta).
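(A minimal sketch of the correspondence just described, using numpy's einsum as explicit index notation; the arrays and names here are illustrative assumptions, not from the article: matrix–vector multiplication and matrix multiplication are both contractions of one upper with one lower index.)

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))   # components M^i_j of a (1,1) tensor
N = rng.normal(size=(3, 3))   # components N^j_k of a (1,1) tensor
v = rng.normal(size=3)        # components v^j of a vector

# Applying a linear map: w^i = M^i_j v^j (contract the lower j with upper j).
print(np.allclose(M @ v, np.einsum('ij,j->i', M, v)))        # True

# Composing linear maps: (MN)^i_k = M^i_j N^j_k.
print(np.allclose(M @ N, np.einsum('ij,jk->ik', M, N)))      # True
```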
Raising and lowering indices
By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:
$B_{\alpha} = g_{\alpha\beta} A^{\beta}, \qquad B^{\alpha} = g^{\alpha\beta} A_{\beta}$
The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
Repositioning an index often corresponds to taking a transpose (or similar, such as a conjugate transpose) in matrix formalism. For example, that the dot product $\mathbf u \cdot \mathbf v$ may also be written $\mathbf u^{\mathsf T} \mathbf v$ corresponds to the fact that the two tensor expressions $u^{\alpha} g_{\alpha\beta} v^{\beta}$ and $u_{\beta} v^{\beta}$ are the same. A difference is that the transpose repositions all indices of a tensor, whereas raising or lowering acts on individual indices.
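(A sketch of raising and lowering with an explicit metric, again via einsum; a Minkowski metric is assumed purely for illustration.)

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])   # components u^a
v = np.array([4.0, 3.0, 2.0, 1.0])   # components v^a

# Minkowski metric g_{ab}, signature (+,-,-,-), assumed for illustration.
g = np.diag([1.0, -1.0, -1.0, -1.0])

u_low = np.einsum('ab,b->a', g, u)   # lowering: u_a = g_{ab} u^b
# The two expressions u^a g_{ab} v^b and u_b v^b agree:
print(np.isclose(np.einsum('a,ab,b->', u, g, v),
                 np.einsum('b,b->', u_low, v)))              # True

# Raising with the inverse metric recovers the original components.
g_inv = np.linalg.inv(g)
print(np.allclose(np.einsum('ab,b->a', g_inv, u_low), u))    # True
```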
130.243.68.240 (talk) 14:26, 30 April 2019 (UTC)
- More connection with meaning and other formalisms (as you do here) would be helpful, though I would be inclined to limit translation or comparison to vector calculus or matrices and focus on the concepts of vectors and multilinear algebra. You have a few minor technical errors (e.g. a tensor of degree 2 must be antisymmetric to correspond to a bivector), but this can be fixed. Do you want to try your hand at this? I might review and tinker. —Quondum 18:03, 7 May 2019 (UTC)
Coordinate basis
This article applies with the more general tetrad formalism, aside from Ricci calculus § Differentiation, which assumes a coordinate basis. We should be clear about the applicability, and it would be nice to make even the differentiation section general, though a suitable source would be needed. —Quondum 18:09, 7 May 2019 (UTC)
Role of connection and metric tensor
The article does not make clear that the Christoffel symbols are only defined in the context of a connection, nor that multiple metrics may induce the same connection. I'm not sure whether that belongs in the lead, but it should be somewhere prior to reference to the Christoffel symbols and metrics. Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:19, 25 October 2020 (UTC)
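(A standard illustration of the second point, as a sketch: rescaling a metric by a positive constant $\lambda$ leaves the Levi-Civita connection unchanged, since the constant cancels between the inverse metric and the derivatives in the Christoffel symbols.)

```latex
% For constant lambda > 0, the metrics g and lambda*g induce the same
% Levi-Civita connection: the factor lambda cancels in the Christoffel symbols.
\Gamma^{\sigma}{}_{\mu\nu}
  = \tfrac{1}{2} (\lambda g)^{\sigma\rho}
    \left( \partial_\mu (\lambda g)_{\rho\nu} + \partial_\nu (\lambda g)_{\rho\mu}
         - \partial_\rho (\lambda g)_{\mu\nu} \right)
  = \tfrac{1}{2} g^{\sigma\rho}
    \left( \partial_\mu g_{\rho\nu} + \partial_\nu g_{\rho\mu}
         - \partial_\rho g_{\mu\nu} \right)
```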