Ricci calculus

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license; latest revision as of 09:18, 17 December 2024.
{{Short description|Tensor index notation for tensor-based calculations}}
{{redirect|Tensor index notation|a summary of tensors in general|Glossary of tensor theory}}
In ], '''Ricci calculus''' <!--({{IPA-it|ˈrittʃi}})--> constitutes the rules of index notation and manipulation for ] and ] on a ], with or without a ] or ].{{efn|While the raising and lowering of indices is dependent on a ], the ] is only dependent on the ] while the ] and the ] are dependent on neither.}}<ref>{{cite book |author1=Synge J.L. |author2=Schild A. |publisher=first Dover Publications 1978 edition |title=Tensor Calculus |pages=6–108|year= 1949}}</ref><ref>{{cite book |pages=85–86, §3.5|author1=J.A. Wheeler |author2=C. Misner |author3=K.S. Thorne | title=]| publisher=W.H. Freeman & Co| year=1973 | isbn=0-7167-0344-0}}</ref><ref>{{cite book |author=R. Penrose| title=]| publisher= Vintage books| year=2007 | isbn=978-0-679-77631-4}}</ref> It is also the modern name for what used to be called the '''absolute differential calculus''' (the foundation of tensor calculus), '''tensor calculus''' or '''tensor analysis''', developed by ] in 1887–1896, and subsequently popularized in a paper written with his pupil ] in 1900.<ref>{{cite journal |last1=Ricci |first1=Gregorio |author-link1=Gregorio Ricci-Curbastro |last2=Levi-Civita |first2=Tullio |author-link2=Tullio Levi-Civita |title=Méthodes de calcul différentiel absolu et leurs applications |trans-title=Methods of the absolute differential calculus and their applications |journal=] |date=March 1900 |access-date=19 October 2019 |volume=54 |issue=1–2 |pages=125–201 |doi=10.1007/BF01454201 |url=http://gdz.sub.uni-goettingen.de/dms/resolveppn/?PPN=GDZPPN002258102 |publisher=Springer |s2cid=120009332 |language=fr}}</ref> ] developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to ] and ] in the early twentieth century.<ref>{{cite book|last=Schouten|first=Jan A.|title=Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction in the latest methods and problems in multi-dimensional differential geometry)|language=de|year=1924|series=Grundlehren der mathematischen Wissenschaften|volume=10|editor= R. Courant|publisher=Springer Verlag|location=Berlin|url=http://resolver.sub.uni-goettingen.de/purl?PPN373339186}}</ref> The basis of modern tensor analysis was developed by ] in a paper from 1861.<ref>{{cite book|date=2003 |first=Hans Niels |isbn=0-8218-2623-9 |last=Jahnke |location=Providence, RI |oclc=51607350 |page=244 |publisher=American Mathematical Society |title=A history of analysis}}</ref>


A component of a tensor is a ] that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a ] are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly ]s.
A tensor may be expressed as a linear sum of the ] of ] and ] basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per ] of the underlying ]. The number of indices equals the degree (or order) of the tensor.


For compactness and convenience, the Ricci calculus incorporates ], which implies summation over indices repeated within a term and ] over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
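As a concrete illustration (not part of the article's formalism), the implied summation over a repeated index, and the broadcasting over free indices, can be reproduced with NumPy's einsum; the component values here are arbitrary examples.

```python
import numpy as np

# Hypothetical components on a 4-dimensional space (index values 0..3).
A = np.array([1.0, 2.0, 3.0, 4.0])   # A_alpha (lower index)
B = np.array([0.5, -1.0, 2.0, 0.0])  # B^alpha (upper index)

# A repeated index within a term is summed: A_alpha B^alpha.
scalar = np.einsum('a,a->', A, B)
assert scalar == sum(A[i] * B[i] for i in range(4))

# Free indices are not summed: T_alpha^beta = A_alpha B^beta has two
# free indices, so it stands for 4 x 4 = 16 component equations at once.
T = np.einsum('a,b->ab', A, B)
assert T.shape == (4, 4) and T[1, 2] == A[1] * B[2]
```

The subscript string plays the role of the index notation: a letter appearing in two operands (and not in the output) is a summed index, while letters kept in the output are free indices.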


== Applications ==
Tensor calculus has many applications in ], ] and ] including ], ], ] (see ]), ] (see ]), ], and ].


Working with a main proponent of the ] ], the influential geometer ] summarizes the role of tensor calculus:<ref>{{Cite journal |journal=Notices of the AMS |volume=45 |issue=7 |pages=860–5 |date=August 1998 |url=https://www.ams.org/notices/199807/chern.pdf|title=Interview with Shiing Shen Chern}}</ref><blockquote>In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.</blockquote>

== Notation for indices ==
{{see also|Index notation}}


=== Basis-related distinctions ===


==== Space and time coordinates ====


Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:<ref>{{citation | author=C. Møller|title=The Theory of Relativity|year=1952|page=234}} is an example of a variation: 'Greek indices run from 1 to 3, Latin indices from 1 to 4'</ref>
* The lowercase ] {{math|''a'', ''b'', ''c'', ...}} is used to indicate restriction to 3-dimensional ], which take values 1, 2, 3 for the spatial components; and the time-like element, indicated by 0, is shown separately.
* The lowercase ] {{math|''α'', ''β'', ''γ'', ...}} is used for 4-dimensional ], which typically take values 0 for time components and 1, 2, 3 for the spatial components.


Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.


==== Coordinate and index notation ====


The author(s) will usually make it clear whether a subscript is intended as an index or as a label.
For example, in 3-D Euclidean space and using ], the ] {{math|1='''A''' = (''A''<sub>1</sub>, ''A''<sub>2</sub>, ''A''<sub>3</sub>) = (''A''<sub>x</sub>, ''A''<sub>y</sub>, ''A''<sub>z</sub>)}} shows a direct correspondence between the subscripts 1, 2, 3 and the labels {{math|x}}, {{math|y}}, {{math|z}}. In the expression {{math|''A<sub>i</sub>''}}, {{math|''i''}} is interpreted as an index ranging over the values 1, 2, 3, while the {{math|x}}, {{math|y}}, {{math|z}} subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label {{math|t}}.


==== Reference to basis ====


Indices themselves may be ''labelled'' using ]-like symbols, such as a ] (ˆ), ] (¯), ] (˜), or prime (′) as in:
: <math>X_{\hat{\phi}}\,, Y_{\bar{\lambda}}\,, Z_{\tilde{\eta}}\,, T_{\mu'} </math>

to denote a possibly different ] for that index. An example is in ]s from one ] to another, where one frame could be unprimed and the other primed, as in:
: <math> v^{\mu'} = v^{\nu}L_\nu{}^{\mu'} .</math>


This is not to be confused with ] for ]s, which uses hats and overdots on indices to reflect the chirality of a spinor.


=== Upper and lower indices ===


Ricci calculus, and ] more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are ''not'' exponents, even though they may look like exponents to a reader familiar only with other parts of mathematics.


In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as <math> a_{ij} b_{jk} </math> for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
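In this identity-metric setting the repeated index {{math|''j''}} in <math> a_{ij} b_{jk} </math> is simply the inner matrix-product index. A small NumPy check (an illustration, not from the article) makes the correspondence explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3))  # a_ij
b = rng.standard_normal((3, 3))  # b_jk

# With an identity metric there is no upper/lower distinction, and
# c_ik = a_ij b_jk (sum over the repeated index j) is ordinary
# matrix multiplication.
c = np.einsum('ij,jk->ik', a, b)
assert np.allclose(c, a @ b)
```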


==== ] ====


A ''lower index'' (subscript) indicates covariance of the components with respect to that index:
: <math>A_{\alpha\beta\gamma \cdots}</math>


==== ] ====


An ''upper index'' (superscript) indicates contravariance of the components with respect to that index:
: <math>A^{\alpha\beta\gamma \cdots}</math>


==== ] ====


A tensor may have both upper and lower indices:
: <math>A_{\alpha}{}^{\beta}{}_{\gamma}{}^{\delta\cdots}.</math>


Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the ]).


==== Tensor type and degree ====


The number of upper indices and the number of lower indices of a tensor give its ''type'': a tensor with {{math|''p''}} upper and {{math|''q''}} lower indices is said to be of type {{math|(''p'', ''q'')}}, or to be a type-{{math|(''p'', ''q'')}} tensor.
The number of indices of a tensor, regardless of variance, is called the ''degree'' of the tensor (alternatively, its ''valence'', ''order'' or ''rank'', although ''rank'' is ambiguous). Thus, a tensor of type {{math|(''p'', ''q'')}} has degree {{math|''p'' + ''q''}}.


==== ] ====


The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:
: <math> A_\alpha B^\alpha \equiv \sum_\alpha A_{\alpha}B^\alpha \quad \text{or} \quad A^\alpha B_\alpha \equiv \sum_\alpha A^{\alpha}B_\alpha \,.</math>


The operation implied by such a summation is called ]:
: <math> A_\alpha B^\beta \rightarrow A_\alpha B^\alpha \equiv \sum_\alpha A_{\alpha}B^\alpha \,.</math>


This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:
: <math> A_{\alpha}{}^\gamma B^\alpha C_\gamma{}^\beta \equiv \sum_\alpha \sum_\gamma A_{\alpha}{}^\gamma B^\alpha C_\gamma{}^\beta \,.</math>
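A double contraction of this kind can be checked numerically; this sketch uses arbitrary NumPy components (the array names are illustrative only):

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))  # A_alpha^gamma
B = rng.standard_normal(n)       # B^alpha
C = rng.standard_normal((n, n))  # C_gamma^beta

# Two distinct summed pairs in one term: alpha and gamma each occur
# once up and once down, leaving beta as the only free index.
D = np.einsum('ag,a,gb->b', A, B, C)  # D^beta

# Explicit double sum for one component, matching the definition.
beta = 2
explicit = sum(A[al, ga] * B[al] * C[ga, beta]
               for al in range(n) for ga in range(n))
assert np.isclose(D[beta], explicit)
```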


Other combinations of repeated indices within a term are considered to be ill-formed, such as
: {|
|-
| <math> A_{\alpha\alpha}{}^{\gamma} \qquad </math> || (both occurrences of <math>\alpha</math> are lower; <math>A_\alpha{}^{\alpha\gamma}</math> would be fine)
|}
The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.


==== ] ====


If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:<ref>{{citation | author=T. Frankel|page=67| title = The Geometry of Physics| publisher=Cambridge University Press|edition=3rd|year=2012|isbn=978-1107-602601}}</ref>
: <math> A_{i_1 \cdots i_n}B^{i_1 \cdots i_n j_1 \cdots j_m}C_{j_1 \cdots j_m} \equiv A_I B^{IJ} C_J </math>

where {{math|1=''I'' = ''i''<sub>1</sub> ''i''<sub>2</sub> ⋅⋅⋅ ''i<sub>n</sub>''}} and {{math|1=''J'' = ''j''<sub>1</sub> ''j''<sub>2</sub> ⋅⋅⋅ ''j<sub>m</sub>''}}.
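The collected-index shorthand is pure notation, but the underlying contraction can be spelled out numerically. This sketch (an illustration with {{math|1=''n'' = ''m'' = 2}} and arbitrary components) checks that contracting the blocks index by index agrees with flattening each block {{math|''I''}} and {{math|''J''}} into a single collective index:

```python
import numpy as np

d = 3  # dimension of the underlying space
rng = np.random.default_rng(2)
A = rng.standard_normal((d, d))        # A_{i1 i2},  I = i1 i2
B = rng.standard_normal((d, d, d, d))  # B^{i1 i2 j1 j2}
C = rng.standard_normal((d, d))        # C_{j1 j2},  J = j1 j2

# A_I B^{IJ} C_J abbreviates a contraction over all four indices.
full = np.einsum('ij,ijkl,kl->', A, B, C)

# Same value via the collected indices: flatten I and J into single axes.
collapsed = np.einsum('i,ij,j->', A.ravel(), B.reshape(d*d, d*d), C.ravel())
assert np.isclose(full, collapsed)
```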


==== Sequential summation ====


A pair of vertical bars {{math|{{!}} &sdot; {{!}}}} around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is ] in each of the two sets of indices:<ref>{{cite book |page=91|author1=J.A. Wheeler |author2=C. Misner |author3=K.S. Thorne | title=]| publisher=W.H. Freeman & Co| year=1973 | isbn=0-7167-0344-0}}</ref>
: <math>
A_{|\alpha \beta \gamma| \cdots} B^{\alpha\beta\gamma \cdots} =
A_{\alpha \beta \gamma \cdots} B^{|\alpha\beta\gamma| \cdots} =
\sum_{\alpha < \beta < \gamma} A_{\alpha \beta \gamma \cdots} B^{\alpha\beta\gamma \cdots}
</math>
means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example:

: <math>\begin{align}
&A_{|\alpha \beta\gamma|}{}^{|\delta\epsilon\cdots\lambda|}
B^{\alpha \beta\gamma}{}_{\delta\epsilon\cdots\lambda|\mu \nu \cdots\zeta|}
C^{\mu \nu \cdots \zeta} \\
={} &\sum_{\alpha < \beta < \gamma} \sum_{\delta < \epsilon < \cdots < \lambda} \sum_{\mu < \nu < \cdots < \zeta}
A_{\alpha \beta\gamma}{}^{\delta\epsilon\cdots\lambda}
B^{\alpha \beta\gamma}{}_{\delta\epsilon\cdots\lambda \mu \nu \cdots \zeta}
C^{\mu \nu \cdots \zeta}
\end{align}</math>
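For fully antisymmetric index blocks, the restricted (ordered) sum differs from the unrestricted contraction only by a factorial factor. The following sketch (an illustration with hypothetical components, not from the article) checks that for a block of 3 antisymmetrized indices the full sum is 3! times the ordered one:

```python
import numpy as np
from itertools import combinations

d = 4
rng = np.random.default_rng(3)

def antisymmetrize(t):
    # Project a 3-index array onto its totally antisymmetric part.
    out = np.zeros_like(t)
    for sgn, perm in [(1, (0, 1, 2)), (1, (1, 2, 0)), (1, (2, 0, 1)),
                      (-1, (0, 2, 1)), (-1, (1, 0, 2)), (-1, (2, 1, 0))]:
        out += sgn * np.transpose(t, perm)
    return out / 6

A = antisymmetrize(rng.standard_normal((d, d, d)))  # A_{alpha beta gamma}
B = antisymmetrize(rng.standard_normal((d, d, d)))  # B^{alpha beta gamma}

# Unrestricted contraction: every index runs over all values.
full = np.einsum('abc,abc->', A, B)

# Restricted sum: alpha < beta < gamma only.
restricted = sum(A[i] * B[i] for i in combinations(range(d), 3))

# Each distinct index set is counted 3! = 6 times in the full sum.
assert np.isclose(full, 6 * restricted)
```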


When using multi-index notation, an underarrow is placed underneath the block of indices:<ref>{{citation | author=T. Frankel| title = The Geometry of Physics|page=67| publisher=Cambridge University Press|edition=3rd|year=2012|isbn=978-1107-602601}}</ref>
: <math>
A_{\underset{\rightharpoondown}{P}}{}^{\underset{\rightharpoondown}{Q}} B^P{}_{Q\underset{\rightharpoondown}{R}} C^R =
\sum_\underset{\rightharpoondown}{P} \sum_\underset{\rightharpoondown}{Q} \sum_\underset{\rightharpoondown}{R} A_{P}{}^{Q} B^P{}_{QR} C^R </math>

where
: <math>
\underset{\rightharpoondown}{P} = |\alpha \beta\gamma|\,,\quad
\underset{\rightharpoondown}{Q} = |\delta\epsilon\cdots\lambda|\,,\quad
\underset{\rightharpoondown}{R} = |\mu \nu\cdots\zeta|
</math>


==== ] ====


By contracting an index with a non-singular ], the ] of a tensor can be changed, converting a lower index to an upper index or vice versa:
: <math>B^{\gamma} = g^{\gamma\delta} A_{\delta} \quad \text{and} \quad B_{\gamma} = g_{\gamma\delta} A^{\delta}</math>
The base symbol in many cases is retained (e.g. using {{math|''A''}} where {{math|''B''}} appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
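Raising and then lowering an index with a metric and its inverse must return the original components. The sketch below (an illustration, not part of the article) uses the Minkowski metric diag(−1, 1, 1, 1), one common but not universal sign convention:

```python
import numpy as np

# Minkowski metric and its inverse (numerically identical here).
g_lower = np.diag([-1.0, 1.0, 1.0, 1.0])  # g_{alpha beta}
g_upper = np.linalg.inv(g_lower)          # g^{alpha beta}

A_lower = np.array([2.0, -1.0, 0.5, 3.0])  # A_delta, arbitrary components

# Raise the index: B^gamma = g^{gamma delta} A_delta.
B_upper = np.einsum('gd,d->g', g_upper, A_lower)

# Lowering again recovers the original components.
roundtrip = np.einsum('gd,d->g', g_lower, B_upper)
assert np.allclose(roundtrip, A_lower)
```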


=== Correlations between index positions and invariance ===


This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a ] between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.<ref>{{cite book |pages=61, 202–203, 232|author1=J.A. Wheeler |author2=C. Misner |author3=K.S. Thorne | title=]| publisher=W.H. Freeman & Co| year=1973 | isbn=0-7167-0344-0}}</ref>
The ] is used, ].


: {| class="wikitable"
|-
!
! Basis transformation
! Component transformation
! Invariance
|-
! Covector, covariant vector, 1-form
| <math>\omega^\bar{\alpha} = L_\beta{}^\bar{\alpha} \omega^\beta</math>
| <math>a_\bar{\alpha} = a_\gamma L^\gamma{}_\bar{\alpha}</math>
| <math>a_\bar{\alpha} \omega^\bar{\alpha} = a_\gamma L^\gamma{}_\bar{\alpha} L_\beta{}^\bar{\alpha} \omega^\beta = a_\gamma \delta^\gamma{}_\beta \omega^\beta = a_\beta \omega^\beta</math>
|-
! Vector, contravariant vector
| <math>e_\bar{\alpha} = e_\gamma L_\bar{\alpha}{}^\gamma</math>
| <math>u^\bar{\alpha} = L^\bar{\alpha}{}_\beta u^\beta</math>
| <math>e_\bar{\alpha} u^\bar{\alpha} = e_\gamma L_\bar{\alpha}{}^\gamma L^\bar{\alpha}{}_\beta u^\beta = e_\gamma \delta^\gamma{}_\beta u^\beta = e_\gamma u^\gamma</math>
|}
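The invariance column can be verified numerically: pick any invertible change-of-basis matrix, transform components contravariantly and covariantly, and check that their contraction is unchanged. A sketch with illustrative names (not from the article):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
L = rng.standard_normal((n, n))  # L^{alphabar}_beta (invertible w.p. 1)
L_inv = np.linalg.inv(L)         # inverse transformation L^{gamma}_{alphabar}

u = rng.standard_normal(n)  # u^beta   (contravariant components)
a = rng.standard_normal(n)  # a_gamma  (covariant components)

u_bar = L @ u       # u^{alphabar} = L^{alphabar}_beta u^beta
a_bar = a @ L_inv   # a_{alphabar} = a_gamma L^{gamma}_{alphabar}

# The contraction a_alpha u^alpha is basis-independent.
assert np.isclose(a_bar @ u_bar, a @ u)
```

Covariant components pick up the inverse matrix precisely so that the two factors cancel in the contraction, which is the content of the table's last column.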


== General outlines for index notation and operations ==

Tensors are equal ] every corresponding component is equal; e.g., tensor {{math|''A''}} equals tensor {{math|''B''}} if and only if


: <math>A^{\alpha}{}_{\beta\gamma} = B^{\alpha}{}_{\beta\gamma}</math>
for all {{math|''α'', ''β'', ''γ''}}. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to ]).


=== ] ===


Indices not involved in contractions are called ''free indices''. Indices used in contractions are termed ''dummy indices'', or ''summation indices''.


=== A tensor equation represents many ordinary (real-valued) equations ===


The components of tensors (like {{math|''A<sup>α</sup>''}}, {{math|''B<sub>β</sub><sup>γ</sup>''}} etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has {{math|''n''}} free indices, and if the dimensionality of the underlying vector space is {{math|''m''}}, the equality represents {{math|''m<sup>n</sup>''}} equations: each index takes on every value of a specific set of values.


For instance, if
: <math>A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta{} E_\delta = T^\alpha{}_\beta{}_\delta </math>

is in ] (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices ({{math|''α'', ''β'', ''δ''}}), there are 4<sup>3</sup> = 64 equations. Three of these are: is in ] (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices ({{math|''α'', ''β'', ''δ''}}), there are 4<sup>3</sup> = 64 equations. Three of these are:
: <math>\begin{align}
A^0 B_1{}^0 C_{00} + A^0 B_1{}^1 C_{10} + A^0 B_1{}^2 C_{20} + A^0 B_1{}^3 C_{30} + D^0{}_1{} E_0 &= T^0{}_1{}_0 \\
A^1 B_0{}^0 C_{00} + A^1 B_0{}^1 C_{10} + A^1 B_0{}^2 C_{20} + A^1 B_0{}^3 C_{30} + D^1{}_0{} E_0 &= T^1{}_0{}_0 \\
A^1 B_2{}^0 C_{00} + A^1 B_2{}^1 C_{10} + A^1 B_2{}^2 C_{20} + A^1 B_2{}^3 C_{30} + D^1{}_2{} E_0 &= T^1{}_2{}_0
\end{align}</math>
This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
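This counting can be illustrated with a short numerical sketch (illustrative Python; the component values below are arbitrary random numbers, not drawn from any physical example):

```python
import itertools
import random

# Illustrative only: random real numbers stand in for tensor components
# in four dimensions (indices 0..3).
n = 4
random.seed(0)
A = [random.random() for _ in range(n)]                      # A^alpha
B = [[random.random() for _ in range(n)] for _ in range(n)]  # B_beta^gamma
C = [[random.random() for _ in range(n)] for _ in range(n)]  # C_{gamma delta}
D = [[random.random() for _ in range(n)] for _ in range(n)]  # D^alpha_beta
E = [random.random() for _ in range(n)]                      # E_delta

# T^alpha_beta_delta as defined by the single tensor equation: a contraction
# over the dummy index gamma, one scalar equation per (alpha, beta, delta).
T = {(a, b, d): sum(A[a] * B[b][g] * C[g][d] for g in range(n)) + D[a][b] * E[d]
     for a, b, d in itertools.product(range(n), repeat=3)}

assert len(T) == n ** 3   # 4^3 = 64 scalar equations in four dimensions

# The (alpha, beta, delta) = (0, 1, 0) equation written out term by term:
lhs = (A[0] * B[1][0] * C[0][0] + A[0] * B[1][1] * C[1][0]
       + A[0] * B[1][2] * C[2][0] + A[0] * B[1][3] * C[3][0] + D[0][1] * E[0])
assert abs(lhs - T[(0, 1, 0)]) < 1e-12
```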


=== Indices are replaceable labels ===


Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:
: <math>A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta{} E_\delta \rightarrow A^\lambda B_\beta{}^\mu C_{\mu\delta} + D^\lambda{}_\beta{} E_\delta \,,</math>
whereas an erroneous change is:
: <math>A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta{} E_\delta \nrightarrow A^\lambda B_\beta{}^\gamma C_{\mu\delta} + D^\alpha{}_\beta{} E_\delta \,.</math>


In the first replacement, {{math|''λ''}} replaced {{math|''α''}} and {{math|''μ''}} replaced {{math|''γ''}} ''everywhere'', so the expression still has the same meaning. In the second, {{math|''λ''}} did not fully replace {{math|''α''}}, and {{math|''μ''}} did not fully replace {{math|''γ''}} (incidentally, the contraction on the {{math|''γ''}} index became a tensor product), which is entirely inconsistent for reasons shown next.


=== Indices are the same in every term ===


The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which imply a summation over that index) need not be the same, for example:
: <math>A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\delta E_\beta = T^\alpha{}_\beta{}_\delta </math>
as for an erroneous expression:
: <math>A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D_\alpha{}_\beta E_\gamma = T^\alpha{}_\beta{}_\delta \,.</math>
In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, {{math|''α'', ''β'', ''δ''}} line up throughout and {{math|''γ''}} occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while {{math|''β''}} lines up, {{math|''α''}} and {{math|''δ''}} do not, and {{math|''γ''}} appears twice in one term (contraction) ''and'' once in another term, which is inconsistent.


=== Brackets and punctuation used once where implied ===

When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.

If brackets enclose ''covariant indices'' – the rule applies only to ''all enclosed covariant indices'', not to intermediately placed contravariant indices.

Similarly if brackets enclose ''contravariant indices'' – the rule applies only to ''all enclosed contravariant indices'', not to intermediately placed covariant indices.


== Symmetric and antisymmetric parts ==

=== Symmetric part of tensor ===

Parentheses, ( ), around multiple indices denote the symmetrized part of the tensor. When symmetrizing {{math|''p''}} indices using {{math|''σ''}} to range over permutations of the numbers 1 to {{math|''p''}}, one takes a sum over the permutations of those indices {{math|''α''<sub>''σ''(''i'')</sub>}} for {{math|1=''i'' = 1, 2, 3, ..., ''p''}}, and then divides by the number of permutations:
: <math>
A_{(\alpha_1\alpha_2\cdots\alpha_p)\alpha_{p + 1}\cdots\alpha_q} =
\dfrac{1}{p!} \sum_{\sigma} A_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}\alpha_{p + 1}\cdots\alpha_{q}} \,.
</math>


For example, two symmetrizing indices mean there are two indices to permute and sum over:
: <math>A_{(\alpha\beta)\gamma\cdots} = \dfrac{1}{2!} \left(A_{\alpha\beta\gamma\cdots} + A_{\beta\alpha\gamma\cdots} \right)</math>
while for three symmetrizing indices, there are three indices to sum over and permute:
: <math>
A_{(\alpha\beta\gamma)\delta\cdots} =
\dfrac{1}{3!} \left(A_{\alpha\beta\gamma\delta\cdots}
+ A_{\alpha\gamma\beta\delta\cdots}
+ A_{\beta\alpha\gamma\delta\cdots}
+ A_{\beta\gamma\alpha\delta\cdots}
+ A_{\gamma\alpha\beta\delta\cdots}
+ A_{\gamma\beta\alpha\delta\cdots}
\right)
</math>


The symmetrization is distributive over addition;
: <math>A_{(\alpha} \left(B_{\beta)\gamma\cdots} + C_{\beta)\gamma\cdots} \right) = A_{(\alpha}B_{\beta)\gamma\cdots} + A_{(\alpha}C_{\beta)\gamma\cdots}</math>


Indices are not part of the symmetrization when they are:
* not on the same level, for example;
*: <math>A_{(\alpha}B^{\beta}{}_{\gamma)} = \dfrac{1}{2!} \left(A_{\alpha}B^{\beta}{}_{\gamma} + A_{\gamma}B^{\beta}{}_{\alpha} \right)</math>
* within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
*: <math>A_{(\alpha}B_{|\beta|}{}_{\gamma)} = \dfrac{1}{2!} \left(A_{\alpha}B_{\beta \gamma} + A_{\gamma}B_{\beta \alpha} \right)</math>


Here the {{math|''α''}} and {{math|''γ''}} indices are symmetrized, {{math|''β''}} is not.
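The symmetrization rule can be checked numerically; the following Python sketch uses arbitrary rank-3 components (illustrative values only) and verifies that the results are symmetric in the permuted indices:

```python
import itertools
import random

random.seed(1)
n = 3
# Illustrative rank-3 components A_{alpha beta gamma}.
A = [[[random.random() for _ in range(n)] for _ in range(n)] for _ in range(n)]

# Symmetrize the first two indices: A_{(alpha beta) gamma}.
S2 = [[[(A[a][b][g] + A[b][a][g]) / 2 for g in range(n)]
       for b in range(n)] for a in range(n)]

# Symmetrize all three indices: sum over the 3! permutations, divided by 3!.
perms = list(itertools.permutations(range(3)))
S3 = [[[sum(A[(a, b, g)[p[0]]][(a, b, g)[p[1]]][(a, b, g)[p[2]]] for p in perms) / 6
        for g in range(n)] for b in range(n)] for a in range(n)]

for a, b, g in itertools.product(range(n), repeat=3):
    assert abs(S2[a][b][g] - S2[b][a][g]) < 1e-12   # symmetric in alpha, beta
    assert abs(S3[a][b][g] - S3[b][g][a]) < 1e-12   # symmetric under any permutation
```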


=== Antisymmetric or alternating part of tensor ===

Square brackets around multiple indices denote the ''anti''symmetrized part of the tensor. For {{math|''p''}} antisymmetrizing indices – the sum over the permutations of those indices {{math|''α''<sub>''σ''(''i'')</sub>}} multiplied by the sign of the permutation {{math|sgn(''σ'')}} is taken, then divided by the number of permutations:
: <math>\begin{align}
& A_{[\alpha_1\cdots\alpha_p]\alpha_{p+1}\cdots\alpha_q} \\
={} & \dfrac{1}{p!} \sum_{\sigma}\sgn(\sigma) A_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_{q}} \\
={} & \delta_{\alpha_1 \cdots \alpha_p}^{\beta_1 \cdots \beta_p} A_{\beta_1 \cdots \beta_p\alpha_{p+1}\cdots\alpha_q} \\
\end{align} </math>
where {{math|''δ''{{su|b=''α''<sub>1</sub>⋅⋅⋅''α<sub>p</sub>''|p=''β''<sub>1</sub>⋅⋅⋅''β<sub>p</sub>''|lh=0.8em}}}} is the generalized Kronecker delta of degree {{math|2''p''}}, with scaling as defined below.


For example, two antisymmetrizing indices imply:
: <math>A_{[\alpha\beta]\gamma\cdots} = \dfrac{1}{2!} \left(A_{\alpha\beta\gamma\cdots} - A_{\beta\alpha\gamma\cdots} \right)</math>
while three antisymmetrizing indices imply:
: <math>
A_{[\alpha\beta\gamma]\delta\cdots} =
\dfrac{1}{3!} \left(A_{\alpha\beta\gamma\delta\cdots}
- A_{\alpha\gamma\beta\delta\cdots}
- A_{\beta\alpha\gamma\delta\cdots}
+ A_{\beta\gamma\alpha\delta\cdots}
+ A_{\gamma\alpha\beta\delta\cdots}
- A_{\gamma\beta\alpha\delta\cdots}
\right)
</math>

as for a more specific example, if {{math|''F''}} represents the electromagnetic tensor, then the equation
: <math>
0 = F_{[\alpha\beta,\gamma]} = \dfrac{1}{3!} \left(
F_{\alpha\beta,\gamma}
- F_{\alpha\gamma,\beta}
- F_{\beta\alpha,\gamma}
+ F_{\beta\gamma,\alpha}
+ F_{\gamma\alpha,\beta}
- F_{\gamma\beta,\alpha}
\right) \,
</math>
represents Gauss's law for magnetism and Faraday's law of induction.


As before, the antisymmetrization is distributive over addition;
: <math>
A_{[\alpha} \left(B_{\beta]\gamma\cdots} + C_{\beta]\gamma\cdots} \right) =
A_{[\alpha}B_{\beta]\gamma\cdots} + A_{[\alpha}C_{\beta]\gamma\cdots}
</math>


As with symmetrization, indices are not antisymmetrized when they are:
* not on the same level, for example;
*: <math>
A_{[\alpha}B^{\beta}{}_{\gamma]} =
\dfrac{1}{2!} \left(A_{\alpha}B^{\beta}{}_{\gamma} - A_{\gamma}B^{\beta}{}_{\alpha} \right)
</math>
* within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
*: <math>
A_{[\alpha}B_{|\beta|}{}_{\gamma]} =
\dfrac{1}{2!} \left(A_{\alpha}B_{\beta \gamma} - A_{\gamma}B_{\beta \alpha} \right)
</math>
Here the {{math|''α''}} and {{math|''γ''}} indices are antisymmetrized, {{math|''β''}} is not.


=== Sum of symmetric and antisymmetric parts ===


Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:
: <math>A_{\alpha\beta\gamma\cdots} = A_{(\alpha\beta)\gamma\cdots}+A_{[\alpha\beta]\gamma\cdots}</math>
as can be seen by adding the above expressions for {{math|''A''<sub>(''αβ'')''γ''⋅⋅⋅</sub>}} and {{math|''A''<sub>&#91;''αβ''&#93;''γ''⋅⋅⋅</sub>}}. This does not hold for other than two indices.
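This decomposition is easy to verify numerically on a rank-2 example (an illustrative Python sketch with arbitrary components):

```python
import random

random.seed(2)
n = 4
# Illustrative rank-2 components A_{alpha beta}.
A = [[random.random() for _ in range(n)] for _ in range(n)]

S = [[(A[a][b] + A[b][a]) / 2 for b in range(n)] for a in range(n)]   # A_(alpha beta)
P = [[(A[a][b] - A[b][a]) / 2 for b in range(n)] for a in range(n)]   # A_[alpha beta]

for a in range(n):
    for b in range(n):
        assert abs(S[a][b] + P[a][b] - A[a][b]) < 1e-12   # parts sum to the original
        assert abs(S[a][b] - S[b][a]) < 1e-12             # symmetric part
        assert abs(P[a][b] + P[b][a]) < 1e-12             # antisymmetric part
```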


== Differentiation ==


{{see also|Four-gradient|d'Alembertian|Intrinsic derivative}}

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.<ref>{{cite book | author=G. Woan| title=The Cambridge Handbook of Physics Formulas| url=https://archive.org/details/cambridgehandboo0000woan| url-access=registration| publisher=Cambridge University Press| year=2010 | isbn=978-0-521-57507-2}}</ref><ref> – Mathworld, Wolfram</ref>


=== Partial derivative ===


While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by {{math|''x''{{isup|''μ''}}}}, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of ''differences'' in coordinates, {{math|Δ''x''{{isup|''μ''}}}}, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.


To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable {{math|''x''{{isup|''γ''}}}}, a ''comma'' is placed before an appended lower index of the coordinate variable.
: <math>A_{\alpha\beta\cdots,\gamma} = \dfrac{\partial}{\partial x^\gamma} A_{\alpha\beta\cdots}</math>


This may be repeated (without adding further commas):
: <math>
A_{\alpha_1\alpha_2\cdots\alpha_p\,,\,\alpha_{p+1}\cdots\alpha_q} =
\dfrac{\partial}{\partial x^{\alpha_q}}\cdots\dfrac{\partial}{\partial x^{\alpha_{p+2}}}\dfrac{\partial}{\partial x^{\alpha_{p+1}}} A_{\alpha_1\alpha_2\cdots\alpha_p}.
</math>


These components do ''not'' transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates
: <math>x^{\alpha}{}_{, \gamma} = \delta^{\alpha}_\gamma ,</math>
where {{math|''δ''}} is the Kronecker delta.


=== Covariant derivative ===


The covariant derivative is only defined if a connection is specified, either explicitly or implicitly as the Levi-Civita connection of a metric tensor.
To indicate covariant differentiation of any tensor field, a ''semicolon'' ({{math| ; }}) is placed before an appended lower (covariant) index. Less common alternatives to the semicolon include a ''forward slash'' ({{math| / }})<ref>{{citation | author=T. Frankel|page=298| title = The Geometry of Physics| publisher=Cambridge University Press|edition=3rd|year=2012|isbn=978-1107-602601}}</ref> or in three-dimensional curved space a single vertical bar ({{math|&nbsp;{{!}}&nbsp;}}).<ref>{{cite book |pages=510, §21.5|author1=J.A. Wheeler |author2=C. Misner |author3=K.S. Thorne | title=Gravitation| publisher=W.H. Freeman & Co| year=1973 | isbn=0-7167-0344-0}}</ref>

The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:
: <math>f_{;\beta} = f_{,\beta}</math>
: <math>A^{\alpha}{}_{;\beta} = A^{\alpha}{}_{,\beta} + \Gamma^{\alpha} {}_{\gamma\beta}A^\gamma</math>
: <math>A_{\alpha ;\beta} = A_{\alpha,\beta} - \Gamma^{\gamma} {}_{\alpha\beta}A_\gamma \,,</math>
where {{math|Γ''<sup>α</sup><sub>γβ</sub>''}} are the connection coefficients.


For an arbitrary tensor:<ref>{{citation | author=T. Frankel|page=299| title = The Geometry of Physics| publisher=Cambridge University Press|edition=3rd|year=2012|isbn=978-1107-602601}}</ref>
: <math> \begin{align}
T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s ; \gamma}
& \\
= {} & T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s , \gamma} \\
& + \Gamma^{\alpha_1}{}_{\delta\gamma}T^{\delta\alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} + \cdots + \Gamma^{\alpha_r}{}_{\delta\gamma}T^{\alpha_1 \cdots \alpha_{r-1}\delta}{}_{\beta_1 \cdots \beta_s} \\
& - \Gamma^{\delta}{}_{\beta_1\gamma}T^{\alpha_1 \cdots \alpha_r}{}_{\delta\beta_2 \cdots \beta_s} - \cdots - \Gamma^{\delta}{}_{\beta_s\gamma}T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1}\delta}
\end{align}</math>


An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol {{math|∇<sub>''β''</sub>}}. For the case of a vector field {{math|''A<sup>α</sup>''}}:<ref>{{cite book|title=Relativity|series=Demystified|isbn=0-07-145545-0|year=2006|author=D. McMahon|publisher=McGraw Hill|page=67}}</ref>
: <math>\nabla_\beta A^\alpha = A^\alpha{}_{;\beta} \,.</math>

The covariant formulation of the directional derivative of any tensor field along a vector {{math|''v<sup>γ</sup>''}} may be expressed as its contraction with the covariant derivative, e.g.:
: <math>v^\gamma A_{\alpha ;\gamma} \,.</math>

The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.

This derivative is characterized by the product rule:
: <math>(A^{\alpha}{}_{\beta\cdots}B^{\gamma}{}_{\delta\cdots})_{;\epsilon} = A^{\alpha}{}_{\beta\cdots;\epsilon}B^{\gamma}{}_{\delta\cdots} + A^{\alpha}{}_{\beta\cdots}B^{\gamma}{}_{\delta\cdots;\epsilon} \,.</math>


==== Connection types ====


A connection on the tangent bundle of a differentiable manifold is called an affine connection.


A connection is a metric connection when the covariant derivative of the metric tensor vanishes:
: <math>g_{\mu \nu ; \xi} = 0 \,.</math>
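As a concrete illustration of this condition, the following Python sketch checks metric compatibility for the flat plane in polar coordinates, using the standard closed-form Christoffel symbols {{math|Γ<sup>''r''</sup><sub>''θθ''</sub> {{=}} −''r''}} and {{math|Γ<sup>''θ''</sup><sub>''rθ''</sub> {{=}} Γ<sup>''θ''</sup><sub>''θr''</sub> {{=}} 1/''r''}} (assumed here, not derived):

```python
r = 2.0   # an arbitrary point with r > 0; coordinates: 0 -> r, 1 -> theta
n = 2

g = [[1.0, 0.0], [0.0, r * r]]          # flat-plane metric in polar coordinates
dg = [[[0.0] * n for _ in range(n)] for _ in range(n)]
dg[1][1][0] = 2.0 * r                   # only nonzero partial: d g_{theta theta} / dr

# Standard Christoffel symbols for this metric (assumed closed forms).
Gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -r                         # Gamma^r_{theta theta}
Gamma[1][0][1] = Gamma[1][1][0] = 1.0 / r   # Gamma^theta_{r theta} = Gamma^theta_{theta r}

# g_{mu nu ; xi} = g_{mu nu , xi} - Gamma^d_{mu xi} g_{d nu} - Gamma^d_{nu xi} g_{mu d}
max_dev = 0.0
for mu in range(n):
    for nu in range(n):
        for xi in range(n):
            cov = dg[mu][nu][xi]
            cov -= sum(Gamma[d][mu][xi] * g[d][nu] for d in range(n))
            cov -= sum(Gamma[d][nu][xi] * g[mu][d] for d in range(n))
            max_dev = max(max_dev, abs(cov))

assert max_dev < 1e-12   # metric compatibility: every component vanishes
```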


An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: {{math|1=''T''{{sup|''α''}}{{sub|''βγ''}} = 0}}) is a Levi-Civita connection.


The {{math|Γ{{sup|''α''}}{{sub|''βγ''}}}} for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.


=== Exterior derivative ===


The exterior derivative of a totally antisymmetric type {{math|(0, ''s'')}} tensor field with components {{math|''A''{{sub|''α''{{sub|1}}⋅⋅⋅''α''{{sub|''s''}}}}}} (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:<ref>{{cite book |author=R. Penrose| title=The Road to Reality| publisher= Vintage books| year=2007 | isbn=978-0-679-77631-4}}</ref>{{rp|232–233}}


: <math>(\mathrm{d}A)_{\gamma\alpha_1\cdots\alpha_s} = \frac{\partial}{\partial x^{[\gamma}} A_{\alpha_1\cdots\alpha_s]} = A_{[\alpha_1\cdots\alpha_s,\gamma]} .</math>

This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.
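For a 1-form with linear components the partial derivatives are exact constants, so the antisymmetry of the exterior derivative can be checked directly (an illustrative Python sketch, with arbitrary coefficients):

```python
import random

random.seed(3)
n = 3
# A 1-form with linear components A_alpha = M_{alpha beta} x^beta, so the
# partial derivatives are exact: A_{alpha,gamma} = M[alpha][gamma].
M = [[random.random() for _ in range(n)] for _ in range(n)]

# For s = 1 the antisymmetrized derivative is
# (dA)_{gamma alpha} = (A_{alpha,gamma} - A_{gamma,alpha}) / 2.
dA = [[(M[a][g] - M[g][a]) / 2 for a in range(n)] for g in range(n)]

for g in range(n):
    for a in range(n):
        assert abs(dA[g][a] + dA[a][g]) < 1e-12   # totally antisymmetric
```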

=== Lie derivative ===

The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type {{math|(''r'', ''s'')}} tensor field {{math|''T''}} along (the flow of) a contravariant vector field {{math|''X''{{isup|''ρ''}}}} may be expressed using a coordinate basis as<ref>{{citation | last1=Bishop|first1=R.L.|last2=Goldberg|first2=S.I.| year=1968| title = Tensor Analysis on Manifolds|page=130}}</ref>
: <math> \begin{align}
(\mathcal{L}_X T)^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s}
& \\
= {} & X^\gamma T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s , \gamma} \\
& - X^{\alpha_1}{}_{,\gamma} T^{\gamma \alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} - \cdots - X^{\alpha_r}{}_{,\gamma} T^{\alpha_1 \cdots \alpha_{r-1} \gamma}{}_{\beta_1 \cdots \beta_s} \\
& + X^{\gamma}{}_{,\beta_1} T^{\alpha_1 \cdots \alpha_r}{}_{\gamma \beta_2 \cdots \beta_s} + \cdots + X^{\gamma}{}_{,\beta_s} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1} \gamma}
\end{align}</math>


This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:
: <math>(\mathcal{L}_X X)^{\alpha} = X^\gamma X^\alpha{}_{,\gamma} - X^\alpha{}_{,\gamma} X^\gamma = 0 \,.</math>
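Both properties can be checked exactly for linear vector fields, whose partial derivatives are constant matrices; for such fields the Lie derivative reduces to a matrix commutator (an illustrative Python sketch):

```python
import random

random.seed(4)
n = 3

def rand_matrix():
    return [[random.random() for _ in range(n)] for _ in range(n)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

M, N = rand_matrix(), rand_matrix()
x = [random.random() for _ in range(n)]

# Linear vector fields X^a = M^a_g x^g and T^a = N^a_g x^g have exact
# partial derivatives X^a_,g = M[a][g] and T^a_,g = N[a][g].
X, T = matvec(M, x), matvec(N, x)

# (L_X T)^a = X^g T^a_,g - X^a_,g T^g  (the r = 1, s = 0 case of the formula)
LXT = [sum(X[g] * N[a][g] for g in range(n)) - sum(M[a][g] * T[g] for g in range(n))
       for a in range(n)]

# For linear fields this equals the matrix commutator (NM - MN) applied to x:
NM_minus_MN = [[sum(N[i][k] * M[k][j] - M[i][k] * N[k][j] for k in range(n))
                for j in range(n)] for i in range(n)]
comm = matvec(NM_minus_MN, x)
assert all(abs(LXT[a] - comm[a]) < 1e-12 for a in range(n))

# The Lie derivative of X along itself vanishes identically:
LXX = [sum(X[g] * M[a][g] for g in range(n)) - sum(M[a][g] * X[g] for g in range(n))
       for a in range(n)]
assert all(abs(v) < 1e-12 for v in LXX)
```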


== Notable tensors ==

=== Kronecker delta ===

The Kronecker delta is like the identity matrix when multiplied and contracted:
: <math>\begin{align}
\delta^{\alpha}_{\beta} \, A^{\beta} &= A^{\alpha} \\
\delta^{\mu}_{\nu} \, B_{\mu} &= B_{\nu} .
\end{align}</math>


The components {{math|''δ''{{su|b=''β''|p=''α''|lh=1em}}}} are the same in any basis and form an invariant tensor of type {{math|(1, 1)}}, i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.<ref>{{citation | last1=Bishop|first1=R.L.|last2=Goldberg|first2=S.I.| year=1968| title = Tensor Analysis on Manifolds|page=85}}</ref>
Its trace is the dimensionality of the space; for example, in four-dimensional spacetime,
: <math>\delta^{\rho}_{\rho} = \delta^{0}_{0} + \delta^{1}_{1} + \delta^{2}_{2} + \delta^{3}_{3} = 4 .</math>


The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree {{math|2''p''}} may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of {{math|''p''!}} on the right):
: <math>\delta^{\alpha_1 \cdots \alpha_p}_{\beta_1 \cdots \beta_p} = \delta^{[\alpha_1}_{\beta_1} \cdots \delta^{\alpha_p]}_{\beta_p} ,</math>
and acts as an antisymmetrizer on {{math|''p''}} indices:
: <math>\delta^{\alpha_1 \cdots \alpha_p}_{\beta_1 \cdots \beta_p} \, A^{\beta_1 \cdots \beta_p} = A^{[\alpha_1 \cdots \alpha_p]} .</math>
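The antisymmetrizing action can be checked numerically for {{math|1=''p'' = 2}} (an illustrative Python sketch, using the {{math|1/''p''!}} scaling defined above):

```python
import random

random.seed(5)
n = 4
# Illustrative rank-2 components A^{b1 b2}.
A = [[random.random() for _ in range(n)] for _ in range(n)]

def kd(i, j):
    return 1.0 if i == j else 0.0

# Generalized Kronecker delta of degree 4 (p = 2), with the 1/p! scaling:
# delta^{a1 a2}_{b1 b2} = (kd(a1,b1) kd(a2,b2) - kd(a2,b1) kd(a1,b2)) / 2!
def gdelta(a1, a2, b1, b2):
    return (kd(a1, b1) * kd(a2, b2) - kd(a2, b1) * kd(a1, b2)) / 2.0

for a1 in range(n):
    for a2 in range(n):
        contracted = sum(gdelta(a1, a2, b1, b2) * A[b1][b2]
                         for b1 in range(n) for b2 in range(n))
        antisym = (A[a1][a2] - A[a2][a1]) / 2.0     # A^{[a1 a2]}
        assert abs(contracted - antisym) < 1e-12

# One component kept for reference: the (0, 1) contraction.
check = sum(gdelta(0, 1, b1, b2) * A[b1][b2] for b1 in range(n) for b2 in range(n))
```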


=== Metric tensor ===

The metric tensor {{math|''g''{{sub|''αβ''}}}} is used for lowering indices and gives the length of any spacelike curve
: <math>\text{length} = \int^{y_2}_{y_1} \sqrt{ g_{\alpha \beta} \frac{d x^{\alpha}}{d \gamma} \frac{d x^{\beta}}{d \gamma} } \, d \gamma \,,</math>
where {{math|''γ''}} is any smooth strictly monotone parameterization of the path. It also gives the duration of any timelike curve
: <math>\text{duration} = \int^{t_2}_{t_1} \sqrt{ \frac{-1}{c^2} g_{\alpha \beta} \frac{d x^{\alpha}}{d \gamma} \frac{d x^{\beta}}{d \gamma} } \, d \gamma \,,</math>
where {{math|''γ''}} is any smooth strictly monotone parameterization of the trajectory. See also ''line element''.

The inverse {{math|''g''{{sup|''αβ''}}}} of the metric tensor is another important tensor, used for raising indices:
: <math> g^{\alpha \beta} g_{\beta \gamma} = \delta^{\alpha}_{\gamma} \,.</math>

=== Torsion tensor ===

An affine connection has a torsion tensor {{math|''T''{{sup|''α''}}{{sub|''βγ''}}}}:
: <math> T^\alpha{}_{\beta\gamma} = \Gamma^\alpha{}_{\beta\gamma} - \Gamma^\alpha{}_{\gamma\beta} - \gamma^\alpha{}_{\beta\gamma} ,</math>
where {{math|''&gamma;''{{sup|''&alpha;''}}{{sub|''&beta;&gamma;''}}}} are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.

For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations
: <math> \Gamma^\alpha{}_{\beta\gamma} = \Gamma^\alpha{}_{\gamma\beta}.</math>

=== Riemann curvature tensor ===


If this tensor is defined as If this tensor is defined as
:<math>R^\rho{}_{\sigma\mu\nu} = \Gamma^\rho{}_{\nu\sigma,\mu} : <math>R^\rho{}_{\sigma\mu\nu} = \Gamma^\rho{}_{\nu\sigma,\mu}
- \Gamma^\rho{}_{\mu\sigma,\nu} - \Gamma^\rho{}_{\mu\sigma,\nu}
+ \Gamma^\rho{}_{\mu\lambda}\Gamma^\lambda{}_{\nu\sigma} + \Gamma^\rho{}_{\mu\lambda}\Gamma^\lambda{}_{\nu\sigma}
- \Gamma^\rho{}_{\nu\lambda}\Gamma^\lambda{}_{\mu\sigma} \,, - \Gamma^\rho{}_{\nu\lambda}\Gamma^\lambda{}_{\mu\sigma} \,,
</math> </math>

then it is the ] of the covariant derivative with itself:<ref>{{cite book |author1=Synge J.L. |author2=Schild A. |publisher=first Dover Publications 1978 edition |title=Tensor Calculus |pages=83, p. 107|year= 1949}}</ref><ref>{{cite book |author=P. A. M. Dirac|pages=20–21| title=General Theory of Relativity}}</ref>
: <math>A_{\nu ; \rho \sigma} - A_{\nu ; \sigma \rho} = A_{\beta} R^{\beta}{}_{\nu \rho \sigma} \,,</math>
since the connection is torsionless, which means that the torsion tensor vanishes.


This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:
: <math>\begin{align}
T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s ; \gamma \delta}&
- T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s ; \delta \gamma} \\
&\!\!\!\!\!\!\!\!\!\!= - R^{\alpha_1}{}_{\rho \gamma \delta} T^{\rho \alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s}
- \cdots
- R^{\alpha_r}{}_{\rho \gamma \delta} T^{\alpha_1 \cdots \alpha_{r-1} \rho}{}_{\beta_1 \cdots \beta_s} \\
&+ R^\sigma{}_{\beta_1 \gamma \delta} T^{\alpha_1 \cdots \alpha_r}{}_{\sigma \beta_2 \cdots \beta_s}
+ \cdots
+ R^\sigma{}_{\beta_s \gamma \delta} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1} \sigma} \,,
\end{align}</math>
which are often referred to as the ''Ricci identities''.<ref>{{cite book| last = Lovelock| first = David|author2=Hanno Rund | year= 1989|title = Tensors, Differential Forms, and Variational Principles|page=84}}</ref>


=== ] ===

The metric tensor {{math|''g''{{sub|''αβ''}}}} is used for lowering indices and gives the length of any ] curve
: <math>\text{length} = \int^{y_2}_{y_1} \sqrt{ g_{\alpha \beta} \frac{d x^{\alpha}}{d \gamma} \frac{d x^{\beta}}{d \gamma} } \, d \gamma \,,</math>
where {{math|''γ''}} is any ] ] ] of the path. It also gives the duration of any ] curve
: <math>\text{duration} = \int^{t_2}_{t_1} \sqrt{ \frac{-1}{c^2} g_{\alpha \beta} \frac{d x^{\alpha}}{d \gamma} \frac{d x^{\beta}}{d \gamma} } \, d \gamma \,,</math>
where {{math|''γ''}} is any smooth strictly monotone parameterization of the trajectory. See also '']''.

The ] {{math|''g''{{sup|''αβ''}}}} of the metric tensor is another important tensor, used for raising indices:
: <math> g^{\alpha \beta} g_{\beta \gamma} = \delta^{\alpha}_{\gamma} \,.</math>
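For a concrete check, the defining relation of the inverse metric and the lowering/raising round trip can be verified with NumPy; the Minkowski metric below is only an illustrative choice, not mandated by the text:

```python
import numpy as np

# An illustrative metric: the Minkowski metric in (-, +, +, +) signature.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# The inverse metric g^{ab} satisfies g^{ab} g_{bc} = delta^a_c.
g_inv = np.linalg.inv(g)
print(np.allclose(g_inv @ g, np.eye(4)))  # True

# Lowering an index of a vector A^a, then raising it again, recovers A^a.
A_up = np.array([2.0, 1.0, 0.0, 3.0])
A_down = np.einsum('ab,b->a', g, A_up)  # A_a = g_{ab} A^b
print(np.allclose(np.einsum('ab,b->a', g_inv, A_down), A_up))  # True
```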

== See also ==
{{div col|colwidth=20em}}
* ]
* ]
* ]
**]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
{{div col end}}

== Notes ==

{{notelist}}


==References==


==Sources==

* {{citation |last1=Bishop |first1=R.L. |author1-link=Richard L. Bishop |last2=Goldberg|first2=S.I. |title=Tensor Analysis on Manifolds |publisher=The Macmillan Company |year=1968 |edition=First Dover 1980 |isbn=0-486-64039-6|url-access=registration |url=https://archive.org/details/tensoranalysison00bish }}
* {{cite book
| last = Danielson | first = Donald A. | author-link1 = Donald A. Danielson
| title = Vectors and Tensors in Engineering and Physics
| edition = 2/e
| isbn = 978-0-8133-4080-7
}}
* {{cite book
| last = Dimitrienko | first = Yuriy | title = Tensor Analysis and Nonlinear Tensor Functions
| year= 2002
| publisher = Kluwer Academic Publishers (Springer)
| isbn = 1-4020-1015-X
}}
* {{cite book
| last = Lovelock | first = David
| author2 = Hanno Rund
| title = Tensors, Differential Forms, and Variational Principles
| year= 1989
| orig-year = 1975
}}
* {{citation
| author = C. Møller
| title = The Theory of Relativity
| publisher=Oxford University Press
| edition=3rd
| year=1952
| url=https://archive.org/details/theoryofrelativi029229mbp
}}
* {{cite book
| author1=Synge J.L.
| author2=Schild A.
| title=Tensor Calculus
| publisher=first Dover Publications 1978 edition
| year=1949
| isbn=978-0-486-63612-2
| url-access=registration |url=https://archive.org/details/tensorcalculus00syng
}}
* {{citation
| author=J.R. Tyldesley
| title = An introduction to Tensor Analysis: For Engineers and Applied Scientists
| publisher=Longman
| year=1975
| isbn=0-582-44355-5
}}
* {{citation
| author=D.C. Kay
| title = Tensor Calculus
| publisher=Schaum's Outlines, McGraw Hill (USA)
| year=1988
| isbn=0-07-033484-6
}}
* {{citation
| author=T. Frankel
| title = The Geometry of Physics
| publisher=Cambridge University Press
| edition=3rd
|year=2012
|isbn=978-1107-602601
}}

== Further reading ==
*{{cite book | last = Dimitrienko | first = Yuriy | title = Tensor Analysis and Nonlinear Tensor Functions | year= 2002 | publisher = Springer | url = https://books.google.com/books?as_isbn=140201015X | isbn = 1-4020-1015-X
}}
*{{cite book | last = Sokolnikoff | first = Ivan S | title = Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua| url = https://archive.org/details/tensoranalysisth0000soko | url-access = registration | year= 1951 | publisher = Wiley| isbn = 0471810525}}
*{{cite book |first=A.I. |last=Borisenko |first2=I.E. |last2=Tarapov | title = Vector and Tensor Analysis with Applications| year= 1979| publisher = Dover |edition=2nd | isbn = 0486638332}}
*{{cite book | last = Itskov | first = Mikhail | title = Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics | year= 2015| publisher = Springer |edition=2nd | isbn = 9783319163420}}
*{{cite book|first=J. R. |last=Tyldesley| title=An introduction to Tensor Analysis: For Engineers and Applied Scientists| publisher=Longman| year=1973 | isbn=0-582-44355-5}}
*{{cite book|first=D. C. |last=Kay| title=Tensor Calculus| publisher=McGraw Hill |series=Schaum’s Outlines | year=1988 | isbn=0-07-033484-6}}
*{{cite book|first=P. |last=Grinfeld| title=Introduction to Tensor Analysis and the Calculus of Moving Surfaces | publisher=Springer| year=2014 | isbn=978-1-4614-7866-9}}

== External links ==
*{{cite web |last1=Dullemond|first1=Kees|last2=Peeters|first2=Kasper|title=Introduction to Tensor Calculus|date=1991–2010|url=http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf|access-date=17 May 2018}}


{{Differentiable computing}}
{{Tensors}}
{{Calculus topics}}
{{Analysis-footer}}


]

Latest revision as of 09:18, 17 December 2024

Tensor index notation for tensor-based calculations "Tensor index notation" redirects here. For a summary of tensors in general, see Glossary of tensor theory.

In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), also known as tensor calculus or tensor analysis, developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861.

A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.

A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor.

For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.

Applications

Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.

Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus:

In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.

Notation for indices

See also: Index notation

Basis-related distinctions

Space and time coordinates

Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:

  • The lowercase Latin alphabet a, b, c, ... is used to indicate restriction to 3-dimensional Euclidean space, which take values 1, 2, 3 for the spatial components; and the time-like element, indicated by 0, is shown separately.
  • The lowercase Greek alphabet α, β, γ, ... is used for 4-dimensional spacetime, which typically take values 0 for time components and 1, 2, 3 for the spatial components.

Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.

Coordinate and index notation

The author(s) will usually make it clear whether a subscript is intended as an index or as a label.

For example, in 3-D Euclidean space, using Cartesian coordinates, the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t.

Reference to basis

Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in:

$X_{\hat{\phi}}\,, Y_{\bar{\lambda}}\,, Z_{\tilde{\eta}}\,, T_{\mu'}$

to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:

$v^{\mu'} = v^{\nu} L_{\nu}{}^{\mu'}.$

This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.

Upper and lower indices

Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics.

In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as $a_{ij}b_{jk}$ for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.

Covariant tensor components

A lower index (subscript) indicates covariance of the components with respect to that index:

$A_{\alpha\beta\gamma\cdots}$

Contravariant tensor components

An upper index (superscript) indicates contravariance of the components with respect to that index:

$A^{\alpha\beta\gamma\cdots}$

Mixed-variance tensor components

A tensor may have both upper and lower indices:

$A_{\alpha}{}^{\beta}{}_{\gamma}{}^{\delta\cdots}.$

Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta).

Tensor type and degree

The number of each upper and lower indices of a tensor gives its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor.

The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q.

Summation convention

The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:

$A_{\alpha}B^{\alpha} \equiv \sum_{\alpha} A_{\alpha}B^{\alpha} \quad\text{or}\quad A^{\alpha}B_{\alpha} \equiv \sum_{\alpha} A^{\alpha}B_{\alpha}\,.$

The operation implied by such a summation is called tensor contraction:

$A_{\alpha}B^{\beta} \rightarrow A_{\alpha}B^{\alpha} \equiv \sum_{\alpha} A_{\alpha}B^{\alpha}\,.$

This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:

$A_{\alpha}{}^{\gamma}B^{\alpha}C_{\gamma}{}^{\beta} \equiv \sum_{\alpha}\sum_{\gamma} A_{\alpha}{}^{\gamma}B^{\alpha}C_{\gamma}{}^{\beta}\,.$
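The double contraction above can be sketched numerically; in the NumPy example below the arrays A, B, C are arbitrary illustrative components (not from the source), and np.einsum performs the implied sums:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))  # components A_alpha^gamma
B = rng.standard_normal(n)       # components B^alpha
C = rng.standard_normal((n, n))  # components C_gamma^beta

# A_alpha^gamma B^alpha C_gamma^beta: two implied sums, over alpha and gamma.
result = np.einsum('ag,a,gb->b', A, B, C)

# Writing the summation convention out as explicit loops gives the same values.
explicit = np.zeros(n)
for b in range(n):
    for a in range(n):
        for gm in range(n):
            explicit[b] += A[a, gm] * B[a] * C[gm, b]
print(np.allclose(result, explicit))  # True
```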

Other combinations of repeated indices within a term are considered to be ill-formed, such as

$A_{\alpha\alpha}{}^{\gamma}$ (both occurrences of $\alpha$ are lower; $A_{\alpha}{}^{\alpha\gamma}$ would be fine)
$A_{\alpha\gamma}{}^{\gamma}B^{\alpha}C_{\gamma}{}^{\beta}$ ($\gamma$ occurs twice as a lower index; $A_{\alpha\gamma}{}^{\gamma}B^{\alpha}$ or $A_{\alpha\delta}{}^{\gamma}B^{\alpha}C_{\gamma}{}^{\beta}$ would be fine).

The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.

Multi-index notation

If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:

$A_{i_1\cdots i_n} B^{i_1\cdots i_n j_1\cdots j_m} C_{j_1\cdots j_m} \equiv A_I B^{IJ} C_J,$

where $I = i_1 i_2 \cdots i_n$ and $J = j_1 j_2 \cdots j_m$.

Sequential summation

A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices:

$A_{|\alpha\beta\gamma|\cdots}B^{\alpha\beta\gamma\cdots} = A_{\alpha\beta\gamma\cdots}B^{|\alpha\beta\gamma|\cdots} = \sum_{\alpha<\beta<\gamma} A_{\alpha\beta\gamma\cdots}B^{\alpha\beta\gamma\cdots}$

means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example:

$A_{|\alpha\beta\gamma|}{}^{|\delta\epsilon\cdots\lambda|}\,B^{\alpha\beta\gamma}{}_{\delta\epsilon\cdots\lambda|\mu\nu\cdots\zeta|}\,C^{\mu\nu\cdots\zeta} = \sum_{\alpha<\beta<\gamma}\ \sum_{\delta<\epsilon<\cdots<\lambda}\ \sum_{\mu<\nu<\cdots<\zeta} A_{\alpha\beta\gamma}{}^{\delta\epsilon\cdots\lambda}\,B^{\alpha\beta\gamma}{}_{\delta\epsilon\cdots\lambda\mu\nu\cdots\zeta}\,C^{\mu\nu\cdots\zeta}$

When using multi-index notation, an underarrow is placed underneath the block of indices:

$A_{\underset{\rightharpoondown}{P}}{}^{\underset{\rightharpoondown}{Q}}\,B^{P}{}_{Q\underset{\rightharpoondown}{R}}\,C^{R} = \sum_{\underset{\rightharpoondown}{P}}\sum_{\underset{\rightharpoondown}{Q}}\sum_{\underset{\rightharpoondown}{R}} A_{P}{}^{Q}\,B^{P}{}_{QR}\,C^{R}$

where

$\underset{\rightharpoondown}{P} = |\alpha\beta\gamma|\,,\quad \underset{\rightharpoondown}{Q} = |\delta\epsilon\cdots\lambda|\,,\quad \underset{\rightharpoondown}{R} = |\mu\nu\cdots\zeta|$

Raising and lowering indices

By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

$B^{\gamma}{}_{\beta\cdots} = g^{\gamma\alpha} A_{\alpha\beta\cdots} \quad\text{and}\quad A_{\alpha\beta\cdots} = g_{\alpha\gamma} B^{\gamma}{}_{\beta\cdots}$

The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.

Correlations between index positions and invariance

This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.

The Kronecker delta is used, see also below.

Basis transformation | Component transformation | Invariance
Covector, covariant vector, 1-form: $\omega^{\bar\alpha} = L_{\beta}{}^{\bar\alpha}\,\omega^{\beta}$ | $a_{\bar\alpha} = a_{\gamma}L^{\gamma}{}_{\bar\alpha}$ | $a_{\bar\alpha}\omega^{\bar\alpha} = a_{\gamma}L^{\gamma}{}_{\bar\alpha}L_{\beta}{}^{\bar\alpha}\omega^{\beta} = a_{\gamma}\delta^{\gamma}{}_{\beta}\omega^{\beta} = a_{\beta}\omega^{\beta}$
Vector, contravariant vector: $e_{\bar\alpha} = e_{\gamma}L_{\bar\alpha}{}^{\gamma}$ | $u^{\bar\alpha} = L^{\bar\alpha}{}_{\beta}u^{\beta}$ | $e_{\bar\alpha}u^{\bar\alpha} = e_{\gamma}L_{\bar\alpha}{}^{\gamma}L^{\bar\alpha}{}_{\beta}u^{\beta} = e_{\gamma}\delta^{\gamma}{}_{\beta}u^{\beta} = e_{\gamma}u^{\gamma}$

General outlines for index notation and operations

Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if

$A^{\alpha}{}_{\beta\gamma} = B^{\alpha}{}_{\beta\gamma}$

for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).

Free and dummy indices

Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices.

A tensor equation represents many ordinary (real-valued) equations

The components of tensors (like $A^\alpha$, $B_\beta$ etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents $m^n$ equations: each index takes on every value of a specific set of values.

For instance, if

$A^{\alpha}B_{\beta}{}^{\gamma}C_{\gamma\delta} + D^{\alpha}{}_{\beta}E_{\delta} = T^{\alpha}{}_{\beta\delta}$

is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4³ = 64 equations. Three of these are:

$A^0 B_1{}^0 C_{00} + A^0 B_1{}^1 C_{10} + A^0 B_1{}^2 C_{20} + A^0 B_1{}^3 C_{30} + D^0{}_1 E_0 = T^0{}_{10}$
$A^1 B_0{}^0 C_{00} + A^1 B_0{}^1 C_{10} + A^1 B_0{}^2 C_{20} + A^1 B_0{}^3 C_{30} + D^1{}_0 E_0 = T^1{}_{00}$
$A^1 B_2{}^0 C_{02} + A^1 B_2{}^1 C_{12} + A^1 B_2{}^2 C_{22} + A^1 B_2{}^3 C_{32} + D^1{}_2 E_2 = T^1{}_{22}$

This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.

Indices are replaceable labels

Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:

$A^{\alpha}B_{\beta}{}^{\gamma}C_{\gamma\delta} + D^{\alpha}{}_{\beta}E_{\delta} \rightarrow A^{\lambda}B_{\beta}{}^{\mu}C_{\mu\delta} + D^{\lambda}{}_{\beta}E_{\delta}\,,$

whereas an erroneous change is:

$A^{\alpha}B_{\beta}{}^{\gamma}C_{\gamma\delta} + D^{\alpha}{}_{\beta}E_{\delta} \nrightarrow A^{\lambda}B_{\beta}{}^{\gamma}C_{\mu\delta} + D^{\alpha}{}_{\beta}E_{\delta}\,.$

In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next.

Indices are the same in every term

The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example:

$A^{\alpha}B_{\beta}{}^{\gamma}C_{\gamma\delta} + D^{\alpha}{}_{\delta}E_{\beta} = T^{\alpha}{}_{\beta\delta}$

as for an erroneous expression:

$A^{\alpha}B_{\beta}{}^{\gamma}C_{\gamma\delta} + D_{\alpha\beta}{}^{\gamma}E^{\delta}.$

In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent.

Brackets and punctuation used once where implied

When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.

If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets.

Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices.

Symmetric and antisymmetric parts

Symmetric part of tensor

Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations:

$A_{(\alpha_1\alpha_2\cdots\alpha_p)\alpha_{p+1}\cdots\alpha_q} = \dfrac{1}{p!}\sum_{\sigma} A_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_q}\,.$

For example, two symmetrizing indices mean there are two indices to permute and sum over:

$A_{(\alpha\beta)\gamma\cdots} = \dfrac{1}{2!}\left(A_{\alpha\beta\gamma\cdots} + A_{\beta\alpha\gamma\cdots}\right)$

while for three symmetrizing indices, there are three indices to sum over and permute:

$A_{(\alpha\beta\gamma)\delta\cdots} = \dfrac{1}{3!}\left(A_{\alpha\beta\gamma\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} + A_{\alpha\gamma\beta\delta\cdots} + A_{\gamma\beta\alpha\delta\cdots} + A_{\beta\alpha\gamma\delta\cdots}\right)$
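The permutation sums above can be implemented directly; the helper below (an illustrative sketch, not from the source) symmetrizes the first p indices of a NumPy array and checks that the result is invariant under swaps of those indices:

```python
import numpy as np
from itertools import permutations
from math import factorial

def symmetrize(T, p):
    """Symmetrize T over its first p indices: sum permutations, divide by p!."""
    total = np.zeros_like(T)
    rest = tuple(range(p, T.ndim))  # remaining indices are left untouched
    for perm in permutations(range(p)):
        total += np.transpose(T, perm + rest)
    return total / factorial(p)

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))
S = symmetrize(A, 3)  # components A_{(alpha beta gamma)}

# Invariant under swapping any pair of the symmetrized indices.
print(np.allclose(S, S.swapaxes(0, 1)) and np.allclose(S, S.swapaxes(1, 2)))  # True
```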

The symmetrization is distributive over addition;

A ( α ( B β ) γ + C β ) γ ) = A ( α B β ) γ + A ( α C β ) γ {\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }}

Indices are not part of the symmetrization when they are:

  • not on the same level, for example:
    A ( α B β γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B^{\beta }{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }+A_{\gamma }B^{\beta }{}_{\alpha }\right)}
  • within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example:
    A ( α B | β | γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B_{|\beta |}{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }+A_{\gamma }B_{\beta \alpha }\right)}

Here the α and γ indices are symmetrized, β is not.

Antisymmetric or alternating part of tensor

Square brackets, [ ], around multiple indices denote the antisymmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations:

A [ α 1 ⋯ α p ] α p + 1 ⋯ α q = 1 p ! ∑ σ sgn ( σ ) A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q = δ α 1 ⋯ α p β 1 ⋯ β p A β 1 ⋯ β p α p + 1 ⋯ α q {\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}}

where δβ1⋯βp α1⋯αp is the generalized Kronecker delta of degree 2p, with scaling as defined below.

For example, two antisymmetrizing indices imply:

A [ α β ] γ ⋯ = 1 2 ! ( A α β γ ⋯ − A β α γ ⋯ ) {\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)}

while three antisymmetrizing indices imply:

A [ α β γ ] δ ⋯ = 1 3 ! ( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ − A α γ β δ ⋯ − A γ β α δ ⋯ − A β α γ δ ⋯ ) {\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)}

As a more specific example, if Fαβ represents the electromagnetic tensor, then the equation

0 = F [ α β , γ ] = 1 3 ! ( F α β , γ + F γ α , β + F β γ , α − F β α , γ − F α γ , β − F γ β , α ) {\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,}

represents Gauss's law for magnetism and Faraday's law of induction.
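A numerical sketch (not part of the article, assuming NumPy) of the square-bracket operation, including the permutation signature; antisymmetrizing a tensor that is symmetric in the relevant pair gives zero:

```python
import itertools
import math

import numpy as np

def perm_sign(perm):
    """sgn(sigma) via the parity of the inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def antisymmetrize(T, p):
    """A_[a1...ap]rest: signed average of T over permutations of its first p axes."""
    rest = tuple(range(p, T.ndim))
    total = np.zeros(T.shape)
    for perm in itertools.permutations(range(p)):
        total += perm_sign(perm) * np.transpose(T, perm + rest)
    return total / math.factorial(p)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4, 4))
alt = antisymmetrize(A, 2)                        # A_{[alpha beta] gamma}
two_term = 0.5 * (A - np.transpose(A, (1, 0, 2)))
sym_killed = antisymmetrize(A + np.transpose(A, (1, 0, 2)), 2)
```

The two-index case reduces to the explicit two-term difference, and the symmetrized input is annihilated.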

As before, the antisymmetrization is distributive over addition;

A [ α ( B β ] γ ⋯ + C β ] γ ⋯ ) = A [ α B β ] γ ⋯ + A [ α C β ] γ ⋯ {\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }}

As with symmetrization, indices are not antisymmetrized when they are:

  • not on the same level, for example:
    A [ α B β γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B^{\beta }{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }-A_{\gamma }B^{\beta }{}_{\alpha }\right)}
  • within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example:
    A [ α B | β | γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B_{|\beta |\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }-A_{\gamma }B_{\beta \alpha }\right)}

Here the α and γ indices are antisymmetrized, β is not.

Sum of symmetric and antisymmetric parts

Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:

A α β γ ⋯ = A ( α β ) γ ⋯ + A [ α β ] γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }}

as can be seen by adding the above expressions for A(αβ)γ⋅⋅⋅ and A[αβ]γ⋅⋅⋅. This decomposition does not hold in general for more than two indices.
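The two-index decomposition can be verified directly; a small NumPy check (illustrative only, not from the article):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))              # A_{alpha beta gamma}
sym = 0.5 * (A + np.transpose(A, (1, 0, 2)))    # symmetric part A_{(alpha beta) gamma}
alt = 0.5 * (A - np.transpose(A, (1, 0, 2)))    # antisymmetric part A_{[alpha beta] gamma}
recombined = sym + alt                          # should reproduce A exactly
```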

Differentiation

See also: Four-gradient, d'Alembertian, and Intrinsic derivative

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.

Partial derivative

While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by x, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δx, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.

To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable x, a comma is placed before an appended lower index of the coordinate variable.

A α β , γ = x γ A α β {\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }}

This may be repeated (without adding further commas):

A α 1 α 2 α p , α p + 1 α q = x α q x α p + 2 x α p + 1 A α 1 α 2 α p . {\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.}

These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates

x α , γ = δ γ α , {\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },}

where δα γ is the Kronecker delta.
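On a grid, the comma derivative can be approximated by finite differences; a sketch (hypothetical grid and spacing, not from the article) checking x^α,γ = δ^α γ for linear coordinates:

```python
import numpy as np

n, h = 50, 0.1
x0, x1 = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")
coords = np.stack([x0, x1])  # coords[alpha] is the coordinate field x^alpha

# d[alpha, gamma] approximates x^alpha_{,gamma} at every grid point
d = np.stack([np.stack(np.gradient(coords[a], h)) for a in range(2)])
```

For these linear coordinate fields the finite differences are exact, so `d[a, g]` equals the Kronecker delta everywhere on the grid.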

Covariant derivative

The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash ( / ) or in three-dimensional curved space a single vertical bar ( | ).

The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:

f ; β = f , β {\displaystyle f_{;\beta }=f_{,\beta }}
A α ; β = A α , β + Γ α γ β A γ {\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }}
A α ; β = A α , β Γ γ α β A γ , {\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,}

where Γαγβ are the connection coefficients.
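Given component arrays at a point (randomly generated here purely for illustration; the partial derivatives and connection coefficients are assumed known), the contravariant formula is a single contraction:

```python
import numpy as np

n = 3
rng = np.random.default_rng(3)
A = rng.standard_normal(n)              # A^alpha
dA = rng.standard_normal((n, n))        # dA[a, b] = A^a_{,b} (partial derivatives)
Gamma = rng.standard_normal((n, n, n))  # Gamma[a, g, b] = connection coefficient

# A^a_{;b} = A^a_{,b} + Gamma^a_{g b} A^g
cov = dA + np.einsum("agb,g->ab", Gamma, A)
```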

For an arbitrary tensor:

T α 1 α r β 1 β s ; γ = T α 1 α r β 1 β s , γ + Γ α 1 δ γ T δ α 2 α r β 1 β s + + Γ α r δ γ T α 1 α r 1 δ β 1 β s Γ δ β 1 γ T α 1 α r δ β 2 β s Γ δ β s γ T α 1 α r β 1 β s 1 δ . {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}}

An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇β. For the case of a vector field A:

β A α = A α ; β . {\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.}

The covariant formulation of the directional derivative of any tensor field along a vector v may be expressed as its contraction with the covariant derivative, e.g.:

v γ A α ; γ . {\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.}

The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.

This derivative is characterized by the product rule:

( A α β B γ δ ) ; ϵ = A α β ; ϵ B γ δ + A α β B γ δ ; ϵ . {\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.}

Connection types

A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection.

A connection is a metric connection when the covariant derivative of the metric tensor vanishes:

g μ ν ; ξ = 0 . {\displaystyle g_{\mu \nu ;\xi }=0\,.}

An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: Tαβγ = 0) is a Levi-Civita connection.

The Γαβγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.

Exterior derivative

The exterior derivative of a totally antisymmetric type (0, s) tensor field with components Aα1⋅⋅⋅αs (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:

( d A ) γ α 1 ⋯ α s = ∂ ∂ x [ γ A α 1 ⋯ α s ] = A [ α 1 ⋯ α s , γ ] . {\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.}

This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.
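Since d(df) = 0, applying the antisymmetrized derivative to an exact 1-form Aα = f,α must give zero. A finite-difference sketch (illustrative grid and scalar field, not from the article):

```python
import numpy as np

h = 0.05
x, y = np.meshgrid(np.arange(100) * h, np.arange(100) * h, indexing="ij")
f = np.sin(x) * np.cos(y)            # a scalar field

A = np.stack(np.gradient(f, h))      # A[a] = f_{,a}, an exact 1-form
dA_part = np.stack([np.stack(np.gradient(A[a], h)) for a in range(2)])
# dA_part[a, g] = A_{a,g};  (dA)_{ga} = A_{[a,g]} = (A_{a,g} - A_{g,a}) / 2
dA = 0.5 * (dA_part - np.transpose(dA_part, (1, 0, 2, 3)))
```

The discrete derivative operators along different axes commute, so `dA` vanishes to rounding error, mirroring the identity d² = 0.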

Lie derivative

The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field X may be expressed using a coordinate basis as

( L X T ) α 1 α r β 1 β s = X γ T α 1 α r β 1 β s , γ X α 1 , γ T γ α 2 α r β 1 β s X α r , γ T α 1 α r 1 γ β 1 β s + X γ , β 1 T α 1 α r γ β 2 β s + + X γ , β s T α 1 α r β 1 β s 1 γ . {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}}

This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:

( L X X ) α = X γ X α , γ X α , γ X γ = 0 . {\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.}
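A grid-based sketch (illustrative vector field, not from the article) confirming that the coordinate formula gives zero when the tensor field is X itself:

```python
import numpy as np

h = 0.05
x, y = np.meshgrid(np.arange(80) * h, np.arange(80) * h, indexing="ij")
X = np.stack([np.sin(y), x * y])     # a sample vector field X^a on the grid

# dX[a, g] = X^a_{,g}, approximated by finite differences
dX = np.stack([np.stack(np.gradient(X[a], h)) for a in range(2)])

# (L_X X)^a = X^g X^a_{,g} - X^a_{,g} X^g
LXX = np.einsum("g...,ag...->a...", X, dX) - np.einsum("ag...,g...->a...", dX, X)
```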

Notable tensors

Kronecker delta

The Kronecker delta is like the identity matrix when multiplied and contracted:

δ β α A β = A α δ ν μ B μ = B ν . {\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}}

The components δα β are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime,

δ ρ ρ = δ 0 0 + δ 1 1 + δ 2 2 + δ 3 3 = 4. {\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.}

The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right):

δ β 1 ⋯ β p α 1 ⋯ α p = δ β 1 [ α 1 ⋯ δ β p α p ] , {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},}

and acts as an antisymmetrizer on p indices:

δ β 1 ⋯ β p α 1 ⋯ α p A β 1 ⋯ β p = A [ α 1 ⋯ α p ] . {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.}
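A numerical sketch (p = 2, NumPy; not from the article) of the generalized Kronecker delta with this scaling, confirming it acts as an antisymmetrizer:

```python
import itertools

import numpy as np

n = 3
eye = np.eye(n)

# delta^{a1 a2}_{b1 b2} = (1/2!) (d^{a1}_{b1} d^{a2}_{b2} - d^{a2}_{b1} d^{a1}_{b2})
gdelta = np.zeros((n, n, n, n))
for a1, a2, b1, b2 in itertools.product(range(n), repeat=4):
    gdelta[a1, a2, b1, b2] = 0.5 * (eye[a1, b1] * eye[a2, b2]
                                    - eye[a2, b1] * eye[a1, b2])

rng = np.random.default_rng(4)
A = rng.standard_normal((n, n))
projected = np.einsum("abcd,cd->ab", gdelta, A)   # should equal A^{[a b]}
```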

Torsion tensor

An affine connection has a torsion tensor Tαβγ:

T α β γ = Γ α β γ Γ α γ β γ α β γ , {\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },}

where γαβγ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.

For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations

Γ α β γ = Γ α γ β . {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.}

Riemann curvature tensor

If this tensor is defined as

R ρ σ μ ν = Γ ρ ν σ , μ Γ ρ μ σ , ν + Γ ρ μ λ Γ λ ν σ Γ ρ ν λ Γ λ μ σ , {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,}

then it is the commutator of the covariant derivative with itself:

A ν ; ρ σ A ν ; σ ρ = A β R β ν ρ σ , {\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,}

since the connection is torsionless, which means that the torsion tensor vanishes.
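The defining formula can be checked on the unit 2-sphere, whose Christoffel symbols in (θ, φ) coordinates are standard; a finite-difference sketch (coordinates indexed 0 = θ, 1 = φ; not from the article) recovering the known component R^θ φθφ = sin²θ:

```python
import numpy as np

def Gamma(theta):
    """Christoffel symbols of the unit 2-sphere; G[r, a, b] = Gamma^r_{ab}.
    Nonzero: Gamma^0_{11} = -sin t cos t, Gamma^1_{01} = Gamma^1_{10} = cot t."""
    G = np.zeros((2, 2, 2))
    G[0, 1, 1] = -np.sin(theta) * np.cos(theta)
    G[1, 0, 1] = G[1, 1, 0] = np.cos(theta) / np.sin(theta)
    return G

theta, h = 1.0, 1e-5
G = Gamma(theta)
dG = np.zeros((2, 2, 2, 2))                  # dG[r, a, b, m] = Gamma^r_{ab,m}
dG[..., 0] = (Gamma(theta + h) - Gamma(theta - h)) / (2 * h)  # phi-derivatives vanish

# R^r_{smn} = Gamma^r_{ns,m} - Gamma^r_{ms,n}
#             + Gamma^r_{ml} Gamma^l_{ns} - Gamma^r_{nl} Gamma^l_{ms}
R = np.zeros((2, 2, 2, 2))
for r, s, m, n in np.ndindex(2, 2, 2, 2):
    R[r, s, m, n] = (dG[r, n, s, m] - dG[r, m, s, n]
                     + sum(G[r, m, l] * G[l, n, s] - G[r, n, l] * G[l, m, s]
                           for l in range(2)))
```

At θ = 1 the component `R[0, 1, 0, 1]` agrees with sin²(1), and the antisymmetry in the last two indices holds by construction.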

This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:

T α 1 α r β 1 β s ; γ δ T α 1 α r β 1 β s ; δ γ = R α 1 ρ γ δ T ρ α 2 α r β 1 β s R α r ρ γ δ T α 1 α r 1 ρ β 1 β s + R σ β 1 γ δ T α 1 α r σ β 2 β s + + R σ β s γ δ T α 1 α r β 1 β s 1 σ {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}}

which are often referred to as the Ricci identities.

Metric tensor

The metric tensor gαβ is used for lowering indices and gives the length of any space-like curve

length = y 1 y 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}

where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve

duration = t 1 t 2 1 c 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}

where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element.

The matrix inverse gαβ of the metric tensor is another important tensor, used for raising indices:

g α β g β γ = δ γ α . {\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.}
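A quick numeric check (an arbitrary symmetric, nondegenerate matrix standing in for the metric components; illustrative only):

```python
import numpy as np

g = np.array([[2.0, 0.3, 0.0],      # g_{alpha beta}: symmetric, nondegenerate
              [0.3, 1.5, 0.1],
              [0.0, 0.1, 1.0]])
g_inv = np.linalg.inv(g)            # g^{alpha beta}

delta = np.einsum("ab,bc->ac", g_inv, g)   # g^{ab} g_{bc} = delta^a_c

A_low = np.array([1.0, -2.0, 0.5])  # A_alpha
A_up = g_inv @ A_low                # raise: A^alpha = g^{alpha beta} A_beta
A_back = g @ A_up                   # lowering again recovers A_alpha
```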

Notes

  1. While the raising and lowering of indices is dependent on a metric tensor, the covariant derivative is only dependent on the connection while the exterior derivative and the Lie derivative are dependent on neither.

References

  1. Synge J.L.; Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 6–108.
  2. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0.
  3. R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
  4. Ricci, Gregorio; Levi-Civita, Tullio (March 1900). "Méthodes de calcul différentiel absolu et leurs applications" [Methods of the absolute differential calculus and their applications]. Mathematische Annalen (in French). 54 (1–2). Springer: 125–201. doi:10.1007/BF01454201. S2CID 120009332. Retrieved 19 October 2019.
  5. Schouten, Jan A. (1924). R. Courant (ed.). Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction in the latest methods and problems in multi-dimensional differential geometry). Grundlehren der mathematischen Wissenschaften (in German). Vol. 10. Berlin: Springer Verlag.
  6. Jahnke, Hans Niels (2003). A history of analysis. Providence, RI: American Mathematical Society. p. 244. ISBN 0-8218-2623-9. OCLC 51607350.
  7. "Interview with Shiing Shen Chern" (PDF). Notices of the AMS. 45 (7): 860–5. August 1998.
  8. C. Møller (1952), The Theory of Relativity, p. 234 is an example of a variation: 'Greek indices run from 1 to 3, Latin indices from 1 to 4'
  9. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601
  10. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 91. ISBN 0-7167-0344-0.
  11. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601
  12. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 61, 202–203, 232. ISBN 0-7167-0344-0.
  13. G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
  14. Covariant derivative – Mathworld, Wolfram
  15. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 298, ISBN 978-1107-602601
  16. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 510, §21.5. ISBN 0-7167-0344-0.
  17. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 299, ISBN 978-1107-602601
  18. D. McMahon (2006). Relativity. Demystified. McGraw Hill. p. 67. ISBN 0-07-145545-0.
  19. R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
  20. Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 130
  21. Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 85
  22. Synge J.L.; Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 83, p. 107.
  23. P. A. M. Dirac. General Theory of Relativity. pp. 20–21.
  24. Lovelock, David; Hanno Rund (1989). Tensors, Differential Forms, and Variational Principles. p. 84.

