
Pauli matrices

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Matrices important in quantum mechanics and the study of spin

Wolfgang Pauli (1900–1958), c. 1924. Pauli received the Nobel Prize in physics in 1945, nominated by Albert Einstein, for the Pauli exclusion principle.

In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices that are traceless, Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma (σ), they are occasionally denoted by tau (τ) when used in connection with isospin symmetries. σ 1 = σ x = ( 0 1 1 0 ) , σ 2 = σ y = ( 0 − i i 0 ) , σ 3 = σ z = ( 1 0 0 − 1 ) . {\displaystyle {\begin{aligned}\sigma _{1}=\sigma _{x}&={\begin{pmatrix}0&1\\1&0\end{pmatrix}},\\\sigma _{2}=\sigma _{y}&={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},\\\sigma _{3}=\sigma _{z}&={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.\\\end{aligned}}}

These matrices are named after the physicist Wolfgang Pauli. In quantum mechanics, they occur in the Pauli equation, which takes into account the interaction of the spin of a particle with an external electromagnetic field. They also represent the interaction states of two polarization filters for horizontal/vertical polarization, 45 degree polarization (right/left), and circular polarization (right/left).

Each Pauli matrix is Hermitian, and together with the identity matrix I (sometimes considered as the zeroth Pauli matrix σ0 ), the Pauli matrices form a basis of the vector space of 2 × 2 Hermitian matrices over the real numbers, under addition. This means that any 2 × 2 Hermitian matrix can be written in a unique way as a linear combination of Pauli matrices, with all coefficients being real numbers.

The Pauli matrices satisfy the useful product relation: σ i σ j = δ i j + i ϵ i j k σ k . {\displaystyle {\begin{aligned}\sigma _{i}\sigma _{j}=\delta _{ij}+i\epsilon _{ijk}\sigma _{k}.\end{aligned}}}
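This product relation can be checked numerically. The following is a minimal sketch using NumPy (not part of the original article; variable names are illustrative):

```python
import numpy as np

# The three Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def eps(i, j, k):
    # Levi-Civita symbol for 0-based indices i, j, k in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) / 2

# sigma_i sigma_j = delta_ij I + i eps_ijk sigma_k  (sum over k implied)
product_ok = all(
    np.allclose(sigma[i] @ sigma[j],
                (i == j) * I2 + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3)))
    for i in range(3) for j in range(3))
```

Here the Kronecker delta term is written as `(i == j) * I2`, making explicit that δij multiplies the identity matrix.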

Hermitian operators represent observables in quantum mechanics, so the Pauli matrices span the space of observables of the complex two-dimensional Hilbert space. In the context of Pauli's work, σk represents the observable corresponding to spin along the kth coordinate axis in three-dimensional Euclidean space R 3 . {\displaystyle \mathbb {R} ^{3}.}

The Pauli matrices (after multiplication by i to make them anti-Hermitian) also generate transformations in the sense of Lie algebras: the matrices iσ1, iσ2, iσ3 form a basis for the real Lie algebra s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} , which exponentiates to the special unitary group SU(2). The algebra generated by the three matrices σ1, σ2, σ3 is isomorphic to the Clifford algebra of R 3 , {\displaystyle \mathbb {R} ^{3},} and the (unital) associative algebra generated by iσ1, iσ2, iσ3 functions identically (is isomorphic) to that of the quaternions ( H {\displaystyle \mathbb {H} } ).

Algebraic properties

Cayley table; the entry shows the value of the row times the column.

×    | σx    | σy    | σz
σx   | I     | iσz   | −iσy
σy   | −iσz  | I     | iσx
σz   | iσy   | −iσx  | I

All three of the Pauli matrices can be compacted into a single expression:

σ j = ( δ j 3 δ j 1 i δ j 2 δ j 1 + i δ j 2 δ j 3 ) , {\displaystyle \sigma _{j}={\begin{pmatrix}\delta _{j3}&\delta _{j1}-i\,\delta _{j2}\\\delta _{j1}+i\,\delta _{j2}&-\delta _{j3}\end{pmatrix}},}

where i is the imaginary unit, satisfying i² = −1, and δjk is the Kronecker delta, which equals +1 if j = k and 0 otherwise. This expression is useful for "selecting" any one of the matrices numerically by substituting values of j = 1, 2, 3, in turn useful when any of the matrices (but no particular one) is to be used in algebraic manipulations.
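The Kronecker-delta expression translates directly into code. A minimal NumPy sketch (illustrative, not from the original article):

```python
import numpy as np

def pauli(j):
    """sigma_j (j = 1, 2, 3) built from the Kronecker-delta expression."""
    d = lambda a, b: 1.0 if a == b else 0.0   # Kronecker delta
    return np.array([[d(j, 3),              d(j, 1) - 1j * d(j, 2)],
                     [d(j, 1) + 1j * d(j, 2), -d(j, 3)]])
```

Substituting j = 1, 2, 3 reproduces σx, σy, σz respectively.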

The matrices are involutory:

σ 1 2 = σ 2 2 = σ 3 2 = − i σ 1 σ 2 σ 3 = ( 1 0 0 1 ) = I , {\displaystyle \sigma _{1}^{2}=\sigma _{2}^{2}=\sigma _{3}^{2}=-i\,\sigma _{1}\sigma _{2}\sigma _{3}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}=I,}

where I is the identity matrix.

The determinants and traces of the Pauli matrices are

det σ j = − 1 , tr σ j = 0 , {\displaystyle {\begin{aligned}\det \sigma _{j}&=-1,\\\operatorname {tr} \sigma _{j}&=0,\end{aligned}}}

from which we can deduce that each matrix σj has eigenvalues +1 and −1.
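These determinant, trace, and eigenvalue properties can be confirmed numerically (a NumPy sketch, assumed environment):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

dets = [np.linalg.det(s).real for s in sigma]           # each should be -1
traces = [np.trace(s) for s in sigma]                   # each should be 0
eigenvalues = [np.linalg.eigvalsh(s) for s in sigma]    # ascending: [-1, +1]
```

Since each σj is Hermitian, `eigvalsh` applies and returns real eigenvalues in ascending order.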

With the inclusion of the identity matrix I (sometimes denoted σ0), the Pauli matrices form an orthogonal basis (in the sense of Hilbert–Schmidt) of the Hilbert space H 2 {\displaystyle {\mathcal {H}}_{2}} of 2 × 2 Hermitian matrices over R {\displaystyle \mathbb {R} } , and the Hilbert space M 2 , 2 ( C ) {\displaystyle {\mathcal {M}}_{2,2}(\mathbb {C} )} of all complex 2 × 2 matrices over C {\displaystyle \mathbb {C} } .

Commutation and anti-commutation relations

Commutation relations

The Pauli matrices obey the following commutation relations:

[ σ j , σ k ] = 2 i ∑ l ε j k l σ l , {\displaystyle [\sigma _{j},\sigma _{k}]=2i\sum _{l}\varepsilon _{jkl}\,\sigma _{l},}

where the Levi-Civita symbol εjkl is used.

These commutation relations make the Pauli matrices the generators of a representation of the Lie algebra ( R 3 , × ) s u ( 2 ) s o ( 3 ) . {\displaystyle (\mathbb {R} ^{3},\times )\cong {\mathfrak {su}}(2)\cong {\mathfrak {so}}(3).}

Anticommutation relations

They also satisfy the anticommutation relations:

{ σ j , σ k } = 2 δ j k I , {\displaystyle \{\sigma _{j},\sigma _{k}\}=2\delta _{jk}\,I,}

where { σ j , σ k } {\displaystyle \{\sigma _{j},\sigma _{k}\}} is defined as σ j σ k + σ k σ j , {\displaystyle \sigma _{j}\sigma _{k}+\sigma _{k}\sigma _{j},} and δjk is the Kronecker delta. I denotes the 2 × 2 identity matrix.

These anti-commutation relations make the Pauli matrices the generators of a representation of the Clifford algebra for R 3 , {\displaystyle \mathbb {R} ^{3},} denoted C l 3 ( R ) . {\displaystyle \mathrm {Cl} _{3}(\mathbb {R} ).}

The usual construction of generators σ j k = 1 4 [ σ j , σ k ] {\displaystyle \sigma _{jk}={\tfrac {1}{4}}[\sigma _{j},\sigma _{k}]} of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} using the Clifford algebra recovers the commutation relations above, up to unimportant numerical factors.

A few explicit commutators and anti-commutators are given below as examples:

Commutators Anticommutators
[ σ 1 , σ 1 ] = 0 [ σ 1 , σ 2 ] = 2 i σ 3 [ σ 2 , σ 3 ] = 2 i σ 1 [ σ 3 , σ 1 ] = 2 i σ 2 {\displaystyle {\begin{aligned}\left[\sigma _{1},\sigma _{1}\right]&=0\\\left[\sigma _{1},\sigma _{2}\right]&=2i\sigma _{3}\\\left[\sigma _{2},\sigma _{3}\right]&=2i\sigma _{1}\\\left[\sigma _{3},\sigma _{1}\right]&=2i\sigma _{2}\end{aligned}}}      { σ 1 , σ 1 } = 2 I { σ 1 , σ 2 } = 0 { σ 2 , σ 3 } = 0 { σ 3 , σ 1 } = 0 {\displaystyle {\begin{aligned}\left\{\sigma _{1},\sigma _{1}\right\}&=2I\\\left\{\sigma _{1},\sigma _{2}\right\}&=0\\\left\{\sigma _{2},\sigma _{3}\right\}&=0\\\left\{\sigma _{3},\sigma _{1}\right\}&=0\end{aligned}}}

Eigenvectors and eigenvalues

Each of the (Hermitian) Pauli matrices has two eigenvalues: +1 and −1. The corresponding normalized eigenvectors are

ψ x + = 1 2 [ 1 1 ] , ψ x = 1 2 [ 1 1 ] , ψ y + = 1 2 [ 1 i ] , ψ y = 1 2 [ 1 i ] , ψ z + = [ 1 0 ] , ψ z = [ 0 1 ] . {\displaystyle {\begin{aligned}\psi _{x+}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}},&\psi _{x-}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-1\end{bmatrix}},\\\psi _{y+}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\i\end{bmatrix}},&\psi _{y-}&={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-i\end{bmatrix}},\\\psi _{z+}&={\begin{bmatrix}1\\0\end{bmatrix}},&\psi _{z-}&={\begin{bmatrix}0\\1\end{bmatrix}}.\end{aligned}}}

Pauli vectors

The Pauli vector is defined by σ = σ 1 x ^ 1 + σ 2 x ^ 2 + σ 3 x ^ 3 , {\displaystyle {\vec {\sigma }}=\sigma _{1}{\hat {x}}_{1}+\sigma _{2}{\hat {x}}_{2}+\sigma _{3}{\hat {x}}_{3},} where x ^ 1 {\displaystyle {\hat {x}}_{1}} , x ^ 2 {\displaystyle {\hat {x}}_{2}} , and x ^ 3 {\displaystyle {\hat {x}}_{3}} are an equivalent notation for the more familiar x ^ {\displaystyle {\hat {x}}} , y ^ {\displaystyle {\hat {y}}} , and z ^ {\displaystyle {\hat {z}}} .

The Pauli vector provides a mapping mechanism from a vector basis to a Pauli matrix basis as follows: a σ = k , l a k σ x ^ k x ^ = k a k σ k = ( a 3 a 1 i a 2 a 1 + i a 2 a 3 ) . {\displaystyle {\begin{aligned}{\vec {a}}\cdot {\vec {\sigma }}&=\sum _{k,l}a_{k}\,\sigma _{\ell }\,{\hat {x}}_{k}\cdot {\hat {x}}_{\ell }\\&=\sum _{k}a_{k}\,\sigma _{k}\\&={\begin{pmatrix}a_{3}&a_{1}-ia_{2}\\a_{1}+ia_{2}&-a_{3}\end{pmatrix}}.\end{aligned}}}

More formally, this defines a map from R 3 {\displaystyle \mathbb {R} ^{3}} to the vector space of traceless Hermitian 2 × 2 {\displaystyle 2\times 2} matrices. This map encodes structures of R 3 {\displaystyle \mathbb {R} ^{3}} as a normed vector space and as a Lie algebra (with the cross-product as its Lie bracket) via functions of matrices, making the map an isomorphism of Lie algebras. This makes the Pauli matrices intertwiners from the point of view of representation theory.

Another way to view the Pauli vector is as a 2 × 2 {\displaystyle 2\times 2} Hermitian traceless matrix-valued dual vector, that is, an element of Mat 2 × 2 ( C ) ( R 3 ) {\displaystyle {\text{Mat}}_{2\times 2}(\mathbb {C} )\otimes (\mathbb {R} ^{3})^{*}} that maps a a σ . {\displaystyle {\vec {a}}\mapsto {\vec {a}}\cdot {\vec {\sigma }}.}

Completeness relation

Each component of a {\displaystyle {\vec {a}}} can be recovered from the matrix (see completeness relation below) 1 2 tr ( ( a σ ) σ ) = a . {\displaystyle {\frac {1}{2}}\operatorname {tr} {\Bigl (}{\bigl (}{\vec {a}}\cdot {\vec {\sigma }}{\bigr )}{\vec {\sigma }}{\Bigr )}={\vec {a}}.} This constitutes an inverse to the map a a σ {\displaystyle {\vec {a}}\mapsto {\vec {a}}\cdot {\vec {\sigma }}} , making it manifest that the map is a bijection.
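The trace-based recovery of the components of a can be demonstrated numerically. A minimal NumPy sketch (illustrative names, not from the article):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

a = np.array([0.3, -1.2, 2.5])
M = sum(a[k] * sigma[k] for k in range(3))              # a . sigma

# a_j = (1/2) tr((a . sigma) sigma_j), since tr(sigma_k sigma_j) = 2 delta_kj
recovered = np.array([0.5 * np.trace(M @ s) for s in sigma]).real
```

The recovery works because the Pauli matrices are orthogonal under the Hilbert–Schmidt inner product, with tr(σkσj) = 2δkj.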

Determinant

The norm is given by the determinant (up to a minus sign) det ( a σ ) = a a = | a | 2 . {\displaystyle \det {\bigl (}{\vec {a}}\cdot {\vec {\sigma }}{\bigr )}=-{\vec {a}}\cdot {\vec {a}}=-|{\vec {a}}|^{2}.} Then, considering the conjugation action of an SU ( 2 ) {\displaystyle {\text{SU}}(2)} matrix U {\displaystyle U} on this space of matrices,

U a σ := U a σ U 1 , {\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}:=U\,{\vec {a}}\cdot {\vec {\sigma }}\,U^{-1},}

we find det ( U a σ ) = det ( a σ ) , {\displaystyle \det(U*{\vec {a}}\cdot {\vec {\sigma }})=\det({\vec {a}}\cdot {\vec {\sigma }}),} and that U a σ {\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}} is Hermitian and traceless. It then makes sense to define U a σ = a σ , {\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}={\vec {a}}'\cdot {\vec {\sigma }},} where a {\displaystyle {\vec {a}}'} has the same norm as a , {\displaystyle {\vec {a}},} and therefore interpret U {\displaystyle U} as a rotation of three-dimensional space. In fact, it turns out that the special restriction on U {\displaystyle U} implies that the rotation is orientation preserving. This allows the definition of a map R : S U ( 2 ) S O ( 3 ) {\displaystyle R:\mathrm {SU} (2)\to \mathrm {SO} (3)} given by

U a σ = a σ =: ( R ( U )   a ) σ , {\displaystyle U*{\vec {a}}\cdot {\vec {\sigma }}={\vec {a}}'\cdot {\vec {\sigma }}=:(R(U)\ {\vec {a}})\cdot {\vec {\sigma }},}

where R ( U ) S O ( 3 ) . {\displaystyle R(U)\in \mathrm {SO} (3).} This map is the concrete realization of the double cover of S O ( 3 ) {\displaystyle \mathrm {SO} (3)} by S U ( 2 ) , {\displaystyle \mathrm {SU} (2),} and therefore shows that SU ( 2 ) S p i n ( 3 ) . {\displaystyle {\text{SU}}(2)\cong \mathrm {Spin} (3).} The components of R ( U ) {\displaystyle R(U)} can be recovered using the tracing process above:

R ( U ) i j = 1 2 tr ( σ i U σ j U 1 ) . {\displaystyle R(U)_{ij}={\frac {1}{2}}\operatorname {tr} \left(\sigma _{i}U\sigma _{j}U^{-1}\right).}
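The trace formula for R(U) can be exercised on a concrete SU(2) element. A NumPy sketch (the particular U and axis are illustrative assumptions):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# An SU(2) element: U = cos(t) I + i sin(t) n.sigma for a unit axis n
t, n = 0.35, np.array([1.0, 2.0, 2.0]) / 3.0
ns = sum(n[k] * sigma[k] for k in range(3))
U = np.cos(t) * np.eye(2) + 1j * np.sin(t) * ns

# R(U)_ij = (1/2) tr(sigma_i U sigma_j U^{-1});  U^{-1} = U^dagger for SU(2)
R = np.array([[0.5 * np.trace(sigma[i] @ U @ sigma[j] @ U.conj().T)
               for j in range(3)] for i in range(3)]).real
```

The result should be a proper rotation: R Rᵀ = I with det R = +1, reflecting that conjugation by an SU(2) element is an orientation-preserving isometry of the Pauli-vector space.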

Cross-product

The cross-product is given by the matrix commutator (up to a factor of 2 i {\displaystyle 2i} ) [ a σ , b σ ] = 2 i ( a × b ) σ . {\displaystyle [{\vec {a}}\cdot {\vec {\sigma }},{\vec {b}}\cdot {\vec {\sigma }}]=2i\,({\vec {a}}\times {\vec {b}})\cdot {\vec {\sigma }}.} In fact, the existence of a norm follows from the fact that R 3 {\displaystyle \mathbb {R} ^{3}} is a Lie algebra (see Killing form).

This cross-product can be used to prove the orientation-preserving property of the map above.

Eigenvalues and eigenvectors

The eigenvalues of   a σ   {\displaystyle \ {\vec {a}}\cdot {\vec {\sigma }}\ } are   ± | a | . {\displaystyle \ \pm |{\vec {a}}|.} This follows immediately from tracelessness and explicitly computing the determinant.

More abstractly, without computing the determinant, which requires explicit properties of the Pauli matrices, this follows from   ( a σ ) 2 | a | 2 = 0   , {\displaystyle \ ({\vec {a}}\cdot {\vec {\sigma }})^{2}-|{\vec {a}}|^{2}=0\ ,} since this can be factorised into   ( a σ | a | ) ( a σ + | a | ) = 0. {\displaystyle \ ({\vec {a}}\cdot {\vec {\sigma }}-|{\vec {a}}|)({\vec {a}}\cdot {\vec {\sigma }}+|{\vec {a}}|)=0.} A standard result in linear algebra (a linear map that satisfies a polynomial equation with distinct linear factors is diagonalizable) means this implies   a σ   {\displaystyle \ {\vec {a}}\cdot {\vec {\sigma }}\ } is diagonalizable with possible eigenvalues   ± | a | . {\displaystyle \ \pm |{\vec {a}}|.} The tracelessness of   a σ   {\displaystyle \ {\vec {a}}\cdot {\vec {\sigma }}\ } means it has exactly one of each eigenvalue.

Its normalized eigenvectors are ψ + = 1 2 | a |   ( a 3 + | a | )     [ a 3 + | a | a 1 + i a 2 ] ; ψ = 1 2 | a | ( a 3 + | a | ) [ i a 2 a 1 a 3 + | a | ]   . {\displaystyle \psi _{+}={\frac {1}{{\sqrt {2\left|{\vec {a}}\right|\ (a_{3}+\left|{\vec {a}}\right|)\ }}\ }}{\begin{bmatrix}a_{3}+\left|{\vec {a}}\right|\\a_{1}+ia_{2}\end{bmatrix}};\qquad \psi _{-}={\frac {1}{\sqrt {2|{\vec {a}}|(a_{3}+|{\vec {a}}|)}}}{\begin{bmatrix}ia_{2}-a_{1}\\a_{3}+|{\vec {a}}|\end{bmatrix}}~.} These expressions become singular for a 3 | a | {\displaystyle a_{3}\to -\left|{\vec {a}}\right|} . They can be rescued by letting a = | a | ( ϵ , 0 , ( 1 ϵ 2 / 2 ) ) {\displaystyle {\vec {a}}=\left|{\vec {a}}\right|(\epsilon ,0,-(1-\epsilon ^{2}/2))} and taking the limit ϵ 0 {\displaystyle \epsilon \to 0} , which yields the correct eigenvectors (0,1) and (1,0) of σ z {\displaystyle \sigma _{z}} .

Alternatively, one may use spherical coordinates a = a ( sin ϑ cos φ , sin ϑ sin φ , cos ϑ ) {\displaystyle {\vec {a}}=a(\sin \vartheta \cos \varphi ,\sin \vartheta \sin \varphi ,\cos \vartheta )} to obtain the eigenvectors ψ + = ( cos ( ϑ / 2 ) , sin ( ϑ / 2 ) exp ( i φ ) ) {\displaystyle \psi _{+}=(\cos(\vartheta /2),\sin(\vartheta /2)\exp(i\varphi ))} and ψ = ( sin ( ϑ / 2 ) exp ( i φ ) , cos ( ϑ / 2 ) ) {\displaystyle \psi _{-}=(-\sin(\vartheta /2)\exp(-i\varphi ),\cos(\vartheta /2))} .

Pauli 4-vector

The Pauli 4-vector, used in spinor theory, is written   σ μ   {\displaystyle \ \sigma ^{\mu }\ } with components

σ μ = ( I , σ ) . {\displaystyle \sigma ^{\mu }=(I,{\vec {\sigma }}).}

This defines a map from R 1 , 3 {\displaystyle \mathbb {R} ^{1,3}} to the vector space of Hermitian matrices,

x μ x μ σ μ   , {\displaystyle x_{\mu }\mapsto x_{\mu }\sigma ^{\mu }\ ,}

which also encodes the Minkowski metric (with mostly minus convention) in its determinant:

det ( x μ σ μ ) = η ( x , x ) . {\displaystyle \det(x_{\mu }\sigma ^{\mu })=\eta (x,x).}

This 4-vector also has a completeness relation. It is convenient to define a second Pauli 4-vector

σ ¯ μ = ( I , σ ) . {\displaystyle {\bar {\sigma }}^{\mu }=(I,-{\vec {\sigma }}).}

and allow raising and lowering using the Minkowski metric tensor. The relation can then be written x ν = 1 2 tr ( σ ¯ ν ( x μ σ μ ) ) . {\displaystyle x_{\nu }={\tfrac {1}{2}}\operatorname {tr} {\Bigl (}{\bar {\sigma }}_{\nu }{\bigl (}x_{\mu }\sigma ^{\mu }{\bigr )}{\Bigr )}.}

Similarly to the Pauli 3-vector case, we can find a matrix group that acts as isometries on   R 1 , 3   ; {\displaystyle \ \mathbb {R} ^{1,3}\ ;} in this case the matrix group is   S L ( 2 , C )   , {\displaystyle \ \mathrm {SL} (2,\mathbb {C} )\ ,} and this shows   S L ( 2 , C ) S p i n ( 1 , 3 ) . {\displaystyle \ \mathrm {SL} (2,\mathbb {C} )\cong \mathrm {Spin} (1,3).} Similarly to above, this can be explicitly realized for   S S L ( 2 , C )   {\displaystyle \ S\in \mathrm {SL} (2,\mathbb {C} )\ } with components

Λ ( S ) μ ν = 1 2 tr ( σ ¯ ν S σ μ S ) . {\displaystyle \Lambda (S)^{\mu }{}_{\nu }={\tfrac {1}{2}}\operatorname {tr} \left({\bar {\sigma }}_{\nu }S\sigma ^{\mu }S^{\dagger }\right).}

In fact, the determinant property follows abstractly from trace properties of the   σ μ . {\displaystyle \ \sigma ^{\mu }.} For   2 × 2   {\displaystyle \ 2\times 2\ } matrices, the following identity holds:

det ( A + B ) = det ( A ) + det ( B ) + tr ( A ) tr ( B ) tr ( A B ) . {\displaystyle \det(A+B)=\det(A)+\det(B)+\operatorname {tr} (A)\operatorname {tr} (B)-\operatorname {tr} (AB).}

That is, the 'cross-terms' can be written as traces. When   A , B   {\displaystyle \ A,B\ } are chosen to be different   σ μ   , {\displaystyle \ \sigma ^{\mu }\ ,} the cross-terms vanish. It then follows, now showing summation explicitly, det ( μ x μ σ μ ) = μ det ( x μ σ μ ) . {\textstyle \det \left(\sum _{\mu }x_{\mu }\sigma ^{\mu }\right)=\sum _{\mu }\det \left(x_{\mu }\sigma ^{\mu }\right).} Since the matrices are   2 × 2   , {\displaystyle \ 2\times 2\ ,} this is equal to μ x μ 2 det ( σ μ ) = η ( x , x ) . {\textstyle \sum _{\mu }x_{\mu }^{2}\det(\sigma ^{\mu })=\eta (x,x).}

Relation to dot and cross product

Pauli vectors elegantly map these commutation and anticommutation relations to corresponding vector products. Adding the commutator to the anticommutator gives

[ σ j , σ k ] + { σ j , σ k } = ( σ j σ k − σ k σ j ) + ( σ j σ k + σ k σ j ) 2 i ε j k ℓ σ ℓ + 2 δ j k I = 2 σ j σ k {\displaystyle {\begin{aligned}\left[\sigma _{j},\sigma _{k}\right]+\{\sigma _{j},\sigma _{k}\}&=(\sigma _{j}\sigma _{k}-\sigma _{k}\sigma _{j})+(\sigma _{j}\sigma _{k}+\sigma _{k}\sigma _{j})\\2i\varepsilon _{jk\ell }\,\sigma _{\ell }+2\delta _{jk}I&=2\sigma _{j}\sigma _{k}\end{aligned}}}

so that,

    σ j σ k = δ j k I + i ε j k σ   .   {\displaystyle ~~\sigma _{j}\sigma _{k}=\delta _{jk}I+i\varepsilon _{jk\ell }\,\sigma _{\ell }~.~}

Contracting each side of the equation with the components of two 3-vectors aj and bk, which commute with the Pauli matrices (i.e., ajσk = σkaj), yields

    a j b k σ j σ k = a j b k ( i ε j k σ + δ j k I ) a j σ j b k σ k = i ε j k a j b k σ + a j b k δ j k I .   {\displaystyle ~~{\begin{aligned}a_{j}b_{k}\sigma _{j}\sigma _{k}&=a_{j}b_{k}\left(i\varepsilon _{jk\ell }\,\sigma _{\ell }+\delta _{jk}I\right)\\a_{j}\sigma _{j}b_{k}\sigma _{k}&=i\varepsilon _{jk\ell }\,a_{j}b_{k}\sigma _{\ell }+a_{j}b_{k}\delta _{jk}I\end{aligned}}.~}

Finally, translating the index notation for the dot product and cross product results in

    ( a σ ) ( b σ ) = ( a b ) I + i ( a × b ) σ     {\displaystyle ~~{\Bigl (}{\vec {a}}\cdot {\vec {\sigma }}{\Bigr )}{\Bigl (}{\vec {b}}\cdot {\vec {\sigma }}{\Bigr )}={\Bigl (}{\vec {a}}\cdot {\vec {b}}{\Bigr )}\,I+i{\Bigl (}{\vec {a}}\times {\vec {b}}{\Bigr )}\cdot {\vec {\sigma }}~~}

(1)

If i is identified with the pseudoscalar σxσyσz then the right hand side becomes a b + a b {\displaystyle a\cdot b+a\wedge b} , which is also the definition for the product of two vectors in geometric algebra.

If we define the spin operator as J = (ħ/2)σ, then J satisfies the commutation relation: J × J = i ℏ J {\displaystyle \mathbf {J} \times \mathbf {J} =i\hbar \mathbf {J} } Or equivalently, the Pauli vector satisfies: σ 2 × σ 2 = i σ 2 {\displaystyle {\frac {\vec {\sigma }}{2}}\times {\frac {\vec {\sigma }}{2}}=i{\frac {\vec {\sigma }}{2}}}
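Both the dot/cross-product identity (1) and the Pauli-vector commutation relation can be verified numerically. A NumPy sketch (the test vectors are arbitrary illustrative choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def dot_sigma(v):
    return sum(v[k] * sigma[k] for k in range(3))

a = np.array([1.0, -0.5, 2.0])
b = np.array([0.3, 0.7, -1.1])

# (a.sigma)(b.sigma) = (a.b) I + i (a x b).sigma
identity_ok = np.allclose(dot_sigma(a) @ dot_sigma(b),
                          np.dot(a, b) * np.eye(2) + 1j * dot_sigma(np.cross(a, b)))

# (sigma/2) x (sigma/2) = i sigma/2, with matrix products inside the cross product
def eps(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2

spin_ok = all(
    np.allclose(sum(eps(i, j, k) * (sigma[j] / 2) @ (sigma[k] / 2)
                    for j in range(3) for k in range(3)),
                1j * sigma[i] / 2)
    for i in range(3))
```

The second check spells out the matrix-valued cross product componentwise, (A × B)i = εijk Aj Bk, since the ordinary `np.cross` does not apply to matrix-valued entries.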

Some trace relations

The following traces can be derived using the commutation and anticommutation relations.

tr ( σ j ) = 0 tr ( σ j σ k ) = 2 δ j k tr ( σ j σ k σ ) = 2 i ε j k tr ( σ j σ k σ σ m ) = 2 ( δ j k δ m δ j δ k m + δ j m δ k ) {\displaystyle {\begin{aligned}\operatorname {tr} \left(\sigma _{j}\right)&=0\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\right)&=2\delta _{jk}\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\sigma _{\ell }\right)&=2i\varepsilon _{jk\ell }\\\operatorname {tr} \left(\sigma _{j}\sigma _{k}\sigma _{\ell }\sigma _{m}\right)&=2\left(\delta _{jk}\delta _{\ell m}-\delta _{j\ell }\delta _{km}+\delta _{jm}\delta _{k\ell }\right)\end{aligned}}}

If the matrix σ0 = I is also considered, these relationships become

tr ( σ α ) = 2 δ 0 α tr ( σ α σ β ) = 2 δ α β tr ( σ α σ β σ γ ) = 2 ( α β γ ) δ α β δ 0 γ 4 δ 0 α δ 0 β δ 0 γ + 2 i ε 0 α β γ tr ( σ α σ β σ γ σ μ ) = 2 ( δ α β δ γ μ δ α γ δ β μ + δ α μ δ β γ ) + 4 ( δ α γ δ 0 β δ 0 μ + δ β μ δ 0 α δ 0 γ ) 8 δ 0 α δ 0 β δ 0 γ δ 0 μ + 2 i ( α β γ μ ) ε 0 α β γ δ 0 μ {\displaystyle {\begin{aligned}\operatorname {tr} \left(\sigma _{\alpha }\right)&=2\delta _{0\alpha }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\right)&=2\delta _{\alpha \beta }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\sigma _{\gamma }\right)&=2\sum _{(\alpha \beta \gamma )}\delta _{\alpha \beta }\delta _{0\gamma }-4\delta _{0\alpha }\delta _{0\beta }\delta _{0\gamma }+2i\varepsilon _{0\alpha \beta \gamma }\\\operatorname {tr} \left(\sigma _{\alpha }\sigma _{\beta }\sigma _{\gamma }\sigma _{\mu }\right)&=2\left(\delta _{\alpha \beta }\delta _{\gamma \mu }-\delta _{\alpha \gamma }\delta _{\beta \mu }+\delta _{\alpha \mu }\delta _{\beta \gamma }\right)+4\left(\delta _{\alpha \gamma }\delta _{0\beta }\delta _{0\mu }+\delta _{\beta \mu }\delta _{0\alpha }\delta _{0\gamma }\right)-8\delta _{0\alpha }\delta _{0\beta }\delta _{0\gamma }\delta _{0\mu }+2i\sum _{(\alpha \beta \gamma \mu )}\varepsilon _{0\alpha \beta \gamma }\delta _{0\mu }\end{aligned}}}

where Greek indices α, β, γ and μ assume values from {0, x, y, z} and the notation ( α ) {\textstyle \sum _{(\alpha \ldots )}} is used to denote the sum over the cyclic permutation of the included indices.

Exponential of a Pauli vector

For

a = a n ^ , | n ^ | = 1 , {\displaystyle {\vec {a}}=a{\hat {n}},\quad |{\hat {n}}|=1,}

one has, for even powers, 2p, p = 0, 1, 2, 3, ...

( n ^ σ ) 2 p = I , {\displaystyle ({\hat {n}}\cdot {\vec {\sigma }})^{2p}=I,}

which can be shown first for the p = 1 case using the anticommutation relations. For convenience, the case p = 0 is taken to be I by convention.

For odd powers, 2q + 1, q = 0, 1, 2, 3, ...

( n ^ σ ) 2 q + 1 = n ^ σ . {\displaystyle \left({\hat {n}}\cdot {\vec {\sigma }}\right)^{2q+1}={\hat {n}}\cdot {\vec {\sigma }}\,.}

Matrix exponentiating, and using the Taylor series for sine and cosine,

e i a ( n ^ σ ) = ∑ k = 0 ∞ i k [ a ( n ^ σ ) ] k k ! = ∑ p = 0 ∞ ( − 1 ) p ( a n ^ σ ) 2 p ( 2 p ) ! + i ∑ q = 0 ∞ ( − 1 ) q ( a n ^ σ ) 2 q + 1 ( 2 q + 1 ) ! = I ∑ p = 0 ∞ ( − 1 ) p a 2 p ( 2 p ) ! + i ( n ^ σ ) ∑ q = 0 ∞ ( − 1 ) q a 2 q + 1 ( 2 q + 1 ) ! {\displaystyle {\begin{aligned}e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}&=\sum _{k=0}^{\infty }{\frac {i^{k}\left[a\left({\hat {n}}\cdot {\vec {\sigma }}\right)\right]^{k}}{k!}}\\&=\sum _{p=0}^{\infty }{\frac {(-1)^{p}(a{\hat {n}}\cdot {\vec {\sigma }})^{2p}}{(2p)!}}+i\sum _{q=0}^{\infty }{\frac {(-1)^{q}(a{\hat {n}}\cdot {\vec {\sigma }})^{2q+1}}{(2q+1)!}}\\&=I\sum _{p=0}^{\infty }{\frac {(-1)^{p}a^{2p}}{(2p)!}}+i({\hat {n}}\cdot {\vec {\sigma }})\sum _{q=0}^{\infty }{\frac {(-1)^{q}a^{2q+1}}{(2q+1)!}}\\\end{aligned}}} .

In the last line, the first sum is the cosine, while the second sum is the sine; so, finally,

    e i a ( n ^ σ ) = I cos a + i ( n ^ σ ) sin a     {\displaystyle ~~e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}=I\cos {a}+i({\hat {n}}\cdot {\vec {\sigma }})\sin {a}~~}

(2)

which is analogous to Euler's formula, extended to quaternions.

Note that

det [ i a ( n ^ σ ) ] = a 2 {\displaystyle \det \left[ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)\right]=a^{2}} ,

while the determinant of the exponential itself is just 1, which makes it the generic group element of SU(2).
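Formula (2) and the unit-determinant claim can be checked numerically by comparing the closed form against the matrix exponential computed from the Hermitian eigendecomposition. A NumPy sketch (the angle and axis are illustrative):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

a, n = 1.3, np.array([2.0, -1.0, 2.0]) / 3.0            # unit axis
ns = sum(n[k] * sigma[k] for k in range(3))

# exp(i a n.sigma) via eigendecomposition of the Hermitian matrix a n.sigma
w, V = np.linalg.eigh(a * ns)
exp_exact = V @ np.diag(np.exp(1j * w)) @ V.conj().T

# Closed form (2): I cos a + i (n.sigma) sin a
exp_closed = np.cos(a) * np.eye(2) + 1j * np.sin(a) * ns
```

Since a(n̂·σ) is traceless, det exp(i a n̂·σ) = exp(i tr(a n̂·σ)) = 1, placing the exponential in SU(2).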

A more abstract version of formula (2) for a general 2 × 2 matrix can be found in the article on matrix exponentials. A general version of (2) for an analytic (at a and −a) function is provided by application of Sylvester's formula,

f ( a ( n ^ σ ) ) = I f ( a ) + f ( a ) 2 + n ^ σ f ( a ) f ( a ) 2 . {\displaystyle f(a({\hat {n}}\cdot {\vec {\sigma }}))=I{\frac {f(a)+f(-a)}{2}}+{\hat {n}}\cdot {\vec {\sigma }}{\frac {f(a)-f(-a)}{2}}.}

The group composition law of SU(2)

A straightforward application of formula (2) provides a parameterization of the composition law of the group SU(2). One may directly solve for c in e i a ( n ^ σ ) e i b ( m ^ σ ) = I ( cos a cos b n ^ m ^ sin a sin b ) + i ( n ^ sin a cos b + m ^ sin b cos a n ^ × m ^   sin a sin b ) σ = I cos c + i ( k ^ σ ) sin c = e i c ( k ^ σ ) , {\displaystyle {\begin{aligned}e^{ia\left({\hat {n}}\cdot {\vec {\sigma }}\right)}e^{ib\left({\hat {m}}\cdot {\vec {\sigma }}\right)}&=I\left(\cos a\cos b-{\hat {n}}\cdot {\hat {m}}\sin a\sin b\right)+i\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}~\sin a\sin b\right)\cdot {\vec {\sigma }}\\&=I\cos {c}+i\left({\hat {k}}\cdot {\vec {\sigma }}\right)\sin c\\&=e^{ic\left({\hat {k}}\cdot {\vec {\sigma }}\right)},\end{aligned}}}

which specifies the generic group multiplication, where, manifestly, cos c = cos a cos b n ^ m ^ sin a sin b   , {\displaystyle \cos c=\cos a\cos b-{\hat {n}}\cdot {\hat {m}}\sin a\sin b~,} the spherical law of cosines. Given c, then, k ^ = 1 sin c ( n ^ sin a cos b + m ^ sin b cos a n ^ × m ^ sin a sin b ) . {\displaystyle {\hat {k}}={\frac {1}{\sin c}}\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}\sin a\sin b\right).}

Consequently, the composite rotation parameters in this group element (a closed form of the respective BCH expansion in this case) simply amount to

e i c k ^ σ = exp ( i c sin c ( n ^ sin a cos b + m ^ sin b cos a n ^ × m ^   sin a sin b ) σ ) . {\displaystyle e^{ic{\hat {k}}\cdot {\vec {\sigma }}}=\exp \left(i{\frac {c}{\sin c}}\left({\hat {n}}\sin a\cos b+{\hat {m}}\sin b\cos a-{\hat {n}}\times {\hat {m}}~\sin a\sin b\right)\cdot {\vec {\sigma }}\right).}

(Of course, when n ^ {\displaystyle {\hat {n}}} is parallel to m ^ {\displaystyle {\hat {m}}} , so is k ^ {\displaystyle {\hat {k}}} , and c = a + b.)
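The composition law, including the spherical law of cosines for c and the expression for k̂, can be verified numerically. A NumPy sketch (the two rotations chosen here are arbitrary illustrative inputs):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def dot_sigma(v):
    return sum(v[k] * sigma[k] for k in range(3))

def su2(angle, axis):
    # exp(i angle axis.sigma) via formula (2)
    return np.cos(angle) * np.eye(2) + 1j * np.sin(angle) * dot_sigma(axis)

a, n = 0.8, np.array([1.0, 0.0, 0.0])
b, m = 0.5, np.array([0.0, 0.6, 0.8])

# Spherical law of cosines for the composite angle c
c = np.arccos(np.cos(a) * np.cos(b) - np.dot(n, m) * np.sin(a) * np.sin(b))
# Composite axis k
k_axis = (n * np.sin(a) * np.cos(b) + m * np.sin(b) * np.cos(a)
          - np.cross(n, m) * np.sin(a) * np.sin(b)) / np.sin(c)
```

The product su2(a, n) @ su2(b, m) should coincide with su2(c, k_axis), and the composite axis should come out unit-normalized.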

See also: Rotation formalisms in three dimensions § Rodrigues vector, and Spinor § Three dimensions

Adjoint action

It is also straightforward to likewise work out the adjoint action on the Pauli vector, namely rotation of any angle a {\displaystyle a} along any axis n ^ {\displaystyle {\hat {n}}} : R n ( a )   σ   R n ( a ) = e i a 2 ( n ^ σ )   σ   e i a 2 ( n ^ σ ) = σ cos ( a ) + n ^ × σ   sin ( a ) + n ^   n ^ σ   ( 1 cos ( a ) )   . {\displaystyle R_{n}(-a)~{\vec {\sigma }}~R_{n}(a)=e^{i{\frac {a}{2}}\left({\hat {n}}\cdot {\vec {\sigma }}\right)}~{\vec {\sigma }}~e^{-i{\frac {a}{2}}\left({\hat {n}}\cdot {\vec {\sigma }}\right)}={\vec {\sigma }}\cos(a)+{\hat {n}}\times {\vec {\sigma }}~\sin(a)+{\hat {n}}~{\hat {n}}\cdot {\vec {\sigma }}~(1-\cos(a))~.}

Taking the dot product of any unit vector with the above formula generates the expression of any single qubit operator under any rotation. For example, it can be shown that R y ( π 2 ) σ x R y ( π 2 ) = x ^ ( y ^ × σ ) = σ z {\textstyle R_{y}{\mathord {\left(-{\frac {\pi }{2}}\right)}}\,\sigma _{x}\,R_{y}{\mathord {\left({\frac {\pi }{2}}\right)}}={\hat {x}}\cdot \left({\hat {y}}\times {\vec {\sigma }}\right)=\sigma _{z}} .
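The adjoint-action (Rodrigues-type) formula can be verified componentwise. A NumPy sketch (the rotation angle and axis are illustrative choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2

a, n = 1.1, np.array([2.0, 2.0, 1.0]) / 3.0             # unit axis
ns = sum(n[k] * sigma[k] for k in range(3))
U = np.cos(a / 2) * np.eye(2) + 1j * np.sin(a / 2) * ns  # exp(i a/2 n.sigma)

# U sigma_i U^dagger = sigma_i cos a + (n x sigma)_i sin a + n_i (n.sigma)(1 - cos a)
rotation_ok = all(
    np.allclose(U @ sigma[i] @ U.conj().T,
                sigma[i] * np.cos(a)
                + sum(eps(i, j, k) * n[j] * sigma[k]
                      for j in range(3) for k in range(3)) * np.sin(a)
                + n[i] * ns * (1 - np.cos(a)))
    for i in range(3))
```

The matrix-valued cross product (n̂ × σ)i is again expanded as εijk nj σk.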

See also: Rodrigues' rotation formula

Completeness relation

An alternative notation that is commonly used for the Pauli matrices is to write the vector index k in the superscript, and the matrix indices as subscripts, so that the element in row α and column β of the k-th Pauli matrix is σ^k_αβ.

In this notation, the completeness relation for the Pauli matrices can be written

σ α β σ γ δ k = 1 3 σ α β k σ γ δ k = 2 δ α δ δ β γ δ α β δ γ δ . {\displaystyle {\vec {\sigma }}_{\alpha \beta }\cdot {\vec {\sigma }}_{\gamma \delta }\equiv \sum _{k=1}^{3}\sigma _{\alpha \beta }^{k}\,\sigma _{\gamma \delta }^{k}=2\,\delta _{\alpha \delta }\,\delta _{\beta \gamma }-\delta _{\alpha \beta }\,\delta _{\gamma \delta }.}
Proof

The fact that the Pauli matrices, along with the identity matrix I, form an orthogonal basis for the Hilbert space of all 2 × 2 complex matrices M 2 , 2 ( C ) {\displaystyle {\mathcal {M}}_{2,2}(\mathbb {C} )} over C {\displaystyle \mathbb {C} } , means that we can express any 2 × 2 complex matrix M as M = c I + ∑ k a k σ k {\displaystyle M=c\,I+\sum _{k}a_{k}\,\sigma ^{k}} where c is a complex number, and a is a 3-component, complex vector. It is straightforward to show, using the properties listed above, that tr ( σ j σ k ) = 2 δ j k {\displaystyle \operatorname {tr} \left(\sigma ^{j}\,\sigma ^{k}\right)=2\,\delta _{jk}} where "tr" denotes the trace, and hence that c = 1 2 tr M , a k = 1 2 tr σ k M .     2 M = I tr M + ∑ k σ k tr σ k M   , {\displaystyle {\begin{aligned}c&={}{\tfrac {1}{2}}\,\operatorname {tr} \,M\,,{\begin{aligned}&&a_{k}&={\tfrac {1}{2}}\,\operatorname {tr} \,\sigma ^{k}\,M.\end{aligned}}\\\therefore ~~2\,M&=I\,\operatorname {tr} \,M+\sum _{k}\sigma ^{k}\,\operatorname {tr} \,\sigma ^{k}M~,\end{aligned}}} which can be rewritten in terms of matrix indices as 2 M α β = δ α β M γ γ + ∑ k σ α β k σ γ δ k M δ γ   , {\displaystyle 2\,M_{\alpha \beta }=\delta _{\alpha \beta }\,M_{\gamma \gamma }+\sum _{k}\sigma _{\alpha \beta }^{k}\,\sigma _{\gamma \delta }^{k}\,M_{\delta \gamma }~,} where summation over the repeated indices γ and δ is implied. Since this is true for any choice of the matrix M, the completeness relation follows as stated above. Q.E.D.

As noted above, it is common to denote the 2 × 2 unit matrix by σ0, so σ^0_αβ = δαβ. The completeness relation can alternatively be expressed as ∑ k = 0 3 σ α β k σ γ δ k = 2 δ α δ δ β γ   . {\displaystyle \sum _{k=0}^{3}\sigma _{\alpha \beta }^{k}\,\sigma _{\gamma \delta }^{k}=2\,\delta _{\alpha \delta }\,\delta _{\beta \gamma }~.}
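Both forms of the completeness relation, with and without σ0, can be checked index by index. A NumPy sketch (illustrative, not from the article):

```python
import numpy as np

sigma0 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def check(mats, rhs):
    # Verify sum_k sigma^k_{ab} sigma^k_{gd} = rhs(a, b, g, d) for all indices
    return all(
        np.isclose(sum(s[al, be] * s[ga, de] for s in mats), rhs(al, be, ga, de))
        for al in range(2) for be in range(2) for ga in range(2) for de in range(2))

# k = 1..3:  2 delta_ad delta_bg - delta_ab delta_gd
three_ok = check(sigma,
                 lambda al, be, ga, de: 2 * (al == de) * (be == ga) - (al == be) * (ga == de))
# k = 0..3:  2 delta_ad delta_bg
four_ok = check([sigma0] + sigma,
                lambda al, be, ga, de: 2 * (al == de) * (be == ga))
```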

The fact that any Hermitian complex 2 × 2 matrix can be expressed in terms of the identity matrix and the Pauli matrices also leads to the Bloch sphere representation of the density matrix of a 2 × 2 mixed state (a positive semidefinite 2 × 2 matrix with unit trace). This can be seen by first expressing an arbitrary Hermitian matrix as a real linear combination of {σ0, σ1, σ2, σ3} as above, and then imposing the positive-semidefinite and trace 1 conditions.

For a pure state, in polar coordinates, a = ( sin θ cos ϕ sin θ sin ϕ cos θ ) , {\displaystyle {\vec {a}}={\begin{pmatrix}\sin \theta \cos \phi &\sin \theta \sin \phi &\cos \theta \end{pmatrix}},} the idempotent density matrix 1 2 ( 1 + a σ ) = ( cos 2 ( θ 2 ) e i ϕ sin ( θ 2 ) cos ( θ 2 ) e + i ϕ sin ( θ 2 ) cos ( θ 2 ) sin 2 ( θ 2 ) ) {\displaystyle {\tfrac {1}{2}}\left(\mathbf {1} +{\vec {a}}\cdot {\vec {\sigma }}\right)={\begin{pmatrix}\cos ^{2}\left({\frac {\,\theta \,}{2}}\right)&e^{-i\,\phi }\sin \left({\frac {\,\theta \,}{2}}\right)\cos \left({\frac {\,\theta \,}{2}}\right)\\e^{+i\,\phi }\sin \left({\frac {\,\theta \,}{2}}\right)\cos \left({\frac {\,\theta \,}{2}}\right)&\sin ^{2}\left({\frac {\,\theta \,}{2}}\right)\end{pmatrix}}}

acts on the state eigenvector ( cos ( θ 2 ) e + i ϕ sin ( θ 2 ) ) {\displaystyle {\begin{pmatrix}\cos \left({\frac {\,\theta \,}{2}}\right)&e^{+i\phi }\,\sin \left({\frac {\,\theta \,}{2}}\right)\end{pmatrix}}} with eigenvalue +1, hence it acts like a projection operator.
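The projector property of the pure-state density matrix can be confirmed numerically. A NumPy sketch (the angles θ, φ are arbitrary illustrative values):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

theta, phi = 0.7, 1.9
a = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])                            # unit Bloch vector

rho = 0.5 * (np.eye(2) + sum(a[k] * sigma[k] for k in range(3)))

# The state eigenvector with eigenvalue +1
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
```

For a pure state, ρ is idempotent (ρ² = ρ), has unit trace, and fixes ψ, i.e. it is the projector onto ψ.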

Relation with the permutation operator

Let Pjk be the transposition (also known as a permutation) between two spins σj and σk living in the tensor product space ⁠ C 2 C 2 {\displaystyle \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}} ⁠,

P j k | σ j σ k = | σ k σ j . {\displaystyle P_{jk}\left|\sigma _{j}\sigma _{k}\right\rangle =\left|\sigma _{k}\sigma _{j}\right\rangle .}

This operator can also be written more explicitly as Dirac's spin exchange operator,

P j k = 1 2 ( σ j σ k + 1 )   . {\displaystyle P_{jk}={\frac {1}{2}}\,\left({\vec {\sigma }}_{j}\cdot {\vec {\sigma }}_{k}+1\right)~.}

Its eigenvalues are therefore 1 or −1. It may thus be utilized as an interaction term in a Hamiltonian, splitting the energy eigenvalues of its symmetric versus antisymmetric eigenstates.
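Concretely, on C² ⊗ C² the operator (σ⃗·σ⃗ + 1)/2 is the 4 × 4 SWAP matrix; the sketch below confirms the swap action and the ±1 eigenvalues:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac's spin exchange operator on C^2 (x) C^2:
# P = (sigma . sigma + 1) / 2
P = 0.5 * (np.kron(s1, s1) + np.kron(s2, s2)
           + np.kron(s3, s3) + np.eye(4))

# P swaps the two tensor factors: P (u (x) v) = v (x) u
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
assert np.allclose(P @ np.kron(u, v), np.kron(v, u))

# Eigenvalues: +1 (triplet, symmetric) and -1 (singlet, antisymmetric)
assert np.allclose(sorted(np.linalg.eigvalsh(P)), [-1, 1, 1, 1])
```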

SU(2)

The group SU(2) is the Lie group of unitary 2 × 2 matrices with unit determinant; its Lie algebra is the set of all 2 × 2 anti-Hermitian matrices with trace 0. Direct calculation, as above, shows that the Lie algebra s u 2 {\displaystyle {\mathfrak {su}}_{2}} is the three-dimensional real algebra spanned by the set {iσk}. In compact notation,

s u ( 2 ) = span { i σ 1 , i σ 2 , i σ 3 } . {\displaystyle {\mathfrak {su}}(2)=\operatorname {span} \{\;i\,\sigma _{1}\,,\;i\,\sigma _{2}\,,\;i\,\sigma _{3}\;\}.}

As a result, each iσj can be seen as an infinitesimal generator of SU(2). The elements of SU(2) are exponentials of linear combinations of these three generators, and multiply as indicated above in discussing the Pauli vector. Although this suffices to generate SU(2), it is not a proper representation of su(2), as the Pauli eigenvalues are scaled unconventionally. The conventional normalization is λ = ⁠1/2⁠, so that

s u ( 2 ) = span { i σ 1 2 , i σ 2 2 , i σ 3 2 } . {\displaystyle {\mathfrak {su}}(2)=\operatorname {span} \left\{{\frac {\,i\,\sigma _{1}\,}{2}},{\frac {\,i\,\sigma _{2}\,}{2}},{\frac {\,i\,\sigma _{3}\,}{2}}\right\}.}

As SU(2) is a compact group, its Cartan decomposition is trivial.

SO(3)

The Lie algebra s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} is isomorphic to the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , which corresponds to the Lie group SO(3), the group of rotations in three-dimensional space. In other words, one can say that the iσj are a realization (and, in fact, the lowest-dimensional realization) of infinitesimal rotations in three-dimensional space. However, even though s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} and s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} are isomorphic as Lie algebras, SU(2) and SO(3) are not isomorphic as Lie groups. SU(2) is actually a double cover of SO(3), meaning that there is a two-to-one group homomorphism from SU(2) to SO(3), see relationship between SO(3) and SU(2).

Quaternions

Main article: Spinor § Three dimensions

The real linear span of {I, iσ1, iσ2, iσ3} is isomorphic to the real algebra of quaternions, H {\displaystyle \mathbb {H} } , represented by the span of the basis vectors { 1 , i , j , k } . {\displaystyle \left\{\;\mathbf {1} ,\,\mathbf {i} ,\,\mathbf {j} ,\,\mathbf {k} \;\right\}.} The isomorphism from H {\displaystyle \mathbb {H} } to this set is given by the following map (notice the reversed signs for the Pauli matrices): 1 I , i σ 2 σ 3 = i σ 1 , j σ 3 σ 1 = i σ 2 , k σ 1 σ 2 = i σ 3 . {\displaystyle \mathbf {1} \mapsto I,\quad \mathbf {i} \mapsto -\sigma _{2}\sigma _{3}=-i\,\sigma _{1},\quad \mathbf {j} \mapsto -\sigma _{3}\sigma _{1}=-i\,\sigma _{2},\quad \mathbf {k} \mapsto -\sigma _{1}\sigma _{2}=-i\,\sigma _{3}.}

Alternatively, the isomorphism can be achieved by a map using the Pauli matrices in reversed order,

1 I , i i σ 3 , j i σ 2 , k i σ 1   . {\displaystyle \mathbf {1} \mapsto I,\quad \mathbf {i} \mapsto i\,\sigma _{3}\,,\quad \mathbf {j} \mapsto i\,\sigma _{2}\,,\quad \mathbf {k} \mapsto i\,\sigma _{1}~.}
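That the first map really is an isomorphism can be checked by verifying the defining quaternion relations on the images of the basis vectors:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# Quaternion units via the reversed-sign map:
# i -> -i sigma_1, j -> -i sigma_2, k -> -i sigma_3
qi, qj, qk = -1j*s1, -1j*s2, -1j*s3

# Defining relations: i^2 = j^2 = k^2 = ijk = -1
for q in (qi, qj, qk):
    assert np.allclose(q @ q, -I)
assert np.allclose(qi @ qj @ qk, -I)

# Cyclic products: ij = k, jk = i, ki = j
assert np.allclose(qi @ qj, qk)
assert np.allclose(qj @ qk, qi)
assert np.allclose(qk @ qi, qj)
```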

As the set of versors U ⊂ H {\displaystyle \mathbb {H} } forms a group isomorphic to SU(2), U gives yet another way of describing SU(2). The two-to-one homomorphism from SU(2) to SO(3) may be given in terms of the Pauli matrices in this formulation.

Physics

Classical mechanics

Main article: Quaternions and spatial rotation

In classical mechanics, Pauli matrices are useful in the context of the Cayley-Klein parameters. The matrix P corresponding to the position x {\displaystyle {\vec {x}}} of a point in space is defined in terms of the above Pauli vector matrix,

P = x σ = x σ x + y σ y + z σ z . {\displaystyle P={\vec {x}}\cdot {\vec {\sigma }}=x\,\sigma _{x}+y\,\sigma _{y}+z\,\sigma _{z}.}

Consequently, the transformation matrix Qθ for rotations about the x-axis through an angle θ may be written in terms of Pauli matrices and the unit matrix as

Q θ = 1 cos θ 2 + i σ x sin θ 2 . {\displaystyle Q_{\theta }={\boldsymbol {1}}\,\cos {\frac {\theta }{2}}+i\,\sigma _{x}\sin {\frac {\theta }{2}}.}

Similar expressions follow for general Pauli vector rotations as detailed above.
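The conjugation P ↦ Qθ P Qθ† implements the corresponding rotation of the point x⃗; the sketch below verifies this for a rotation about the x-axis (the sign of the 3 × 3 rotation matrix R is fixed by the convention for Qθ above):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def point_matrix(x):
    """P = x . sigma for a point x in R^3."""
    return x[0]*s1 + x[1]*s2 + x[2]*s3

theta = 0.9
# Rotation about the x-axis through angle theta:
# Q = 1 cos(theta/2) + i sigma_x sin(theta/2)
Q = np.cos(theta/2)*np.eye(2) + 1j*np.sin(theta/2)*s1

# Conjugating P by Q rotates the point: Q P Q^dagger = (R x) . sigma,
# where R is the SO(3) rotation matching Q's sign convention.
x = np.array([1.0, 2.0, 3.0])
R = np.array([[1, 0, 0],
              [0, np.cos(theta), np.sin(theta)],
              [0, -np.sin(theta), np.cos(theta)]])
assert np.allclose(Q @ point_matrix(x) @ Q.conj().T,
                   point_matrix(R @ x))
```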

Quantum mechanics

In quantum mechanics, each Pauli matrix is related to an angular momentum operator that corresponds to an observable describing the spin of a spin 1⁄2 particle, in each of the three spatial directions. As an immediate consequence of the Cartan decomposition mentioned above, the iσj are the generators of a projective representation (spin representation) of the rotation group SO(3) acting on non-relativistic particles with spin 1⁄2. The states of the particles are represented as two-component spinors. In the same way, the Pauli matrices are related to the isospin operator.

An interesting property of spin 1⁄2 particles is that they must be rotated by an angle of 4π in order to return to their original configuration. This is due to the two-to-one correspondence between SU(2) and SO(3) mentioned above, and the fact that, although one visualizes spin up/down as the north–south pole on the 2-sphere S2, they are actually represented by orthogonal vectors in the two-dimensional complex Hilbert space.

For a spin 1⁄2 particle, the spin operator is given by J = ⁠ħ/2⁠σ, the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using this spin operator and ladder operators. They can be found in Rotation group SO(3) § A note on Lie algebras. The analog formula to the above generalization of Euler's formula for Pauli matrices, the group element in terms of spin matrices, is tractable, but less simple.
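The Kronecker-product construction can be illustrated on the smallest case: two spin 1⁄2 particles combine into spin 1 ⊕ spin 0, which shows up in the eigenvalues ħ²s(s+1) of the total S². A sketch with ħ = 1:

```python
import numpy as np

hbar = 1.0
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
S = [hbar/2 * s for s in (s1, s2, s3)]  # spin operators J = (hbar/2) sigma

# su(2) commutation relation: [S_x, S_y] = i hbar S_z (and cyclic)
assert np.allclose(S[0] @ S[1] - S[1] @ S[0], 1j*hbar*S[2])

# Two spin-1/2 particles: total spin operators on C^2 (x) C^2
I = np.eye(2)
Stot = [np.kron(Sk, I) + np.kron(I, Sk) for Sk in S]
S2 = sum(Sk @ Sk for Sk in Stot)

# Eigenvalues of S^2 are hbar^2 s(s+1): s = 1 (triplet), s = 0 (singlet)
assert np.allclose(sorted(np.linalg.eigvalsh(S2).real), [0, 2, 2, 2])
```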

Also useful in the quantum mechanics of multiparticle systems, the general Pauli group Gn is defined to consist of all n-fold tensor products of Pauli matrices.
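A structural fact that makes these tensor products useful (for instance in quantum error correction) is that any two n-fold Pauli strings either commute or anticommute. A small sketch checking this exhaustively for n = 2 (the helper name pauli_string is ours):

```python
import numpy as np
from itertools import product

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, s1, s2, s3]

def pauli_string(indices):
    """n-fold tensor product, e.g. (1, 3) -> sigma_x (x) sigma_z."""
    out = np.array([[1.0 + 0j]])
    for k in indices:
        out = np.kron(out, paulis[k])
    return out

# Any two 2-qubit Pauli strings either commute or anticommute.
for a, b in product(product(range(4), repeat=2), repeat=2):
    A, B = pauli_string(a), pauli_string(b)
    assert np.allclose(A @ B, B @ A) or np.allclose(A @ B, -B @ A)
```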

Relativistic quantum mechanics

In relativistic quantum mechanics, the spinors in four dimensions are 4 × 1 (or 1 × 4) matrices. Hence the Pauli matrices or the Sigma matrices operating on these spinors have to be 4 × 4 matrices. They are defined in terms of 2 × 2 Pauli matrices as

Σ k = ( σ k 0 0 σ k ) . {\displaystyle {\mathsf {\Sigma }}_{k}={\begin{pmatrix}{\mathsf {\sigma }}_{k}&0\\0&{\mathsf {\sigma }}_{k}\end{pmatrix}}.}

It follows from this definition that the   Σ k   {\displaystyle \ {\mathsf {\Sigma }}_{k}\ } matrices have the same algebraic properties as the σk matrices.
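This is straightforward to verify numerically for the block-diagonal construction; a sketch checking involutivity and the commutation relation:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2))

def Sigma(sk):
    """Block-diagonal 4x4 Sigma_k = diag(sigma_k, sigma_k)."""
    return np.block([[sk, Z2], [Z2, sk]])

S1, S2, S3 = Sigma(s1), Sigma(s2), Sigma(s3)

# Same algebra as the 2x2 Pauli matrices:
# Sigma_k^2 = I and [Sigma_1, Sigma_2] = 2i Sigma_3 (and cyclic)
assert np.allclose(S1 @ S1, np.eye(4))
assert np.allclose(S1 @ S2 - S2 @ S1, 2j * S3)
```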

However, relativistic angular momentum is not a three-vector, but a second order four-tensor. Hence   Σ k   {\displaystyle \ {\mathsf {\Sigma }}_{k}\ } needs to be replaced by Σμν , the generator of Lorentz transformations on spinors. By the antisymmetry of angular momentum, the Σμν are also antisymmetric. Hence there are only six independent matrices.

The first three are the   Σ k ℓ ≡ ϵ j k ℓ Σ j . {\displaystyle \ \Sigma _{k\ell }\equiv \epsilon _{jk\ell }{\mathsf {\Sigma }}_{j}.} The remaining three are   − i Σ 0 k ≡ α k   , {\displaystyle \ -i\ \Sigma _{0k}\equiv {\mathsf {\alpha }}_{k}\ ,} where the Dirac matrices αk are defined as

α k = ( 0 σ k σ k 0 ) . {\displaystyle {\mathsf {\alpha }}_{k}={\begin{pmatrix}0&{\mathsf {\sigma }}_{k}\\{\mathsf {\sigma }}_{k}&0\end{pmatrix}}.}

The relativistic spin matrices Σμν are written in compact form in terms of commutator of gamma matrices as

Σ μ ν = i 2 [ γ μ , γ ν ] . {\displaystyle \Sigma _{\mu \nu }={\frac {i}{2}}\left[\gamma _{\mu },\gamma _{\nu }\right].}

Quantum information

In quantum information, single-qubit quantum gates are 2 × 2 unitary matrices. The Pauli matrices are some of the most important single-qubit operations. In that context, the Cartan decomposition given above is called the "Z–Y decomposition of a single-qubit gate". Choosing a different Cartan pair gives a similar "X–Y decomposition of a single-qubit gate".

See also

Remarks

  1. This conforms to the convention in mathematics for the matrix exponential, iσ ⟼ exp(iσ). In the convention in physics, σ ⟼ exp(−iσ), hence in it no pre-multiplication by i is necessary to land in SU(2).
  2. The Pauli vector is a formal device. It may be thought of as an element of M 2 ( C ) R 3 {\displaystyle {\mathcal {M}}_{2}(\mathbb {C} )\otimes \mathbb {R} ^{3}} , where the tensor product space is endowed with a mapping : R 3 × ( M 2 ( C ) R 3 ) M 2 ( C ) {\displaystyle \cdot :\mathbb {R} ^{3}\times ({\mathcal {M}}_{2}(\mathbb {C} )\otimes \mathbb {R} ^{3})\to {\mathcal {M}}_{2}(\mathbb {C} )} induced by the dot product on R 3 . {\displaystyle \mathbb {R} ^{3}.}
  3. The relation among a, b, c, n, m, k derived here in the 2 × 2 representation holds for all representations of SU(2), being a group identity. Note that, by virtue of the standard normalization of that group's generators as half the Pauli matrices, the parameters a,b,c correspond to half the rotation angles of the rotation group. That is, the Gibbs formula linked amounts to k ^ tan c / 2 = ( n ^ tan a / 2 + m ^ tan b / 2 m ^ × n ^ tan a / 2   tan b / 2 ) / ( 1 m ^ n ^ tan a / 2   tan b / 2 ) {\displaystyle {\hat {k}}\tan c/2=({\hat {n}}\tan a/2+{\hat {m}}\tan b/2-{\hat {m}}\times {\hat {n}}\tan a/2~\tan b/2)/(1-{\hat {m}}\cdot {\hat {n}}\tan a/2~\tan b/2)} .
  4. Explicitly, in the convention of "right-space matrices into elements of left-space matrices", it is ( 1 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 )   . {\displaystyle \left({\begin{smallmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{smallmatrix}}\right)~.}

Notes

  1. Gull, S. F.; Lasenby, A. N.; Doran, C. J. L. (January 1993). "Imaginary numbers are not Real – the geometric algebra of spacetime" (PDF). Found. Phys. 23 (9): 1175–1201. Bibcode:1993FoPh...23.1175G. doi:10.1007/BF01883676. S2CID 14670523. Retrieved 5 May 2023 – via geometry.mrao.cam.ac.uk.
  2. See the spinor map.
  3. Nielsen, Michael A.; Chuang, Isaac L. (2000). Quantum Computation and Quantum Information. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-63235-5. OCLC 43641333.
  4. Gibbs, J.W. (1884). "4. Concerning the differential and integral calculus of vectors". Elements of Vector Analysis. New Haven, CT: Tuttle, Moorehouse & Taylor. p. 67. In fact, however, the formula goes back to Olinde Rodrigues (1840), replete with half-angle: Rodrigues, Olinde (1840). "Des lois géometriques qui regissent les déplacements d' un systéme solide dans l' espace, et de la variation des coordonnées provenant de ces déplacement considérées indépendant des causes qui peuvent les produire" (PDF). J. Math. Pures Appl. 5: 380–440.
  5. Nakahara, Mikio (2003). Geometry, Topology, and Physics (2nd ed.). CRC Press. p. xxii. ISBN 978-0-7503-0606-5 – via Google Books.
  6. ^ Goldstein, Herbert (1959). Classical Mechanics. Addison-Wesley. pp. 109–118.
  7. Curtright, T L; Fairlie, D B; Zachos, C K (2014). "A compact formula for rotations as spin matrix polynomials". SIGMA. 10: 084. arXiv:1402.3541. Bibcode:2014SIGMA..10..084C. doi:10.3842/SIGMA.2014.084. S2CID 18776942.

References
