
Matrix exponential

Matrix operation generalizing exponentiation of scalar numbers

In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.

Let X be an n×n real or complex matrix. The exponential of X, denoted by e^X or exp(X), is the n×n matrix given by the power series

{\displaystyle e^{X}=\sum _{k=0}^{\infty }{\frac {1}{k!}}X^{k}}

where X^0 is defined to be the identity matrix I with the same dimensions as X. The series always converges, so the exponential of X is well-defined.

Equivalently, {\displaystyle e^{X}=\lim _{k\rightarrow \infty }\left(I+{\frac {X}{k}}\right)^{k}}

where I is the n×n identity matrix.

When X is an n×n diagonal matrix then exp(X) will be an n×n diagonal matrix with each diagonal element equal to the ordinary exponential applied to the corresponding diagonal element of X.
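As a quick numerical sanity check of the definition, one can compare a truncated power series and the limit formula against a library routine (a minimal sketch assuming NumPy and SciPy; the 2×2 matrix X is just an arbitrary example):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0.0, 1.0], [-2.0, -3.0]])

    # Truncated power series: sum of X^k / k! for k = 0..29
    S, term = np.eye(2), np.eye(2)
    for k in range(1, 30):
        term = term @ X / k      # X^k / k! from X^(k-1) / (k-1)!
        S += term
    print(np.allclose(S, expm(X)))                       # True

    # Limit formula (I + X/k)^k; convergence is only O(1/k)
    k = 10**6
    print(np.allclose(np.linalg.matrix_power(np.eye(2) + X / k, k),
                      expm(X), atol=1e-4))               # True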

Properties

Elementary properties

Let X and Y be n×n complex matrices and let a and b be arbitrary complex numbers. We denote the n×n identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following properties.

We begin with the properties that are immediate consequences of the definition as a power series:

  • e^0 = I, where 0 denotes the zero matrix;
  • exp(X^T) = (exp X)^T, where X^T denotes the transpose of X;
  • exp(X^*) = (exp X)^*, where X^* denotes the conjugate transpose of X;
  • if Y is invertible, then e^{YXY^{−1}} = Ye^X Y^{−1}.

The next key result is this one:

  • If XY = YX, then e^X e^Y = e^{X+Y}.

The proof of this identity is the same as the standard power-series argument for the corresponding identity for the exponential of real numbers. That is to say, as long as X and Y commute, it makes no difference to the argument whether X and Y are numbers or matrices. It is important to note that this identity typically does not hold if X and Y do not commute (see the Golden–Thompson inequality below).

Consequences of the preceding identity are the following:

  • e^{aX} e^{bX} = e^{(a+b)X}
  • e^X e^{−X} = I
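These identities are easy to test numerically; the sketch below (assuming NumPy/SciPy) verifies the commuting case and shows that the product rule generally fails for a non-commuting pair:

    import numpy as np
    from scipy.linalg import expm

    # Commuting pair: powers of the same matrix always commute.
    A = np.array([[1.0, 2.0], [0.0, 3.0]])
    X, Y = A, A @ A
    print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))   # True

    # e^X e^{-X} = I for any X
    print(np.allclose(expm(X) @ expm(-X), np.eye(2)))    # True

    # Non-commuting pair: the identity typically fails.
    X = np.array([[0.0, 1.0], [0.0, 0.0]])
    Y = np.array([[0.0, 0.0], [1.0, 0.0]])
    print(np.allclose(expm(X) @ expm(Y), expm(X + Y)))   # False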

Using the above results, we can easily verify the following claims. If X is symmetric then e^X is also symmetric, and if X is skew-symmetric then e^X is orthogonal. If X is Hermitian then e^X is also Hermitian, and if X is skew-Hermitian then e^X is unitary.

Finally, the Laplace transform of the matrix exponential amounts to the resolvent, {\displaystyle \int _{0}^{\infty }e^{-ts}e^{tX}\,dt=(sI-X)^{-1}} for all sufficiently large positive values of s.
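The resolvent identity can likewise be checked by numerical quadrature (a sketch assuming SciPy's quad_vec; s merely has to exceed the real parts of the eigenvalues of X):

    import numpy as np
    from scipy.integrate import quad_vec
    from scipy.linalg import expm

    X = np.array([[0.0, 1.0], [-1.0, -1.0]])
    s = 5.0   # a sufficiently large positive value

    integral, _ = quad_vec(lambda t: np.exp(-t * s) * expm(t * X), 0.0, np.inf)
    print(np.allclose(integral, np.linalg.inv(s * np.eye(2) - X)))   # True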

Linear differential equation systems

Main article: Matrix differential equation

One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of {\displaystyle {\frac {d}{dt}}y(t)=Ay(t),\quad y(0)=y_{0},} where A is a constant matrix and y is a column vector, is given by {\displaystyle y(t)=e^{At}y_{0}.}

The matrix exponential can also be used to solve the inhomogeneous equation {\displaystyle {\frac {d}{dt}}y(t)=Ay(t)+z(t),\quad y(0)=y_{0}.} See the section on applications below for examples.

There is no closed-form solution for differential equations of the form {\displaystyle {\frac {d}{dt}}y(t)=A(t)\,y(t),\quad y(0)=y_{0},} where A is not constant, but the Magnus series gives the solution as an infinite sum.
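For the constant-coefficient case, the exponential solution can be compared against a general-purpose ODE integrator (a sketch assuming NumPy/SciPy; A, y0 and t are arbitrary test data):

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    y0 = np.array([1.0, 0.0])
    t = 1.5

    y_exact = expm(A * t) @ y0                      # y(t) = e^{At} y_0
    sol = solve_ivp(lambda s, y: A @ y, (0.0, t), y0, rtol=1e-10, atol=1e-12)
    print(np.allclose(y_exact, sol.y[:, -1]))       # True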

The determinant of the matrix exponential

By Jacobi's formula, for any complex square matrix the following trace identity holds:

{\displaystyle \det \left(e^{A}\right)=e^{\operatorname {tr} (A)}~.}

In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right hand side of the above equation is always non-zero, and so det(e^A) ≠ 0, which implies that e^A must be invertible.

In the real-valued case, the formula also exhibits the map {\displaystyle \exp \colon M_{n}(\mathbb {R} )\to \mathrm {GL} (n,\mathbb {R} )} to not be surjective, in contrast to the complex case discussed below. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.
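A one-line numerical check of this trace identity (a sketch assuming NumPy/SciPy; the random matrix is just an example):

    import numpy as np
    from scipy.linalg import expm

    A = np.random.default_rng(0).standard_normal((4, 4))
    print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # True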

Real symmetric matrices

The matrix exponential of a real symmetric matrix is positive definite. Let S be an n×n real symmetric matrix and x ∈ R^n a column vector. Using the elementary properties of the matrix exponential and of symmetric matrices, we have:

{\displaystyle x^{T}e^{S}x=x^{T}e^{S/2}e^{S/2}x=x^{T}(e^{S/2})^{T}e^{S/2}x=(e^{S/2}x)^{T}e^{S/2}x=\lVert e^{S/2}x\rVert ^{2}\geq 0.}

Since e^{S/2} is invertible, the equality holds only for x = 0, and we have x^T e^S x > 0 for all non-zero x. Hence e^S is positive definite.
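Numerically, positive definiteness shows up as strictly positive eigenvalues of e^S (a sketch assuming NumPy/SciPy):

    import numpy as np
    from scipy.linalg import expm

    M = np.random.default_rng(1).standard_normal((4, 4))
    S = (M + M.T) / 2                                # a real symmetric matrix
    print(np.all(np.linalg.eigvalsh(expm(S)) > 0))   # True: e^S is positive definite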

The exponential of sums

For any real numbers (scalars) x and y we know that the exponential function satisfies e^{x+y} = e^x e^y. The same is true for commuting matrices. If matrices X and Y commute (meaning that XY = YX), then {\displaystyle e^{X+Y}=e^{X}e^{Y}.}

However, for matrices that do not commute the above equality does not necessarily hold.

The Lie product formula

Even if X and Y do not commute, the exponential e^{X+Y} can be computed by the Lie product formula {\displaystyle e^{X+Y}=\lim _{k\to \infty }\left(e^{{\frac {1}{k}}X}e^{{\frac {1}{k}}Y}\right)^{k}.}

Using a large finite k to approximate the above is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
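The convergence can be observed directly (a sketch assuming NumPy/SciPy; the error of the k-step Trotter product shrinks roughly like 1/k):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0.0, 1.0], [0.0, 0.0]])
    Y = np.array([[0.0, 0.0], [1.0, 0.0]])
    target = expm(X + Y)
    for k in (1, 10, 100, 1000):
        trotter = np.linalg.matrix_power(expm(X / k) @ expm(Y / k), k)
        print(k, np.linalg.norm(trotter - target))   # error decreases roughly as 1/k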

The Baker–Campbell–Hausdorff formula

In the other direction, if X and Y are sufficiently small (but not necessarily commuting) matrices, we have {\displaystyle e^{X}e^{Y}=e^{Z},} where Z may be computed as a series in commutators of X and Y by means of the Baker–Campbell–Hausdorff formula: {\displaystyle Z=X+Y+{\frac {1}{2}}[X,Y]+{\frac {1}{12}}[X,[X,Y]]-{\frac {1}{12}}[Y,[X,Y]]+\cdots ,} where the remaining terms are all iterated commutators involving X and Y. If X and Y commute, then all the commutators are zero and we have simply Z = X + Y.
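For small matrices, the truncated series can be compared against the exact Z = log(e^X e^Y) (a sketch assuming SciPy's expm and logm; with entries of order 0.01, the neglected terms are of fourth order and higher):

    import numpy as np
    from scipy.linalg import expm, logm

    def comm(A, B):
        return A @ B - B @ A        # the commutator [A, B]

    X = 0.01 * np.array([[0.0, 1.0], [0.0, 0.0]])
    Y = 0.01 * np.array([[0.0, 0.0], [1.0, 0.0]])

    Z_true = logm(expm(X) @ expm(Y))
    Z_bch = (X + Y + comm(X, Y) / 2
             + comm(X, comm(X, Y)) / 12 - comm(Y, comm(X, Y)) / 12)
    print(np.linalg.norm(Z_true - Z_bch))   # tiny: only higher-order terms remain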

Inequalities for exponentials of Hermitian matrices

Main article: Golden–Thompson inequality

For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials.

If A and B are Hermitian matrices, then {\displaystyle \operatorname {tr} \exp(A+B)\leq \operatorname {tr} \left[\exp(A)\exp(B)\right].}

There is no requirement of commutativity. There are counterexamples to show that the Golden–Thompson inequality cannot be extended to three matrices – and, in any event, tr(exp(A)exp(B)exp(C)) is not guaranteed to be real for Hermitian A, B, C. However, Lieb proved that it can be generalized to three matrices if we modify the expression as follows: {\displaystyle \operatorname {tr} \exp(A+B+C)\leq \int _{0}^{\infty }\mathrm {d} t\,\operatorname {tr} \left[e^{A}\left(e^{-B}+t\right)^{-1}e^{C}\left(e^{-B}+t\right)^{-1}\right].}
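A quick random test of the two-matrix inequality (a sketch assuming NumPy/SciPy; both traces are real for Hermitian A and B, so .real merely discards rounding noise):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    def hermitian(n):
        M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (M + M.conj().T) / 2

    A, B = hermitian(3), hermitian(3)
    lhs = np.trace(expm(A + B)).real
    rhs = np.trace(expm(A) @ expm(B)).real
    print(lhs <= rhs)   # True: Golden–Thompson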

The exponential map

The exponential of a matrix is always an invertible matrix. The inverse matrix of e^X is given by e^{−X}. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map {\displaystyle \exp \colon M_{n}(\mathbb {C} )\to \mathrm {GL} (n,\mathbb {C} )} from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible matrices. In fact, this map is surjective, which means that every invertible matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field C of complex numbers and not R).

For any two matrices X and Y, {\displaystyle \left\|e^{X+Y}-e^{X}\right\|\leq \|Y\|e^{\|X\|}e^{\|Y\|},}

where ‖ · ‖ denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of M_n(C).

The map {\displaystyle t\mapsto e^{tX},\qquad t\in \mathbb {R} } defines a smooth curve in the general linear group which passes through the identity element at t = 0.

In fact, this gives a one-parameter subgroup of the general linear group, since {\displaystyle e^{tX}e^{sX}=e^{(t+s)X}.}

The derivative of this curve (or tangent vector) at a point t is given by

{\displaystyle {\frac {d}{dt}}e^{tX}=Xe^{tX}=e^{tX}X.} (1)

The derivative at t = 0 is just the matrix X, which is to say that X generates this one-parameter subgroup.
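Both the group law and the derivative formula (1) are easy to confirm numerically (a sketch assuming NumPy/SciPy, with a central finite difference standing in for the derivative):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0.0, -1.0], [1.0, 0.0]])
    t, s, h = 0.7, 0.3, 1e-6

    print(np.allclose(expm(t * X) @ expm(s * X), expm((t + s) * X)))   # group law

    fd = (expm((t + h) * X) - expm((t - h) * X)) / (2 * h)
    print(np.allclose(fd, X @ expm(t * X), atol=1e-8))   # d/dt e^{tX} = X e^{tX}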

More generally, for a generic t-dependent exponent, X(t),

{\displaystyle {\frac {d}{dt}}e^{X(t)}=\int _{0}^{1}e^{\alpha X(t)}{\frac {dX(t)}{dt}}e^{(1-\alpha )X(t)}\,d\alpha ~.}

Taking the above expression e^{X(t)} outside the integral sign and expanding the integrand with the help of the Hadamard lemma one can obtain the following useful expression for the derivative of the matrix exponent, {\displaystyle \left({\frac {d}{dt}}e^{X(t)}\right)e^{-X(t)}={\frac {d}{dt}}X(t)+{\frac {1}{2!}}\left[X(t),{\frac {d}{dt}}X(t)\right]+{\frac {1}{3!}}\left[X(t),\left[X(t),{\frac {d}{dt}}X(t)\right]\right]+\cdots }

The coefficients in the expression above are different from what appears in the exponential. For a closed form, see derivative of the exponential map.

Directional derivatives when restricted to Hermitian matrices

Let X be an n×n Hermitian matrix with distinct eigenvalues, and let X = E diag(Λ) E^* be its eigendecomposition, where E is a unitary matrix whose columns are the eigenvectors of X, E^* is its conjugate transpose, and Λ = (λ_1, …, λ_n) is the vector of corresponding eigenvalues. Then, for any n×n Hermitian matrix V, the directional derivative of exp : X → e^X at X in the direction V is {\displaystyle D\exp(X)[V]\triangleq \lim _{\epsilon \to 0}{\frac {1}{\epsilon }}\left(e^{X+\epsilon V}-e^{X}\right)=E(G\odot {\bar {V}})E^{*}} where {\displaystyle {\bar {V}}=E^{*}VE}, the operator {\displaystyle \odot } denotes the Hadamard product, and, for all 1 ≤ i, j ≤ n, the matrix G is defined as {\displaystyle G_{i,j}=\left\{{\begin{aligned}&{\frac {e^{\lambda _{i}}-e^{\lambda _{j}}}{\lambda _{i}-\lambda _{j}}}&{\text{ if }}i\neq j,\\&e^{\lambda _{i}}&{\text{ otherwise}}.\\\end{aligned}}\right.} In addition, for any n×n Hermitian matrix U, the second directional derivative in directions U and V is {\displaystyle D^{2}\exp(X)[U,V]\triangleq \lim _{\epsilon _{u}\to 0}\lim _{\epsilon _{v}\to 0}{\frac {1}{4\epsilon _{u}\epsilon _{v}}}\left(e^{X+\epsilon _{u}U+\epsilon _{v}V}-e^{X-\epsilon _{u}U+\epsilon _{v}V}-e^{X+\epsilon _{u}U-\epsilon _{v}V}+e^{X-\epsilon _{u}U-\epsilon _{v}V}\right)=EF(U,V)E^{*}} where the matrix-valued function F is defined, for all 1 ≤ i, j ≤ n, as {\displaystyle F(U,V)_{i,j}=\sum _{k=1}^{n}\phi _{i,j,k}({\bar {U}}_{ik}{\bar {V}}_{jk}^{*}+{\bar {V}}_{ik}{\bar {U}}_{jk}^{*})} with {\displaystyle \phi _{i,j,k}=\left\{{\begin{aligned}&{\frac {G_{ik}-G_{jk}}{\lambda _{i}-\lambda _{j}}}&{\text{ if }}i\neq j,\\&{\frac {G_{ii}-G_{ik}}{\lambda _{i}-\lambda _{k}}}&{\text{ if }}i=j{\text{ and }}k\neq i,\\&{\frac {G_{ii}}{2}}&{\text{ if }}i=j=k.\\\end{aligned}}\right.}
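A sketch of the first-order formula (assuming NumPy/SciPy; the eigenvalues must be distinct for the divided differences in G to be well defined), checked against a finite difference:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    def hermitian(n):
        M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (M + M.conj().T) / 2

    n = 4
    X, V = hermitian(n), hermitian(n)
    lam, E = np.linalg.eigh(X)               # X = E diag(lam) E*

    # G: first divided differences of exp over the eigenvalues
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = (np.exp(lam[i]) if i == j
                       else (np.exp(lam[i]) - np.exp(lam[j])) / (lam[i] - lam[j]))

    V_bar = E.conj().T @ V @ E
    D = E @ (G * V_bar) @ E.conj().T         # Hadamard product is elementwise *

    eps = 1e-6
    fd = (expm(X + eps * V) - expm(X)) / eps
    print(np.linalg.norm(D - fd))            # small, of order eps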

Computing the matrix exponential

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Matlab, GNU Octave, R, and SciPy all use the Padé approximant. In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices. Subsequent sections describe methods suitable for numerical evaluation on large matrices.

Diagonalizable case

If a matrix is diagonal: {\displaystyle A={\begin{bmatrix}a_{1}&0&\cdots &0\\0&a_{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{n}\end{bmatrix}},} then its exponential can be obtained by exponentiating each entry on the main diagonal: {\displaystyle e^{A}={\begin{bmatrix}e^{a_{1}}&0&\cdots &0\\0&e^{a_{2}}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &e^{a_{n}}\end{bmatrix}}.}

This result also allows one to exponentiate diagonalizable matrices. If

A = UDU^{−1}

and D is diagonal, then

e^A = Ue^D U^{−1}.

Application of Sylvester's formula yields the same result. (To see this, note that addition and multiplication, hence also exponentiation, of diagonal matrices is equivalent to element-wise addition and multiplication, and hence exponentiation; in particular, the "one-dimensional" exponentiation is felt element-wise for the diagonal case.)

Example: Diagonalizable

For example, the matrix {\displaystyle A={\begin{bmatrix}1&4\\1&1\\\end{bmatrix}}} can be diagonalized as {\displaystyle {\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}{\begin{bmatrix}-1&0\\0&3\\\end{bmatrix}}{\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}^{-1}.}

Thus, {\displaystyle e^{A}={\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}e^{\begin{bmatrix}-1&0\\0&3\\\end{bmatrix}}{\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}^{-1}={\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}{\begin{bmatrix}{\frac {1}{e}}&0\\0&e^{3}\\\end{bmatrix}}{\begin{bmatrix}-2&2\\1&1\\\end{bmatrix}}^{-1}={\begin{bmatrix}{\frac {e^{4}+1}{2e}}&{\frac {e^{4}-1}{e}}\\{\frac {e^{4}-1}{4e}}&{\frac {e^{4}+1}{2e}}\\\end{bmatrix}}.}
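The closed form can be confirmed numerically (a sketch assuming NumPy/SciPy):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 4.0], [1.0, 1.0]])
    e = np.e
    closed = np.array([[(e**4 + 1) / (2 * e), (e**4 - 1) / e],
                       [(e**4 - 1) / (4 * e), (e**4 + 1) / (2 * e)]])
    print(np.allclose(expm(A), closed))   # True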

Nilpotent case

A matrix N is nilpotent if N^q = 0 for some integer q. In this case, the matrix exponential e^N can be computed directly from the series expansion, as the series terminates after a finite number of terms:

{\displaystyle e^{N}=I+N+{\frac {1}{2}}N^{2}+{\frac {1}{6}}N^{3}+\cdots +{\frac {1}{(q-1)!}}N^{q-1}~.}

Since the series has a finite number of terms, it is a matrix polynomial, which can be computed efficiently.
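For instance (a sketch assuming NumPy/SciPy), a strictly upper triangular 3×3 matrix satisfies N^3 = 0, so three terms of the series suffice:

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    N = np.array([[0.0, 1.0, 2.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])    # N^3 = 0

    series = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(3))
    print(np.allclose(series, expm(N)))   # True: the series terminates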

General case

Using the Jordan–Chevalley decomposition

By the Jordan–Chevalley decomposition, any n×n matrix X with complex entries can be expressed as X = A + N, where

  • A is diagonalizable
  • N is nilpotent
  • A commutes with N

This means that we can compute the exponential of X by reducing to the previous two cases: {\displaystyle e^{X}=e^{A+N}=e^{A}e^{N}.}

Note that we need the commutativity of A and N for the last step to work.

Using the Jordan canonical form

A closely related method is, if the field is algebraically closed, to work with the Jordan form of X. Suppose that X = PJP^{−1} where J is the Jordan form of X. Then {\displaystyle e^{X}=Pe^{J}P^{-1}.}

Also, since {\displaystyle {\begin{aligned}J&=J_{a_{1}}(\lambda _{1})\oplus J_{a_{2}}(\lambda _{2})\oplus \cdots \oplus J_{a_{n}}(\lambda _{n}),\\e^{J}&=\exp {\big (}J_{a_{1}}(\lambda _{1})\oplus J_{a_{2}}(\lambda _{2})\oplus \cdots \oplus J_{a_{n}}(\lambda _{n}){\big )}\\&=\exp {\big (}J_{a_{1}}(\lambda _{1}){\big )}\oplus \exp {\big (}J_{a_{2}}(\lambda _{2}){\big )}\oplus \cdots \oplus \exp {\big (}J_{a_{n}}(\lambda _{n}){\big )}.\end{aligned}}}

Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form {\displaystyle {\begin{aligned}&&J_{a}(\lambda )&=\lambda I+N\\&\Rightarrow &e^{J_{a}(\lambda )}&=e^{\lambda I+N}=e^{\lambda }e^{N},\end{aligned}}}

where N is a special nilpotent matrix. The matrix exponential of J is then given by {\displaystyle e^{J}=e^{\lambda _{1}}e^{N_{a_{1}}}\oplus e^{\lambda _{2}}e^{N_{a_{2}}}\oplus \cdots \oplus e^{\lambda _{n}}e^{N_{a_{n}}}.}

Projection case

If P is a projection matrix (i.e. P is idempotent: P^2 = P), its matrix exponential is:

e^P = I + (e − 1)P.

To derive this, expand the exponential function: each power of P reduces to P, which becomes a common factor of the sum: {\displaystyle e^{P}=\sum _{k=0}^{\infty }{\frac {P^{k}}{k!}}=I+\left(\sum _{k=1}^{\infty }{\frac {1}{k!}}\right)P=I+(e-1)P~.}
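A small check (a sketch assuming NumPy/SciPy; P is a rank-one orthogonal projection built from an arbitrary vector):

    import numpy as np
    from scipy.linalg import expm

    v = np.array([[1.0], [2.0]])
    P = v @ v.T / (v.T @ v)     # idempotent: P @ P = P
    print(np.allclose(expm(P), np.eye(2) + (np.e - 1) * P))   # True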

Rotation case

For a simple rotation in which the perpendicular unit vectors a and b specify a plane, the rotation matrix R can be expressed in terms of a similar exponential function involving a generator G and angle θ: {\displaystyle {\begin{aligned}G&=\mathbf {ba} ^{\mathsf {T}}-\mathbf {ab} ^{\mathsf {T}}&P&=-G^{2}=\mathbf {aa} ^{\mathsf {T}}+\mathbf {bb} ^{\mathsf {T}}\\P^{2}&=P&PG&=G=GP~,\end{aligned}}} {\displaystyle {\begin{aligned}R\left(\theta \right)=e^{G\theta }&=I+G\sin(\theta )+G^{2}(1-\cos(\theta ))\\&=I-P+P\cos(\theta )+G\sin(\theta )~.\\\end{aligned}}}

The formula for the exponential results from reducing the powers of G in the series expansion and identifying the respective series coefficients of G^2 and G with −cos(θ) and sin(θ) respectively. The second expression here for e^{Gθ} is the same as the expression for R(θ) in the article containing the derivation of the generator, R(θ) = e^{Gθ}.

In two dimensions, if {\displaystyle a={\begin{bmatrix}1\\0\end{bmatrix}}} and {\displaystyle b={\begin{bmatrix}0\\1\end{bmatrix}}}, then {\displaystyle G={\begin{bmatrix}0&-1\\1&0\end{bmatrix}}}, {\displaystyle G^{2}={\begin{bmatrix}-1&0\\0&-1\end{bmatrix}}}, and {\displaystyle R(\theta )={\begin{bmatrix}\cos(\theta )&-\sin(\theta )\\\sin(\theta )&\cos(\theta )\end{bmatrix}}=I\cos(\theta )+G\sin(\theta )} reduces to the standard matrix for a plane rotation.

The matrix P = −G^2 projects a vector onto the ab-plane, and the rotation only affects this part of the vector. An example illustrating this is a rotation of 30° = π/6 in the plane spanned by a and b:

{\displaystyle {\begin{aligned}\mathbf {a} &={\begin{bmatrix}1\\0\\0\\\end{bmatrix}}&\mathbf {b} &={\frac {1}{\sqrt {5}}}{\begin{bmatrix}0\\1\\2\\\end{bmatrix}}\end{aligned}}} {\displaystyle {\begin{aligned}G={\frac {1}{\sqrt {5}}}&{\begin{bmatrix}0&-1&-2\\1&0&0\\2&0&0\\\end{bmatrix}}&P=-G^{2}&={\frac {1}{5}}{\begin{bmatrix}5&0&0\\0&1&2\\0&2&4\\\end{bmatrix}}\\P{\begin{bmatrix}1\\2\\3\\\end{bmatrix}}={\frac {1}{5}}&{\begin{bmatrix}5\\8\\16\\\end{bmatrix}}=\mathbf {a} +{\frac {8}{\sqrt {5}}}\mathbf {b} &R\left({\frac {\pi }{6}}\right)&={\frac {1}{10}}{\begin{bmatrix}5{\sqrt {3}}&-{\sqrt {5}}&-2{\sqrt {5}}\\{\sqrt {5}}&8+{\sqrt {3}}&-4+2{\sqrt {3}}\\2{\sqrt {5}}&-4+2{\sqrt {3}}&2+4{\sqrt {3}}\\\end{bmatrix}}\\\end{aligned}}}

Let N = I − P, so that N^2 = N and its products with P and G are zero. This will allow us to evaluate powers of R.

{\displaystyle {\begin{aligned}R\left({\frac {\pi }{6}}\right)&=N+P{\frac {\sqrt {3}}{2}}+G{\frac {1}{2}}\\R\left({\frac {\pi }{6}}\right)^{2}&=N+P{\frac {1}{2}}+G{\frac {\sqrt {3}}{2}}\\R\left({\frac {\pi }{6}}\right)^{3}&=N+G\\R\left({\frac {\pi }{6}}\right)^{6}&=N-P\\R\left({\frac {\pi }{6}}\right)^{12}&=N+P=I\\\end{aligned}}}
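The whole example is easy to reproduce numerically (a sketch assuming NumPy/SciPy, using the same a and b as above):

    import numpy as np
    from scipy.linalg import expm

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 2.0]) / np.sqrt(5.0)

    G = np.outer(b, a) - np.outer(a, b)       # generator: G = b a^T - a b^T
    P = np.outer(a, a) + np.outer(b, b)       # projector onto the ab-plane, P = -G^2
    theta = np.pi / 6

    R = expm(G * theta)
    closed = np.eye(3) - P + P * np.cos(theta) + G * np.sin(theta)
    print(np.allclose(R, closed))                                  # True
    print(np.allclose(np.linalg.matrix_power(R, 12), np.eye(3)))   # twelve 30° steps = I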

Further information: Rodrigues' rotation formula and Axis–angle representation § Exponential map from so(3) to SO(3)

Evaluation by Laurent series

By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of degree n−1.

If P and Q_t are nonzero polynomials in one variable, such that P(A) = 0, and if the meromorphic function {\displaystyle f(z)={\frac {e^{tz}-Q_{t}(z)}{P(z)}}} is entire, then {\displaystyle e^{tA}=Q_{t}(A).} To prove this, multiply the first of the two above equalities by P(z) and replace z by A.

Such a polynomial Q_t(z) can be found as follows (see Sylvester's formula). Letting a be a root of P, Q_{a,t}(z) is solved from the product of P by the principal part of the Laurent series of f at a: it is proportional to the relevant Frobenius covariant. Then the sum S_t of the Q_{a,t}, where a runs over all the roots of P, can be taken as a particular Q_t. All the other Q_t will be obtained by adding a multiple of P to S_t(z). In particular, S_t(z), the Lagrange–Sylvester polynomial, is the only Q_t whose degree is less than that of P.

Example: Consider the case of an arbitrary 2×2 matrix, {\displaystyle A:={\begin{bmatrix}a&b\\c&d\end{bmatrix}}.}

The exponential matrix e^{tA}, by virtue of the Cayley–Hamilton theorem, must be of the form {\displaystyle e^{tA}=s_{0}(t)\,I+s_{1}(t)\,A.}

(For any complex number z and any C-algebra B, we denote again by z the product of z by the unit of B.)

Let α and β be the roots of the characteristic polynomial of A, {\displaystyle P(z)=z^{2}-(a+d)\ z+ad-bc=(z-\alpha )(z-\beta )~.}

Then we have {\displaystyle S_{t}(z)=e^{\alpha t}{\frac {z-\beta }{\alpha -\beta }}+e^{\beta t}{\frac {z-\alpha }{\beta -\alpha }}~,} hence {\displaystyle {\begin{aligned}s_{0}(t)&={\frac {\alpha \,e^{\beta t}-\beta \,e^{\alpha t}}{\alpha -\beta }},&s_{1}(t)&={\frac {e^{\alpha t}-e^{\beta t}}{\alpha -\beta }}\end{aligned}}}

if α ≠ β; while, if α = β, {\displaystyle S_{t}(z)=e^{\alpha t}(1+t(z-\alpha ))~,}

so that {\displaystyle {\begin{aligned}s_{0}(t)&=(1-\alpha \,t)\,e^{\alpha t},&s_{1}(t)&=t\,e^{\alpha t}~.\end{aligned}}}

Defining {\displaystyle {\begin{aligned}s&\equiv {\frac {\alpha +\beta }{2}}={\frac {\operatorname {tr} A}{2}}~,&q&\equiv {\frac {\alpha -\beta }{2}}=\pm {\sqrt {-\det \left(A-sI\right)}},\end{aligned}}}

we have {\displaystyle {\begin{aligned}s_{0}(t)&=e^{st}\left(\cosh(qt)-s{\frac {\sinh(qt)}{q}}\right),&s_{1}(t)&=e^{st}{\frac {\sinh(qt)}{q}},\end{aligned}}}

where sinh(qt)/q is 0 if t = 0, and t if q = 0.

Thus,

{\displaystyle e^{tA}=e^{st}\left(\left(\cosh(qt)-s{\frac {\sinh(qt)}{q}}\right)~I~+{\frac {\sinh(qt)}{q}}A\right)~.}

Thus, as indicated above, the matrix A having decomposed into the sum of two mutually commuting pieces, the traceful piece and the traceless piece, {\displaystyle A=sI+(A-sI)~,}

the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of Euler's formula for Pauli spin matrices, that is rotations of the doublet representation of the group SU(2).
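The closed form translates directly into code (a sketch assuming NumPy/SciPy; q may come out imaginary, so the intermediate arithmetic is complex and the real part is taken at the end for real input):

    import numpy as np
    from scipy.linalg import expm

    def expm_2x2(A, t=1.0):
        # e^{tA} = e^{st} [ (cosh(qt) - s sinh(qt)/q) I + (sinh(qt)/q) A ]
        s = np.trace(A) / 2
        q = np.lib.scimath.sqrt(-np.linalg.det(A - s * np.eye(2)))  # possibly imaginary
        r = t if q == 0 else np.sinh(q * t) / q
        out = np.exp(s * t) * ((np.cosh(q * t) - s * r) * np.eye(2) + r * A)
        return out.real if np.isrealobj(A) else out

    A = np.array([[1.0, 4.0], [1.0, 1.0]])
    print(np.allclose(expm_2x2(A), expm(A)))                  # True
    B = np.array([[0.0, -1.0], [1.0, 0.0]])                   # here q is imaginary
    print(np.allclose(expm_2x2(B, np.pi), expm(np.pi * B)))   # True (a half-turn)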

The polynomial S_t can also be given the following "interpolation" characterization. Define e_t(z) ≡ e^{tz}, and n ≡ deg P. Then S_t(z) is the unique degree < n polynomial which satisfies S_t^{(k)}(a) = e_t^{(k)}(a) whenever k is less than the multiplicity of a as a root of P. We assume, as we obviously can, that P is the minimal polynomial of A. We further assume that A is a diagonalizable matrix. In particular, the roots of P are simple, and the "interpolation" characterization indicates that S_t is given by the Lagrange interpolation formula, so it is the Lagrange–Sylvester polynomial.

At the other extreme, if P = (z − a)^n, then {\displaystyle S_{t}=e^{at}\ \sum _{k=0}^{n-1}\ {\frac {t^{k}}{k!}}\ (z-a)^{k}~.}

The simplest case not covered by the above observations is when {\displaystyle P=(z-a)^{2}\,(z-b)} with a ≠ b, which yields {\displaystyle S_{t}=e^{at}\ {\frac {z-b}{a-b}}\ \left(1+\left(t+{\frac {1}{b-a}}\right)(z-a)\right)+e^{bt}\ {\frac {(z-a)^{2}}{(b-a)^{2}}}.}

Evaluation by implementation of Sylvester's formula

A practical, expedited computation of the above reduces to the following rapid steps. Recall from above that an n×n matrix exp(tA) amounts to a linear combination of the first n−1 powers of A by the Cayley–Hamilton theorem. For diagonalizable matrices, as illustrated above, e.g. in the 2×2 case, Sylvester's formula yields exp(tA) = B_α e^{αt} + B_β e^{βt}, where the Bs are the Frobenius covariants of A.

It is easiest, however, to simply solve for these Bs directly, by evaluating this expression and its first derivative at t = 0, in terms of A and I, to find the same answer as above.

But this simple procedure also works for defective matrices, in a generalization due to Buchheim. This is illustrated here for a 4×4 example of a matrix which is not diagonalizable, and the Bs are not projection matrices.

Consider {\displaystyle A={\begin{bmatrix}1&1&0&0\\0&1&1&0\\0&0&1&-{\frac {1}{8}}\\0&0&{\frac {1}{2}}&{\frac {1}{2}}\end{bmatrix}}~,} with eigenvalues λ_1 = 3/4 and λ_2 = 1, each with a multiplicity of two.

Consider the exponential of each eigenvalue multiplied by t, exp(λ_i t). Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix B_i. If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of t for each repetition, to ensure linear independence.

(If one eigenvalue had a multiplicity of three, then there would be the three terms: {\displaystyle B_{i_{1}}e^{\lambda _{i}t},~B_{i_{2}}te^{\lambda _{i}t},~B_{i_{3}}t^{2}e^{\lambda _{i}t}}. By contrast, when all eigenvalues are distinct, the Bs are just the Frobenius covariants, and solving for them as below just amounts to the inversion of the Vandermonde matrix of these 4 eigenvalues.)

Sum all such terms, here four such: {\displaystyle {\begin{aligned}e^{At}&=B_{1_{1}}e^{\lambda _{1}t}+B_{1_{2}}te^{\lambda _{1}t}+B_{2_{1}}e^{\lambda _{2}t}+B_{2_{2}}te^{\lambda _{2}t},\\e^{At}&=B_{1_{1}}e^{{\frac {3}{4}}t}+B_{1_{2}}te^{{\frac {3}{4}}t}+B_{2_{1}}e^{1t}+B_{2_{2}}te^{1t}~.\end{aligned}}}

To solve for all of the unknown matrices B in terms of the first three powers of A and the identity, one needs four equations, the above one providing one such at t = 0. Further, differentiate it with respect to t, {\displaystyle Ae^{At}={\frac {3}{4}}B_{1_{1}}e^{{\frac {3}{4}}t}+\left({\frac {3}{4}}t+1\right)B_{1_{2}}e^{{\frac {3}{4}}t}+1B_{2_{1}}e^{1t}+\left(1t+1\right)B_{2_{2}}e^{1t}~,}

and again, {\displaystyle {\begin{aligned}A^{2}e^{At}&=\left({\frac {3}{4}}\right)^{2}B_{1_{1}}e^{{\frac {3}{4}}t}+\left(\left({\frac {3}{4}}\right)^{2}t+\left({\frac {3}{4}}+1\cdot {\frac {3}{4}}\right)\right)B_{1_{2}}e^{{\frac {3}{4}}t}+B_{2_{1}}e^{1t}+\left(1^{2}t+(1+1\cdot 1)\right)B_{2_{2}}e^{1t}\\&=\left({\frac {3}{4}}\right)^{2}B_{1_{1}}e^{{\frac {3}{4}}t}+\left(\left({\frac {3}{4}}\right)^{2}t+{\frac {3}{2}}\right)B_{1_{2}}e^{{\frac {3}{4}}t}+B_{2_{1}}e^{t}+\left(t+2\right)B_{2_{2}}e^{t}~,\end{aligned}}}

and once more, {\displaystyle {\begin{aligned}A^{3}e^{At}&=\left({\frac {3}{4}}\right)^{3}B_{1_{1}}e^{{\frac {3}{4}}t}+\left(\left({\frac {3}{4}}\right)^{3}t+\left(\left({\frac {3}{4}}\right)^{2}+\left({\frac {3}{2}}\right)\cdot {\frac {3}{4}}\right)\right)B_{1_{2}}e^{{\frac {3}{4}}t}+B_{2_{1}}e^{1t}+\left(1^{3}t+(1+2)\cdot 1\right)B_{2_{2}}e^{1t}\\&=\left({\frac {3}{4}}\right)^{3}B_{1_{1}}e^{{\frac {3}{4}}t}\!+\left(\left({\frac {3}{4}}\right)^{3}t\!+{\frac {27}{16}}\right)B_{1_{2}}e^{{\frac {3}{4}}t}\!+B_{2_{1}}e^{t}\!+\left(t+3\cdot 1\right)B_{2_{2}}e^{t}~.\end{aligned}}}

(In the general case, n−1 derivatives need be taken.)

Setting t = 0 in these four equations, the four coefficient matrices Bs may now be solved for, {\displaystyle {\begin{aligned}I&=B_{1_{1}}+B_{2_{1}}\\A&={\frac {3}{4}}B_{1_{1}}+B_{1_{2}}+B_{2_{1}}+B_{2_{2}}\\A^{2}&=\left({\frac {3}{4}}\right)^{2}B_{1_{1}}+{\frac {3}{2}}B_{1_{2}}+B_{2_{1}}+2B_{2_{2}}\\A^{3}&=\left({\frac {3}{4}}\right)^{3}B_{1_{1}}+{\frac {27}{16}}B_{1_{2}}+B_{2_{1}}+3B_{2_{2}}~,\end{aligned}}}

to yield {\displaystyle {\begin{aligned}B_{1_{1}}&=128A^{3}-336A^{2}+288A-80I\\B_{1_{2}}&=16A^{3}-44A^{2}+40A-12I\\B_{2_{1}}&=-128A^{3}+336A^{2}-288A+81I\\B_{2_{2}}&=16A^{3}-40A^{2}+33A-9I~.\end{aligned}}}

Substituting with the value for A yields the coefficient matrices {\displaystyle {\begin{aligned}B_{1_{1}}&={\begin{bmatrix}0&0&48&-16\\0&0&-8&2\\0&0&1&0\\0&0&0&1\end{bmatrix}}\\B_{1_{2}}&={\begin{bmatrix}0&0&4&-2\\0&0&-1&{\frac {1}{2}}\\0&0&{\frac {1}{4}}&-{\frac {1}{8}}\\0&0&{\frac {1}{2}}&-{\frac {1}{4}}\end{bmatrix}}\\B_{2_{1}}&={\begin{bmatrix}1&0&-48&16\\0&1&8&-2\\0&0&0&0\\0&0&0&0\end{bmatrix}}\\B_{2_{2}}&={\begin{bmatrix}0&1&8&-2\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}}\end{aligned}}}

so the final answer is {\displaystyle e^{tA}={\begin{bmatrix}e^{t}&te^{t}&\left(8t-48\right)e^{t}\!+\left(4t+48\right)e^{{\frac {3}{4}}t}&\left(16-2\,t\right)e^{t}\!+\left(-2t-16\right)e^{{\frac {3}{4}}t}\\0&e^{t}&8e^{t}\!+\left(-t-8\right)e^{{\frac {3}{4}}t}&-2e^{t}+{\frac {t+4}{2}}e^{{\frac {3}{4}}t}\\0&0&{\frac {t+4}{4}}e^{{\frac {3}{4}}t}&-{\frac {t}{8}}e^{{\frac {3}{4}}t}\\0&0&{\frac {t}{2}}e^{{\frac {3}{4}}t}&-{\frac {t-4}{4}}e^{{\frac {3}{4}}t}\end{bmatrix}}~.}

The procedure is much shorter than Putzer's algorithm sometimes utilized in such cases.
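The resulting decomposition can be checked numerically (a sketch assuming NumPy/SciPy; t = 0.8 is an arbitrary test point):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, -0.125],
                  [0.0, 0.0, 0.5, 0.5]])
    I = np.eye(4)
    A2, A3 = A @ A, A @ A @ A

    B11 = 128 * A3 - 336 * A2 + 288 * A - 80 * I
    B12 = 16 * A3 - 44 * A2 + 40 * A - 12 * I
    B21 = -128 * A3 + 336 * A2 - 288 * A + 81 * I
    B22 = 16 * A3 - 40 * A2 + 33 * A - 9 * I

    t = 0.8
    rhs = (B11 + t * B12) * np.exp(0.75 * t) + (B21 + t * B22) * np.exp(t)
    print(np.allclose(expm(t * A), rhs))   # True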

See also: Derivative of the exponential map

Illustrations

Suppose that we want to compute the exponential of {\displaystyle B={\begin{bmatrix}21&17&6\\-5&-1&-6\\4&4&16\end{bmatrix}}.}

Its Jordan form is {\displaystyle J=P^{-1}BP={\begin{bmatrix}4&0&0\\0&16&1\\0&0&16\end{bmatrix}},} where the matrix P is given by {\displaystyle P={\begin{bmatrix}-{\frac {1}{4}}&2&{\frac {5}{4}}\\{\frac {1}{4}}&-2&-{\frac {1}{4}}\\0&4&0\end{bmatrix}}.}

Let us first calculate exp(J). We have {\displaystyle J=J_{1}(4)\oplus J_{2}(16).}

The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so exp(J_1(4)) = [e^4]. The exponential of J_2(16) can be calculated by the formula e^{λI+N} = e^λ e^N mentioned above; this yields

{\displaystyle {\begin{aligned}&\exp \left({\begin{bmatrix}16&1\\0&16\end{bmatrix}}\right)=e^{16}\exp \left({\begin{bmatrix}0&1\\0&0\end{bmatrix}}\right)=\\{}={}&e^{16}\left({\begin{bmatrix}1&0\\0&1\end{bmatrix}}+{\begin{bmatrix}0&1\\0&0\end{bmatrix}}+{1 \over 2!}{\begin{bmatrix}0&0\\0&0\end{bmatrix}}+\cdots {}\right)={\begin{bmatrix}e^{16}&e^{16}\\0&e^{16}\end{bmatrix}}.\end{aligned}}}

Therefore, the exponential of the original matrix B is {\displaystyle {\begin{aligned}\exp(B)&=P\exp(J)P^{-1}=P{\begin{bmatrix}e^{4}&0&0\\0&e^{16}&e^{16}\\0&0&e^{16}\end{bmatrix}}P^{-1}\\&={1 \over 4}{\begin{bmatrix}13e^{16}-e^{4}&13e^{16}-5e^{4}&2e^{16}-2e^{4}\\-9e^{16}+e^{4}&-9e^{16}+5e^{4}&-2e^{16}+2e^{4}\\16e^{16}&16e^{16}&4e^{16}\end{bmatrix}}.\end{aligned}}}
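Again this can be confirmed numerically (a sketch assuming NumPy/SciPy):

    import numpy as np
    from scipy.linalg import expm

    B = np.array([[21.0, 17.0, 6.0],
                  [-5.0, -1.0, -6.0],
                  [4.0, 4.0, 16.0]])
    e4, e16 = np.exp(4.0), np.exp(16.0)
    closed = np.array([[13 * e16 - e4, 13 * e16 - 5 * e4, 2 * e16 - 2 * e4],
                       [-9 * e16 + e4, -9 * e16 + 5 * e4, -2 * e16 + 2 * e4],
                       [16 * e16, 16 * e16, 4 * e16]]) / 4
    print(np.allclose(expm(B), closed))   # True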

Applications

Linear differential equations

The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a homogeneous differential equation of the form {\displaystyle \mathbf {y} '=A\mathbf {y} } has solution e^{At} y(0).

If we consider the vector {\displaystyle \mathbf {y} (t)={\begin{bmatrix}y_{1}(t)\\\vdots \\y_{n}(t)\end{bmatrix}}~,} we can express a system of inhomogeneous coupled linear differential equations as {\displaystyle \mathbf {y} '(t)=A\mathbf {y} (t)+\mathbf {b} (t).} Making an ansatz to use an integrating factor of e^{−At} and multiplying throughout yields {\displaystyle {\begin{aligned}&&e^{-At}\mathbf {y} '-e^{-At}A\mathbf {y} &=e^{-At}\mathbf {b} \\&\Rightarrow &e^{-At}\mathbf {y} '-Ae^{-At}\mathbf {y} &=e^{-At}\mathbf {b} \\&\Rightarrow &{\frac {d}{dt}}\left(e^{-At}\mathbf {y} \right)&=e^{-At}\mathbf {b} ~.\end{aligned}}}

The second step is possible due to the fact that, if AB = BA, then e^{At}B = Be^{At}. So, calculating e^{At} leads to the solution to the system, by simply integrating the third step with respect to t.

A solution to this can be obtained by integrating and multiplying by e^{At} to eliminate the exponent in the LHS. Notice that while e^{At} is a matrix, given that it is a matrix exponential, we can say that e^{At} e^{−At} = I. In other words, exp(At) = (exp(−At))^{−1}.

Example (homogeneous)

Consider the system {\displaystyle {\begin{matrix}x'&=&2x&-y&+z\\y'&=&&3y&-z\\z'&=&2x&+y&+3z\end{matrix}}~.}

The associated defective matrix is {\displaystyle A={\begin{bmatrix}2&-1&1\\0&3&-1\\2&1&3\end{bmatrix}}~.}

The matrix exponential is {\displaystyle e^{tA}={\frac {1}{2}}{\begin{bmatrix}e^{2t}\left(1+e^{2t}-2t\right)&-2te^{2t}&e^{2t}\left(-1+e^{2t}\right)\\-e^{2t}\left(-1+e^{2t}-2t\right)&2(t+1)e^{2t}&-e^{2t}\left(-1+e^{2t}\right)\\e^{2t}\left(-1+e^{2t}+2t\right)&2te^{2t}&e^{2t}\left(1+e^{2t}\right)\end{bmatrix}}~,}

so that the general solution of the homogeneous system is {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}={\frac {x(0)}{2}}{\begin{bmatrix}e^{2t}\left(1+e^{2t}-2t\right)\\-e^{2t}\left(-1+e^{2t}-2t\right)\\e^{2t}\left(-1+e^{2t}+2t\right)\end{bmatrix}}+{\frac {y(0)}{2}}{\begin{bmatrix}-2te^{2t}\\2(t+1)e^{2t}\\2te^{2t}\end{bmatrix}}+{\frac {z(0)}{2}}{\begin{bmatrix}e^{2t}\left(-1+e^{2t}\right)\\-e^{2t}\left(-1+e^{2t}\right)\\e^{2t}\left(1+e^{2t}\right)\end{bmatrix}}~,}

amounting to {\displaystyle {\begin{aligned}2x&=x(0)e^{2t}\left(1+e^{2t}-2t\right)+y(0)\left(-2te^{2t}\right)+z(0)e^{2t}\left(-1+e^{2t}\right)\\2y&=x(0)\left(-e^{2t}\right)\left(-1+e^{2t}-2t\right)+y(0)2(t+1)e^{2t}+z(0)\left(-e^{2t}\right)\left(-1+e^{2t}\right)\\2z&=x(0)e^{2t}\left(-1+e^{2t}+2t\right)+y(0)2te^{2t}+z(0)e^{2t}\left(1+e^{2t}\right)~.\end{aligned}}}

Example (inhomogeneous)

Consider now the inhomogeneous system {\displaystyle {\begin{matrix}x'&=&2x&-&y&+&z&+&e^{2t}\\y'&=&&&3y&-&z&\\z'&=&2x&+&y&+&3z&+&e^{2t}\end{matrix}}~.}

We again have {\displaystyle A={\begin{bmatrix}2&-1&1\\0&3&-1\\2&1&3\end{bmatrix}}~,}

and {\displaystyle \mathbf {b} =e^{2t}{\begin{bmatrix}1\\0\\1\end{bmatrix}}.}

From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution.

We have, by the above, {\displaystyle {\begin{aligned}\mathbf {y} _{p}&=e^{tA}\int _{0}^{t}e^{(-u)A}{\begin{bmatrix}e^{2u}\\0\\e^{2u}\end{bmatrix}}\,du+e^{tA}\mathbf {c} \\&=e^{tA}\int _{0}^{t}{\begin{bmatrix}2e^{u}-2ue^{2u}&-2ue^{2u}&0\\-2e^{u}+2(u+1)e^{2u}&2(u+1)e^{2u}&0\\2ue^{2u}&2ue^{2u}&2e^{u}\end{bmatrix}}{\begin{bmatrix}e^{2u}\\0\\e^{2u}\end{bmatrix}}\,du+e^{tA}\mathbf {c} \\&=e^{tA}\int _{0}^{t}{\begin{bmatrix}e^{2u}\left(2e^{u}-2ue^{2u}\right)\\e^{2u}\left(-2e^{u}+2(1+u)e^{2u}\right)\\2e^{3u}+2ue^{4u}\end{bmatrix}}\,du+e^{tA}\mathbf {c} \\&=e^{tA}{\begin{bmatrix}-{1 \over 24}e^{3t}\left(3e^{t}(4t-1)-16\right)\\{1 \over 24}e^{3t}\left(3e^{t}(4t+4)-16\right)\\{1 \over 24}e^{3t}\left(3e^{t}(4t-1)-16\right)\end{bmatrix}}+{\begin{bmatrix}2e^{t}-2te^{2t}&-2te^{2t}&0\\-2e^{t}+2(t+1)e^{2t}&2(t+1)e^{2t}&0\\2te^{2t}&2te^{2t}&2e^{t}\end{bmatrix}}{\begin{bmatrix}c_{1}\\c_{2}\\c_{3}\end{bmatrix}}~,\end{aligned}}} which could be further simplified to get the requisite particular solution determined through variation of parameters. Note that c = y_p(0). For more rigor, see the following generalization.

Inhomogeneous case generalization: variation of parameters

For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters). We seek a particular solution of the form y_p(t) = exp(tA) z(t): {\displaystyle {\begin{aligned}\mathbf {y} _{p}'(t)&=\left(e^{tA}\right)'\mathbf {z} (t)+e^{tA}\mathbf {z} '(t)\\&=Ae^{tA}\mathbf {z} (t)+e^{tA}\mathbf {z} '(t)\\&=A\mathbf {y} _{p}(t)+e^{tA}\mathbf {z} '(t)~.\end{aligned}}}

For y_p to be a solution, {\displaystyle {\begin{aligned}e^{tA}\mathbf {z} '(t)&=\mathbf {b} (t)\\\mathbf {z} '(t)&=\left(e^{tA}\right)^{-1}\mathbf {b} (t)\\\mathbf {z} (t)&=\int _{0}^{t}e^{-uA}\mathbf {b} (u)\,du+\mathbf {c} ~.\end{aligned}}}

Thus, {\displaystyle {\begin{aligned}\mathbf {y} _{p}(t)&=e^{tA}\int _{0}^{t}e^{-uA}\mathbf {b} (u)\,du+e^{tA}\mathbf {c} \\&=\int _{0}^{t}e^{(t-u)A}\mathbf {b} (u)\,du+e^{tA}\mathbf {c} ~,\end{aligned}}} where c is determined by the initial conditions of the problem.
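The full solution e^{tA} y(0) + y_p(t) can be cross-checked against a numerical integrator (a sketch assuming NumPy/SciPy, applied to the inhomogeneous example above with an arbitrary initial vector):

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec, solve_ivp

    A = np.array([[2.0, -1.0, 1.0],
                  [0.0, 3.0, -1.0],
                  [2.0, 1.0, 3.0]])
    b = lambda u: np.exp(2 * u) * np.array([1.0, 0.0, 1.0])
    y0 = np.array([1.0, 1.0, 1.0])
    t = 1.0

    # y(t) = e^{tA} y0 + int_0^t e^{(t-u)A} b(u) du   (here c = y_p(0) = 0)
    y_part, _ = quad_vec(lambda u: expm((t - u) * A) @ b(u), 0.0, t)
    y_expm = expm(t * A) @ y0 + y_part

    sol = solve_ivp(lambda s, y: A @ y + b(s), (0.0, t), y0,
                    rtol=1e-10, atol=1e-12)
    print(np.allclose(y_expm, sol.y[:, -1]))   # True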

More precisely, consider the equation {\displaystyle Y'-A\ Y=F(t)}

with the initial condition Y(t_0) = Y_0, where

  • A is an n by n complex matrix,
  • F is a continuous function from some open interval I to C^n,
  • t_0 is a point of I, and
  • Y_0 is a vector of C^n.

Left-multiplying the above displayed equality by e^{−tA} and integrating yields {\displaystyle Y(t)=e^{(t-t_{0})A}\ Y_{0}+\int _{t_{0}}^{t}e^{(t-x)A}\ F(x)\ dx~.}

We claim that the solution to the equation {\displaystyle P(d/dt)\ y=f(t)}

with the initial conditions y^{(k)}(t_0) = y_k for 0 ≤ k < n is {\displaystyle y(t)=\sum _{k=0}^{n-1}\ y_{k}\ s_{k}(t-t_{0})+\int _{t_{0}}^{t}s_{n-1}(t-x)\ f(x)\ dx~,}

where the notation is as follows:

  • {\displaystyle P\in \mathbb {C} [X]} is a monic polynomial of degree n > 0,
  • f is a continuous complex valued function defined on some open interval I,
  • t_0 is a point of I,
  • y_k is a complex number, and

s_k(t) is the coefficient of X^k in the polynomial denoted by {\displaystyle S_{t}\in \mathbb {C} [X]} in Subsection Evaluation by Laurent series above.

To justify this claim, we transform our order n scalar equation into an order one vector equation by the usual reduction to a first order system. Our vector equation takes the form {\displaystyle {\frac {dY}{dt}}-A\ Y=F(t),\quad Y(t_{0})=Y_{0},} where A is the transpose companion matrix of P. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection Evaluation by implementation of Sylvester's formula above.

In the case n = 2 we get the following statement. The solution to {\displaystyle y''-(\alpha +\beta )\ y'+\alpha \,\beta \ y=f(t),\quad y(t_{0})=y_{0},\quad y'(t_{0})=y_{1}}

is {\displaystyle y(t)=y_{0}\ s_{0}(t-t_{0})+y_{1}\ s_{1}(t-t_{0})+\int _{t_{0}}^{t}s_{1}(t-x)\,f(x)\ dx,}

where the functions s_0 and s_1 are as in Subsection Evaluation by Laurent series above.

Matrix-matrix exponentials

The matrix exponential of another matrix (matrix-matrix exponential) is defined as {\displaystyle X^{Y}=e^{\log(X)\cdot Y}} and {\displaystyle ^{Y}\!X=e^{Y\cdot \log(X)}} for any normal and non-singular n×n matrix X, and any complex n×n matrix Y.

For matrix-matrix exponentials, there is a distinction between the left exponential ^Y X and the right exponential X^Y, because matrix multiplication is not commutative. Moreover,

  • If X is normal and non-singular, then X^Y and ^Y X have the same set of eigenvalues.
  • If X is normal and non-singular, Y is normal, and XY = YX, then X^Y = ^Y X.
  • If X is normal and non-singular, and X, Y, Z commute with each other, then X^{Y+Z} = X^Y · X^Z and ^{Y+Z}X = ^Y X · ^Z X.
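A minimal numerical illustration (a sketch assuming SciPy's expm and logm; a symmetric positive definite X is normal, non-singular, and has a real logarithm):

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(4)
    M = rng.standard_normal((3, 3))
    X = M @ M.T + 3 * np.eye(3)       # symmetric positive definite
    Y = rng.standard_normal((3, 3))

    right = expm(logm(X) @ Y)         # the right exponential X^Y
    left = expm(Y @ logm(X))          # the left exponential ^Y X
    print(np.allclose(left, right))   # generally False: the two differ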
