
Baker–Campbell–Hausdorff formula

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.
Formula in Lie theory

In mathematics, the Baker–Campbell–Hausdorff formula gives the value of $Z$ that solves the equation $e^X e^Y = e^Z$ for possibly noncommutative $X$ and $Y$ in the Lie algebra of a Lie group. There are various ways of writing the formula, but all ultimately yield an expression for $Z$ in Lie algebraic terms, that is, as a formal series (not necessarily convergent) in $X$ and $Y$ and iterated commutators thereof. The first few terms of this series are
$$ Z = X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}[X,[X,Y]] - \tfrac{1}{12}[Y,[X,Y]] + \cdots, $$
where the ellipsis indicates terms involving higher commutators of $X$ and $Y$. If $X$ and $Y$ are sufficiently small elements of the Lie algebra $\mathfrak{g}$ of a Lie group $G$, the series is convergent. Meanwhile, every element $g$ sufficiently close to the identity in $G$ can be expressed as $g = e^X$ for a small $X$ in $\mathfrak{g}$. Thus, we can say that near the identity the group multiplication in $G$, written as $e^X e^Y = e^Z$, can be expressed in purely Lie algebraic terms. The Baker–Campbell–Hausdorff formula can be used to give comparatively simple proofs of deep results in the Lie group–Lie algebra correspondence.

If $X$ and $Y$ are sufficiently small $n \times n$ matrices, then $Z$ can be computed as the logarithm of $e^X e^Y$, where the exponentials and the logarithm can be computed as power series. The point of the Baker–Campbell–Hausdorff formula is then the highly nonobvious claim that $Z := \log\left(e^X e^Y\right)$ can be expressed as a series in repeated commutators of $X$ and $Y$.
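
As a minimal numerical sketch of this claim (in Python, assuming NumPy and SciPy are available; the helper `comm` is defined here only for the illustration), the matrix logarithm of $e^X e^Y$ for small random matrices can be compared against the first few commutator terms of the series:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)

# Two small random 3x3 matrices (small enough for the series to converge).
X = 0.1 * rng.standard_normal((3, 3))
Y = 0.1 * rng.standard_normal((3, 3))

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Exact Z = log(exp(X) exp(Y)), computed with dense matrix functions.
Z_exact = logm(expm(X) @ expm(Y))

# Truncated Baker-Campbell-Hausdorff series (terms through third order).
Z_bch = (X + Y
         + 0.5 * comm(X, Y)
         + (1/12) * comm(X, comm(X, Y))
         - (1/12) * comm(Y, comm(X, Y)))

print(np.linalg.norm(Z_exact - (X + Y)))   # noticeable error at first order
print(np.linalg.norm(Z_exact - Z_bch))     # much smaller: the omitted terms are of fourth order
```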

Modern expositions of the formula can be found in, among other places, the books of Rossmann and Hall.

History

The formula is named after Henry Frederick Baker, John Edward Campbell, and Felix Hausdorff, who stated its qualitative form, i.e. that only commutators and commutators of commutators, ad infinitum, are needed to express the solution. An earlier statement of the form was adumbrated by Friedrich Schur in 1890, where a convergent power series is given, with terms recursively defined. This qualitative form is what is used in the most important applications, such as the relatively accessible proofs of the Lie correspondence and in quantum field theory. Following Schur, it was noted in print by Campbell (1897); elaborated by Henri Poincaré (1899) and Baker (1902); and systematized geometrically, and linked to the Jacobi identity, by Hausdorff (1906). The first actual explicit formula, with all numerical coefficients, is due to Eugene Dynkin (1947). The history of the formula is described in detail in the article of Achilles and Bonfiglioli and in the book of Bonfiglioli and Fulci.

Explicit forms

For many purposes, it is only necessary to know that an expansion for $Z$ in terms of iterated commutators of $X$ and $Y$ exists; the exact coefficients are often irrelevant. (See, for example, the discussion of the relationship between Lie group and Lie algebra homomorphisms in Section 5.2 of Hall's book, where the precise coefficients play no role in the argument.) A remarkably direct existence proof was given by Martin Eichler; see also the "Existence results" section below.

In other cases, one may need detailed information about $Z$ and it is therefore desirable to compute $Z$ as explicitly as possible. Numerous formulas exist; we will describe two of the main ones (Dynkin's formula and the integral formula of Poincaré) in this section.

Dynkin's formula

Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$, and let $\exp : \mathfrak{g} \to G$ be the exponential map. The following general combinatorial formula was introduced by Eugene Dynkin (1947):
$$ \log(\exp X \exp Y) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \sum_{\substack{r_1+s_1>0 \\ \vdots \\ r_n+s_n>0}} \frac{[X^{r_1} Y^{s_1} X^{r_2} Y^{s_2} \dotsm X^{r_n} Y^{s_n}]}{\left(\sum_{j=1}^{n} (r_j+s_j)\right) \cdot \prod_{i=1}^{n} r_i!\, s_i!}, $$
where the sum is performed over all nonnegative values of $s_i$ and $r_i$, and the following notation has been used:
$$ [X^{r_1} Y^{s_1} \dotsm X^{r_n} Y^{s_n}] = [\underbrace{X,[X,\dotsm[X}_{r_1},[\underbrace{Y,[Y,\dotsm[Y}_{s_1},\dotsm[\underbrace{X,[X,\dotsm[X}_{r_n},[\underbrace{Y,[Y,\dotsm Y}_{s_n}]]\dotsm]], $$
with the understanding that $[X] := X$.

The series is not convergent in general; it is convergent (and the stated formula is valid) for all sufficiently small $X$ and $Y$. Since $[A, A] = 0$ for any $A$, the term is zero if $s_n > 1$ or if $s_n = 0$ and $r_n > 1$.
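
The truncated Dynkin sum can be evaluated directly by enumerating the tuples $(r_1, s_1, \ldots, r_n, s_n)$ up to a fixed total degree. The following Python sketch (an illustration; the helper names `nested` and `dynkin` are chosen here, and NumPy/SciPy are assumed) builds the right-nested commutators and compares the degree-4 truncation with the matrix logarithm for small matrices:

```python
import numpy as np
from math import factorial
from itertools import product
from scipy.linalg import expm, logm

def nested(letters):
    """Right-nested commutator [a1,[a2,[...,[a_{m-1}, a_m]...]]]; a single letter is itself."""
    acc = letters[-1]
    for A in reversed(letters[:-1]):
        acc = A @ acc - acc @ A
    return acc

def dynkin(X, Y, max_degree):
    """Dynkin's series for log(exp X exp Y), truncated at the given total degree."""
    Z = np.zeros_like(X)
    pairs = [(r, s) for r in range(max_degree + 1) for s in range(max_degree + 1)
             if 0 < r + s <= max_degree]
    for n in range(1, max_degree + 1):
        for blocks in product(pairs, repeat=n):
            degree = sum(r + s for r, s in blocks)
            if degree > max_degree:
                continue
            word = []
            for r, s in blocks:
                word += [X] * r + [Y] * s
            denom = degree * np.prod([factorial(r) * factorial(s) for r, s in blocks])
            Z = Z + ((-1) ** (n - 1) / n) * nested(word) / denom
    return Z

rng = np.random.default_rng(1)
X = 0.05 * rng.standard_normal((3, 3))
Y = 0.05 * rng.standard_normal((3, 3))
Z_exact = logm(expm(X) @ expm(Y))
print(np.linalg.norm(Z_exact - dynkin(X, Y, 4)))  # small: the omitted terms are of degree >= 5
```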

The first few terms are well known, with all higher-order terms involving $[X, Y]$ and commutator nestings thereof (thus lying in the Lie algebra):

$$\begin{aligned}
Z(X,Y) &= \log(\exp X \exp Y) \\
&= X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}\bigl([X,[X,Y]] + [Y,[Y,X]]\bigr) \\
&\quad - \tfrac{1}{24}[Y,[X,[X,Y]]] \\
&\quad - \tfrac{1}{720}\bigl([Y,[Y,[Y,[Y,X]]]] + [X,[X,[X,[X,Y]]]]\bigr) \\
&\quad + \tfrac{1}{360}\bigl([X,[Y,[Y,[Y,X]]]] + [Y,[X,[X,[X,Y]]]]\bigr) \\
&\quad + \tfrac{1}{120}\bigl([Y,[X,[Y,[X,Y]]]] + [X,[Y,[X,[Y,X]]]]\bigr) \\
&\quad + \tfrac{1}{240}[X,[Y,[X,[Y,[X,Y]]]]] \\
&\quad + \tfrac{1}{720}\bigl([X,[Y,[X,[X,[X,Y]]]]] - [X,[X,[Y,[Y,[X,Y]]]]]\bigr) \\
&\quad + \tfrac{1}{1440}\bigl([X,[Y,[Y,[Y,[X,Y]]]]] - [X,[X,[Y,[X,[X,Y]]]]]\bigr) + \cdots
\end{aligned}$$

The above lists all summands of order 6 or lower (i.e. those containing 6 or fewer X's and Y's). The $X \leftrightarrow Y$ (anti-)symmetry in alternating orders of the expansion follows from $Z(Y, X) = -Z(-X, -Y)$. A complete elementary proof of this formula can be found in the article on the derivative of the exponential map.
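
The symmetry relation $Z(Y, X) = -Z(-X, -Y)$ can itself be checked numerically; a small sketch (assuming SciPy's `expm` and `logm`):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
X = 0.1 * rng.standard_normal((3, 3))
Y = 0.1 * rng.standard_normal((3, 3))

# Z(Y, X) and -Z(-X, -Y) agree, which forces the sign pattern of the expansion.
lhs = logm(expm(Y) @ expm(X))
rhs = -logm(expm(-X) @ expm(-Y))
print(np.linalg.norm(lhs - rhs))  # ~1e-15, i.e. zero up to rounding error
```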

An integral formula

There are numerous other expressions for $Z$, many of which are used in the physics literature. A popular integral formula is
$$ \log\left(e^X e^Y\right) = X + \left( \int_0^1 \psi\left(e^{\operatorname{ad}_X}\, e^{t\operatorname{ad}_Y}\right) dt \right) Y, $$
involving the generating function for the Bernoulli numbers,
$$ \psi(x) \;\stackrel{\text{def}}{=}\; \frac{x \log x}{x-1} = 1 - \sum_{n=1}^{\infty} \frac{(1-x)^n}{n(n+1)}, $$
utilized by Poincaré and Hausdorff.
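
Because $\operatorname{ad}_X$ and $\operatorname{ad}_Y$ act linearly on $n \times n$ matrices, they can be represented as $n^2 \times n^2$ matrices, which makes the integral formula directly checkable. The sketch below is an illustration: the helpers `ad` and `psi` are defined here, $\psi$ is evaluated by its series (so $X$ and $Y$ are taken small enough for it to converge), and the $t$-integral is done with SciPy's Simpson rule.

```python
import numpy as np
from scipy.linalg import expm, logm
from scipy.integrate import simpson

rng = np.random.default_rng(3)
n = 3
X = 0.03 * rng.standard_normal((n, n))
Y = 0.03 * rng.standard_normal((n, n))

def ad(A):
    """Matrix of ad_A acting on n x n matrices flattened in row-major order."""
    I = np.eye(A.shape[0])
    return np.kron(A, I) - np.kron(I, A.T)

def psi(M, terms=60):
    """psi(M) = I - sum_{k>=1} (I - M)^k / (k(k+1)); the series converges for M near I."""
    I = np.eye(M.shape[0])
    D = I - M
    out, P = I.copy(), I.copy()
    for k in range(1, terms + 1):
        P = P @ D
        out = out - P / (k * (k + 1))
    return out

adX, adY = ad(X), ad(Y)

# Integrate psi(e^{ad_X} e^{t ad_Y}) over t in [0, 1] with Simpson's rule.
ts = np.linspace(0.0, 1.0, 201)
vals = np.array([psi(expm(adX) @ expm(t * adY)) for t in ts])
K = simpson(vals, x=ts, axis=0)

Z_integral = X + (K @ Y.reshape(-1)).reshape(n, n)
Z_exact = logm(expm(X) @ expm(Y))
print(np.linalg.norm(Z_integral - Z_exact))  # small; limited by series truncation and quadrature
```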

Matrix Lie group illustration

For a matrix Lie group $G \subset \mathrm{GL}(n,\mathbb{R})$ the Lie algebra is the tangent space at the identity $I$, and the commutator is simply $[X, Y] = XY - YX$; the exponential map is the standard exponential map of matrices,
$$ \exp X = e^X = \sum_{n=0}^{\infty} \frac{X^n}{n!}. $$

When one solves for $Z$ in $e^Z = e^X e^Y$ using the series expansions for exp and log, one obtains a simpler formula:
$$ Z = \sum_{n>0} \frac{(-1)^{n-1}}{n} \sum_{\substack{r_i+s_i>0 \\ 1 \le i \le n}} \frac{X^{r_1} Y^{s_1} \cdots X^{r_n} Y^{s_n}}{r_1! s_1! \cdots r_n! s_n!}, \qquad \|X\| + \|Y\| < \log 2,\ \|Z\| < \log 2. $$
The first, second, third, and fourth order terms are:

  • $z_1 = X + Y$
  • $z_2 = \tfrac{1}{2}(XY - YX)$
  • $z_3 = \tfrac{1}{12}\left(X^2Y + XY^2 - 2XYX + Y^2X + YX^2 - 2YXY\right)$
  • $z_4 = \tfrac{1}{24}\left(X^2Y^2 - 2XYXY - Y^2X^2 + 2YXYX\right).$

The formulas for the various $z_j$'s are not the Baker–Campbell–Hausdorff formula. Rather, the Baker–Campbell–Hausdorff formula is one of various expressions for the $z_j$'s in terms of repeated commutators of $X$ and $Y$. The point is that it is far from obvious that it is possible to express each $z_j$ in terms of commutators. (The reader is invited, for example, to verify by direct computation that $z_3$ is expressible as a linear combination of the two nontrivial third-order commutators of $X$ and $Y$, namely $[X,[X,Y]]$ and $[Y,[X,Y]]$.) The general result that each $z_j$ is expressible as a combination of commutators was shown in an elegant, recursive way by Eichler.
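
As a quick numerical version of the invitation above (a sketch; random matrices suffice because the identity is polynomial), one can check that the monomial expression for $z_3$ equals $\tfrac{1}{12}\bigl([X,[X,Y]] - [Y,[X,Y]]\bigr)$:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

def comm(A, B):
    return A @ B - B @ A

# z3 written with monomials, as in the list above ...
z3_monomials = (X@X@Y + X@Y@Y - 2*X@Y@X + Y@Y@X + Y@X@X - 2*Y@X@Y) / 12
# ... equals the commutator combination (1/12)([X,[X,Y]] - [Y,[X,Y]]).
z3_commutators = (comm(X, comm(X, Y)) - comm(Y, comm(X, Y))) / 12
print(np.linalg.norm(z3_monomials - z3_commutators))  # zero up to rounding
```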

A consequence of the Baker–Campbell–Hausdorff formula is the following result about the trace:
$$ \operatorname{tr} \log\left(e^X e^Y\right) = \operatorname{tr} X + \operatorname{tr} Y. $$
That is to say, since each $z_j$ with $j \ge 2$ is expressible as a linear combination of commutators, the trace of each such term is zero.
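
A short numerical check of the trace identity (assuming SciPy's `expm` and `logm`):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
X = 0.2 * rng.standard_normal((4, 4))
Y = 0.2 * rng.standard_normal((4, 4))

# tr log(e^X e^Y) equals tr X + tr Y because every higher z_j is a sum of commutators.
lhs = np.trace(logm(expm(X) @ expm(Y)))
print(abs(lhs - (np.trace(X) + np.trace(Y))))  # ~1e-15
```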

Questions of convergence

Suppose $X$ and $Y$ are the following matrices in the Lie algebra $\mathfrak{sl}(2;\mathbb{C})$ (the space of $2 \times 2$ matrices with trace zero):
$$ X = \begin{pmatrix} 0 & i\pi \\ i\pi & 0 \end{pmatrix}; \qquad Y = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. $$
Then
$$ e^X e^Y = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ 0 & -1 \end{pmatrix}. $$
It is then not hard to show that there does not exist a matrix $Z$ in $\mathfrak{sl}(2;\mathbb{C})$ with $e^X e^Y = e^Z$. (Similar examples may be found in the article of Wei.)
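
A numerical look at this example (a sketch; SciPy may warn that the logarithm of this non-diagonalizable matrix is only approximate) shows that the principal matrix logarithm of $e^X e^Y$ has trace $2\pi i$ and so does not lie in $\mathfrak{sl}(2;\mathbb{C})$; the statement above is the stronger claim that no traceless logarithm exists at all:

```python
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[0, 1j * np.pi], [1j * np.pi, 0]])
Y = np.array([[0, 1], [0, 0]], dtype=complex)

M = expm(X) @ expm(Y)
print(np.round(M.real, 12))   # [[-1, -1], [0, -1]], as computed above

# Principal matrix logarithm; M is not diagonalizable, so SciPy may warn
# that the result is approximate.
Z = logm(M)
print(np.round(Z, 12))
print(np.trace(Z))            # approximately 2*pi*i, hence Z is not traceless
```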

This simple example illustrates that the various versions of the Baker–Campbell–Hausdorff formula, which give expressions for $Z$ in terms of iterated Lie brackets of $X$ and $Y$, describe formal power series whose convergence is not guaranteed. Thus, if one wants $Z$ to be an actual element of the Lie algebra containing $X$ and $Y$ (as opposed to a formal power series), one has to assume that $X$ and $Y$ are small. Consequently, the conclusion that the product operation on a Lie group is determined by the Lie algebra is only a local statement. Indeed, the result cannot be global, because globally one can have nonisomorphic Lie groups with isomorphic Lie algebras.

Concretely, if working with a matrix Lie algebra and $\|\cdot\|$ is a given submultiplicative matrix norm, convergence is guaranteed if
$$ \|X\| + \|Y\| < \frac{\ln 2}{2}. $$

Special cases

If $X$ and $Y$ commute, that is $[X, Y] = 0$, the Baker–Campbell–Hausdorff formula reduces to $e^X e^Y = e^{X+Y}$.

Another case assumes that $[X, Y]$ commutes with both $X$ and $Y$, as for the nilpotent Heisenberg group. Then the formula reduces to its first three terms.

Theorem — If $X$ and $Y$ commute with their commutator, $[X,[X,Y]] = [Y,[X,Y]] = 0$, then $e^X e^Y = e^{X+Y+\frac{1}{2}[X,Y]}$.

This is the degenerate case used routinely in quantum mechanics, as illustrated below, and is sometimes known as the disentangling theorem. In this case, there are no smallness restrictions on $X$ and $Y$. This result is behind the "exponentiated commutation relations" that enter into the Stone–von Neumann theorem. A simple proof of this identity is given below.
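
The identity can also be checked numerically with Heisenberg-type matrices, i.e. strictly upper triangular matrices whose commutator is central (a sketch, assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Strictly upper-triangular 3x3 matrices: [X, Y] is central (commutes with X and Y).
X = np.array([[0., 2., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
Y = np.array([[0., 0., 0.],
              [0., 0., 3.],
              [0., 0., 0.]])
C = X @ Y - Y @ X            # lies in the center of the Heisenberg Lie algebra

assert np.allclose(X @ C, C @ X) and np.allclose(Y @ C, C @ Y)
lhs = expm(X) @ expm(Y)
rhs = expm(X + Y + 0.5 * C)
print(np.linalg.norm(lhs - rhs))  # zero up to rounding, with no smallness assumption
```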

Another useful form of the general formula emphasizes expansion in terms of $Y$ and uses the adjoint mapping notation $\operatorname{ad}_X(Y) = [X, Y]$:
$$ \log(\exp X \exp Y) = X + \frac{\operatorname{ad}_X}{1 - e^{-\operatorname{ad}_X}}\, Y + O\left(Y^2\right) = X + \operatorname{ad}_{X/2}\left(1 + \coth \operatorname{ad}_{X/2}\right) Y + O\left(Y^2\right), $$
which is evident from the integral formula above. (The coefficients of the nested commutators with a single $Y$ are normalized Bernoulli numbers.)

Now assume that the commutator is a multiple of $Y$, so that $[X, Y] = sY$. Then all iterated commutators will be multiples of $Y$, and no quadratic or higher terms in $Y$ appear. Thus, the $O\left(Y^2\right)$ term above vanishes and we obtain:

Theorem — If $[X, Y] = sY$, where $s$ is a complex number with $s \neq 2\pi i n$ for all integers $n$, then
$$ e^X e^Y = \exp\left(X + \frac{s}{1 - e^{-s}}\, Y\right). $$

Again, in this case there are no smallness restrictions on $X$ and $Y$. The restriction on $s$ guarantees that the expression on the right side makes sense. (When $s = 0$ we may interpret $\lim_{s\to 0} s/(1-e^{-s}) = 1$.) We also obtain a simple "braiding identity":
$$ e^X e^Y = e^{\exp(s)\, Y} e^X, $$
which may be written as an adjoint dilation:
$$ e^X e^Y e^{-X} = e^{\exp(s)\, Y}. $$
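
Both statements can be checked with a small concrete pair of matrices satisfying $[X, Y] = sY$ (a sketch; the particular matrices are chosen here only for illustration):

```python
import numpy as np
from scipy.linalg import expm

s = 0.7
X = np.array([[s, 0.], [0., 0.]])
Y = np.array([[0., 1.], [0., 0.]])
assert np.allclose(X @ Y - Y @ X, s * Y)       # [X, Y] = s Y

lhs = expm(X) @ expm(Y)
rhs = expm(X + (s / (1 - np.exp(-s))) * Y)     # the theorem above
print(np.linalg.norm(lhs - rhs))               # zero up to rounding

braid = expm(X) @ expm(Y) @ expm(-X)
print(np.linalg.norm(braid - expm(np.exp(s) * Y)))  # braiding identity / adjoint dilation
```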

Existence results

If $X$ and $Y$ are matrices, one can compute $Z := \log\left(e^X e^Y\right)$ using the power series for the exponential and logarithm, with convergence of the series if $X$ and $Y$ are sufficiently small. It is natural to collect together all terms where the total degree in $X$ and $Y$ equals a fixed number $k$, giving an expression $z_k$. (See the section "Matrix Lie group illustration" above for formulas for the first several $z_k$'s.) A remarkably direct and concise, recursive proof that each $z_k$ is expressible in terms of repeated commutators of $X$ and $Y$ was given by Martin Eichler.

Alternatively, we can give an existence argument as follows. The Baker–Campbell–Hausdorff formula implies that if $X$ and $Y$ are in some Lie algebra $\mathfrak{g}$ defined over any field of characteristic 0, such as $\mathbb{R}$ or $\mathbb{C}$, then $Z = \log(\exp(X)\exp(Y))$ can formally be written as an infinite sum of elements of $\mathfrak{g}$. [This infinite series may or may not converge, so it need not define an actual element $Z$ in $\mathfrak{g}$.] For many applications, the mere assurance of the existence of this formal expression is sufficient, and an explicit expression for this infinite sum is not needed. This is for instance the case in the Lorentzian construction of a Lie group representation from a Lie algebra representation. Existence can be seen as follows.

We consider the ring $S = \mathbb{R}[[X,Y]]$ of all non-commuting formal power series with real coefficients in the non-commuting variables $X$ and $Y$. There is a ring homomorphism from $S$ to the tensor product of $S$ with $S$ over $\mathbb{R}$,
$$ \Delta \colon S \to S \otimes S, $$
called the coproduct, such that $\Delta(X) = X \otimes 1 + 1 \otimes X$ and $\Delta(Y) = Y \otimes 1 + 1 \otimes Y$. (The definition of $\Delta$ is extended to the other elements of $S$ by requiring $\mathbb{R}$-linearity, multiplicativity and infinite additivity.)

One can then verify the following properties:

  • The map exp, defined by its standard Taylor series, is a bijection between the set of elements of $S$ with constant term 0 and the set of elements of $S$ with constant term 1; the inverse of exp is log.
  • $r = \exp(s)$ is grouplike (this means $\Delta(r) = r \otimes r$) if and only if $s$ is primitive (this means $\Delta(s) = s \otimes 1 + 1 \otimes s$).
  • The grouplike elements form a group under multiplication.
  • The primitive elements are exactly the formal infinite sums of elements of the Lie algebra generated by $X$ and $Y$, where the Lie bracket is given by the commutator $[U, V] = UV - VU$. (Friedrichs' theorem)

The existence of the Campbell–Baker–Hausdorff formula can now be seen as follows: the elements $X$ and $Y$ are primitive, so $\exp(X)$ and $\exp(Y)$ are grouplike; so their product $\exp(X)\exp(Y)$ is also grouplike; so its logarithm $\log(\exp(X)\exp(Y))$ is primitive; and hence can be written as an infinite sum of elements of the Lie algebra generated by $X$ and $Y$.

The universal enveloping algebra of the free Lie algebra generated by X and Y is isomorphic to the algebra of all non-commuting polynomials in X and Y. In common with all universal enveloping algebras, it has a natural structure of a Hopf algebra, with a coproduct Δ. The ring S used above is just a completion of this Hopf algebra.

Zassenhaus formula

A related combinatoric expansion that is useful in dual applications is
$$ e^{t(X+Y)} = e^{tX}\, e^{tY}\, e^{-\frac{t^2}{2}[X,Y]}\, e^{\frac{t^3}{6}\left(2[Y,[X,Y]] + [X,[X,Y]]\right)}\, e^{\frac{-t^4}{24}\left([[[X,Y],X],X] + 3[[[X,Y],X],Y] + 3[[[X,Y],Y],Y]\right)} \cdots $$
where the exponents of higher order in $t$ are likewise nested commutators, i.e., homogeneous Lie polynomials. These exponents, $C_n$ in $\exp(-tX)\exp(t(X+Y)) = \prod_n \exp(t^n C_n)$, follow recursively by application of the above BCH expansion.

As a corollary of this, the Suzuki–Trotter decomposition follows.
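
The displayed factors can be verified numerically: truncating the Zassenhaus product after the $t^4$ exponent should leave an error of order $t^5$ (a sketch; the helper `zassenhaus` is defined here only for the check):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
X = 0.5 * rng.standard_normal((3, 3))
Y = 0.5 * rng.standard_normal((3, 3))

def comm(A, B):
    return A @ B - B @ A

def zassenhaus(t, X, Y):
    """Product of the Zassenhaus factors displayed above, through the t^4 exponent."""
    C2 = -0.5 * comm(X, Y)
    C3 = (2 * comm(Y, comm(X, Y)) + comm(X, comm(X, Y))) / 6
    C4 = -(comm(comm(comm(X, Y), X), X)
           + 3 * comm(comm(comm(X, Y), X), Y)
           + 3 * comm(comm(comm(X, Y), Y), Y)) / 24
    return expm(t*X) @ expm(t*Y) @ expm(t**2 * C2) @ expm(t**3 * C3) @ expm(t**4 * C4)

for t in (0.2, 0.1, 0.05):
    err = np.linalg.norm(expm(t * (X + Y)) - zassenhaus(t, X, Y))
    print(t, err)   # the error shrinks roughly like t**5 as t is halved
```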

Campbell identity

The following identity (Campbell 1897) leads to a special case of the Baker–Campbell–Hausdorff formula. Let $G$ be a matrix Lie group and $\mathfrak{g}$ its corresponding Lie algebra. Let $\operatorname{ad}_X$ be the linear operator on $\mathfrak{g}$ defined by $\operatorname{ad}_X Y = [X, Y] = XY - YX$ for some fixed $X \in \mathfrak{g}$. (The adjoint endomorphism encountered above.) Denote by $\operatorname{Ad}_A$, for fixed $A \in G$, the linear transformation of $\mathfrak{g}$ given by $\operatorname{Ad}_A Y = AYA^{-1}$.

A standard combinatorial lemma which is utilized in producing the above explicit expansions is given by

Lemma (Campbell 1897) — $\operatorname{Ad}_{e^X} = e^{\operatorname{ad}_X}$, so, explicitly,
$$ \operatorname{Ad}_{e^X} Y = e^X Y e^{-X} = e^{\operatorname{ad}_X} Y = Y + [X, Y] + \frac{1}{2!}[X,[X,Y]] + \frac{1}{3!}[X,[X,[X,Y]]] + \cdots. $$

This is a particularly useful formula which is commonly used to conduct unitary transforms in quantum mechanics. By defining the iterated commutator,
$$ [(X)^n, Y] \equiv [\underbrace{X,[X,\dotsb[X}_{n \text{ times}},Y]\dotsb]], \qquad [(X)^0, Y] \equiv Y, $$
we can write this formula more compactly as
$$ e^X Y e^{-X} = \sum_{n=0}^{\infty} \frac{[(X)^n, Y]}{n!}. $$
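
A compact numerical check of the lemma (a sketch; the iterated commutators are accumulated in a loop, and SciPy's `expm` is assumed):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(7)
X = 0.3 * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

# Left side: conjugation of Y by e^X.
lhs = expm(X) @ Y @ expm(-X)

# Right side: sum over n of [(X)^n, Y] / n!, built from iterated commutators.
rhs = np.zeros_like(Y)
term = Y.copy()
for n in range(30):
    rhs = rhs + term / factorial(n)
    term = X @ term - term @ X      # [(X)^{n+1}, Y] = [X, [(X)^n, Y]]
print(np.linalg.norm(lhs - rhs))    # ~1e-15
```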

Proof

Evaluate the derivative with respect to $s$ of $f(s)Y \equiv e^{sX} Y e^{-sX}$, solve the resulting differential equation, and evaluate at $s = 1$:
$$ \frac{d}{ds} f(s)Y = \frac{d}{ds}\left(e^{sX} Y e^{-sX}\right) = X e^{sX} Y e^{-sX} - e^{sX} Y e^{-sX} X = \operatorname{ad}_X\left(e^{sX} Y e^{-sX}\right), $$
or
$$ f'(s) = \operatorname{ad}_X f(s), \qquad f(0) = 1 \qquad \Longrightarrow \qquad f(s) = e^{s \operatorname{ad}_X}. $$

An application of the identity

For $[X, Y]$ central, i.e., commuting with both $X$ and $Y$,
$$ e^{sX} Y e^{-sX} = Y + s[X, Y]. $$
Consequently, for $g(s) \equiv e^{sX} e^{sY}$, it follows that
$$ \frac{dg}{ds} = \left(X + e^{sX} Y e^{-sX}\right) g(s) = \left(X + Y + s[X, Y]\right) g(s), $$
whose solution is
$$ g(s) = e^{s(X+Y) + \frac{s^2}{2}[X, Y]}. $$
Taking $s = 1$ gives one of the special cases of the Baker–Campbell–Hausdorff formula described above:
$$ e^X e^Y = e^{X + Y + \frac{1}{2}[X, Y]}. $$

More generally, for non-central $[X, Y]$, we have
$$ e^X e^Y e^{-X} = e^{e^X Y e^{-X}} = e^{e^{\operatorname{ad}_X} Y}, $$
which can be written as the following braiding identity:
$$ e^X e^Y = e^{\left(Y + [X,Y] + \frac{1}{2!}[X,[X,Y]] + \frac{1}{3!}[X,[X,[X,Y]]] + \cdots\right)}\, e^X. $$

Infinitesimal case

Main article: Derivative of the exponential map

A particularly useful variant of the above is the infinitesimal form. This is commonly written as
$$ e^{-X} de^X = dX - \frac{1}{2!}[X, dX] + \frac{1}{3!}[X,[X,dX]] - \frac{1}{4!}[X,[X,[X,dX]]] + \cdots $$
This variation is commonly used to write coordinates and vielbeins as pullbacks of the metric on a Lie group.
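
The series can be checked against a finite-difference derivative of the exponential map (a sketch; `H` plays the role of the variation $dX$, and the step size is chosen only for illustration):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(8)
X = 0.4 * rng.standard_normal((3, 3))
H = rng.standard_normal((3, 3))     # direction of the infinitesimal variation dX

# Finite-difference approximation of e^{-X} d e^X applied to the direction H.
eps = 1e-6
fd = expm(-X) @ (expm(X + eps * H) - expm(X - eps * H)) / (2 * eps)

# Series dX - [X,dX]/2! + [X,[X,dX]]/3! - ..., with dX replaced by H.
series = np.zeros_like(X)
term = H.copy()
for n in range(25):
    series = series + term / factorial(n + 1)
    term = -(X @ term - term @ X)   # next term carries (-1)^{n+1} ad_X^{n+1} H
print(np.linalg.norm(fd - series))  # small, limited by the finite-difference step
```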

For example, writing $X = X^i e_i$ for some functions $X^i$ and a basis $e_i$ for the Lie algebra, one readily computes that
$$ e^{-X} de^X = dX^i e_i - \frac{1}{2!} X^i dX^j {f_{ij}}^k e_k + \frac{1}{3!} X^i X^j dX^k {f_{jk}}^l {f_{il}}^m e_m - \cdots, $$
for $[e_i, e_j] = {f_{ij}}^k e_k$ the structure constants of the Lie algebra.

The series can be written more compactly (cf. main article) as
$$ e^{-X} de^X = e_i {W^i}_j dX^j, $$
with the infinite series
$$ W = \sum_{n=0}^{\infty} \frac{(-1)^n M^n}{(n+1)!} = (I - e^{-M}) M^{-1}. $$
Here, $M$ is a matrix whose matrix elements are ${M_j}^k = X^i {f_{ij}}^k$.

The usefulness of this expression comes from the fact that the matrix $M$ is a vielbein. Thus, given some map $N \to G$ from some manifold $N$ to some manifold $G$, the metric tensor on the manifold $N$ can be written as the pullback of the metric tensor $B_{mn}$ on the Lie group $G$,
$$ g_{ij} = {W_i}^m {W_j}^n B_{mn}. $$
The metric tensor $B_{mn}$ on the Lie group is the Cartan metric, the Killing form. For $N$ a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric.

Application in quantum mechanics

A special case of the Baker–Campbell–Hausdorff formula is useful in quantum mechanics and especially quantum optics, where $X$ and $Y$ are Hilbert space operators generating the Heisenberg Lie algebra. Specifically, the position and momentum operators in quantum mechanics, usually denoted $X$ and $P$, satisfy the canonical commutation relation
$$ [X, P] = i\hbar I, $$
where $I$ is the identity operator. It follows that $X$ and $P$ commute with their commutator. Thus, if we formally applied a special case of the Baker–Campbell–Hausdorff formula (even though $X$ and $P$ are unbounded operators and not matrices), we would conclude that
$$ e^{iaX} e^{ibP} = e^{i\left(aX + bP - \frac{ab\hbar}{2}\right)}. $$
This "exponentiated commutation relation" does indeed hold, and forms the basis of the Stone–von Neumann theorem. Further,
$$ e^{i(aX + bP)} = e^{iaX/2} e^{ibP} e^{iaX/2}. $$


A related application is the annihilation and creation operators, $\hat{a}$ and $\hat{a}^\dagger$. Their commutator $[\hat{a}^\dagger, \hat{a}] = -I$ is central, that is, it commutes with both $\hat{a}$ and $\hat{a}^\dagger$. As indicated above, the expansion then collapses to the semi-trivial degenerate form:
$$ e^{v\hat{a}^\dagger - v^*\hat{a}} = e^{v\hat{a}^\dagger} e^{-v^*\hat{a}} e^{-|v|^2/2}, $$
where $v$ is just a complex number.

This example illustrates the resolution of the displacement operator, $\exp(v\hat{a}^\dagger - v^*\hat{a})$, into exponentials of annihilation and creation operators and scalars.

This degenerate Baker–Campbell–Hausdorff formula then displays the product of two displacement operators as another displacement operator (up to a phase factor), with the resultant displacement equal to the sum of the two displacements,
$$ e^{v\hat{a}^\dagger - v^*\hat{a}}\, e^{u\hat{a}^\dagger - u^*\hat{a}} = e^{(v+u)\hat{a}^\dagger - (v^*+u^*)\hat{a}}\, e^{(vu^* - uv^*)/2}, $$
since the Heisenberg group they provide a representation of is nilpotent. The degenerate Baker–Campbell–Hausdorff formula is frequently used in quantum field theory as well.
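
The annihilation and creation operators act on an infinite-dimensional space, but a truncated Fock-space representation gives an approximate numerical illustration of the displacement-operator composition rule (a sketch; the truncation size is an arbitrary choice, and the agreement is limited only by the cutoff):

```python
import numpy as np
from scipy.linalg import expm

N = 60                                    # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator on the truncated basis
ad = a.conj().T                           # creation operator

def D(v):
    """Truncated displacement operator exp(v a^dagger - v* a)."""
    return expm(v * ad - np.conj(v) * a)

v, u = 0.4 + 0.2j, -0.3 + 0.5j
lhs = D(v) @ D(u)
rhs = D(v + u) * np.exp((v * np.conj(u) - u * np.conj(v)) / 2)

# Compare on the low-lying states, where the truncation error is negligible.
k = 20
print(np.linalg.norm(lhs[:k, :k] - rhs[:k, :k]))  # small, limited by the cutoff N
```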


Notes

  1. Recall $\psi(e^y) = \sum_{n=0}^{\infty} B_n\, y^n/n!$ for the Bernoulli numbers, B0 = 1, B1 = 1/2, B2 = 1/6, B4 = −1/30, ...
  2. Rossmann 2002, Equation (2), Section 1.3. For matrix Lie algebras over the fields R and C, the convergence criterion is that the log series converges for both sides of $e^Z = e^X e^Y$. This is guaranteed whenever ‖X‖ + ‖Y‖ < log 2 and ‖Z‖ < log 2 in the Hilbert–Schmidt norm. Convergence may occur on a larger domain. See Rossmann 2002, p. 24.

References

  1. Rossmann 2002
  2. Hall 2015
  3. F. Schur (1890), "Neue Begründung der Theorie der endlichen Transformationsgruppen," Mathematische Annalen, 35 (1890), 161–197. online copy
  4. See, e.g., Shlomo Sternberg, Lie Algebras (2004), Harvard University (cf. p. 10).
  5. John Edward Campbell, Proceedings of the London Mathematical Society 28 (1897) 381–390 (cf. pp. 386–387 for the eponymous lemma); J. Campbell, Proceedings of the London Mathematical Society 29 (1898) 14–32.
  6. Henri Poincaré, Comptes rendus de l'Académie des Sciences 128 (1899) 1065–1069; Transactions of the Cambridge Philosophical Society 18 (1899) 220–255. online
  7. Henry Frederick Baker, Proceedings of the London Mathematical Society (1) 34 (1902) 347–360; H. Baker, Proceedings of the London Mathematical Society (1) 35 (1903) 333–374; H. Baker, Proceedings of the London Mathematical Society (Ser 2) 3 (1905) 24–47.
  8. Felix Hausdorff, "Die symbolische Exponentialformel in der Gruppentheorie", Ber Verh Saechs Akad Wiss Leipzig 58 (1906) 19–48.
  9. Rossmann 2002 p. 23
  10. Achilles & Bonfiglioli 2012
  11. Bonfiglioli & Fulci 2012
  12. Eichler, Martin (1968). "A new proof of the Baker-Campbell-Hausdorff formula". Journal of the Mathematical Society of Japan. 20 (1–2): 23–25. doi:10.2969/jmsj/02010023.
  13. Nathan Jacobson, Lie Algebras, John Wiley & Sons, 1966.
  14. Dynkin, Eugene Borisovich (1947). "Вычисление коэффициентов в формуле Campbell–Hausdorff" [Calculation of the coefficients in the Campbell–Hausdorff formula]. Doklady Akademii Nauk SSSR (in Russian). 57: 323–326.
  15. A.A. Sagle & R.E. Walde, "Introduction to Lie Groups and Lie Algebras", Academic Press, New York, 1973. ISBN 0-12-614550-4.
  16. Magnus, Wilhelm (1954). "On the exponential solution of differential equations for a linear operator". Communications on Pure and Applied Mathematics. 7 (4): 649–673. doi:10.1002/cpa.3160070404.
  17. Suzuki, Masuo (1985). "Decomposition formulas of exponential operators and Lie exponentials with some applications to quantum mechanics and statistical physics". Journal of Mathematical Physics. 26 (4): 601–612. Bibcode:1985JMP....26..601S. doi:10.1063/1.526596; Veltman, M.; 't Hooft, G.; de Wit, B. (2007), Appendix D.
  18. W. Miller, Symmetry Groups and their Applications, Academic Press, New York, 1972, pp. 159–161. ISBN 0-12-497460-0
  19. Hall 2015 Theorem 5.3
  20. Hall 2015 Example 3.41
  21. Wei, James (October 1963). "Note on the Global Validity of the Baker-Hausdorff and Magnus Theorems". Journal of Mathematical Physics. 4 (10): 1337–1341. Bibcode:1963JMP.....4.1337W. doi:10.1063/1.1703910.
  22. Biagi, Stefano; Bonfiglioli, Andrea; Matone, Marco (2018). "On the Baker-Campbell-Hausdorff Theorem: non-convergence and prolongation issues". Linear and Multilinear Algebra. 68 (7): 1310–1328. arXiv:1805.10089. doi:10.1080/03081087.2018.1540534. ISSN 0308-1087. S2CID 53585331.
  23. Hall 2015 Theorem 5.1
  24. Gerry, Christopher; Knight, Peter (2005). Introductory Quantum Optics (1st ed.). Cambridge University Press. p. 49. ISBN 978-0-521-52735-4.
  25. Hall 2015 Exercise 5.5
  26. Hall 2015 Section 5.7
  27. Casas, F.; Murua, A.; Nadinic, M. (2012). "Efficient computation of the Zassenhaus formula". Computer Physics Communications. 183 (11): 2386–2391. arXiv:1204.0389. Bibcode:2012CoPhC.183.2386C. doi:10.1016/j.cpc.2012.06.006. S2CID 2704520.
  28. Hall 2015 Proposition 3.35
  29. Rossmann 2002 p. 15
  30. L. Mandel, E. Wolf, Optical Coherence and Quantum Optics (Cambridge, 1995).
  31. Greiner & Reinhardt 1996. See pp. 27–29 for a detailed proof of the above lemma.


