Characteristic polynomial


This article is about the characteristic polynomial of a matrix or of an endomorphism of vector spaces. For the characteristic polynomial of a matroid, see Matroid. For that of a graded poset, see Graded poset.

In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero.

In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix.

Motivation

In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the corresponding eigenvalue is the factor by which the magnitude of the vector is scaled.

More precisely, suppose the transformation is represented by a square matrix $A.$ Then an eigenvector $\mathbf{v}$ and the corresponding eigenvalue $\lambda$ must satisfy the equation $A\mathbf{v} = \lambda\mathbf{v},$ or, equivalently (since $\lambda\mathbf{v} = \lambda I\mathbf{v}$), $(\lambda I - A)\mathbf{v} = \mathbf{0},$ where $I$ is the identity matrix and $\mathbf{v} \neq \mathbf{0}$ (although the zero vector satisfies this equation for every $\lambda,$ it is not considered an eigenvector).

It follows that the matrix $\lambda I - A$ must be singular, that is, its determinant $\det(\lambda I - A)$ must be zero.

In other words, the eigenvalues of $A$ are the roots of $\det(xI - A),$ which is a monic polynomial in $x$ of degree $n$ if $A$ is an $n \times n$ matrix. This polynomial is the characteristic polynomial of $A$.

Formal definition

Consider an $n \times n$ matrix $A.$ The characteristic polynomial of $A,$ denoted by $p_A(t),$ is the polynomial defined by
$$p_A(t) = \det(tI - A)$$
where $I$ denotes the $n \times n$ identity matrix.
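This definition translates directly into a few lines of SymPy (a minimal sketch, not part of the article; the example matrix is an arbitrary choice):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [3, 4]])  # an arbitrary square matrix

n = A.shape[0]
p_A = sp.expand((t * sp.eye(n) - A).det())  # p_A(t) = det(tI - A)
print(p_A)                     # t**2 - 5*t - 2
print(sp.degree(p_A, t) == n)  # True: monic of degree n
```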

Some authors define the characteristic polynomial to be $\det(A - tI).$ That polynomial differs from the one defined here by a sign $(-1)^n,$ so it makes no difference for properties like having the eigenvalues of $A$ as roots; however, the definition above always gives a monic polynomial, whereas the alternative definition is monic only when $n$ is even.

Examples

To compute the characteristic polynomial of the matrix
$$A = \begin{pmatrix} 2 & 1 \\ -1 & 0 \end{pmatrix},$$
one computes the determinant of
$$tI - A = \begin{pmatrix} t-2 & -1 \\ 1 & t \end{pmatrix}$$
and finds it to be $(t-2)t - 1\cdot(-1) = t^2 - 2t + 1,$ the characteristic polynomial of $A.$
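This hand computation can be checked against SymPy's built-in charpoly, which uses the same convention $\det(tI - A)$ (an illustrative sketch, not part of the article):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1], [-1, 0]])

p = A.charpoly(t).as_expr()  # det(tI - A)
print(p)            # t**2 - 2*t + 1
print(sp.roots(p))  # {1: 2}: eigenvalue 1 with algebraic multiplicity 2
```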

Another example uses hyperbolic functions of a hyperbolic angle $\varphi.$ For the matrix take
$$A = \begin{pmatrix} \cosh(\varphi) & \sinh(\varphi) \\ \sinh(\varphi) & \cosh(\varphi) \end{pmatrix}.$$
Its characteristic polynomial is
$$\det(tI - A) = (t - \cosh(\varphi))^2 - \sinh^2(\varphi) = t^2 - 2t\cosh(\varphi) + 1 = (t - e^{\varphi})(t - e^{-\varphi}).$$
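The factored form predicts the eigenvalues $e^{\varphi}$ and $e^{-\varphi},$ which is easy to confirm numerically (a sketch; the value $\varphi = 0.5$ is an arbitrary choice):

```python
import numpy as np

phi = 0.5  # arbitrary hyperbolic angle
A = np.array([[np.cosh(phi), np.sinh(phi)],
              [np.sinh(phi), np.cosh(phi)]])

# The eigenvalues should be e^-phi and e^phi, the roots of the factored form
print(np.sort(np.linalg.eigvals(A)))          # [0.6065... 1.6487...]
print(np.array([np.exp(-phi), np.exp(phi)]))  # matches
```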

Properties

The characteristic polynomial $p_A(t)$ of an $n \times n$ matrix is monic (its leading coefficient is $1$) and its degree is $n.$ The most important fact about the characteristic polynomial was already mentioned in the motivational paragraph: the eigenvalues of $A$ are precisely the roots of $p_A(t)$ (this also holds for the minimal polynomial of $A,$ but its degree may be less than $n$). All coefficients of the characteristic polynomial are polynomial expressions in the entries of the matrix. In particular its constant coefficient (that of $t^0$) is $\det(-A) = (-1)^n \det(A),$ the coefficient of $t^n$ is one, and the coefficient of $t^{n-1}$ is $\operatorname{tr}(-A) = -\operatorname{tr}(A),$ where $\operatorname{tr}(A)$ is the trace of $A.$ (The signs given here correspond to the formal definition given in the previous section; for the alternative definition these would instead be $\det(A)$ and $(-1)^{n-1}\operatorname{tr}(A)$ respectively.)

For a $2 \times 2$ matrix $A,$ the characteristic polynomial is thus given by
$$t^2 - \operatorname{tr}(A)\,t + \det(A).$$

Using the language of exterior algebra, the characteristic polynomial of an $n \times n$ matrix $A$ may be expressed as
$$p_A(t) = \sum_{k=0}^{n} t^{n-k} (-1)^k \operatorname{tr}\left(\textstyle\bigwedge^k A\right)$$
where $\operatorname{tr}\left(\bigwedge^k A\right)$ is the trace of the $k$th exterior power of $A,$ which has dimension $\binom{n}{k}.$ This trace may be computed as the sum of all principal minors of $A$ of size $k.$ The recursive Faddeev–LeVerrier algorithm computes these coefficients more efficiently.
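Both routes to the coefficients are short to implement (a sketch under the convention $p_A(t) = \det(tI - A)$; the function names `principal_minor_sum` and `faddeev_leverrier` are chosen here for illustration, not standard library calls):

```python
import numpy as np
from itertools import combinations

def principal_minor_sum(A, k):
    """tr of the k-th exterior power of A: sum of all k-by-k principal minors."""
    if k == 0:
        return 1.0
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in combinations(range(n), k))

def faddeev_leverrier(A):
    """Coefficients [1, c_1, ..., c_n] of det(tI - A), highest degree first."""
    n = A.shape[0]
    coeffs = [1.0]
    M = np.zeros_like(A, dtype=float)
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{k-1} I
        coeffs.append(-np.trace(A @ M) / k)  # c_k = -tr(A M_k) / k
    return np.array(coeffs)

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
print(faddeev_leverrier(A))  # matches np.poly(A): [1, -9, 24, -18]
print([(-1)**k * principal_minor_sum(A, k) for k in range(4)])  # same values
```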

When the characteristic of the field of the coefficients is $0,$ each such trace may alternatively be computed as a single determinant, that of the $k \times k$ matrix,
$$\operatorname{tr}\left(\textstyle\bigwedge^k A\right) = \frac{1}{k!} \begin{vmatrix} \operatorname{tr} A & k-1 & 0 & \cdots & 0 \\ \operatorname{tr} A^2 & \operatorname{tr} A & k-2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ \operatorname{tr} A^{k-1} & \operatorname{tr} A^{k-2} & & \cdots & 1 \\ \operatorname{tr} A^k & \operatorname{tr} A^{k-1} & & \cdots & \operatorname{tr} A \end{vmatrix}~.$$
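Here is a small numerical check of this formula (a sketch over the reals, so the characteristic-0 hypothesis holds; `trace_power_determinant` is a name chosen for illustration):

```python
import numpy as np
from math import factorial

def trace_power_determinant(A, k):
    """tr of the k-th exterior power of A via the determinant of power traces."""
    B = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1):   # tr(A^(i-j+1)) on and below the diagonal
            B[i, j] = np.trace(np.linalg.matrix_power(A, i - j + 1))
        if i + 1 < k:            # k-1, k-2, ..., 1 on the superdiagonal
            B[i, i + 1] = k - 1 - i
    return np.linalg.det(B) / factorial(k)

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
for k in range(1, 4):
    # compare with the coefficient of t^(n-k), which is (-1)^k tr of Λ^k A
    print(np.isclose(trace_power_determinant(A, k), (-1)**k * np.poly(A)[k]))
```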

The Cayley–Hamilton theorem states that replacing $t$ by $A$ in the characteristic polynomial (interpreting the resulting powers as matrix powers, and the constant term $c$ as $c$ times the identity matrix) yields the zero matrix. Informally speaking, every matrix satisfies its own characteristic equation. This statement is equivalent to saying that the minimal polynomial of $A$ divides the characteristic polynomial of $A.$
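Numerically, the theorem can be verified by evaluating $p_A$ at $A$ with a Horner scheme (a sketch; the constant coefficient enters as a multiple of the identity, as the theorem prescribes):

```python
import numpy as np

A = np.array([[2., 1.], [-1., 0.]])
coeffs = np.poly(A)  # [1., -2., 1.], i.e. p_A(t) = t^2 - 2t + 1

# Horner evaluation of p_A(A); each coefficient c contributes c * I
n = A.shape[0]
P = np.zeros((n, n))
for c in coeffs:
    P = P @ A + c * np.eye(n)

print(np.allclose(P, np.zeros((n, n))))  # True: p_A(A) is the zero matrix
```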

Two similar matrices have the same characteristic polynomial. The converse however is not true in general: two matrices with the same characteristic polynomial need not be similar.

The matrix $A$ and its transpose have the same characteristic polynomial. $A$ is similar to a triangular matrix if and only if its characteristic polynomial can be completely factored into linear factors over the ground field $K$ (the same is true with the minimal polynomial instead of the characteristic polynomial). In this case $A$ is similar to a matrix in Jordan normal form.

Characteristic polynomial of a product of two matrices

If $A$ and $B$ are two square $n \times n$ matrices then the characteristic polynomials of $AB$ and $BA$ coincide:
$$p_{AB}(t) = p_{BA}(t).$$

When $A$ is non-singular this result follows from the fact that $AB$ and $BA$ are similar: $BA = A^{-1}(AB)A.$

For the case where both $A$ and $B$ are singular, the desired identity is an equality between polynomials in $t$ whose coefficients are polynomials in the entries of the matrices. Thus, to prove this equality, it suffices to prove that it is verified on a non-empty open subset (for the usual topology, or, more generally, for the Zariski topology) of the space of all the coefficients. As the non-singular matrices form such an open subset of the space of all matrices, this proves the result.
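A quick numerical spot check (a sketch; one factor is made singular on purpose to illustrate that invertibility is not needed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A[:, 0] = 0.0                    # force A to be singular
B = rng.standard_normal((4, 4))

# np.poly(M) returns the coefficients of det(tI - M), highest degree first
print(np.allclose(np.poly(A @ B), np.poly(B @ A)))  # True
```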

More generally, if $A$ is a matrix of order $m \times n$ and $B$ is a matrix of order $n \times m,$ then $AB$ is an $m \times m$ matrix and $BA$ is an $n \times n$ matrix, and one has
$$p_{BA}(t) = t^{n-m} p_{AB}(t).$$

To prove this, one may suppose $n > m,$ by exchanging, if needed, $A$ and $B.$ Then, by bordering $A$ on the bottom by $n - m$ rows of zeros, and $B$ on the right by $n - m$ columns of zeros, one gets two $n \times n$ matrices $A'$ and $B'$ such that $B'A' = BA$ and $A'B'$ is equal to $AB$ bordered by $n - m$ rows and columns of zeros. The result follows from the case of square matrices, by comparing the characteristic polynomials of $A'B'$ and $AB.$
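The rectangular case can be spot-checked the same way (a sketch with $m = 2,$ $n = 3$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

p_AB = np.poly(A @ B)  # degree m
p_BA = np.poly(B @ A)  # degree n
# p_BA(t) = t^(n-m) p_AB(t): same coefficients, padded with n-m zeros
print(np.allclose(p_BA, np.append(p_AB, np.zeros(n - m))))  # True
```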

Characteristic polynomial of A^k

If $\lambda$ is an eigenvalue of a square matrix $A$ with eigenvector $\mathbf{v},$ then $\lambda^k$ is an eigenvalue of $A^k$ because
$$A^k \mathbf{v} = A^{k-1} A \mathbf{v} = \lambda A^{k-1} \mathbf{v} = \dots = \lambda^k \mathbf{v}.$$

The multiplicities can be shown to agree as well, and this generalizes to any polynomial in place of $t^k$:

Theorem. Let $A$ be a square $n \times n$ matrix and let $f(t)$ be a polynomial. If the characteristic polynomial of $A$ has a factorization
$$p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)$$
then the characteristic polynomial of the matrix $f(A)$ is given by
$$p_{f(A)}(t) = (t - f(\lambda_1))(t - f(\lambda_2)) \cdots (t - f(\lambda_n)).$$

That is, the algebraic multiplicity of $\lambda$ in $f(A)$ equals the sum of algebraic multiplicities of $\lambda'$ in $A$ over all $\lambda'$ such that $f(\lambda') = \lambda.$ In particular, $\operatorname{tr}(f(A)) = \sum_{i=1}^n f(\lambda_i)$ and $\det(f(A)) = \prod_{i=1}^n f(\lambda_i).$ Here a polynomial $f(t) = t^3 + 1,$ for example, is evaluated on a matrix $A$ simply as $f(A) = A^3 + I.$
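These consequences are easy to check numerically with $f(t) = t^3 + 1,$ the example polynomial above (a sketch; the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
f = lambda t: t**3 + 1
fA = np.linalg.matrix_power(A, 3) + np.eye(3)  # f(A) = A^3 + I

lam = np.linalg.eigvals(A)
# eigenvalues of f(A) are f(lambda_i), with matching trace and determinant
print(np.allclose(np.sort(np.linalg.eigvals(fA)), np.sort(f(lam))))  # True
print(np.isclose(np.trace(fA), f(lam).sum()))                        # True
print(np.isclose(np.linalg.det(fA), f(lam).prod()))                  # True
```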

The theorem applies to matrices and polynomials over any field or commutative ring. However, the assumption that $p_A(t)$ has a factorization into linear factors is not always satisfied, unless the matrix is over an algebraically closed field such as the complex numbers.

Proof

This proof only applies to matrices and polynomials over the complex numbers (or any algebraically closed field). In that case, the characteristic polynomial of any square matrix can always be factorized as
$$p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)$$
where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A,$ possibly repeated. Moreover, the Jordan decomposition theorem guarantees that any square matrix $A$ can be decomposed as $A = S^{-1} U S,$ where $S$ is an invertible matrix and $U$ is upper triangular with $\lambda_1, \ldots, \lambda_n$ on the diagonal (with each eigenvalue repeated according to its algebraic multiplicity). (The Jordan normal form has stronger properties, but these are sufficient; alternatively the Schur decomposition can be used, which is less popular but somewhat easier to prove.)

Let $f(t) = \sum_i \alpha_i t^i.$ Then
$$f(A) = \sum_i \alpha_i (S^{-1} U S)^i = \sum_i \alpha_i S^{-1} U S\, S^{-1} U S \cdots S^{-1} U S = \sum_i \alpha_i S^{-1} U^i S = S^{-1} \left( \sum_i \alpha_i U^i \right) S = S^{-1} f(U) S.$$
For an upper triangular matrix $U$ with diagonal $\lambda_1, \dots, \lambda_n,$ the matrix $U^i$ is upper triangular with diagonal $\lambda_1^i, \dots, \lambda_n^i,$ and hence $f(U)$ is upper triangular with diagonal $f(\lambda_1), \dots, f(\lambda_n).$ Therefore, the eigenvalues of $f(U)$ are $f(\lambda_1), \dots, f(\lambda_n).$ Since $f(A) = S^{-1} f(U) S$ is similar to $f(U),$ it has the same eigenvalues, with the same algebraic multiplicities.
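The triangularization step can be mirrored numerically with a complex Schur decomposition, which plays the role of $A = S^{-1} U S$ here (a sketch using `scipy.linalg.schur`; the test matrix and $f(t) = t^3 + 1$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = Z U Z^H with U upper triangular
U, Z = schur(A, output='complex')

def f(M):  # f(t) = t^3 + 1, applied to a square matrix
    return np.linalg.matrix_power(M, 3) + np.eye(M.shape[0])

# f(U) is upper triangular with f(lambda_i) on its diagonal, and these
# diagonal entries are exactly the eigenvalues of f(A)
print(np.allclose(np.sort_complex(np.diag(f(U))),
                  np.sort_complex(np.linalg.eigvals(f(A)))))  # True
```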

Secular function and secular equation

Secular function

The term secular function has been used for what is now called characteristic polynomial (in some literature the term secular function is still used). The term comes from the fact that the characteristic polynomial was used to calculate secular perturbations (on a time scale of a century, that is, slow compared to annual motion) of planetary orbits, according to Lagrange's theory of oscillations.

Secular equation

Secular equation may have several meanings.

  • In linear algebra it is sometimes used in place of characteristic equation.
  • In astronomy it is the algebraic or numerical expression of the magnitude of the inequalities in a planet's motion that remain after the inequalities of a short period have been allowed for.
  • In molecular orbital calculations relating to the energy of the electron and its wave function it is also used instead of the characteristic equation.

For general associative algebras

The above definition of the characteristic polynomial of a matrix $A \in M_n(F)$ with entries in a field $F$ generalizes without any changes to the case when $F$ is just a commutative ring. Garibaldi (2004) defines the characteristic polynomial for elements of an arbitrary finite-dimensional (associative, but not necessarily commutative) algebra over a field $F$ and proves the standard properties of the characteristic polynomial in this generality.
