
Symmetric polynomial

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.
This article is about individual symmetric polynomials. For the ring of symmetric polynomials, see ring of symmetric functions.

In mathematics, a symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn).
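
For a small number of variables, this definition can be checked mechanically by substituting every permutation of the variables and comparing the expanded results. The following is a minimal SymPy sketch; the helper name is_symmetric is illustrative, not a standard library function.

    from itertools import permutations
    from sympy import symbols, expand

    def is_symmetric(poly, variables):
        # True if poly is unchanged by every permutation of the variables
        for perm in permutations(variables):
            swapped = poly.subs(dict(zip(variables, perm)), simultaneous=True)
            if expand(swapped - poly) != 0:
                return False
        return True

    X1, X2 = symbols('X1 X2')
    print(is_symmetric(X1**3 + X2**3 - 7, (X1, X2)))  # True
    print(is_symmetric(X1 - X2, (X1, X2)))            # False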

Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials. Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.

Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.

Examples

The following polynomials in two variables X1 and X2 are symmetric:

X_1^3 + X_2^3 - 7
4 X_1^2 X_2^2 + X_1^3 X_2 + X_1 X_2^3 + (X_1 + X_2)^4

as is the following polynomial in three variables X1, X2, X3:

X_1 X_2 X_3 - 2 X_1 X_2 - 2 X_1 X_3 - 2 X_2 X_3

There are many ways to make specific symmetric polynomials in any number of variables (see the various types below). An example of a somewhat different flavor is

\prod_{1 \leq i < j \leq n} (X_i - X_j)^2

where one first constructs a polynomial that changes sign under every exchange of variables, and then takes its square, which renders it completely symmetric (if the variables represent the roots of a monic polynomial, this polynomial gives its discriminant).

On the other hand, the polynomial in two variables

X_1 - X_2

is not symmetric, since if one exchanges X1 and X2 one gets a different polynomial, X2 - X1. Similarly, in three variables

X_1^4 X_2^2 X_3 + X_1 X_2^4 X_3^2 + X_1^2 X_2 X_3^4

is invariant only under cyclic permutations of the three variables, which is not sufficient for it to be a symmetric polynomial. However, the following is symmetric:

X_1^4 X_2^2 X_3 + X_1 X_2^4 X_3^2 + X_1^2 X_2 X_3^4 + X_1^4 X_2 X_3^2 + X_1 X_2^2 X_3^4 + X_1^2 X_2^4 X_3
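
Reusing the is_symmetric sketch from the definition above (or any equivalent brute-force check), one can confirm that the cyclic sum fails the test while its six-term completion passes.

    from sympy import symbols

    X1, X2, X3 = symbols('X1 X2 X3')
    cyclic = X1**4*X2**2*X3 + X1*X2**4*X3**2 + X1**2*X2*X3**4
    full = cyclic + X1**4*X2*X3**2 + X1*X2**2*X3**4 + X1**2*X2**4*X3

    print(is_symmetric(cyclic, (X1, X2, X3)))  # False: invariant only under cyclic shifts
    print(is_symmetric(full, (X1, X2, X3)))    # True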

Applications

Galois theory

Main article: Galois theory

One context in which symmetric polynomial functions occur is in the study of monic univariate polynomials of degree n having n roots in a given field. These n roots determine the polynomial, and when they are considered as independent variables, the coefficients of the polynomial are symmetric polynomial functions of the roots. Moreover the fundamental theorem of symmetric polynomials implies that a polynomial function f of the n roots can be expressed as (another) polynomial function of the coefficients of the polynomial determined by the roots if and only if f is given by a symmetric polynomial.

This yields the approach to solving polynomial equations by inverting this map, "breaking" the symmetry – given the coefficients of the polynomial (the elementary symmetric polynomials in the roots), how can one recover the roots? This leads to studying solutions of polynomials using the permutation group of the roots, originally in the form of Lagrange resolvents, later developed in Galois theory.

Relation with the roots of a monic univariate polynomial

Consider a monic polynomial in t of degree n

P = t^n + a_{n-1} t^{n-1} + \cdots + a_2 t^2 + a_1 t + a_0

with coefficients ai in some field K. There exist n roots x1,...,xn of P in some possibly larger field (for instance if K is the field of real numbers, the roots will exist in the field of complex numbers); some of the roots might be equal, but the fact that one has all roots is expressed by the relation

P = t^n + a_{n-1} t^{n-1} + \cdots + a_2 t^2 + a_1 t + a_0 = (t - x_1)(t - x_2) \cdots (t - x_n).

By comparing coefficients one finds that

\begin{aligned}
a_{n-1} &= -x_1 - x_2 - \cdots - x_n \\
a_{n-2} &= x_1 x_2 + x_1 x_3 + \cdots + x_2 x_3 + \cdots + x_{n-1} x_n = \sum_{1 \leq i < j \leq n} x_i x_j \\
&\ \ \vdots \\
a_1 &= (-1)^{n-1} (x_2 x_3 \cdots x_n + x_1 x_3 x_4 \cdots x_n + \cdots + x_1 x_2 \cdots x_{n-2} x_n + x_1 x_2 \cdots x_{n-1}) = (-1)^{n-1} \sum_{i=1}^{n} \prod_{j \neq i} x_j \\
a_0 &= (-1)^n x_1 x_2 \cdots x_n.
\end{aligned}

These are in fact just instances of Vieta's formulas. They show that all coefficients of the polynomial are given in terms of the roots by a symmetric polynomial expression: although for a given polynomial P there may be qualitative differences between the roots (like lying in the base field K or not, being simple or multiple roots), none of this affects the way the roots occur in these expressions.
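
These formulas can be verified symbolically for a small case, say n = 3, by expanding the product of linear factors and comparing coefficients; the following SymPy sketch does exactly that.

    from sympy import symbols, expand, Poly

    t, x1, x2, x3 = symbols('t x1 x2 x3')
    P = expand((t - x1)*(t - x2)*(t - x3))
    _, a2, a1, a0 = Poly(P, t).all_coeffs()   # coefficients of t^3 + a2*t^2 + a1*t + a0

    print(expand(a2 + (x1 + x2 + x3)))           # 0, i.e. a2 = -(x1 + x2 + x3)
    print(expand(a1 - (x1*x2 + x1*x3 + x2*x3)))  # 0
    print(expand(a0 + x1*x2*x3))                 # 0, i.e. a0 = -x1*x2*x3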

Now one may change the point of view, by taking the roots rather than the coefficients as basic parameters for describing P, and considering them as indeterminates rather than as constants in an appropriate field; the coefficients ai then become just the particular symmetric polynomials given by the above equations. Those polynomials, without the sign (-1)^{n-i}, are known as the elementary symmetric polynomials in x1, ..., xn. A basic fact, known as the fundamental theorem of symmetric polynomials, states that any symmetric polynomial in n variables can be given by a polynomial expression in terms of these elementary symmetric polynomials. It follows that any symmetric polynomial expression in the roots of a monic polynomial can be expressed as a polynomial in the coefficients of the polynomial, and in particular that its value lies in the base field K that contains those coefficients. Thus, when working only with such symmetric polynomial expressions in the roots, it is unnecessary to know anything particular about those roots, or to compute in any larger field than K in which those roots may lie. In fact the values of the roots themselves become rather irrelevant, and the necessary relations between coefficients and symmetric polynomial expressions can be found by computations in terms of symmetric polynomials only. An example of such relations is given by Newton's identities, which express the sum of any fixed power of the roots in terms of the elementary symmetric polynomials.
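
As a small illustration of this principle, the symmetric expression x1^2 + x2^2 + x3^2 in the roots of a monic cubic can be rewritten purely in terms of its coefficients, using the identity p2 = e1^2 - 2e2 (one of Newton's identities); a sketch under the same setup as above:

    from sympy import symbols, expand, Poly

    t, x1, x2, x3 = symbols('t x1 x2 x3')
    P = expand((t - x1)*(t - x2)*(t - x3))
    _, a2, a1, a0 = Poly(P, t).all_coeffs()

    # e1 = -a2 and e2 = a1, so x1^2 + x2^2 + x3^2 = e1^2 - 2*e2 = a2^2 - 2*a1
    print(expand(x1**2 + x2**2 + x3**2 - (a2**2 - 2*a1)))   # 0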

Special kinds of symmetric polynomials

There are a few types of symmetric polynomials in the variables X1, X2, ..., Xn that are fundamental.

Elementary symmetric polynomials

Main article: Elementary symmetric polynomial

For each nonnegative integer k, the elementary symmetric polynomial ek(X1, ..., Xn) is the sum of all distinct products of k distinct variables. (Some authors denote it by σk instead.) For k = 0 there is only the empty product so e0(X1, ..., Xn) = 1, while for k > n, no products at all can be formed, so ek(X1, X2, ..., Xn) = 0 in these cases. The remaining n elementary symmetric polynomials are building blocks for all symmetric polynomials in these variables: as mentioned above, any symmetric polynomial in the variables considered can be obtained from these elementary symmetric polynomials using multiplications and additions only. In fact one has the following more detailed facts:

  • any symmetric polynomial P in X1, ..., Xn can be written as a polynomial expression in the polynomials ek(X1, ..., Xn) with 1 ≤ k ≤ n;
  • this expression is unique up to equivalence of polynomial expressions;
  • if P has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for n = 2, the relevant elementary symmetric polynomials are e1(X1, X2) = X1 + X2, and e2(X1, X2) = X1X2. The first polynomial in the list of examples above can then be written as

X_1^3 + X_2^3 - 7 = e_1(X_1, X_2)^3 - 3 e_2(X_1, X_2) e_1(X_1, X_2) - 7

(for a proof that this is always possible see the fundamental theorem of symmetric polynomials).
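
The elementary symmetric polynomials themselves are easy to generate directly as sums of products over k-element subsets of the variables; the sketch below (the helper name e is illustrative) also re-checks the identity just given.

    from itertools import combinations
    from math import prod
    from sympy import symbols, expand

    def e(k, variables):
        # elementary symmetric polynomial: sum over products of k distinct variables
        return sum(prod(c) for c in combinations(variables, k))

    X1, X2 = symbols('X1 X2')
    lhs = X1**3 + X2**3 - 7
    rhs = e(1, (X1, X2))**3 - 3*e(2, (X1, X2))*e(1, (X1, X2)) - 7
    print(expand(lhs - rhs))   # 0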

Monomial symmetric polynomials

Powers and products of elementary symmetric polynomials work out to rather complicated expressions. If one seeks basic additive building blocks for symmetric polynomials, a more natural choice is to take those symmetric polynomials that contain only one type of monomial, with only those copies required to obtain symmetry. Any monomial in X1, ..., Xn can be written as X1^α1 ... Xn^αn where the exponents αi are natural numbers (possibly zero); writing α = (α1,...,αn) this can be abbreviated to X^α. The monomial symmetric polynomial mα(X1, ..., Xn) is defined as the sum of all monomials X^β where β ranges over all distinct permutations of (α1,...,αn). For instance one has

m_{(3,1,1)}(X_1, X_2, X_3) = X_1^3 X_2 X_3 + X_1 X_2^3 X_3 + X_1 X_2 X_3^3,
m_{(3,2,1)}(X_1, X_2, X_3) = X_1^3 X_2^2 X_3 + X_1^3 X_2 X_3^2 + X_1^2 X_2^3 X_3 + X_1^2 X_2 X_3^3 + X_1 X_2^3 X_3^2 + X_1 X_2^2 X_3^3.

Clearly mα = mβ when β is a permutation of α, so one usually considers only those mα for which α1 ≥ α2 ≥ ... ≥ αn, in other words for which α is a partition of an integer. These monomial symmetric polynomials form a vector space basis: every symmetric polynomial P can be written as a linear combination of the monomial symmetric polynomials. To do this it suffices to separate the different types of monomial occurring in P. In particular if P has integer coefficients, then so will the linear combination.
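
A monomial symmetric polynomial can be generated by summing one monomial for each distinct permutation of the exponent vector; a short sketch (the helper name m is illustrative):

    from itertools import permutations
    from math import prod
    from sympy import symbols, expand

    def m(alpha, variables):
        # one monomial per distinct permutation of the exponent vector alpha
        return sum(prod(v**a for v, a in zip(variables, exps))
                   for exps in set(permutations(alpha)))

    X1, X2, X3 = symbols('X1 X2 X3')
    print(expand(m((3, 1, 1), (X1, X2, X3))))
    # X1**3*X2*X3 + X1*X2**3*X3 + X1*X2*X3**3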

The elementary symmetric polynomials are particular cases of monomial symmetric polynomials: for 0 ≤ k ≤ n one has

e_k(X_1, \ldots, X_n) = m_\alpha(X_1, \ldots, X_n), where α is the partition of k into k parts 1 (followed by n − k zeros).

Power-sum symmetric polynomials

Main article: Power sum symmetric polynomial

For each integer k ≥ 1, the monomial symmetric polynomial m(k,0,...,0)(X1, ..., Xn) is of special interest. It is the power sum symmetric polynomial, defined as

p_k(X_1, \ldots, X_n) = X_1^k + X_2^k + \cdots + X_n^k.

All symmetric polynomials can be obtained from the first n power sum symmetric polynomials by additions and multiplications, possibly involving rational coefficients. More precisely,

Any symmetric polynomial in X1, ..., Xn can be expressed as a polynomial expression with rational coefficients in the power sum symmetric polynomials p1(X1, ..., Xn), ..., pn(X1, ..., Xn).

In particular, the remaining power sum polynomials pk(X1, ..., Xn) for k > n can be so expressed in terms of the first n power sum polynomials; for example

p_3(X_1, X_2) = \tfrac{3}{2} p_2(X_1, X_2) p_1(X_1, X_2) - \tfrac{1}{2} p_1(X_1, X_2)^3.
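
This identity is straightforward to confirm by direct expansion, for instance with SymPy:

    from sympy import symbols, expand, Rational

    X1, X2 = symbols('X1 X2')
    p1, p2, p3 = X1 + X2, X1**2 + X2**2, X1**3 + X2**3
    print(expand(p3 - (Rational(3, 2)*p2*p1 - Rational(1, 2)*p1**3)))   # 0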

In contrast to the situation for the elementary and complete homogeneous polynomials, a symmetric polynomial in n variables with integral coefficients need not be expressible with integral coefficients in terms of the power sum symmetric polynomials. For example, for n = 2, the symmetric polynomial

m_{(2,1)}(X_1, X_2) = X_1^2 X_2 + X_1 X_2^2

has the expression

m_{(2,1)}(X_1, X_2) = \tfrac{1}{2} p_1(X_1, X_2)^3 - \tfrac{1}{2} p_2(X_1, X_2) p_1(X_1, X_2).

Using three variables one gets a different expression

\begin{aligned}
m_{(2,1)}(X_1, X_2, X_3) &= X_1^2 X_2 + X_1 X_2^2 + X_1^2 X_3 + X_1 X_3^2 + X_2^2 X_3 + X_2 X_3^2 \\
&= p_1(X_1, X_2, X_3) p_2(X_1, X_2, X_3) - p_3(X_1, X_2, X_3).
\end{aligned}

The corresponding expression was valid for two variables as well (it suffices to set X3 to zero), but since it involves p3, it could not be used to illustrate the statement for n = 2. The example shows that whether or not the expression for a given monomial symmetric polynomial in terms of the first n power sum polynomials involves rational coefficients may depend on n. But rational coefficients are always needed to express elementary symmetric polynomials (except the constant ones, and e1 which coincides with the first power sum) in terms of power sum polynomials. The Newton identities provide an explicit method to do this; it involves division by integers up to n, which explains the rational coefficients. Because of these divisions, the mentioned statement fails in general when coefficients are taken in a field of finite characteristic; however, it is valid with coefficients in any ring containing the rational numbers.
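
Both expressions for m(2,1) can be checked by expansion; the contrast between the rational coefficients needed for n = 2 and the integral ones available for n = 3 is visible directly in the sketch below.

    from sympy import symbols, expand, Rational

    X1, X2, X3 = symbols('X1 X2 X3')

    # n = 2: rational coefficients are needed
    m21 = X1**2*X2 + X1*X2**2
    p1, p2 = X1 + X2, X1**2 + X2**2
    print(expand(m21 - (Rational(1, 2)*p1**3 - Rational(1, 2)*p2*p1)))   # 0

    # n = 3: an integral expression exists, but it involves p3
    m21_3 = X1**2*X2 + X1*X2**2 + X1**2*X3 + X1*X3**2 + X2**2*X3 + X2*X3**2
    q1, q2, q3 = X1 + X2 + X3, X1**2 + X2**2 + X3**2, X1**3 + X2**3 + X3**3
    print(expand(m21_3 - (q1*q2 - q3)))   # 0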

Complete homogeneous symmetric polynomials

Main article: Complete homogeneous symmetric polynomial

For each nonnegative integer k, the complete homogeneous symmetric polynomial hk(X1, ..., Xn) is the sum of all distinct monomials of degree k in the variables X1, ..., Xn. For instance

h_3(X_1, X_2, X_3) = X_1^3 + X_1^2 X_2 + X_1^2 X_3 + X_1 X_2^2 + X_1 X_2 X_3 + X_1 X_3^2 + X_2^3 + X_2^2 X_3 + X_2 X_3^2 + X_3^3.

The polynomial hk(X1, ..., Xn) is also the sum of all distinct monomial symmetric polynomials of degree k in X1, ..., Xn; for instance, for the example above,

\begin{aligned}
h_3(X_1, X_2, X_3) &= m_{(3)}(X_1, X_2, X_3) + m_{(2,1)}(X_1, X_2, X_3) + m_{(1,1,1)}(X_1, X_2, X_3) \\
&= (X_1^3 + X_2^3 + X_3^3) + (X_1^2 X_2 + X_1^2 X_3 + X_1 X_2^2 + X_1 X_3^2 + X_2^2 X_3 + X_2 X_3^2) + (X_1 X_2 X_3).
\end{aligned}

All symmetric polynomials in these variables can be built up from complete homogeneous ones: any symmetric polynomial in X1, ..., Xn can be obtained from the complete homogeneous symmetric polynomials h1(X1, ..., Xn), ..., hn(X1, ..., Xn) via multiplications and additions. More precisely:

Any symmetric polynomial P in X1, ..., Xn can be written as a polynomial expression in the polynomials hk(X1, ..., Xn) with 1 ≤ k ≤ n.
If P has integral coefficients, then the polynomial expression also has integral coefficients.

For example, for n = 2, the relevant complete homogeneous symmetric polynomials are h1(X1, X2) = X1 + X2 and h2(X1, X2) = X1^2 + X1X2 + X2^2. The first polynomial in the list of examples above can then be written as

X_1^3 + X_2^3 - 7 = -2 h_1(X_1, X_2)^3 + 3 h_1(X_1, X_2) h_2(X_1, X_2) - 7.
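
Complete homogeneous symmetric polynomials can be generated as sums over multisets of variables, i.e. combinations with repetition; the sketch below (the helper name h is illustrative) also re-checks the identity just given.

    from itertools import combinations_with_replacement
    from math import prod
    from sympy import symbols, expand

    def h(k, variables):
        # complete homogeneous symmetric polynomial: sum of all degree-k monomials
        return sum(prod(c) for c in combinations_with_replacement(variables, k))

    X1, X2 = symbols('X1 X2')
    lhs = X1**3 + X2**3 - 7
    rhs = -2*h(1, (X1, X2))**3 + 3*h(1, (X1, X2))*h(2, (X1, X2)) - 7
    print(expand(lhs - rhs))   # 0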

As in the case of power sums, the given statement applies in particular to the complete homogeneous symmetric polynomials beyond hn(X1, ..., Xn), allowing them to be expressed in terms of the ones up to that point; again the resulting identities become invalid when the number of variables is increased.

An important aspect of complete homogeneous symmetric polynomials is their relation to elementary symmetric polynomials, which can be expressed as the identities

\sum_{i=0}^{k} (-1)^i e_i(X_1, \ldots, X_n) h_{k-i}(X_1, \ldots, X_n) = 0, for all k > 0, and any number of variables n.

Since e0(X1, ..., Xn) and h0(X1, ..., Xn) are both equal to 1, one can isolate either the first or the last term of these summations; the former gives a set of equations that allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials, and the latter gives a set of equations that allows doing the inverse. This implicitly shows that any symmetric polynomial can be expressed in terms of the hk(X1, ..., Xn) with 1 ≤ k ≤ n: one first expresses the symmetric polynomial in terms of the elementary symmetric polynomials, and then expresses those in terms of the mentioned complete homogeneous ones.
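
The identity relating the two families can be spot-checked for a fixed number of variables, here n = 3, and several values of k; a sketch with helper names e and h as in the earlier sketches:

    from itertools import combinations, combinations_with_replacement
    from math import prod
    from sympy import symbols, expand

    X = symbols('X1 X2 X3')

    def e(k):
        return sum(prod(c) for c in combinations(X, k))   # empty sum (0) when k > 3

    def h(k):
        return sum(prod(c) for c in combinations_with_replacement(X, k))

    for k in range(1, 5):
        print(k, expand(sum((-1)**i * e(i) * h(k - i) for i in range(k + 1))))   # 0 for every k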

Schur polynomials

Main article: Schur polynomial

Another class of symmetric polynomials is that of the Schur polynomials, which are of fundamental importance in the applications of symmetric polynomials to representation theory. They are however not as easy to describe as the other kinds of special symmetric polynomials; see the main article for details.

Symmetric polynomials in algebra

Symmetric polynomials are important to linear algebra, representation theory, and Galois theory. They are also important in combinatorics, where they are mostly studied through the ring of symmetric functions, which avoids having to carry around a fixed number of variables all the time.

Alternating polynomials

Main article: Alternating polynomials

Analogous to symmetric polynomials are alternating polynomials: polynomials that, rather than being invariant under permutation of the entries, change according to the sign of the permutation.

These are all products of the Vandermonde polynomial and a symmetric polynomial, and form a quadratic extension of the ring of symmetric polynomials: the Vandermonde polynomial is a square root of the discriminant.
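
A quick way to see both claims in a small case is to build the Vandermonde polynomial in three variables and check that a transposition flips its sign while its square is fully symmetric; a SymPy sketch:

    from itertools import combinations, permutations
    from math import prod
    from sympy import symbols, expand

    X = symbols('X1 X2 X3')
    X1, X2, X3 = X

    # Vandermonde polynomial: product of (Xi - Xj) over i < j
    v = prod(a - b for a, b in combinations(X, 2))

    swapped = v.subs({X1: X2, X2: X1}, simultaneous=True)
    print(expand(swapped + v))   # 0: a transposition changes the sign

    sq = expand(v**2)            # the discriminant-type square
    print(all(expand(sq.subs(dict(zip(X, s)), simultaneous=True) - sq) == 0
              for s in permutations(X)))   # True: the square is symmetric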
