Ring of polynomial functions


In mathematics, the ring of polynomial functions on a vector space V over a field k gives a coordinate-free analog of a polynomial ring. It is denoted by k[V]. If V is finite dimensional and is viewed as an algebraic variety, then k[V] is precisely the coordinate ring of V.

The explicit definition of the ring can be given as follows. Given a polynomial ring {\displaystyle k[t_{1},\dots ,t_{n}]}, we can view {\displaystyle t_{i}} as a coordinate function on {\displaystyle k^{n}}; i.e., {\displaystyle t_{i}(x)=x_{i}} where {\displaystyle x=(x_{1},\dots ,x_{n})}. This suggests the following: given a vector space V, let k[V] be the commutative k-algebra generated by the dual space {\displaystyle V^{*}}, which is a subring of the ring of all functions {\displaystyle V\to k}. If we fix a basis for V and write {\displaystyle t_{i}} for its dual basis, then k[V] consists of polynomials in the {\displaystyle t_{i}}.
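To make this concrete, here is a minimal Python sketch (our own illustration, not part of the article; the names t, f and the choice k = R, n = 3 are assumptions) showing coordinate functions as elements of the dual space and a polynomial function generated by them:

    # Coordinate functions t_i on R^3: t_i(x) = x_i.
    def t(i):
        return lambda x: x[i]

    t1, t2, t3 = t(0), t(1), t(2)

    # An element of k[V]: a polynomial in the coordinate functions,
    # here f = 2*t1^2 + t1*t3 - 5*t2, evaluated pointwise on V.
    def f(x):
        return 2 * t1(x)**2 + t1(x) * t3(x) - 5 * t2(x)

    print(f((1.0, 2.0, 3.0)))  # 2*1 + 1*3 - 5*2 = -5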

If k is infinite, then k[V] is the symmetric algebra of the dual space {\displaystyle V^{*}}.

In applications, one also defines k[V] when V is defined over some subfield of k (e.g., k is the complex field and V is a real vector space). The same definition still applies.

Throughout the article, for simplicity, the base field k is assumed to be infinite.

Relation with polynomial ring

Let {\displaystyle A=K[x]} be the set of all polynomials over a field K and B be the set of all polynomial functions in one variable over K. Both A and B are algebras over K, with the standard multiplication and addition of polynomials and functions. We can map each f in A to {\displaystyle {\hat {f}}} in B by the rule {\displaystyle {\hat {f}}(t)=f(t)}. A routine check shows that the mapping {\displaystyle f\mapsto {\hat {f}}} is a homomorphism of the algebras A and B. This homomorphism is an isomorphism if and only if K is an infinite field. For example, if K is a finite field then let {\displaystyle p(x)=\prod \limits _{t\in K}(x-t)}. Then p is a nonzero polynomial in K[x], yet {\displaystyle p(t)=0} for all t in K, so {\displaystyle {\hat {p}}=0} is the zero function and our homomorphism is not an isomorphism (and, in fact, the algebras are not isomorphic, since the algebra of polynomials is infinite while that of polynomial functions is finite).
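The counterexample is easy to verify by machine. A minimal Python sketch, assuming K = F_3 represented as integers mod 3 (the specific field is our choice):

    # p(x) = prod_{t in K} (x - t) over F_3: nonzero as a polynomial
    # (it has degree 3), yet identically zero as a function on K.
    q = 3
    K = range(q)

    def poly(x):
        result = 1
        for t in K:
            result = (result * (x - t)) % q
        return result

    print([poly(x) for x in K])  # [0, 0, 0]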

If K is infinite, suppose f is a polynomial with {\displaystyle {\hat {f}}=0}. We want to show that this implies {\displaystyle f=0}. Let {\displaystyle \deg f=n} and let {\displaystyle t_{0},t_{1},\dots ,t_{n}} be n + 1 distinct elements of K. Then {\displaystyle f(t_{i})=0} for {\displaystyle 0\leq i\leq n}, and by Lagrange interpolation we have {\displaystyle f=0}. Hence the mapping {\displaystyle f\mapsto {\hat {f}}} is injective. Since this mapping is clearly surjective, it is bijective and thus an algebra isomorphism of A and B.
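The interpolation step can be checked with sympy (a sketch; the polynomial f and the sample points are our own choices, and interpolate is sympy's built-in polynomial interpolation):

    # A polynomial of degree n is determined by its values at n + 1
    # distinct points; here n = 2, and three values recover f exactly.
    from sympy import symbols, interpolate, expand

    x = symbols('x')
    f = x**2 - 3*x + 1
    points = [(t, f.subs(x, t)) for t in (0, 1, 2)]
    print(expand(interpolate(points, x)))  # x**2 - 3*x + 1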

Symmetric multilinear maps

Let k be an infinite field of characteristic zero (or at least very large) and V a finite-dimensional vector space.

Let {\displaystyle S^{q}(V)} denote the vector space of multilinear functionals {\displaystyle \textstyle \lambda :\prod _{1}^{q}V\to k} that are symmetric; that is, {\displaystyle \lambda (v_{1},\dots ,v_{q})} is the same for all permutations of the {\displaystyle v_{i}}.

Any λ in {\displaystyle S^{q}(V)} gives rise to a homogeneous polynomial function f of degree q: we just let {\displaystyle f(v)=\lambda (v,\dots ,v)}. To see that f is a polynomial function, choose a basis {\displaystyle e_{i},\,1\leq i\leq n} of V and let {\displaystyle t_{i}} be its dual basis. Then

{\displaystyle \lambda (v_{1},\dots ,v_{q})=\sum _{i_{1},\dots ,i_{q}=1}^{n}\lambda (e_{i_{1}},\dots ,e_{i_{q}})t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q})},

which implies that f is a polynomial in the {\displaystyle t_{i}}.
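For q = 2 this construction is the familiar passage from a symmetric bilinear form to its quadratic form. A small Python sketch (the matrix A and the dimension n = 2 are our own example):

    # A symmetric bilinear functional on R^2, lam(u, v) = u^T A v,
    # and the homogeneous degree-2 polynomial function f(v) = lam(v, v).
    A = [[2.0, 1.0],
         [1.0, 3.0]]  # symmetric, so lam(u, v) = lam(v, u)

    def lam(u, v):
        return sum(A[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

    def f(v):
        # in dual coordinates: f = 2*t1^2 + 2*t1*t2 + 3*t2^2
        return lam(v, v)

    print(f((1.0, 1.0)))  # 2 + 2 + 3 = 7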

Thus, there is a well-defined linear map into the space {\displaystyle k[V]_{q}} of homogeneous polynomial functions of degree q:

{\displaystyle \phi :S^{q}(V)\to k[V]_{q},\,\phi (\lambda )(v)=\lambda (v,\dots ,v).}

We show it is an isomorphism. Choosing a basis as before, any homogeneous polynomial function f of degree q can be written as:

{\displaystyle f=\sum _{i_{1},\dots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}\cdots t_{i_{q}}}

where the coefficients {\displaystyle a_{i_{1}\cdots i_{q}}} are symmetric in {\displaystyle i_{1},\dots ,i_{q}}. Let

{\displaystyle \psi (f)(v_{1},\dots ,v_{q})=\sum _{i_{1},\dots ,i_{q}=1}^{n}a_{i_{1}\cdots i_{q}}t_{i_{1}}(v_{1})\cdots t_{i_{q}}(v_{q}).}

Clearly, {\displaystyle \phi \circ \psi } is the identity; in particular, φ is surjective. To see that φ is injective, suppose φ(λ) = 0. Consider, for scalars {\displaystyle t_{1},\dots ,t_{q}} in k,

{\displaystyle \phi (\lambda )(t_{1}v_{1}+\cdots +t_{q}v_{q})=\lambda (t_{1}v_{1}+\cdots +t_{q}v_{q},\dots ,t_{1}v_{1}+\cdots +t_{q}v_{q})},

which is zero. The coefficient of {\displaystyle t_{1}t_{2}\cdots t_{q}} in the above expression is q! times {\displaystyle \lambda (v_{1},\dots ,v_{q})}; since q! is invertible in k by the characteristic assumption, it follows that λ = 0.

Note: φ is independent of a choice of basis; so the above proof shows that ψ is also independent of a basis, a fact that is not a priori obvious.

Example: A symmetric bilinear functional gives rise to a quadratic form in a unique way, and every quadratic form arises in this way.
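Explicitly, for q = 2 the inverse ψ is the classical polarization identity: since the characteristic of k is not 2, the symmetric bilinear functional attached to a quadratic form f is recovered as

{\displaystyle \lambda (u,v)={\tfrac {1}{2}}{\bigl (}f(u+v)-f(u)-f(v){\bigr )}.}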

Taylor series expansion

Main article: Taylor series

Given a smooth function, locally, one can get a partial derivative of the function from its Taylor series expansion and, conversely, one can recover the function from the series expansion. This fact continues to hold for polynomial functions on a vector space. If f is in k[V], then we write: for x, y in V,

{\displaystyle f(x+y)=\sum _{n=0}^{\infty }g_{n}(x,y)}

where {\displaystyle g_{n}(x,y)} are homogeneous of degree n in y, and only finitely many of them are nonzero. We then let

{\displaystyle (P_{y}f)(x)=g_{1}(x,y),}

resulting in the linear endomorphism {\displaystyle P_{y}} of k[V]. It is called the polarization operator. We then have, as promised:

Theorem — For each f in k[V] and x, y in V,

{\displaystyle f(x+y)=\sum _{n=0}^{\infty }{1 \over n!}P_{y}^{n}f(x)}.

Proof: We first note that {\displaystyle (P_{y}f)(x)} is the coefficient of t in {\displaystyle f(x+ty)}; in other words, since {\displaystyle g_{0}(x,y)=g_{0}(x,0)=f(x)},

{\displaystyle P_{y}f(x)=\left.{d \over dt}\right|_{t=0}f(x+ty)}

where the right-hand side is, by definition,

{\displaystyle \left.{f(x+ty)-f(x) \over t}\right|_{t=0}.}

The theorem follows from this. For example, for n = 2, we have:

{\displaystyle P_{y}^{2}f(x)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}P_{y}f(x+t_{1}y)=\left.{\partial \over \partial t_{1}}\right|_{t_{1}=0}\left.{\partial \over \partial t_{2}}\right|_{t_{2}=0}f(x+(t_{1}+t_{2})y)=2!\,g_{2}(x,y).}

The general case is similar. {\displaystyle \square }
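The theorem is easy to check symbolically in an example. A sympy sketch (our own one-variable case f(x) = x^3, with the operator P implemented directly from the derivative description in the proof):

    # Verify f(x + y) = sum_n (1/n!) P_y^n f(x) for f = x^3 on V = k.
    from sympy import symbols, diff, expand, factorial

    x, y, t = symbols('x y t')
    f = x**3

    def P(g):
        # polarization operator: d/dt g(x + t*y) at t = 0
        return diff(g.subs(x, x + t*y), t).subs(t, 0)

    g, total = f, 0
    for n in range(5):  # f has degree 3, so higher terms vanish
        total += g / factorial(n)
        g = P(g)

    print(expand(total - f.subs(x, x + y)))  # 0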

Operator product algebra

When the polynomials take values not in the field k but in some algebra, one may define additional structure. Thus, for example, one may consider the ring of functions valued in GL(n,m), instead of the case k = GL(1,m). In this case, one may impose an additional axiom.

The operator product algebra is an associative algebra of the form

{\displaystyle A^{i}(x)B^{j}(y)=\sum _{k}f_{k}^{ij}(x,y,z)C^{k}(z)}

The structure constants {\displaystyle f_{k}^{ij}(x,y,z)} are required to be single-valued functions, rather than sections of some vector bundle. The fields (or operators) {\displaystyle A^{i}(x)} are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some radius of convergence, typically with a radius of convergence of {\displaystyle |x-y|}. Thus, the ring of functions can be taken to be the ring of polynomial functions.

The above can be considered to be an additional requirement imposed on the ring; it is sometimes called the bootstrap. In physics, a special case of the operator product algebra is known as the operator product expansion.
