
Power series

In mathematics, a power series (in one variable) is an infinite series of the form
$$\sum_{n=0}^{\infty} a_n (x-c)^n = a_0 + a_1(x-c) + a_2(x-c)^2 + \dots$$
where $a_n$ represents the coefficient of the nth term and $c$ is a constant called the center of the series. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function.

In many situations, the center $c$ is equal to zero, for instance for Maclaurin series. In such cases, the power series takes the simpler form
$$\sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + \dots.$$
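
As an illustration (not part of the article), a truncated power series can be evaluated directly from its coefficient list; the Python sketch below uses Horner's scheme, and the helper name `eval_power_series` is hypothetical rather than a standard API.

```python
def eval_power_series(coeffs, x, c=0.0):
    """Evaluate the partial sum sum_{n < len(coeffs)} coeffs[n] * (x - c)**n.

    Horner's scheme: a0 + (x-c)*(a1 + (x-c)*(a2 + ...)).
    """
    t = x - c
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * t + a
    return acc

# Partial sum of the geometric series 1 + x + x^2 + ... at x = 0.5
print(eval_power_series([1.0] * 20, 0.5))   # close to 1/(1 - 0.5) = 2
```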

The partial sums of a power series are polynomials, the partial sums of the Taylor series of an analytic function are a sequence of converging polynomial approximations to the function at the center, and a converging power series can be seen as a kind of generalized polynomial with infinitely many terms. Conversely, every polynomial is a power series with only finitely many non-zero terms.

Beyond their role in mathematical analysis, power series also occur in combinatorics as generating functions (a kind of formal power series) and in electronic engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument x fixed at 1⁄10. In number theory, the concept of p-adic numbers is also closely related to that of a power series.

Examples

Polynomial

[Figure: The exponential function (in blue) and its improving approximation by the sum of the first n + 1 terms of its Maclaurin power series (in red): n = 0 gives $f(x) = 1$; n = 1 gives $f(x) = 1 + x$; n = 2 gives $f(x) = 1 + x + x^2/2$; n = 3 gives $f(x) = 1 + x + x^2/2 + x^3/6$; and so on.]

Every polynomial of degree d can be expressed as a power series around any center c, where all terms of degree higher than d have a coefficient of zero. For instance, the polynomial $f(x) = x^2 + 2x + 3$ can be written as a power series around the center $c = 0$ as
$$f(x) = 3 + 2x + 1x^2 + 0x^3 + 0x^4 + \cdots$$
or around the center $c = 1$ as
$$f(x) = 6 + 4(x-1) + 1(x-1)^2 + 0(x-1)^3 + 0(x-1)^4 + \cdots.$$
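
As a quick sanity check (an illustration, not from the article), the two expansions above can be compared numerically; the short Python sketch below evaluates both forms at a few sample points.

```python
# Compare the expansion of x^2 + 2x + 3 around c = 0 and around c = 1.
def around_zero(x):
    return 3 + 2 * x + x ** 2

def around_one(x):
    return 6 + 4 * (x - 1) + (x - 1) ** 2

for x in (-2.0, 0.0, 0.5, 3.0):
    assert abs(around_zero(x) - around_one(x)) < 1e-12
print("both centers give the same polynomial values")
```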

One can view power series as being like "polynomials of infinite degree", although power series are not polynomials in the strict sense.

Geometric series, exponential function and sine

The geometric series formula
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + \cdots,$$
which is valid for $|x| < 1$, is one of the most important examples of a power series, as are the exponential function formula
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
and the sine formula
$$\sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,$$
valid for all real x. These power series are examples of Taylor series (or, more specifically, of Maclaurin series).
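
To make the convergence concrete, the sketch below (an illustration, not from the article) compares partial sums of the Maclaurin series of exp and sin against Python's math library; the errors shrink as more terms are included.

```python
import math

def exp_partial(x, terms):
    # Sum of x^n / n! for n = 0 .. terms - 1
    return sum(x ** n / math.factorial(n) for n in range(terms))

def sin_partial(x, terms):
    # Sum of (-1)^n x^(2n+1) / (2n+1)! for n = 0 .. terms - 1
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 1.5
for terms in (2, 5, 10):
    print(terms,
          exp_partial(x, terms) - math.exp(x),
          sin_partial(x, terms) - math.sin(x))
```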

On the set of exponents

Negative powers are not permitted in an ordinary power series; for instance, $x^{-1} + 1 + x + x^2 + \cdots$ is not considered a power series (although it is a Laurent series). Similarly, fractional powers such as $x^{1/2}$ are not permitted; fractional powers arise in Puiseux series. The coefficients $a_n$ must not depend on $x$; thus, for instance, $\sin(x)\,x + \sin(2x)\,x^2 + \sin(3x)\,x^3 + \cdots$ is not a power series.

Radius of convergence

A power series $\sum_{n=0}^{\infty} a_n (x-c)^n$ is convergent for some values of the variable x, which will always include x = c, since $(x-c)^0 = 1$ and the sum of the series is thus $a_0$ for x = c. The series may diverge for other values of x, possibly all of them. If c is not the only point of convergence, then there is always a number r with 0 < r ≤ ∞ such that the series converges whenever $|x-c| < r$ and diverges whenever $|x-c| > r$. The number r is called the radius of convergence of the power series; in general it is given as
$$r = \liminf_{n\to\infty} \left|a_n\right|^{-\frac{1}{n}}$$
or, equivalently,
$$r^{-1} = \limsup_{n\to\infty} \left|a_n\right|^{\frac{1}{n}}.$$
This is the Cauchy–Hadamard theorem; see limit superior and limit inferior for an explanation of the notation. The relation
$$r^{-1} = \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right|$$
is also satisfied, if this limit exists.
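
As a rough numerical illustration (not from the article), both the Cauchy–Hadamard root test and the ratio test can be approximated from finitely many coefficients; the Python sketch below, with illustrative helper names, applies crude truncations of both to $a_n = 1/n!$ (infinite radius) and $a_n = 2^n$ (radius 1/2).

```python
import math

def radius_by_ratio(coeffs):
    """Estimate r as |a_n / a_{n+1}| using the last available pair of coefficients."""
    a_n, a_next = coeffs[-2], coeffs[-1]
    return abs(a_n / a_next) if a_next != 0 else math.inf

def radius_by_root(coeffs):
    """Estimate r as |a_n|^(-1/n) at the last index (a crude stand-in for the liminf)."""
    n = len(coeffs) - 1
    a_n = coeffs[n]
    return abs(a_n) ** (-1.0 / n) if a_n != 0 else math.inf

N = 40
exp_coeffs = [1.0 / math.factorial(n) for n in range(N)]   # radius = infinity
geom_coeffs = [2.0 ** n for n in range(N)]                  # radius = 1/2

print(radius_by_ratio(exp_coeffs), radius_by_root(exp_coeffs))    # both large and growing with N
print(radius_by_ratio(geom_coeffs), radius_by_root(geom_coeffs))  # both near 0.5
```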

The set of the complex numbers such that $|x-c| < r$ is called the disc of convergence of the series. The series converges absolutely inside its disc of convergence, and it converges uniformly on every compact subset of the disc of convergence.

For $|x-c| = r$, there is no general statement on the convergence of the series. However, Abel's theorem states that if the series is convergent for some value z such that $|z-c| = r$, then the sum of the series for x = z is the limit of the sum of the series for $x = c + t(z-c)$, where t is a real variable less than 1 that tends to 1.

Operations on power series

Addition and subtraction

When two functions f and g are decomposed into power series around the same center c, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if
$$f(x) = \sum_{n=0}^{\infty} a_n (x-c)^n \quad \text{and} \quad g(x) = \sum_{n=0}^{\infty} b_n (x-c)^n$$
then
$$f(x) \pm g(x) = \sum_{n=0}^{\infty} (a_n \pm b_n)(x-c)^n.$$

The sum of two power series will have a radius of convergence at least as large as the smaller of the two radii of convergence of the two series, but possibly larger than either of them. For instance, it is not true that if two power series $\sum_{n=0}^{\infty} a_n x^n$ and $\sum_{n=0}^{\infty} b_n x^n$ have the same radius of convergence, then $\sum_{n=0}^{\infty} (a_n + b_n) x^n$ also has this radius of convergence: if $a_n = (-1)^n$ and $b_n = (-1)^{n+1}\left(1 - \frac{1}{3^n}\right)$, for instance, then both series have the same radius of convergence of 1, but the series
$$\sum_{n=0}^{\infty} (a_n + b_n) x^n = \sum_{n=0}^{\infty} \frac{(-1)^n}{3^n} x^n$$
has a radius of convergence of 3.
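
The example above is easy to check by hand; the Python sketch below (an illustration, not from the article) adds the coefficient sequences termwise and confirms that the combined coefficients equal $(-1)^n/3^n$, whose decay is what pushes the radius out to 3.

```python
# Termwise addition of power series coefficients, using the example above.
N = 10
a = [(-1) ** n for n in range(N)]
b = [(-1) ** (n + 1) * (1 - 1 / 3 ** n) for n in range(N)]
s = [an + bn for an, bn in zip(a, b)]

for n, sn in enumerate(s):
    # Each combined coefficient equals (-1)^n / 3^n, so |s_n|^(1/n) -> 1/3 and r = 3.
    assert abs(sn - (-1) ** n / 3 ** n) < 1e-12
print(s[:5])
```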

Multiplication and division

With the same definitions for $f(x)$ and $g(x)$, the power series of the product and quotient of the functions can be obtained as follows:
$$f(x)g(x) = \left(\sum_{n=0}^{\infty} a_n (x-c)^n\right)\left(\sum_{n=0}^{\infty} b_n (x-c)^n\right) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} a_i b_j (x-c)^{i+j} = \sum_{n=0}^{\infty} \left(\sum_{i=0}^{n} a_i b_{n-i}\right)(x-c)^n.$$

The sequence $m_n = \sum_{i=0}^{n} a_i b_{n-i}$ is known as the Cauchy product of the sequences $a_n$ and $b_n$.
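
A minimal sketch of the Cauchy product on truncated coefficient lists (an illustration, not from the article); it checks that the product of the exp series with itself gives the coefficients of $e^{2x}$, namely $2^n/n!$.

```python
import math

def cauchy_product(a, b):
    """Coefficients of the product series, truncated to the shorter input length."""
    n_terms = min(len(a), len(b))
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(n_terms)]

N = 12
exp_coeffs = [1 / math.factorial(n) for n in range(N)]
prod = cauchy_product(exp_coeffs, exp_coeffs)

for n, m_n in enumerate(prod):
    assert abs(m_n - 2 ** n / math.factorial(n)) < 1e-9   # e^x * e^x = e^(2x)
print(prod[:5])
```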

For division, if one defines the sequence $d_n$ by
$$\frac{f(x)}{g(x)} = \frac{\sum_{n=0}^{\infty} a_n (x-c)^n}{\sum_{n=0}^{\infty} b_n (x-c)^n} = \sum_{n=0}^{\infty} d_n (x-c)^n$$
then
$$f(x) = \left(\sum_{n=0}^{\infty} b_n (x-c)^n\right)\left(\sum_{n=0}^{\infty} d_n (x-c)^n\right)$$
and one can solve recursively for the terms $d_n$ by comparing coefficients.

Solving the corresponding equations yields formulae based on determinants of certain matrices of the coefficients of $f(x)$ and $g(x)$:
$$d_0 = \frac{a_0}{b_0}$$
$$d_n = \frac{1}{b_0^{n+1}} \begin{vmatrix} a_n & b_1 & b_2 & \cdots & b_n \\ a_{n-1} & b_0 & b_1 & \cdots & b_{n-1} \\ a_{n-2} & 0 & b_0 & \cdots & b_{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_0 & 0 & 0 & \cdots & b_0 \end{vmatrix}$$
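
In practice one usually solves the recursion directly rather than expanding the determinants; the Python sketch below (an illustration, not from the article, with an illustrative helper name) recovers $d_n$ by comparing coefficients in $f = g \cdot d$, using the geometric series $1/(1-x)$ as a check.

```python
def divide_series(a, b, n_terms):
    """Coefficients d_n of f/g from f's coefficients a and g's coefficients b (b[0] != 0).

    Comparing coefficients in f = g * d gives
        a_n = sum_{i=0}^{n} b_i * d_{n-i},
    which is solved recursively for d_n.
    """
    d = []
    for n in range(n_terms):
        acc = a[n] if n < len(a) else 0.0
        for i in range(1, n + 1):
            b_i = b[i] if i < len(b) else 0.0
            acc -= b_i * d[n - i]
        d.append(acc / b[0])
    return d

# 1 / (1 - x): a = [1], b = [1, -1]; expect coefficients 1, 1, 1, ...
print(divide_series([1.0], [1.0, -1.0], 8))
```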

Differentiation and integration

Once a function $f(x)$ is given as a power series as above, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated by treating every term separately, since both differentiation and integration are linear transformations of functions:
$$f'(x) = \sum_{n=1}^{\infty} a_n n (x-c)^{n-1} = \sum_{n=0}^{\infty} a_{n+1}(n+1)(x-c)^n,$$
$$\int f(x)\,dx = \sum_{n=0}^{\infty} \frac{a_n (x-c)^{n+1}}{n+1} + k = \sum_{n=1}^{\infty} \frac{a_{n-1}(x-c)^n}{n} + k.$$

Both of these series have the same radius of convergence as the original series.
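
Termwise differentiation and integration amount to simple index shifts on the coefficient list; the Python sketch below (an illustration, not from the article) applies them to the exp series, whose derivative has the same coefficients.

```python
import math

def differentiate(coeffs):
    # Coefficients of f'(x): (n + 1) * a_{n+1} becomes the new coefficient of (x - c)^n.
    return [(n + 1) * a for n, a in enumerate(coeffs[1:])]

def integrate(coeffs, k=0.0):
    # Coefficients of the antiderivative, with k the constant of integration.
    return [k] + [a / (n + 1) for n, a in enumerate(coeffs)]

exp_coeffs = [1 / math.factorial(n) for n in range(8)]
print(differentiate(exp_coeffs))          # again 1/n!, since (e^x)' = e^x
print(integrate(exp_coeffs, k=1.0)[:5])   # 1 + x + x^2/2 + ... (choosing k = 1)
```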

Analytic functions

Main article: Analytic function

A function f defined on some open subset U of R or C is called analytic if it is locally given by a convergent power series. This means that every a ∈ U has an open neighborhood V ⊆ U such that there exists a power series with center a that converges to f(x) for every x ∈ V.

Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero.

If a function is analytic, then it is infinitely differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients $a_n$ can be computed as
$$a_n = \frac{f^{(n)}(c)}{n!},$$

where $f^{(n)}(c)$ denotes the nth derivative of f at c, and $f^{(0)}(c) = f(c)$. This means that every analytic function is locally represented by its Taylor series.
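
As an illustration (assuming SymPy is available; not part of the article), the formula above can be applied symbolically to recover the Maclaurin coefficients of sin.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
c = 0

# a_n = f^(n)(c) / n!  for n = 0 .. 7
coeffs = [sp.diff(f, x, n).subs(x, c) / sp.factorial(n) for n in range(8)]
print(coeffs)   # [0, 1, 0, -1/6, 0, 1/120, 0, -1/5040]
```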

The global form of an analytic function is completely determined by its local behavior in the following sense: if f and g are two analytic functions defined on the same connected open set U, and if there exists an element c ∈ U such that $f^{(n)}(c) = g^{(n)}(c)$ for all n ≥ 0, then f(x) = g(x) for all x ∈ U.

If a power series with radius of convergence r is given, one can consider analytic continuations of the series, that is, analytic functions f which are defined on larger sets than $\{x : |x-c| < r\}$ and agree with the given power series on this set. The number r is maximal in the following sense: there always exists a complex number x with $|x-c| = r$ such that no analytic continuation of the series can be defined at x.

The power series expansion of the inverse function of an analytic function can be determined using the Lagrange inversion theorem.

Behavior near the boundary

The sum of a power series with a positive radius of convergence is an analytic function at every point in the interior of the disc of convergence. However, different behavior can occur at points on the boundary of that disc. For example:

  1. Divergence while the sum extends to an analytic function: $\sum_{n=0}^{\infty} z^n$ has radius of convergence equal to 1 and diverges at every point of $|z| = 1$. Nevertheless, the sum in $|z| < 1$ is $\frac{1}{1-z}$, which is analytic at every point of the plane except for $z = 1$.
  2. Convergent at some points, divergent at others: $\sum_{n=1}^{\infty} \frac{z^n}{n}$ has radius of convergence 1. It converges for $z = -1$, while it diverges for $z = 1$ (see the numerical sketch after this list).
  3. Absolute convergence at every point of the boundary: $\sum_{n=1}^{\infty} \frac{z^n}{n^2}$ has radius of convergence 1, while it converges absolutely, and uniformly, at every point of $|z| = 1$, by the Weierstrass M-test applied with the convergent hyperharmonic series $\sum_{n=1}^{\infty} \frac{1}{n^2}$.
  4. Convergent on the closure of the disc of convergence but not continuous sum: Sierpiński gave an example of a power series with radius of convergence 1, convergent at all points with $|z| = 1$, but whose sum is an unbounded function and, in particular, discontinuous. A sufficient condition for one-sided continuity at a boundary point is given by Abel's theorem.
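
A quick numerical look at item 2 (an illustration, not from the article): at $z = -1$ the partial sums of $\sum z^n/n$ settle toward $-\ln 2$, while at $z = 1$ they grow without bound like the harmonic series.

```python
import math

def partial_sum(z, n_terms):
    # Partial sums of sum_{n>=1} z^n / n
    return sum(z ** n / n for n in range(1, n_terms + 1))

for n_terms in (10, 100, 1000):
    print(n_terms,
          partial_sum(-1.0, n_terms),   # approaches -ln(2) ≈ -0.6931
          partial_sum(1.0, n_terms))    # grows like ln(n): divergence at z = 1
print(-math.log(2))
```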

Formal power series

Main article: Formal power series

In abstract algebra, one attempts to capture the essence of power series without being restricted to the fields of real and complex numbers, and without the need to talk about convergence. This leads to the concept of formal power series, a concept of great utility in algebraic combinatorics.

Power series in several variables

An extension of the theory is necessary for the purposes of multivariable calculus. A power series is here defined to be an infinite series of the form
$$f(x_1, \dots, x_n) = \sum_{j_1, \dots, j_n = 0}^{\infty} a_{j_1, \dots, j_n} \prod_{k=1}^{n} (x_k - c_k)^{j_k},$$
where j = (j1, …, jn) is a vector of natural numbers, the coefficients a(j1, …, jn) are usually real or complex numbers, and the center c = (c1, …, cn) and argument x = (x1, …, xn) are usually real or complex vectors. The symbol $\Pi$ is the product symbol, denoting multiplication. In the more convenient multi-index notation this can be written
$$f(x) = \sum_{\alpha \in \mathbb{N}^n} a_{\alpha} (x - c)^{\alpha},$$
where $\mathbb{N}$ is the set of natural numbers, and so $\mathbb{N}^n$ is the set of ordered n-tuples of natural numbers.

The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series $\sum_{n=0}^{\infty} x_1^n x_2^n$ is absolutely convergent in the set $\{(x_1, x_2) : |x_1 x_2| < 1\}$ between two hyperbolas. (This is an example of a log-convex set, in the sense that the set of points $(\log|x_1|, \log|x_2|)$, where $(x_1, x_2)$ lies in the above region, is a convex set. More generally, one can show that when c = 0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series.
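
As a small check (not from the article), inside the region $|x_1 x_2| < 1$ the partial sums of $\sum x_1^n x_2^n$ form a geometric series in the product $x_1 x_2$ and approach $1/(1 - x_1 x_2)$, even when one coordinate is large.

```python
def partial_sum(x1, x2, n_terms):
    # Partial sums of sum_{n>=0} x1^n * x2^n, a geometric series in x1 * x2
    return sum((x1 * x2) ** n for n in range(n_terms))

# |x1 * x2| = 0.5 < 1 even though |x1| = 5 is large
x1, x2 = 5.0, 0.1
print(partial_sum(x1, x2, 50), 1 / (1 - x1 * x2))   # both close to 2
```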

Order of a power series

Let α be a multi-index for a power series f(x1, x2, …, xn). The order of the power series f is defined to be the least value $r$ such that there is $a_\alpha \neq 0$ with $r = |\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n$, or $\infty$ if f ≡ 0. In particular, for a power series f(x) in a single variable x, the order of f is the smallest power of x with a nonzero coefficient. This definition readily extends to Laurent series.
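
A minimal sketch of the order computation (an illustration, not from the article), representing a multivariate power series as a dict mapping exponent tuples α to coefficients $a_\alpha$.

```python
import math

def series_order(coeffs):
    """Order of a power series given as {multi_index_tuple: coefficient}.

    Returns the smallest |alpha| = alpha_1 + ... + alpha_n with a nonzero
    coefficient, or infinity for the zero series.
    """
    degrees = [sum(alpha) for alpha, a in coeffs.items() if a != 0]
    return min(degrees) if degrees else math.inf

# f(x1, x2) = 3*x1*x2^2 + x1^4  -> order 3
print(series_order({(1, 2): 3.0, (4, 0): 1.0}))   # 3
print(series_order({(0, 0): 0.0}))                # inf (zero series)
```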

Notes

  1. Howard Levi (1967). Polynomials, Power Series, and Calculus. Van Nostrand. p. 24.
  2. Erwin Kreyszig. Advanced Engineering Mathematics, 8th ed. p. 747.
  3. Wacław Sierpiński (1916). "Sur une série potentielle qui, étant convergente en tout point de son cercle de convergence, représente sur ce cercle une fonction discontinue" (in French). Rendiconti del Circolo Matematico di Palermo. 41: 187–190. doi:10.1007/BF03018294. JFM 46.1466.03. S2CID 121218640.
  4. Beckenbach, E. F. (1948). "Convex functions". Bulletin of the American Mathematical Society. 54 (5): 439–460. doi:10.1090/S0002-9904-1948-08994-7.
