In mathematics, the Euler–Maclaurin formula provides a powerful connection between integrals (see calculus) and sums. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.
The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735 (and later generalized as Darboux's formula). Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals.
If <math>m</math> and <math>n</math> are natural numbers and <math>f(x)</math> is a real or complex valued continuous function for real numbers <math>x</math> in the interval <math>[m,n],</math> then the integral

<math>I = \int_m^n f(x)\,dx</math>

can be approximated by the sum (or vice versa)

<math>S = f(m+1) + \cdots + f(n-1) + f(n)</math>

(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives <math>f^{(k)}(x)</math> evaluated at the end points of the interval, that is to say when <math>x=m</math> and <math>x=n.</math>

Explicitly, for <math>p</math> a positive integer and a function <math>f(x)</math> that is <math>p</math> times continuously differentiable in the interval <math>[m,n],</math> we have

<math>S - I = \sum_{k=1}^{p} \frac{B_k}{k!} \left(f^{(k-1)}(n) - f^{(k-1)}(m)\right) + R_p,</math>

where <math>B_k</math> is the <math>k</math>th Bernoulli number (with <math>B_1 = \tfrac12</math>) and <math>R_p</math> is an error term which is normally small for suitable values of <math>p</math> and depends on <math>n,</math> <math>m,</math> <math>p,</math> and <math>f.</math>
The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for <math>B_1,</math> in which case we have

<math>\sum_{i=m}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(n)+f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p,</math>

or alternatively

<math>\sum_{i=m+1}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(n)-f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p.</math>
The first few Bernoulli numbers of even index are

<math>B_2 = \tfrac{1}{6},\quad B_4 = -\tfrac{1}{30},\quad B_6 = \tfrac{1}{42},\quad B_8 = -\tfrac{1}{30},\quad B_{10} = \tfrac{5}{66}.</math>

We may then write the Euler–Maclaurin formula as

<math>\sum_{i=m}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(m)+f(n)}{2} + \frac{1}{12}\left(f'(n)-f'(m)\right) - \frac{1}{720}\left(f'''(n)-f'''(m)\right) + \frac{1}{30240}\left(f^{(5)}(n)-f^{(5)}(m)\right) - \cdots + R_p.</math>
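The even-index form of the formula can be checked numerically. The following short Python sketch is not part of the article; the choice of <math>f(x) = 1/x^2,</math> the interval <math>[10, 50],</math> and <math>p = 6</math> are illustrative assumptions.

<syntaxhighlight lang="python">
# A minimal numerical check of the even-index form above (illustrative sketch):
# f(x) = 1/x^2 on [m, n] = [10, 50] with p = 6, so the correction sum uses B_2, B_4, B_6.
import math

def f(x):  return x ** -2
def d1(x): return -2 * x ** -3     # f'
def d3(x): return -24 * x ** -5    # f'''
def d5(x): return -720 * x ** -7   # f^(5)

m, n = 10, 50
S = sum(f(i) for i in range(m, n + 1))       # sum_{i=m}^{n} f(i)
I = 1 / m - 1 / n                            # ∫_m^n x^-2 dx in closed form

B = {2: 1/6, 4: -1/30, 6: 1/42}
deriv = {2: d1, 4: d3, 6: d5}
rhs = I + (f(n) + f(m)) / 2
for k in (2, 4, 6):
    rhs += B[k] / math.factorial(k) * (deriv[k](n) - deriv[k](m))

print(S, rhs, S - rhs)                       # S - rhs is the remainder R_6 (very small here)
</syntaxhighlight>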
The Bernoulli polynomials and periodic Bernoulli functions
The formula is derived below using repeated integration by parts applied to successive intervals <math>[k, k+1]</math> for integers <math>k.</math> The derivation uses the periodic Bernoulli functions <math>P_n(x),</math> which are defined in terms of the Bernoulli polynomials <math>B_n(x)</math> for <math>n = 0, 1, 2, \ldots</math>

The Bernoulli polynomials may be defined recursively by

<math>B_0(x) = 1,</math>
<math>B_n'(x) = nB_{n-1}(x) \quad\text{and}\quad \int_0^1 B_n(x)\,dx = 0 \quad\text{for } n \ge 1,</math>

and the periodic Bernoulli functions are defined as

<math>P_n(x) = B_n\left(x - \lfloor x \rfloor\right),</math>

where <math>\lfloor x \rfloor</math> denotes the largest integer that is not greater than <math>x,</math> so that <math>x - \lfloor x \rfloor</math> always lies in the interval <math>[0,1).</math>

It can be shown that <math>B_n(1) = B_n(0)</math> for all <math>n \ne 1,</math> so that except for <math>P_1,</math> all the periodic Bernoulli functions are continuous. The functions <math>P_n</math> are sometimes written as <math>\tilde{B}_n.</math>
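The recursive definition above translates directly into code. The following Python sketch (an illustration, not part of the article) builds the Bernoulli polynomials from that recursion using exact rational arithmetic and evaluates the periodic functions <math>P_n(x).</math>

<syntaxhighlight lang="python">
# Sketch: Bernoulli polynomials from the recursive definition, with exact rationals.
from fractions import Fraction
from math import floor

def bernoulli_polys(n_max):
    """Return coefficient lists [c_0, c_1, ...] with B_n(x) = sum c_i x^i."""
    polys = [[Fraction(1)]]                      # B_0(x) = 1
    for n in range(1, n_max + 1):
        prev = polys[-1]
        # Antiderivative of n*B_{n-1}(x): coefficient of x^(i+1) is n*c_i/(i+1).
        poly = [Fraction(0)] + [Fraction(n) * c / (i + 1) for i, c in enumerate(prev)]
        # Fix the constant term so that the integral over [0, 1] vanishes.
        poly[0] = -sum(c / (i + 1) for i, c in enumerate(poly))
        polys.append(poly)
    return polys

def P(n, x, polys):
    """Periodic Bernoulli function P_n(x) = B_n(x - floor(x))."""
    t = x - floor(x)
    return sum(float(c) * t ** i for i, c in enumerate(polys[n]))

polys = bernoulli_polys(4)
print(polys[2])           # [Fraction(1, 6), Fraction(-1, 1), Fraction(1, 1)], i.e. x^2 - x + 1/6
print(P(2, 3.25, polys))  # same value as B_2(0.25)
</syntaxhighlight>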
The remainder term
The remainder term can be written as

<math>R_p = (-1)^{p+1} \int_m^n f^{(p)}(x)\,\frac{P_p(x)}{p!}\,dx.</math>

When <math>n > 0,</math> it can be shown that

<math>\left|B_n(x)\right| \le \frac{2 \cdot n!}{(2\pi)^n}\,\zeta(n),</math>

where <math>\zeta</math> denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials <math>B_n(x).</math> The bound is achieved for even <math>n</math> when <math>x</math> is zero. The term <math>\zeta(n)</math> may be omitted for odd <math>n</math> but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated using

<math>\left|R_p\right| \le \frac{2\,\zeta(p)}{(2\pi)^p} \int_m^n \left|f^{(p)}(x)\right|\,dx.</math>
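The bound can be compared with the actual remainder in a concrete case. The Python sketch below is illustrative only; the choice of <math>f(x) = 1/x,</math> the interval <math>[1, 40],</math> and <math>p = 4</math> are assumptions made for the example.

<syntaxhighlight lang="python">
# Sketch: compare the actual remainder R_4 with the bound 2*zeta(4)/(2*pi)^4 * ∫|f''''| dx
# for f(x) = 1/x on [1, 40], where all derivatives are known in closed form.
import math

m, n, p = 1, 40, 4
f  = lambda x: 1 / x
d1 = lambda x: -1 / x**2              # f'
d3 = lambda x: -6 / x**4              # f'''
integral = math.log(n) - math.log(m)  # ∫_m^n dx/x

# Left-hand side minus all explicit terms of the even-index formula (p = 4).
S = sum(f(i) for i in range(m, n + 1))
explicit = integral + (f(m) + f(n)) / 2 \
    + (1/6) / math.factorial(2) * (d1(n) - d1(m)) \
    + (-1/30) / math.factorial(4) * (d3(n) - d3(m))
R4 = S - explicit

# Bound: ∫_m^n |f''''(x)| dx = ∫ 24/x^5 dx = 6*(m^-4 - n^-4), and zeta(4) = pi^4/90.
zeta4 = math.pi**4 / 90
bound = 2 * zeta4 / (2 * math.pi)**4 * 6 * (m**-4 - n**-4)
print(R4, bound, abs(R4) <= bound)    # expect True
</syntaxhighlight>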
Applicable formula
We can use the formula as a means of approximating a finite integral, with the following simple formula:

<math>\int_{x_1}^{x_N} f(x)\,dx \approx h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) - \frac{h^2}{12}\left(f'(x_N) - f'(x_1)\right) + \frac{h^4}{720}\left(f'''(x_N) - f'''(x_1)\right) - \cdots,</math>

where <math>N</math> is the number of points in the interval of integration from <math>x_1</math> to <math>x_N</math> and <math>h</math> is the distance between points, so that <math>h = (x_N - x_1)/(N-1).</math> Note the series above is usually not convergent; indeed, often the terms will increase rapidly after a number of iterations. Thus, attention generally needs to be paid to the remainder term.
This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms.
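A corrected trapezoidal rule of this kind is easy to implement. The Python sketch below is an illustration rather than article code; the test integrand <math>e^x</math> on <math>[0,1]</math> and the use of only the first two correction terms are assumptions of the example.

<syntaxhighlight lang="python">
# Sketch of the corrected trapezoidal rule suggested above: trapezoid sum plus the
# h^2/12 and h^4/720 endpoint corrections, tested on ∫_0^1 e^x dx = e - 1.
import math

def euler_maclaurin_quad(f, df, d3f, a, b, N):
    """Approximate ∫_a^b f dx from N equally spaced points using two corrections."""
    h = (b - a) / (N - 1)
    xs = [a + i * h for i in range(N)]
    trap = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)
    corr2 = -(h ** 2 / 12) * (df(b) - df(a))
    corr4 = (h ** 4 / 720) * (d3f(b) - d3f(a))
    return trap + corr2 + corr4

exact = math.e - 1
approx = euler_maclaurin_quad(math.exp, math.exp, math.exp, 0.0, 1.0, 11)
print(abs(approx - exact))   # far smaller than the plain trapezoid error of about h^2/12
</syntaxhighlight>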
The Basel problem

The Basel problem is to determine the sum

<math>1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}.</math>

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals <math>\pi^2/6,</math> which he proved in the same year. Parseval's identity for the Fourier series of <math>f(x) = x</math> gives the same result.
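The acceleration Euler used can be reproduced in a few lines. The Python sketch below is illustrative; the cutoff <math>N = 20</math> and the number of correction terms are assumptions, and floating-point arithmetic limits the result to far fewer digits than Euler obtained by hand.

<syntaxhighlight lang="python">
# Sketch: accelerate sum(1/k^2) with a few Euler–Maclaurin correction terms.
# Summing N = 20 terms directly is only accurate to about 0.05; estimating the
# tail with the formula recovers pi^2/6 to roughly ten decimal places.
import math

N = 20
partial = sum(1.0 / k**2 for k in range(1, N + 1))

# Tail: sum_{k=N+1}^inf 1/k^2 ≈ ∫_N^inf dx/x^2 - f(N)/2 - f'(N)/12 + f'''(N)/720
f, d1, d3 = N**-2, -2.0 * N**-3, -24.0 * N**-5    # f(N), f'(N), f'''(N) for f(x) = 1/x^2
tail = 1.0 / N - f / 2 - d1 / 12 + d3 / 720

print(partial + tail, math.pi**2 / 6)
</syntaxhighlight>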
Sums involving a polynomial
If <math>f</math> is a polynomial and <math>p</math> is big enough, then the remainder term vanishes. For instance, if <math>f(x) = x^3,</math> we can choose <math>p = 2</math> to obtain, after simplification,

<math>\sum_{i=0}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^2.</math>
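A quick check of this closed form against a direct sum (an illustrative sketch, not article code):

<syntaxhighlight lang="python">
# Verify the sum-of-cubes formula above against a direct summation.
def sum_of_cubes(n):
    return (n * (n + 1) // 2) ** 2            # closed form obtained from Euler–Maclaurin

for n in range(50):
    assert sum_of_cubes(n) == sum(i ** 3 for i in range(n + 1))
print("sum of cubes formula verified for n = 0..49")
</syntaxhighlight>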
The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.
Asymptotic expansion of sums
In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

<math>\sum_{n=a}^{b} f(n) \sim \int_a^b f(x)\,dx + \frac{f(b)+f(a)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right),</math>

where <math>a</math> and <math>b</math> are integers. Often the expansion remains valid even after taking the limits <math>a \to -\infty</math> or <math>b \to +\infty</math> or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,

<math>\sum_{k=0}^{\infty} \frac{1}{(z+k)^2} \sim \frac{1}{z} + \frac{1}{2z^2} + \sum_{t=1}^{\infty} \frac{B_{2t}}{z^{2t+1}}.</math>

Here the left-hand side is equal to <math>\psi^{(1)}(z),</math> namely the first-order polygamma function defined through <math>\psi^{(1)}(z) = \tfrac{d^2}{dz^2}\ln\Gamma(z);</math> the gamma function <math>\Gamma(z)</math> is equal to <math>(z-1)!</math> if <math>z</math> is a positive integer. This results in an asymptotic expansion for <math>\psi^{(1)}(z).</math> That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
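The expansion of the trigamma function can be compared with a library value. The Python sketch below assumes SciPy is available and uses <math>z = 10</math> and three Bernoulli terms as illustrative choices.

<syntaxhighlight lang="python">
# Sketch: compare the asymptotic expansion of psi^(1)(z) above with SciPy's value at z = 10.
from scipy.special import polygamma

z = 10.0
B = {2: 1/6, 4: -1/30, 6: 1/42}                      # Bernoulli numbers used below
asymptotic = 1/z + 1/(2*z**2) + sum(B[2*t] / z**(2*t + 1) for t in (1, 2, 3))
print(asymptotic, float(polygamma(1, z)))            # agree to about ten significant digits
</syntaxhighlight>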
Examples
If s is an integer greater than 1 we have:

<math>\sum_{k=1}^{n} \frac{1}{k^s} \approx \frac{1}{s-1} + \frac{1}{2} - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} + \sum_{i=1}^{\lfloor p/2 \rfloor} \frac{B_{2i}}{(2i)!}\,\frac{(s+2i-2)!}{(s-1)!}\left(1 - \frac{1}{n^{s+2i-1}}\right).</math>

Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion:

<math>\sum_{k=1}^{n} \frac{1}{k^s} \sim \zeta(s) - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} - \sum_{i=1}^{\infty} \frac{B_{2i}}{(2i)!}\,\frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}.</math>

For s equal to 2 this simplifies to

<math>\sum_{k=1}^{n} \frac{1}{k^2} \sim \zeta(2) - \frac{1}{n} + \frac{1}{2n^2} - \sum_{i=1}^{\infty} \frac{B_{2i}}{n^{2i+1}},</math>

or

<math>\sum_{k=1}^{n} \frac{1}{k^2} \sim \frac{\pi^2}{6} - \frac{1}{n} + \frac{1}{2n^2} - \frac{1}{6n^3} + \frac{1}{30n^5} - \frac{1}{42n^7} + \cdots.</math>
We can also derive (from Equation 1 below) the perhaps not-so-useful formula:

<math>\zeta(s) = \frac{s}{s-1} - s\int_1^\infty \frac{x - \lfloor x \rfloor}{x^{s+1}}\,dx.</math>
When s is 1, we get the following asymptotic expansion for the so-called harmonic numbers:

<math>\sum_{k=1}^{n} \frac{1}{k} \sim \ln n + \gamma + \frac{1}{2n} - \sum_{j=1}^{\infty} \frac{B_{2j}}{2j\,n^{2j}} = \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120n^4} - \cdots,</math>

where <math>\gamma \approx 0.5772\ldots</math> is the Euler–Mascheroni constant.
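The harmonic-number expansion is easy to test numerically. The Python sketch below is illustrative; truncating after the <math>1/(120n^4)</math> term and hard-coding <math>\gamma</math> to twenty digits are assumptions of the example.

<syntaxhighlight lang="python">
# Sketch: evaluate H_n from the asymptotic expansion above and compare with a direct sum.
import math

GAMMA = 0.57721566490153286061      # Euler–Mascheroni constant, hard-coded rather than computed

def H_asymptotic(n):
    return math.log(n) + GAMMA + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

n = 1000
H_direct = sum(1.0 / k for k in range(1, n + 1))
print(H_direct, H_asymptotic(n), abs(H_direct - H_asymptotic(n)))
</syntaxhighlight>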
Derivation by mathematical induction

The Bernoulli polynomials <math>B_n(x)</math> and the periodic Bernoulli functions <math>P_n(x)</math> for <math>n = 0, 1, 2, \ldots</math> were introduced above.
The first several Bernoulli polynomials are

<math>B_0(x) = 1,</math>
<math>B_1(x) = x - \tfrac{1}{2},</math>
<math>B_2(x) = x^2 - x + \tfrac{1}{6},</math>
<math>B_3(x) = x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x,</math>
<math>B_4(x) = x^4 - 2x^3 + x^2 - \tfrac{1}{30},</math>

and so on.
The values <math>B_n(0)</math> are the Bernoulli numbers. Notice that for <math>n \ne 1</math> we have

<math>B_n(0) = B_n(1).</math>

For <math>n = 1,</math>

<math>B_1(1) = -B_1(0) = \tfrac{1}{2}.</math>

The functions <math>P_n</math> agree with the Bernoulli polynomials on the interval <math>[0,1]</math> and are periodic with period 1. Furthermore, except when <math>n = 1,</math> they are also continuous. Thus,

<math>P_n(0) = P_n(1) = B_n(0) \quad\text{for } n \ne 1.</math>
Let <math>k</math> be an integer, and consider the integral

<math>\int_k^{k+1} f(x)\,dx = \int u\,dv,</math>

where

<math>u = f(x),\quad du = f'(x)\,dx,\quad dv = P_0(x)\,dx = dx,\quad v = P_1(x).</math>

Integrating by parts, we get

<math>\int_k^{k+1} f(x)\,dx = \Big[f(x)P_1(x)\Big]_k^{k+1} - \int_k^{k+1} f'(x)P_1(x)\,dx.</math>

Using <math>B_1(0) = -\tfrac12,</math> <math>B_1(1) = \tfrac12,</math> and summing the above from k = 0 to k = n − 1, we get

<math>\int_0^n f(x)\,dx = \frac{f(0)}{2} + f(1) + \cdots + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x)P_1(x)\,dx.</math>

Adding (f(n) − f(0))/2 to both sides and rearranging, we have

<math>\sum_{k=1}^{n} f(k) = \int_0^n f(x)\,dx + \frac{f(n)-f(0)}{2} + \int_0^n f'(x)P_1(x)\,dx. \qquad (1)</math>
The last two terms therefore give the error when the integral is taken to approximate the sum.
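Before continuing the derivation, identity (1) can be checked numerically. The Python sketch below is an illustration only; the choices <math>f(x) = e^{-x},</math> <math>n = 8,</math> and the midpoint quadrature for the correction integral are assumptions of the example.

<syntaxhighlight lang="python">
# Numerical sanity check of equation (1): f(x) = exp(-x) on [0, n]. The integral of
# f'(x) P_1(x) is computed with a composite midpoint rule on each unit interval,
# where P_1(x) = x - floor(x) - 1/2.
import math

f  = lambda x: math.exp(-x)
df = lambda x: -math.exp(-x)
n  = 8

lhs = sum(f(k) for k in range(1, n + 1))

# ∫_0^n f'(x) P_1(x) dx, unit interval by unit interval (midpoint rule, 2000 slices each)
correction = 0.0
slices = 2000
h = 1.0 / slices
for k in range(n):
    correction += sum(df(k + (j + 0.5) * h) * ((j + 0.5) * h - 0.5) * h
                      for j in range(slices))

rhs = (1 - math.exp(-n)) + (f(n) - f(0)) / 2 + correction
print(lhs, rhs, abs(lhs - rhs))     # difference is limited only by the quadrature error
</syntaxhighlight>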
Next, consider

<math>\int_k^{k+1} f'(x)P_1(x)\,dx = \int u\,dv,</math>

where

<math>u = f'(x),\quad du = f''(x)\,dx,\quad dv = P_1(x)\,dx,\quad v = \tfrac{1}{2}P_2(x).</math>

Integrating by parts again, we get

<math>\Big[\tfrac{1}{2}f'(x)P_2(x)\Big]_k^{k+1} - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx = \frac{B_2}{2}\left(f'(k+1)-f'(k)\right) - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx.</math>

Then summing from k = 0 to k = n − 1, and then replacing the last integral in (1) with what we have thus shown to be equal to it, we have

<math>\sum_{k=1}^{n} f(k) = \int_0^n f(x)\,dx + \frac{f(n)-f(0)}{2} + \frac{B_2}{2}\left(f'(n)-f'(0)\right) - \frac{1}{2}\int_0^n f''(x)P_2(x)\,dx.</math>
By now the reader will have guessed that this process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on the identities for periodic Bernoulli functions.