
Euler–Maclaurin formula

Article snapshot taken from Wikipedia with Creative Commons Attribution-ShareAlike license.

Summation formula

In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.

The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series, while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.

The formula

If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m, n], then the integral

$$I = \int_m^n f(x)\,dx$$

can be approximated by the sum (or vice versa)

$$S = f(m + 1) + \cdots + f(n - 1) + f(n)$$

(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^(k)(x) evaluated at the endpoints of the interval, that is to say x = m and x = n.

Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m, n], we have

$$S - I = \sum_{k=1}^p \frac{B_k}{k!} \left(f^{(k-1)}(n) - f^{(k-1)}(m)\right) + R_p,$$

where B_k is the kth Bernoulli number (with B_1 = 1/2) and R_p is an error term which depends on n, m, p, and f and is usually small for suitable values of p.

The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B_1. In this case we have

$$\sum_{i=m}^n f(i) = \int_m^n f(x)\,dx + \frac{f(n) + f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p,$$

or alternatively

$$\sum_{i=m+1}^n f(i) = \int_m^n f(x)\,dx + \frac{f(n) - f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p.$$
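
The identity can be checked numerically for a smooth function. The following Python sketch is an illustration rather than part of the article: the test function f(x) = 1/(1 + x) and the cutoffs m, n, p are arbitrary choices. It compares the sum of f(i) with the integral plus the boundary and Bernoulli correction terms; the small difference that remains is the remainder R_p.

```python
import math

# Even Bernoulli numbers B_2, B_4, B_6, B_8 (hard-coded to avoid sign-convention issues)
B_EVEN = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}

def f(x):
    """Illustrative smooth test function."""
    return 1.0 / (1.0 + x)

def f_deriv(x, order):
    """order-th derivative of f: (-1)^order * order! / (1 + x)^(order + 1)."""
    return (-1) ** order * math.factorial(order) / (1.0 + x) ** (order + 1)

def euler_maclaurin_sum(m, n, p=8):
    """Approximate sum_{i=m}^{n} f(i) via the even form of the Euler-Maclaurin formula."""
    integral = math.log((1.0 + n) / (1.0 + m))      # int_m^n f(x) dx, known in closed form here
    total = integral + 0.5 * (f(m) + f(n))
    for k in range(2, p + 1, 2):
        total += B_EVEN[k] / math.factorial(k) * (f_deriv(n, k - 1) - f_deriv(m, k - 1))
    return total

m, n = 1, 50
direct = sum(f(i) for i in range(m, n + 1))
approx = euler_maclaurin_sum(m, n)
print(direct, approx, abs(direct - approx))          # the difference is the remainder R_p
```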

The remainder term

See also: Bernoulli polynomials

The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [r, r + 1] for r = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term.

The remainder term has an exact expression in terms of the periodized Bernoulli functions P_k(x). The Bernoulli polynomials may be defined recursively by B_0(x) = 1 and, for k ≥ 1,

$$B_k'(x) = kB_{k-1}(x), \qquad \int_0^1 B_k(x)\,dx = 0.$$

The periodized Bernoulli functions are defined as

$$P_k(x) = B_k\bigl(x - \lfloor x \rfloor\bigr),$$

where ⌊x⌋ denotes the largest integer less than or equal to x, so that x − ⌊x⌋ always lies in the interval [0,1).
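
A minimal sketch of these definitions in Python; the polynomial coefficients are hard-coded from the short list of Bernoulli polynomials given in the Proofs section below, and the helper names are ours.

```python
import math

# Bernoulli polynomials B_0 ... B_4, coefficients from constant term upward
# (taken from the explicit list in the Proofs section)
BERNOULLI_POLY = {
    0: [1.0],
    1: [-0.5, 1.0],
    2: [1.0 / 6.0, -1.0, 1.0],
    3: [0.0, 0.5, -1.5, 1.0],
    4: [-1.0 / 30.0, 0.0, 1.0, -2.0, 1.0],
}

def bernoulli_poly(k, x):
    """Evaluate the Bernoulli polynomial B_k(x)."""
    return sum(c * x ** i for i, c in enumerate(BERNOULLI_POLY[k]))

def periodized_bernoulli(k, x):
    """P_k(x) = B_k(x - floor(x)), the 1-periodic Bernoulli function."""
    return bernoulli_poly(k, x - math.floor(x))

# P_2 takes the same value at points one period apart; P_1 jumps at the integers
print(periodized_bernoulli(2, 0.25), periodized_bernoulli(2, 3.25))
print(periodized_bernoulli(1, 0.999), periodized_bernoulli(1, 1.0))
```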

With this notation, the remainder term R_p equals

$$R_p = (-1)^{p+1}\int_m^n f^{(p)}(x) \frac{P_p(x)}{p!}\,dx.$$

When k > 0, it can be shown that for 0 ≤ x ≤ 1,

$$\bigl|B_k(x)\bigr| \le \frac{2 \cdot k!}{(2\pi)^k}\zeta(k),$$

where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials B_k(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k but the proof in this case is more complex (see Lehmer). Using this inequality, the size of the remainder term can be estimated as

$$\left|R_p\right| \leq \frac{2 \zeta(p)}{(2\pi)^p}\int_m^n \left|f^{(p)}(x)\right|\,dx.$$
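
The bound can be illustrated numerically. In the sketch below (the test function and interval are arbitrary choices), the actual remainder R_2 is recovered from the p = 2 form of the formula and compared against the estimate 2ζ(2)/(2π)² ∫|f''|.

```python
import math

def f(x):   return 1.0 / (1.0 + x)           # illustrative smooth function
def f1(x):  return -1.0 / (1.0 + x) ** 2     # f'

m, n = 1, 10
S = sum(f(i) for i in range(m, n + 1))
I = math.log((1.0 + n) / (1.0 + m))          # int_m^n f(x) dx

# Actual remainder R_2, solved from  S = I + (f(n)+f(m))/2 + B_2/2! (f'(n)-f'(m)) + R_2
R2 = S - I - 0.5 * (f(n) + f(m)) - (1.0 / 6.0) / 2.0 * (f1(n) - f1(m))

# Bound |R_2| <= 2 zeta(2)/(2 pi)^2 * int_m^n |f''| dx; here f'' > 0, so the
# integral of |f''| is simply f'(n) - f'(m)
bound = 2.0 * (math.pi ** 2 / 6.0) / (2.0 * math.pi) ** 2 * (f1(n) - f1(m))
print(abs(R2), bound)                        # |R_2| stays below the bound
```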

Low-order cases

The Bernoulli numbers from B_1 to B_7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore, the low-order cases of the Euler–Maclaurin formula are:

$$\begin{aligned}
\sum_{i=m}^n f(i) - \int_m^n f(x)\,dx &= \frac{f(m)+f(n)}{2} + \int_m^n f'(x)P_1(x)\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \int_m^n f''(x)\frac{P_2(x)}{2!}\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} + \int_m^n f'''(x)\frac{P_3(x)}{3!}\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} - \int_m^n f^{(4)}(x)\frac{P_4(x)}{4!}\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \int_m^n f^{(5)}(x)\frac{P_5(x)}{5!}\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \frac{1}{42}\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} - \int_m^n f^{(6)}(x)\frac{P_6(x)}{6!}\,dx \\
&= \frac{f(m)+f(n)}{2} + \frac{1}{6}\frac{f'(n) - f'(m)}{2!} - \frac{1}{30}\frac{f'''(n) - f'''(m)}{4!} + \frac{1}{42}\frac{f^{(5)}(n) - f^{(5)}(m)}{6!} + \int_m^n f^{(7)}(x)\frac{P_7(x)}{7!}\,dx.
\end{aligned}$$

Applications

The Basel problem

The Basel problem is to determine the sum

$$1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}.$$

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π²/6, which he proved in the same year.
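
A rough modern re-enactment of that computation, as a sketch: the cutoff n = 10 and the number of correction terms are arbitrary, and double precision stops short of Euler's 20 decimals. The corrections come from the expansion for s = 2 given later in the article.

```python
import math

# Even Bernoulli numbers B_2 ... B_12
B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30, 10: 5/66, 12: -691/2730}

n = 10                                           # only ten terms are summed directly
partial = sum(1.0 / k ** 2 for k in range(1, n + 1))

# zeta(2) = partial sum + 1/n - 1/(2 n^2) + sum_i B_{2i} / n^{2i+1}
estimate = partial + 1.0 / n - 1.0 / (2 * n ** 2)
for j, b in sorted(B.items()):
    estimate += b / n ** (j + 1)

print(estimate)                                  # ~1.6449340668482264
print(math.pi ** 2 / 6)                          # the exact value pi^2/6
```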

Sums involving a polynomial

See also: Faulhaber's formula

If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x³, we can choose p = 2 to obtain, after simplification,

$$\sum_{i=0}^n i^3 = \left(\frac{n(n + 1)}{2}\right)^2.$$
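
A quick sanity check of this closed form (the sample values of n are arbitrary):

```python
# Check the sum of cubes against the closed form (n(n+1)/2)^2 for a few values of n
for n in (0, 1, 5, 100):
    assert sum(i ** 3 for i in range(n + 1)) == (n * (n + 1) // 2) ** 2
print("sum of cubes matches (n(n+1)/2)^2")
```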

Approximation of integrals

The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set x_i = a + (i − 1)h, so that x_1 = a and x_N = b. Then:

$$\begin{aligned}
I &= \int_a^b f(x)\,dx \\
&\sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \frac{h^2}{12}\bigl[f'(x_1) - f'(x_N)\bigr] - \frac{h^4}{720}\bigl[f'''(x_1) - f'''(x_N)\bigr] + \cdots
\end{aligned}$$
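
A sketch of this corrected rule in Python; the integrand f(x) = exp(x) on [0, 1] is an arbitrary illustrative choice (convenient because f' and f''' are again exp(x)), and the correction terms use f' and f''' at the two endpoints.

```python
import math

def f(x):
    return math.exp(x)            # illustrative integrand; f' and f''' are also exp(x)

a, b, N = 0.0, 1.0, 11
h = (b - a) / (N - 1)
xs = [a + i * h for i in range(N)]

trapezoid = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)
corr2 = h ** 2 / 12 * (f(xs[0]) - f(xs[-1]))     # h^2/12 [f'(x_1) - f'(x_N)]; here f' = f
corr4 = h ** 4 / 720 * (f(xs[0]) - f(xs[-1]))    # h^4/720 [f'''(x_1) - f'''(x_N)]; here f''' = f

exact = math.e - 1.0
print(abs(trapezoid - exact))                    # plain trapezoid rule error
print(abs(trapezoid + corr2 - corr4 - exact))    # error after the h^2 and h^4 corrections
```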

This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention.

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.

Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

$$\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,dx + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right),$$

where a and b are integers. Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,

$$\sum_{k=0}^\infty \frac{1}{(z + k)^2} \sim \underbrace{\int_0^\infty\frac{1}{(z + k)^2}\,dk}_{=\,1/z} + \frac{1}{2z^2} + \sum_{t=1}^\infty \frac{B_{2t}}{z^{2t+1}}.$$

Here the left-hand side is equal to ψ^(1)(z), namely the first-order polygamma function defined by

$$\psi^{(1)}(z) = \frac{d^2}{dz^2}\log \Gamma(z);$$

the gamma function Γ(z) is equal to (z − 1)! when z is a positive integer. This results in an asymptotic expansion for ψ^(1)(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
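
A small numerical illustration, not from the article: the asymptotic series for ψ^(1)(z) is compared with a brute-force evaluation of the defining sum. The value of z, the truncation order, and the crude tail estimate are all arbitrary choices.

```python
# Even Bernoulli numbers B_2 ... B_8
B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}

def trigamma_bruteforce(z, terms=200000):
    """Slowly convergent defining sum, plus a crude integral estimate for the tail."""
    return sum(1.0 / (z + k) ** 2 for k in range(terms)) + 1.0 / (z + terms)

def trigamma_asymptotic(z):
    """Euler-Maclaurin expansion: 1/z + 1/(2 z^2) + sum_t B_{2t} / z^{2t+1}."""
    total = 1.0 / z + 1.0 / (2 * z ** 2)
    for j, b in sorted(B.items()):               # j = 2t
        total += b / z ** (j + 1)
    return total

z = 10.0
print(trigamma_bruteforce(z))
print(trigamma_asymptotic(z))                    # the two agree to roughly ten digits
```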

Examples

If s is an integer greater than 1 we have:

$$\sum_{k=1}^n \frac{1}{k^s} \approx \frac{1}{s-1} + \frac{1}{2} - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} + \sum_{i=1}\frac{B_{2i}}{(2i)!}\left[\frac{(s+2i-2)!}{(s-1)!} - \frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}\right].$$

Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion:

$$\sum_{k=1}^n \frac{1}{k^s} \sim \zeta(s) - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} - \sum_{i=1}\frac{B_{2i}}{(2i)!}\frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}.$$
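
A sketch of this expansion for integer s (the choices of n and of the truncation index are arbitrary); for s = 3 it estimates ζ(3), for which no elementary closed form is known.

```python
import math

B2I = {1: 1/6, 2: -1/30, 3: 1/42}                # B_{2i} for i = 1, 2, 3

def zeta_int(s, n=10, imax=3):
    """Estimate zeta(s) for integer s > 1 from n direct terms plus corrections."""
    partial = sum(1.0 / k ** s for k in range(1, n + 1))
    est = partial + 1.0 / ((s - 1) * n ** (s - 1)) - 1.0 / (2 * n ** s)
    for i in range(1, imax + 1):
        est += (B2I[i] / math.factorial(2 * i)
                * math.factorial(s + 2 * i - 2)
                / (math.factorial(s - 1) * n ** (s + 2 * i - 1)))
    return est

print(zeta_int(2), math.pi ** 2 / 6)             # zeta(2) against pi^2/6
print(zeta_int(3))                               # zeta(3) ~ 1.2020569...
```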

For s equal to 2 this simplifies to

$$\sum_{k=1}^n \frac{1}{k^2} \sim \zeta(2) - \frac{1}{n} + \frac{1}{2n^2} - \sum_{i=1}\frac{B_{2i}}{n^{2i+1}},$$

or

$$\sum_{k=1}^n \frac{1}{k^2} \sim \frac{\pi^2}{6} - \frac{1}{n} + \frac{1}{2n^2} - \frac{1}{6n^3} + \frac{1}{30n^5} - \frac{1}{42n^7} + \cdots.$$

When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers:

$$\sum_{k=1}^n \frac{1}{k} \sim \gamma + \log n + \frac{1}{2n} - \sum_{k=1}^\infty \frac{B_{2k}}{2kn^{2k}},$$

where γ ≈ 0.5772... is the Euler–Mascheroni constant.
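
A corresponding sketch for the harmonic numbers; the value of γ is hard-coded and the truncation after B_6 is an arbitrary choice.

```python
import math

GAMMA = 0.5772156649015329                       # Euler-Mascheroni constant, hard-coded
B = {2: 1/6, 4: -1/30, 6: 1/42}                  # even Bernoulli numbers B_2, B_4, B_6

def harmonic_direct(n):
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_asymptotic(n):
    """gamma + log n + 1/(2n) - sum_k B_{2k} / (2k n^{2k})."""
    total = GAMMA + math.log(n) + 1.0 / (2 * n)
    for j, b in sorted(B.items()):               # j = 2k
        total -= b / (j * n ** j)
    return total

n = 20
print(harmonic_direct(n), harmonic_asymptotic(n))   # agree to about twelve digits
```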

Proofs

Derivation by mathematical induction

We outline the argument given in Apostol.

The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above.

The first several Bernoulli polynomials are

$$\begin{aligned}
B_0(x) &= 1, \\
B_1(x) &= x - \tfrac{1}{2}, \\
B_2(x) &= x^2 - x + \tfrac{1}{6}, \\
B_3(x) &= x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x, \\
B_4(x) &= x^4 - 2x^3 + x^2 - \tfrac{1}{30}, \\
&\;\;\vdots
\end{aligned}$$

The values B_n(1) are the Bernoulli numbers B_n. Notice that for n ≠ 1 we have

$$B_n = B_n(1) = B_n(0),$$

and for n = 1,

$$B_1 = B_1(1) = -B_1(0).$$

The functions P_n agree with the Bernoulli polynomials on the interval [0, 1) and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus,

$$P_n(0) = P_n(1) = B_n \quad \text{for } n \neq 1.$$

Let k be an integer, and consider the integral

$$\int_k^{k+1} f(x)\,dx = \int_k^{k+1} u\,dv,$$

where

$$\begin{aligned}
u &= f(x), \\
du &= f'(x)\,dx, \\
dv &= P_0(x)\,dx \quad\text{(since } P_0(x) = 1\text{)}, \\
v &= P_1(x).
\end{aligned}$$

Integrating by parts, we get

$$\begin{aligned}
\int_k^{k+1} f(x)\,dx &= \bigl[u v\bigr]_k^{k+1} - \int_k^{k+1} v\,du \\
&= \bigl[f(x) P_1(x)\bigr]_k^{k+1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\
&= B_1(1)f(k+1) - B_1(0)f(k) - \int_k^{k+1} f'(x)P_1(x)\,dx.
\end{aligned}$$

Using B_1(0) = −1/2, B_1(1) = 1/2, and summing the above from k = 0 to k = n − 1, we get

$$\begin{aligned}
\int_0^n f(x)\,dx &= \int_0^1 f(x)\,dx + \cdots + \int_{n-1}^n f(x)\,dx \\
&= \frac{f(0)}{2} + f(1) + \dotsb + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x) P_1(x)\,dx.
\end{aligned}$$

Adding (f(n) − f(0))/2 to both sides and rearranging, we have

$$\sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \int_0^n f'(x) P_1(x)\,dx.$$

This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term:

$$\int_k^{k+1} f'(x)P_1(x)\,dx = \int_k^{k+1} u\,dv,$$

where

$$\begin{aligned}
u &= f'(x), \\
du &= f''(x)\,dx, \\
dv &= P_1(x)\,dx, \\
v &= \tfrac{1}{2}P_2(x).
\end{aligned}$$

The result of integrating by parts is

$$\begin{aligned}
\bigl[u v\bigr]_k^{k+1} - \int_k^{k+1} v\,du &= \left[\frac{f'(x) P_2(x)}{2}\right]_k^{k+1} - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx \\
&= \frac{B_2}{2}\bigl(f'(k+1) - f'(k)\bigr) - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx.
\end{aligned}$$

Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula,

$$\sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \frac{B_2}{2}\bigl(f'(n) - f'(0)\bigr) - \frac{1}{2}\int_0^n f''(x)P_2(x)\,dx.$$
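
The p = 2 identity can be sanity-checked numerically. In the sketch below the remainder integral involving P_2 is evaluated with a fine midpoint sum, and the test function f(x) = 1/(1 + x) and the grid size are arbitrary choices.

```python
import math

def f(x):   return 1.0 / (1.0 + x)               # arbitrary smooth test function
def f1(x):  return -1.0 / (1.0 + x) ** 2         # f'
def f2(x):  return  2.0 / (1.0 + x) ** 3         # f''

def P2(x):
    t = x - math.floor(x)
    return t * t - t + 1.0 / 6.0                 # periodized Bernoulli function P_2

n = 8
lhs = sum(f(k) for k in range(1, n + 1))

integral_f = math.log(1.0 + n)                   # int_0^n f(x) dx in closed form

# int_0^n f''(x) P_2(x) dx by a fine midpoint sum
steps = 200000
h = n / steps
rem = sum(f2((j + 0.5) * h) * P2((j + 0.5) * h) for j in range(steps)) * h

rhs = integral_f + (f(n) - f(0)) / 2 + (1.0 / 6.0) / 2 * (f1(n) - f1(0)) - rem / 2
print(lhs, rhs, abs(lhs - rhs))                  # the two sides agree to many digits
```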

This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.


References

1. Apostol, T. M. (1 May 1999). "An Elementary View of Euler's Summation Formula". The American Mathematical Monthly. 106 (5). Mathematical Association of America: 409–418. doi:10.2307/2589145. ISSN 0002-9890. JSTOR 2589145.
2. "Digital Library of Mathematical Functions: Sums and Sequences". National Institute of Standards and Technology.
3. Lehmer, D. H. (1940). "On the maxima and minima of Bernoulli polynomials". The American Mathematical Monthly. 47 (8): 533–538. doi:10.2307/2303833. JSTOR 2303833.
4. Pengelley, David J. (2007). "Dances between continuous and discrete: Euler's summation formula". Euler at 300. MAA Spectrum. Washington, DC: Mathematical Association of America. pp. 169–189. arXiv:1912.03527. MR 2349549.
5. Devries, Paul L.; Hasbrun, Javier E. (2011). A First Course in Computational Physics (2nd ed.). Jones and Bartlett Publishers. p. 156.
6. Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications. pp. 16, 806, 886. ISBN 978-0-486-61272-0.
