Euler's formula


Revision as of 18:47, 30 April 2007

This article is about Euler's formula in complex analysis. For Euler's formula in graph theory, see planar graph. See also topics named after Euler.

Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that shows a deep relationship between the trigonometric functions and the complex exponential function. (Euler's identity is a special case of the Euler formula.)

Euler's formula states that, for any real number x,

e^{ix} = \cos(x) + i\sin(x)

where

e is the base of the natural logarithm,
i is the imaginary unit,
\cos and \sin are trigonometric functions.
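The identity can be checked numerically; a minimal sketch using Python's standard `cmath` module (the sample value x = 0.75 is arbitrary):

```python
import cmath
import math

x = 0.75  # any real number works here
lhs = cmath.exp(1j * x)                  # e^{ix}
rhs = complex(math.cos(x), math.sin(x))  # cos(x) + i sin(x)
# The two sides agree to floating-point precision, and |e^{ix}| = 1.
diff = abs(lhs - rhs)
```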

Richard Feynman called Euler's formula "our jewel" and "the most remarkable formula in mathematics".

History

Euler's formula was proven for the first time by Roger Cotes in 1714 in the form ln(cos(x) + i sin(x)) = ix (where "ln" means natural logarithm, i.e. log to base e). It was Euler who published the equation in its current form in 1748, basing his proof on the equality of the infinite series expansions of both sides. Neither of these men saw the geometrical interpretation of the formula: the view of complex numbers as points in the complex plane arose only some 50 years later (see Caspar Wessel).

Applications in complex number theory

This formula can be interpreted as saying that the function e^{ix} traces out the unit circle in the complex plane as x ranges through the real numbers. Here, x is the angle that a line connecting the origin with a point on the unit circle makes with the positive real axis, measured counterclockwise and in radians. The formula is valid only if sin and cos take their arguments in radians rather than in degrees.

The original proof is based on the Taylor series expansions of the exponential function e^z (where z is a complex number) and of sin x and cos x for real numbers x (see below). In fact, the same proof shows that Euler's formula is valid for all complex numbers z.

Euler's formula can be used to represent complex numbers in polar coordinates. Any complex number z = x + iy can be written as

z = x + iy = |z| (\cos\phi + i\sin\phi) = |z| e^{i\phi}
\bar{z} = x - iy = |z| (\cos\phi - i\sin\phi) = |z| e^{-i\phi}

where

x = \mathrm{Re}\{z\}, the real part
y = \mathrm{Im}\{z\}, the imaginary part
|z| = \sqrt{x^2 + y^2}, the magnitude of z

and \phi is the argument of z, the angle between the x axis and the vector z, measured counterclockwise and in radians, which is defined up to addition of 2\pi.
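The polar representation above can be exercised directly; a sketch using Python's `cmath.polar` (the value z = 3 + 4i is an arbitrary example):

```python
import cmath

z = 3 + 4j
r, phi = cmath.polar(z)           # r = |z|, phi = argument of z in radians
z_back = r * cmath.exp(1j * phi)  # |z| e^{i phi} reconstructs z
```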


Now, taking this derived formula, we can use Euler's formula to define the logarithm of a complex number. To do this, we also use the facts that

a = e^{\ln(a)}

and

e^a e^b = e^{a+b}

both valid for any complex numbers a and b.

Therefore, one can write:

z = |z| e^{i\phi} = e^{\ln|z|} e^{i\phi} = e^{\ln|z| + i\phi}

for any z \neq 0. Taking the logarithm of both sides shows that:

\ln z = \ln|z| + i\phi

and in fact this can be used as the definition of the complex logarithm. The logarithm of a complex number is thus a multi-valued function, because \phi itself is multi-valued.

Finally, the other exponential law

(e^a)^k = e^{ak}

which can be seen to hold for all integers k, together with Euler's formula, implies several trigonometric identities as well as de Moivre's formula.
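De Moivre's formula, (cos x + i sin x)^n = cos(nx) + i sin(nx), is easy to check numerically; a sketch with arbitrary sample values:

```python
import math

x, n = 0.4, 5
lhs = complex(math.cos(x), math.sin(x)) ** n     # (cos x + i sin x)^n
rhs = complex(math.cos(n * x), math.sin(n * x))  # cos(nx) + i sin(nx)
```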

Relationship to trigonometry

Euler's formula provides a powerful connection between analysis and trigonometry, and provides an interpretation of the sine and cosine functions as weighted sums of the exponential function:

\cos x = \frac{e^{ix} + e^{-ix}}{2}
\sin x = \frac{e^{ix} - e^{-ix}}{2i}

The two equations above can be derived by adding or subtracting Euler's formulas:

e^{ix} = \cos x + i\sin x
e^{-ix} = \cos x - i\sin x

and solving for either cosine or sine.

These formulas can even serve as the definition of the trigonometric functions for complex arguments x. For example, letting x = iy, we have:

\cos(iy) = \frac{e^{-y} + e^{y}}{2} = \cosh(y)
\sin(iy) = \frac{e^{-y} - e^{y}}{2i} = i\sinh(y)
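The exponential formulas for sine and cosine, and their extension to imaginary arguments, can be verified numerically; a sketch with arbitrary sample values:

```python
import cmath
import math

x, y = 0.7, 1.3
# sine and cosine as weighted sums of exponentials
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
# imaginary arguments give the hyperbolic functions
cos_iy = cmath.cos(1j * y)  # should equal cosh(y)
sin_iy = cmath.sin(1j * y)  # should equal i sinh(y)
```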

Other applications

In differential equations, the function e^{ix} is often used to simplify derivations, even if the final answer is a real function involving sine and cosine. Euler's identity is an easy consequence of Euler's formula.

In electrical engineering and other fields, signals that vary periodically over time are often described as a combination of sine and cosine functions (see Fourier analysis), and these are more conveniently expressed as the real part of exponential functions with imaginary exponents, using Euler's formula. Also, phasor analysis of circuits can include Euler's formula to represent the impedance of a capacitor or an inductor.
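The phasor use of Euler's formula can be illustrated with the standard impedances Z_C = 1/(jωC) for a capacitor and Z_L = jωL for an inductor; the component values below are arbitrary examples, not from the article:

```python
import cmath
import math

omega = 2 * math.pi * 50  # angular frequency for a 50 Hz signal (example value)
C = 1e-6                  # capacitance in farads (example value)
L = 0.1                   # inductance in henries (example value)

Z_C = 1 / (1j * omega * C)  # capacitor: phase -90 deg (current leads voltage)
Z_L = 1j * omega * L        # inductor: phase +90 deg (current lags voltage)
```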

Proofs

Using Taylor series

Here is a proof of Euler's formula using Taylor series expansions as well as basic facts about the powers of i:

i^0 = 1, \quad i^1 = i, \quad i^2 = -1, \quad i^3 = -i,
i^4 = 1, \quad i^5 = i, \quad i^6 = -1, \quad i^7 = -i,

and so on. The functions e^x, \cos(x) and \sin(x) (assuming x is real) can be expressed using their Taylor expansions around zero:

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots

For complex z we define each of these functions by the above series, replacing x with z. This is possible because the radius of convergence of each series is infinite. We then find that

e^{iz} = 1 + iz + \frac{(iz)^2}{2!} + \frac{(iz)^3}{3!} + \frac{(iz)^4}{4!} + \frac{(iz)^5}{5!} + \frac{(iz)^6}{6!} + \frac{(iz)^7}{7!} + \frac{(iz)^8}{8!} + \cdots
       = 1 + iz - \frac{z^2}{2!} - \frac{iz^3}{3!} + \frac{z^4}{4!} + \frac{iz^5}{5!} - \frac{z^6}{6!} - \frac{iz^7}{7!} + \frac{z^8}{8!} + \cdots
       = \left(1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \frac{z^8}{8!} - \cdots\right) + i\left(z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \cdots\right)
       = \cos(z) + i\sin(z)

The rearrangement of terms is justified because each series is absolutely convergent. Taking z = x to be a real number gives the original identity as Euler discovered it.
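The series argument can be checked by summing partial sums of the exponential series and comparing against cos z + i sin z; a sketch (the sample z and term count are arbitrary):

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the Taylor series 1 + z + z^2/2! + ... around 0."""
    total, term = 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)  # next term z^{k+1}/(k+1)!
    return total

z = 0.5 + 0.3j
series_val = exp_series(1j * z)                 # e^{iz} via the series
closed_form = cmath.cos(z) + 1j * cmath.sin(z)  # cos z + i sin z
```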


Using calculus

Define the function f by

f(x) = \frac{\cos x + i\sin x}{e^{ix}}

This is allowed since the equation

e^{ix} \cdot e^{-ix} = e^0 = 1

implies that e^{ix} is never zero.

The derivative of f is, according to the quotient rule:

f'(x) = \frac{(-\sin x + i\cos x) \cdot e^{ix} - (\cos x + i\sin x) \cdot i \cdot e^{ix}}{(e^{ix})^2}
      = \frac{-\sin x \cdot e^{ix} - i^2 \sin x \cdot e^{ix}}{(e^{ix})^2}
      = \frac{(-1 - i^2) \cdot \sin x \cdot e^{ix}}{(e^{ix})^2}
      = \frac{(-1 - (-1)) \cdot \sin x \cdot e^{ix}}{(e^{ix})^2}
      = 0

Therefore, f must be a constant function. Thus,

\frac{\cos x + i\sin x}{e^{ix}} = f(x) = f(0) = \frac{\cos 0 + i\sin 0}{e^0} = 1

Rearranging,

\cos x + i\sin x = e^{ix}

Q.E.D.
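The constancy of f can also be confirmed numerically by evaluating it at several points; a sketch (the sample points are arbitrary):

```python
import cmath
import math

def f(x):
    # (cos x + i sin x) / e^{ix}, the quotient from the proof above
    return complex(math.cos(x), math.sin(x)) / cmath.exp(1j * x)

# f takes the same value, f(0) = 1, at every sample point
samples = [f(x) for x in (0.0, 1.0, -2.5, 10.0)]
```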

Using ordinary differential equations

Define the function g(x) by

g(x) \ \stackrel{\mathrm{def}}{=}\ e^{ix}

Considering that i is constant, the first and second derivatives of g(x) are

g'(x) = i e^{ix}
g''(x) = i^2 e^{ix} = -e^{ix}

because i^2 = −1 by definition. From this the following second-order linear ordinary differential equation is constructed:

g''(x) = -g(x)

or

g''(x) + g(x) = 0

Being a second-order differential equation, it has two linearly independent solutions:

g_1(x) = \cos(x)
g_2(x) = \sin(x)

Both cos(x) and sin(x) are real functions whose second derivative is the negative of the function itself. Any linear combination of solutions to a homogeneous differential equation is also a solution. Then, in general, the solution to the differential equation is

g(x) = A g_1(x) + B g_2(x)
     = A\cos(x) + B\sin(x)

for any constants A and B. These constants are fixed by the known initial conditions for g(x):

g(0) = e^{i0} = 1
g'(0) = i e^{i0} = i

Evaluating these same initial conditions on the general solution gives

g(0) = A\cos(0) + B\sin(0) = A
g'(0) = -A\sin(0) + B\cos(0) = B

resulting in

g(0) = A = 1
g'(0) = B = i

and, finally,

g(x) \ \stackrel{\mathrm{def}}{=}\ e^{ix} = \cos(x) + i\sin(x)

Q.E.D.
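That g(x) = e^{ix} really satisfies g'' + g = 0 can be spot-checked with a central second difference; a sketch (step size and sample point are arbitrary):

```python
import cmath

def g(x):
    return cmath.exp(1j * x)

# Central second difference approximates g''(x); for this g it should
# equal -g(x), so the residual of g'' + g is small.
h = 1e-4
x = 0.9
g2 = (g(x + h) - 2 * g(x) + g(x - h)) / h**2
residual = abs(g2 + g(x))
```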

References

  1. Feynman, Richard P. (1977). The Feynman Lectures on Physics, vol. I. Addison-Wesley. p. 22-10. ISBN 0-201-02010-6.
  2. Stillwell, John (2002). Mathematics and Its History. Springer.
