
AM–GM inequality

Proof without words of the AM–GM inequality:
PR is the diameter of a circle centered on O; its radius AO is the arithmetic mean of a and b. Using the geometric mean theorem, triangle PGR's altitude GQ is the geometric mean. For any ratio a:b, AO ≥ GQ.
Visual proof that (x + y)² ≥ 4xy. Taking square roots and dividing by two gives the AM–GM inequality.

In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number).

The simplest non-trivial case is for two non-negative numbers x and y, that is,

x + y 2 x y {\displaystyle {\frac {x+y}{2}}\geq {\sqrt {xy}}}

with equality if and only if x = y. This follows from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the identity (a ± b)² = a² ± 2ab + b²:

0 ( x y ) 2 = x 2 2 x y + y 2 = x 2 + 2 x y + y 2 4 x y = ( x + y ) 2 4 x y . {\displaystyle {\begin{aligned}0&\leq (x-y)^{2}\\&=x^{2}-2xy+y^{2}\\&=x^{2}+2xy+y^{2}-4xy\\&=(x+y)^{2}-4xy.\end{aligned}}}

Hence (x + y)² ≥ 4xy, with equality when (x − y)² = 0, i.e. x = y. The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both sides by 2.
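
The two-variable case can be checked numerically. The short Python sketch below is purely illustrative; it verifies the identity (x + y)² − 4xy = (x − y)² and the resulting inequality on a few random pairs.

```python
import math
import random

# Illustrative check of the two-variable AM-GM inequality:
# (x + y)/2 >= sqrt(x*y), with equality exactly when x == y.
random.seed(0)
for _ in range(5):
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    # The identity used above: (x + y)^2 - 4xy = (x - y)^2.
    assert abs((x + y) ** 2 - 4 * x * y - (x - y) ** 2) < 1e-9
    am, gm = (x + y) / 2, math.sqrt(x * y)
    assert am >= gm - 1e-12
    print(f"x={x:.3f}  y={y:.3f}  AM={am:.3f}  GM={gm:.3f}")
```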

For a geometrical interpretation, consider a rectangle with sides of length x and y; it has perimeter 2x + 2y and area xy. Similarly, a square with all sides of length √xy has the perimeter 4√xy and the same area as the rectangle. The simplest non-trivial case of the AM–GM inequality implies for the perimeters that 2x + 2y ≥ 4√xy, so the square has the smallest perimeter amongst all rectangles of equal area.

The simplest case is implicit in Euclid's Elements, Book 5, Proposition 25.

Extensions of the AM–GM inequality treat weighted means and generalized means.

Background

The arithmetic mean, or less precisely the average, of a list of n numbers x1, x2, . . . , xn is the sum of the numbers divided by n:

x 1 + x 2 + + x n n . {\displaystyle {\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}.}

The geometric mean is similar, except that it is only defined for a list of nonnegative real numbers, and uses multiplication and a root in place of addition and division:

{\displaystyle {\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}.}

If x1, x2, . . . , xn > 0, this is equal to the exponential of the arithmetic mean of the natural logarithms of the numbers:

exp ( ln x 1 + ln x 2 + + ln x n n ) . {\displaystyle \exp \left({\frac {\ln {x_{1}}+\ln {x_{2}}+\cdots +\ln {x_{n}}}{n}}\right).}
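
These definitions translate directly into code. The Python sketch below is illustrative (the helper function names are not standard library functions); it computes the arithmetic mean, the geometric mean as an nth root, and the geometric mean via the exponential of the mean of logarithms, and the last two agree for positive inputs.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # nth root of the product; defined for non-negative inputs
    return math.prod(xs) ** (1.0 / len(xs))

xs = [1.0, 4.0, 16.0]
print(arithmetic_mean(xs))                      # 7.0
print(geometric_mean(xs))                       # 4.0
# For strictly positive numbers, the geometric mean equals
# the exponential of the arithmetic mean of the natural logarithms.
print(math.exp(sum(math.log(x) for x in xs) / len(xs)))  # 4.0 up to rounding
```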

The inequality

Restating the inequality using mathematical notation, we have that for any list of n nonnegative real numbers x1, x2, . . . , xn,

{\displaystyle {\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\geq {\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}\,,}

and that equality holds if and only if x1 = x2 = · · · = xn.

Geometric interpretation

In two dimensions, 2x1 + 2x2 is the perimeter of a rectangle with sides of length x1 and x2. Similarly, 4√x1x2 is the perimeter of a square with the same area, x1x2, as that rectangle. Thus for n = 2 the AM–GM inequality states that a rectangle of a given area has the smallest perimeter if that rectangle is also a square.

The full inequality is an extension of this idea to n dimensions. Consider an n-dimensional box with edge lengths x1, x2, . . . , xn. Every vertex of the box is connected to n edges of different directions, so the average length of edges incident to the vertex is (x1 + x2 + · · · + xn)/n. On the other hand, {\displaystyle {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}} is the edge length of an n-dimensional cube of equal volume, which therefore is also the average length of edges incident to a vertex of the cube.

Thus the AM–GM inequality states that, amongst all n-dimensional boxes with the same volume, only the n-cube attains the smallest average length of edges connected to each vertex.

Examples

Example 1

If a , b , c > 0 {\displaystyle a,b,c>0} , then the AM-GM inequality tells us that

( 1 + a ) ( 1 + b ) ( 1 + c ) 2 1 a 2 1 b 2 1 c = 8 a b c {\displaystyle (1+a)(1+b)(1+c)\geq 2{\sqrt {1\cdot {a}}}\cdot 2{\sqrt {1\cdot {b}}}\cdot 2{\sqrt {1\cdot {c}}}=8{\sqrt {abc}}}

Example 2

A simple upper bound for n ! {\displaystyle n!} can be found. AM-GM tells us

{\displaystyle 1+2+\dots +n\geq n{\sqrt[{n}]{n!}}}
{\displaystyle {\frac {n(n+1)}{2}}\geq n{\sqrt[{n}]{n!}}}

and so

( n + 1 2 ) n n ! {\displaystyle \left({\frac {n+1}{2}}\right)^{n}\geq n!}

with equality at n = 1 {\displaystyle n=1} .

Equivalently,

( n + 1 ) n 2 n n ! {\displaystyle (n+1)^{n}\geq 2^{n}n!}
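
A quick numerical check of both forms of the bound, offered as an illustrative sketch:

```python
import math

# Check ((n+1)/2)^n >= n!  (equality at n = 1) and the
# equivalent form (n+1)^n >= 2^n * n! for small n.
for n in range(1, 11):
    bound = ((n + 1) / 2) ** n
    assert bound >= math.factorial(n)
    assert (n + 1) ** n >= 2 ** n * math.factorial(n)
    print(n, bound, math.factorial(n))
```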

Example 3

Consider the function

{\displaystyle f(x,y,z)={\frac {x}{y}}+{\sqrt {\frac {y}{z}}}+{\sqrt[{3}]{\frac {z}{x}}}}

for all positive real numbers x, y and z. Suppose we wish to find the minimal value of this function. It can be rewritten as:

{\displaystyle {\begin{aligned}f(x,y,z)&=6\cdot {\frac {{\frac {x}{y}}+{\frac {1}{2}}{\sqrt {\frac {y}{z}}}+{\frac {1}{2}}{\sqrt {\frac {y}{z}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}}{6}}\\&=6\cdot {\frac {x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}}{6}}\end{aligned}}}

with

{\displaystyle x_{1}={\frac {x}{y}},\qquad x_{2}=x_{3}={\frac {1}{2}}{\sqrt {\frac {y}{z}}},\qquad x_{4}=x_{5}=x_{6}={\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}.}

Applying the AM–GM inequality for n = 6, we get

{\displaystyle {\begin{aligned}f(x,y,z)&\geq 6\cdot {\sqrt[{6}]{{\frac {x}{y}}\cdot {\frac {1}{2}}{\sqrt {\frac {y}{z}}}\cdot {\frac {1}{2}}{\sqrt {\frac {y}{z}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}}}\\&=6\cdot {\sqrt[{6}]{{\frac {1}{2\cdot 2\cdot 3\cdot 3\cdot 3}}{\frac {x}{y}}{\frac {y}{z}}{\frac {z}{x}}}}\\&=2^{2/3}\cdot 3^{1/2}.\end{aligned}}}

Further, we know that the two sides are equal exactly when all the terms of the mean are equal:

{\displaystyle f(x,y,z)=2^{2/3}\cdot 3^{1/2}\quad {\mbox{when}}\quad {\frac {x}{y}}={\frac {1}{2}}{\sqrt {\frac {y}{z}}}={\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}.}

All the points (x, y, z) satisfying these conditions lie on a half-line starting at the origin and are given by

{\displaystyle (x,y,z)={\biggl (}t,{\sqrt[{3}]{2}}{\sqrt {3}}\,t,{\frac {3{\sqrt {3}}}{2}}\,t{\biggr )}\quad {\mbox{with}}\quad t>0.}
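
The claimed minimum and minimizing half-line can be checked numerically. The Python sketch below is illustrative only; it evaluates f at a point of the half-line and compares against random positive triples.

```python
import math
import random

def f(x, y, z):
    return x / y + math.sqrt(y / z) + (z / x) ** (1 / 3)

minimum = 2 ** (2 / 3) * math.sqrt(3)        # claimed minimal value, about 2.7495
t = 1.7                                      # any t > 0 lies on the minimizing half-line
point = (t, 2 ** (1 / 3) * math.sqrt(3) * t, 1.5 * math.sqrt(3) * t)
print(f(*point), minimum)                    # both approximately 2.7495

# Random positive triples should never beat the minimum.
random.seed(1)
for _ in range(10000):
    x, y, z = (random.uniform(0.1, 10.0) for _ in range(3))
    assert f(x, y, z) >= minimum - 1e-9
```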

Applications

Cauchy-Schwarz inequality

The AM–GM inequality can be used to prove the Cauchy–Schwarz inequality.

Annualized returns

In financial mathematics, the AM–GM inequality shows that the annualized return, computed as a geometric mean, is never greater than the average annual return, computed as an arithmetic mean.

Nonnegative polynomials

The Motzkin polynomial {\displaystyle x^{4}y^{2}+x^{2}y^{4}-3x^{2}y^{2}+1} is a nonnegative polynomial which is not a sum of squares of polynomials. It can be proven nonnegative using the AM–GM inequality with {\displaystyle x_{1}=x^{4}y^{2}}, {\displaystyle x_{2}=x^{2}y^{4}}, and {\displaystyle x_{3}=1}, that is, {\displaystyle {\sqrt[{3}]{(x^{4}y^{2})\cdot (x^{2}y^{4})\cdot (1)}}\leq {\frac {(x^{4}y^{2})+(x^{2}y^{4})+(1)}{3}}.} Simplifying and multiplying both sides by 3 gives {\displaystyle 3x^{2}y^{2}\leq x^{4}y^{2}+x^{2}y^{4}+1,} so {\displaystyle 0\leq x^{4}y^{2}+x^{2}y^{4}-3x^{2}y^{2}+1.}
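
This non-negativity is easy to probe numerically; the following sketch (illustrative, using NumPy) evaluates the Motzkin polynomial on a grid.

```python
import numpy as np

def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# AM-GM on the three terms x^4*y^2, x^2*y^4 and 1 gives
# x^2*y^2 <= (x^4*y^2 + x^2*y^4 + 1) / 3, so the polynomial is non-negative;
# a coarse grid check agrees with that conclusion.
xs = np.linspace(-3.0, 3.0, 201)
X, Y = np.meshgrid(xs, xs)
values = motzkin(X, Y)
assert (values >= -1e-9).all()
print(values.min())   # close to 0; the true minimum 0 is attained at |x| = |y| = 1
```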

Proofs of the AM–GM inequality

The AM–GM inequality can be proven in many ways.

Proof using Jensen's inequality

Jensen's inequality states that the value of a concave function of an arithmetic mean is greater than or equal to the arithmetic mean of the function's values. Since the logarithm function is concave, we have

log ( x i n ) 1 n log x i = 1 n log ( x i ) = log ( x i ) 1 / n . {\displaystyle \log \left({\frac {\sum x_{i}}{n}}\right)\geq {\frac {1}{n}}\sum \log x_{i}={\frac {1}{n}}\log \left(\prod x_{i}\right)=\log \left(\prod x_{i}\right)^{1/n}.}

Taking antilogs of the far left and far right sides, we have the AM–GM inequality.

Proof by successive replacement of elements

We have to show that

{\displaystyle \alpha ={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\geq {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}=\beta }

with equality only when all numbers are equal.

If not all numbers are equal, then there exist x i , x j {\displaystyle x_{i},x_{j}} such that x i < α < x j {\displaystyle x_{i}<\alpha <x_{j}} . Replacing xi by α {\displaystyle \alpha } and xj by ( x i + x j α ) {\displaystyle (x_{i}+x_{j}-\alpha )} will leave the arithmetic mean of the numbers unchanged, but will increase the geometric mean because

α ( x j + x i α ) x i x j = ( α x i ) ( x j α ) > 0 {\displaystyle \alpha (x_{j}+x_{i}-\alpha )-x_{i}x_{j}=(\alpha -x_{i})(x_{j}-\alpha )>0}

If the numbers are still not equal, we continue replacing numbers as above. After at most (n − 1) such replacement steps all the numbers will have been replaced with α, while the geometric mean strictly increases at each step. After the last step, the geometric mean will be {\displaystyle {\sqrt[{n}]{\alpha \alpha \cdots \alpha }}=\alpha }, proving the inequality.

It may be noted that the replacement strategy works just as well from the right-hand side. If any of the numbers is 0, then so is the geometric mean, and the inequality holds trivially. Therefore we may suppose that all the numbers are positive. If they are not all equal, then there exist {\displaystyle x_{i},x_{j}} such that {\displaystyle 0<x_{i}<\beta <x_{j}}. Replacing {\displaystyle x_{i}} by {\displaystyle \beta } and {\displaystyle x_{j}} by {\displaystyle {\frac {x_{i}x_{j}}{\beta }}} leaves the geometric mean unchanged but strictly decreases the arithmetic mean since

x i + x j β x i x j β = ( β x i ) ( x j β ) β > 0 {\displaystyle x_{i}+x_{j}-\beta -{\frac {x_{i}x_{j}}{\beta }}={\frac {(\beta -x_{i})(x_{j}-\beta )}{\beta }}>0} . The proof then follows along similar lines as in the earlier replacement.
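
The replacement argument is effectively an algorithm and can be run directly. The Python sketch below is illustrative (replace_toward_mean is a name made up here); it implements the first replacement scheme, keeping the arithmetic mean fixed while the geometric mean increases until every entry equals the mean.

```python
import math

def replace_toward_mean(xs, tol=1e-12):
    """One run of the replacement scheme: repeatedly move an entry below the
    arithmetic mean up to it and compensate with an entry above it.  The AM
    stays fixed while the GM strictly increases until all entries are equal."""
    xs = list(xs)
    alpha = sum(xs) / len(xs)
    while max(xs) - min(xs) > tol:
        i = min(range(len(xs)), key=lambda k: xs[k])   # an entry below alpha
        j = max(range(len(xs)), key=lambda k: xs[k])   # an entry above alpha
        xs[i], xs[j] = alpha, xs[i] + xs[j] - alpha
        yield sum(xs) / len(xs), math.prod(xs) ** (1 / len(xs))

for am, gm in replace_toward_mean([1.0, 2.0, 9.0]):
    print(f"AM = {am:.6f}   GM = {gm:.6f}")   # AM constant, GM climbing toward it
```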

Induction proofs

Proof by induction #1

With α denoting the arithmetic mean of the non-negative real numbers x1, . . . , xn, the AM–GM statement is equivalent to

α n x 1 x 2 x n {\displaystyle \alpha ^{n}\geq x_{1}x_{2}\cdots x_{n}}

with equality if and only if α = xi for all i ∈ {1, . . . , n}.

For the following proof we apply mathematical induction and only well-known rules of arithmetic.

Induction basis: For n = 1 the statement is true with equality.

Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.

Induction step: Consider n + 1 non-negative real numbers x1, . . . , xn+1. Their arithmetic mean α satisfies

( n + 1 ) α =   x 1 + + x n + x n + 1 . {\displaystyle (n+1)\alpha =\ x_{1}+\cdots +x_{n}+x_{n+1}.}

If all the xi are equal to α, then we have equality in the AM–GM statement and we are done. In the case where some are not equal to α, there must exist one number that is greater than the arithmetic mean α, and one that is smaller than α. Without loss of generality, we can reorder our xi in order to place these two particular elements at the end: xn > α and xn+1 < α. Then

x n α > 0 α x n + 1 > 0 {\displaystyle x_{n}-\alpha >0\qquad \alpha -x_{n+1}>0}
( x n α ) ( α x n + 1 ) > 0 . ( ) {\displaystyle \implies (x_{n}-\alpha )(\alpha -x_{n+1})>0\,.\qquad (*)}

Now define y with

y := x n + x n + 1 α x n α > 0 , {\displaystyle y:=x_{n}+x_{n+1}-\alpha \geq x_{n}-\alpha >0\,,}

and consider the n numbers x1, . . . , xn–1, y which are all non-negative. Since

( n + 1 ) α = x 1 + + x n 1 + x n + x n + 1 {\displaystyle (n+1)\alpha =x_{1}+\cdots +x_{n-1}+x_{n}+x_{n+1}}
n α = x 1 + + x n 1 + x n + x n + 1 α = y , {\displaystyle n\alpha =x_{1}+\cdots +x_{n-1}+\underbrace {x_{n}+x_{n+1}-\alpha } _{=\,y},}

α is also the arithmetic mean of the n numbers x1, . . . , xn–1, y, and the induction hypothesis implies

α n + 1 = α n α x 1 x 2 x n 1 y α . ( ) {\displaystyle \alpha ^{n+1}=\alpha ^{n}\cdot \alpha \geq x_{1}x_{2}\cdots x_{n-1}y\cdot \alpha .\qquad (**)}

Due to (*) we know that

( x n + x n + 1 α = y ) α x n x n + 1 = ( x n α ) ( α x n + 1 ) > 0 , {\displaystyle (\underbrace {x_{n}+x_{n+1}-\alpha } _{=\,y})\alpha -x_{n}x_{n+1}=(x_{n}-\alpha )(\alpha -x_{n+1})>0,}

hence

y α > x n x n + 1 , ( ) {\displaystyle y\alpha >x_{n}x_{n+1}\,,\qquad ({*}{*}{*})}

in particular α > 0. Therefore, if at least one of the numbers x1, . . . , xn–1 is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we can substitute (***) into (**) to get

α n + 1 > x 1 x 2 x n 1 x n x n + 1 , {\displaystyle \alpha ^{n+1}>x_{1}x_{2}\cdots x_{n-1}x_{n}x_{n+1}\,,}

which completes the proof.

Proof by induction #2

First of all we shall prove that for real numbers x1 < 1 and x2 > 1 there follows

x 1 + x 2 > x 1 x 2 + 1. {\displaystyle x_{1}+x_{2}>x_{1}x_{2}+1.}

Indeed, multiplying both sides of the inequality x2 > 1 by 1 – x1, gives

x 2 x 1 x 2 > 1 x 1 , {\displaystyle x_{2}-x_{1}x_{2}>1-x_{1},}

whence the required inequality is obtained immediately.

Now, we are going to prove that for positive real numbers x1, . . . , xn satisfying x1 . . . xn = 1, there holds

x 1 + + x n n . {\displaystyle x_{1}+\cdots +x_{n}\geq n.}

The equality holds only if x1 = ... = xn = 1.

Induction basis: For n = 2 the statement is true because of the above property.

Induction hypothesis: Suppose that the statement is true for all natural numbers up to n – 1.

Induction step: Consider n positive real numbers x1, . . . , xn with x1 · · · xn = 1. If they are all equal to 1, their sum is n and we are done. Otherwise there exists at least one xk < 1, and hence at least one xj > 1. Without loss of generality, we let k = n – 1 and j = n.

Further, we write the equality x1 · · · xn = 1 in the form (x1 · · · xn–2)(xn–1 xn) = 1. Then, the induction hypothesis implies

{\displaystyle (x_{1}+\cdots +x_{n-2})+(x_{n-1}x_{n})\geq n-1.}

However, taking into account the induction basis, we have

{\displaystyle {\begin{aligned}x_{1}+\cdots +x_{n-2}+x_{n-1}+x_{n}&=(x_{1}+\cdots +x_{n-2})+(x_{n-1}+x_{n})\\&>(x_{1}+\cdots +x_{n-2})+x_{n-1}x_{n}+1\\&\geq n,\end{aligned}}}

which completes the proof.

For positive real numbers a1, . . . , an, let's denote

{\displaystyle x_{1}={\frac {a_{1}}{\sqrt[{n}]{a_{1}\cdots a_{n}}}},\;\ldots ,\;x_{n}={\frac {a_{n}}{\sqrt[{n}]{a_{1}\cdots a_{n}}}}.}

The numbers x1, . . . , xn satisfy the condition x1 . . . xn = 1. So we have

{\displaystyle {\frac {a_{1}}{\sqrt[{n}]{a_{1}\cdots a_{n}}}}+\cdots +{\frac {a_{n}}{\sqrt[{n}]{a_{1}\cdots a_{n}}}}\geq n,}

whence we obtain

{\displaystyle {\frac {a_{1}+\cdots +a_{n}}{n}}\geq {\sqrt[{n}]{a_{1}\cdots a_{n}}},}

with the equality holding only for a1 = ... = an.

Proof by Cauchy using forward–backward induction

The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward-backward-induction. It is essentially from Augustin Louis Cauchy and can be found in his Cours d'analyse.

The case where all the terms are equal

If all the terms are equal:

x 1 = x 2 = = x n , {\displaystyle x_{1}=x_{2}=\cdots =x_{n},}

then their sum is nx1, so their arithmetic mean is x1; and their product is x1ⁿ, so their geometric mean is x1; therefore, the arithmetic mean and geometric mean are equal, as desired.

The case where not all the terms are equal

It remains to show that if not all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when n > 1.

This case is significantly more complex, and we divide it into subcases.

The subcase where n = 2

If n = 2, then we have two terms, x1 and x2, and since (by our assumption) not all terms are equal, we have:

( x 1 + x 2 2 ) 2 x 1 x 2 = 1 4 ( x 1 2 + 2 x 1 x 2 + x 2 2 ) x 1 x 2 = 1 4 ( x 1 2 2 x 1 x 2 + x 2 2 ) = ( x 1 x 2 2 ) 2 > 0 , {\displaystyle {\begin{aligned}{\Bigl (}{\frac {x_{1}+x_{2}}{2}}{\Bigr )}^{2}-x_{1}x_{2}&={\frac {1}{4}}(x_{1}^{2}+2x_{1}x_{2}+x_{2}^{2})-x_{1}x_{2}\\&={\frac {1}{4}}(x_{1}^{2}-2x_{1}x_{2}+x_{2}^{2})\\&={\Bigl (}{\frac {x_{1}-x_{2}}{2}}{\Bigr )}^{2}>0,\end{aligned}}}

hence

x 1 + x 2 2 > x 1 x 2 {\displaystyle {\frac {x_{1}+x_{2}}{2}}>{\sqrt {x_{1}x_{2}}}}

as desired.

The subcase where n = 2ᵏ

Consider the case where n = 2ᵏ, where k is a positive integer. We proceed by mathematical induction.

In the base case, k = 1, so n = 2. We have already shown that the inequality holds when n = 2, so we are done.

Now, suppose that for a given k > 1, we have already shown that the inequality holds for n = 2ᵏ⁻¹, and we wish to show that it holds for n = 2ᵏ. To do so, we apply the inequality twice for 2ᵏ⁻¹ numbers and once for 2 numbers to obtain:

{\displaystyle {\begin{aligned}{\frac {x_{1}+x_{2}+\cdots +x_{2^{k}}}{2^{k}}}&={\frac {{\frac {x_{1}+x_{2}+\cdots +x_{2^{k-1}}}{2^{k-1}}}+{\frac {x_{2^{k-1}+1}+x_{2^{k-1}+2}+\cdots +x_{2^{k}}}{2^{k-1}}}}{2}}\\&\geq {\frac {{\sqrt[{2^{k-1}}]{x_{1}x_{2}\cdots x_{2^{k-1}}}}+{\sqrt[{2^{k-1}}]{x_{2^{k-1}+1}x_{2^{k-1}+2}\cdots x_{2^{k}}}}}{2}}\\&\geq {\sqrt {{\sqrt[{2^{k-1}}]{x_{1}x_{2}\cdots x_{2^{k-1}}}}{\sqrt[{2^{k-1}}]{x_{2^{k-1}+1}x_{2^{k-1}+2}\cdots x_{2^{k}}}}}}\\&={\sqrt[{2^{k}}]{x_{1}x_{2}\cdots x_{2^{k}}}}\end{aligned}}}

where in the first inequality, the two sides are equal only if

x 1 = x 2 = = x 2 k 1 {\displaystyle x_{1}=x_{2}=\cdots =x_{2^{k-1}}}

and

x 2 k 1 + 1 = x 2 k 1 + 2 = = x 2 k {\displaystyle x_{2^{k-1}+1}=x_{2^{k-1}+2}=\cdots =x_{2^{k}}}

(in which case the first arithmetic mean and first geometric mean are both equal to x1, and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all 2ᵏ numbers are equal, it is not possible for both inequalities to be equalities, so we know that:

{\displaystyle {\frac {x_{1}+x_{2}+\cdots +x_{2^{k}}}{2^{k}}}>{\sqrt[{2^{k}}]{x_{1}x_{2}\cdots x_{2^{k}}}}}

as desired.

The subcase where n < 2ᵏ

If n is not a natural power of 2, then it is certainly less than some natural power of 2, since the sequence 2, 4, 8, . . . , 2ᵏ, . . . is unbounded above. Therefore, without loss of generality, let m be some natural power of 2 that is greater than n.

So, if we have n terms, then let us denote their arithmetic mean by α, and expand our list of terms thus:

x n + 1 = x n + 2 = = x m = α . {\displaystyle x_{n+1}=x_{n+2}=\cdots =x_{m}=\alpha .}

We then have:

{\displaystyle {\begin{aligned}\alpha &={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\\&={\frac {{\frac {m}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\&={\frac {x_{1}+x_{2}+\cdots +x_{n}+{\frac {m-n}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\&={\frac {x_{1}+x_{2}+\cdots +x_{n}+\left(m-n\right)\alpha }{m}}\qquad {\text{(since }}x_{1}+x_{2}+\cdots +x_{n}=n\alpha {\text{)}}\\&={\frac {x_{1}+x_{2}+\cdots +x_{n}+x_{n+1}+\cdots +x_{m}}{m}}\\&\geq {\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}x_{n+1}\cdots x_{m}}}\\&={\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}}}\,,\end{aligned}}}

so

α m x 1 x 2 x n α m n {\displaystyle \alpha ^{m}\geq x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}}

and

{\displaystyle \alpha \geq {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}}

as desired.
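
Cauchy's padding step can also be illustrated numerically. In the sketch below (illustrative; next_power_of_two is a helper defined for this example), a list is extended with copies of its own arithmetic mean up to the next power of two, which leaves the arithmetic mean unchanged and reduces the general case to the power-of-two case.

```python
import math

def next_power_of_two(n):
    m = 1
    while m < n:
        m *= 2
    return m

# Cauchy's padding step: extend the list with copies of its own arithmetic
# mean up to the next power of two.  The AM is unchanged, and applying AM-GM
# to the padded list gives alpha^m >= (x1...xn) * alpha^(m-n), from which the
# general inequality follows.
xs = [2.0, 3.0, 7.0]
n, alpha = len(xs), sum(xs) / len(xs)
m = next_power_of_two(n)
padded = xs + [alpha] * (m - n)

print(sum(padded) / m, alpha)                         # same arithmetic mean
print(math.prod(padded) ** (1 / m),                   # GM of the padded list ...
      (math.prod(xs) * alpha ** (m - n)) ** (1 / m))  # ... equals this expression
assert alpha >= math.prod(xs) ** (1 / n)              # the AM-GM conclusion
```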


Proof by induction using basic calculus

The following proof uses mathematical induction and some basic differential calculus.

Induction basis: For n = 1 the statement is true with equality.

Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.

Induction step: In order to prove the statement for n + 1 non-negative real numbers x1, . . . , xn, xn+1, we need to prove that

x 1 + + x n + x n + 1 n + 1 ( x 1 x n x n + 1 ) 1 n + 1 0 {\displaystyle {\frac {x_{1}+\cdots +x_{n}+x_{n+1}}{n+1}}-({x_{1}\cdots x_{n}x_{n+1}})^{\frac {1}{n+1}}\geq 0}

with equality only if all the n + 1 numbers are equal.

If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following, that all n + 1 numbers are positive.

We consider the last number xn+1 as a variable and define the function

f ( t ) = x 1 + + x n + t n + 1 ( x 1 x n t ) 1 n + 1 , t > 0. {\displaystyle f(t)={\frac {x_{1}+\cdots +x_{n}+t}{n+1}}-({x_{1}\cdots x_{n}t})^{\frac {1}{n+1}},\qquad t>0.}

Proving the induction step is equivalent to showing that f(t) ≥ 0 for all t > 0, with f(t) = 0 only if x1, . . . , xn and t are all equal. This can be done by analyzing the critical points of f using some basic calculus.

The first derivative of f is given by

f ( t ) = 1 n + 1 1 n + 1 ( x 1 x n ) 1 n + 1 t n n + 1 , t > 0. {\displaystyle f'(t)={\frac {1}{n+1}}-{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}t^{-{\frac {n}{n+1}}},\qquad t>0.}

A critical point t0 has to satisfy f′(t0) = 0, which means

( x 1 x n ) 1 n + 1 t 0 n n + 1 = 1. {\displaystyle ({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}t_{0}^{-{\frac {n}{n+1}}}=1.}

After a small rearrangement we get

t 0 n n + 1 = ( x 1 x n ) 1 n + 1 , {\displaystyle t_{0}^{\frac {n}{n+1}}=({x_{1}\cdots x_{n}})^{\frac {1}{n+1}},}

and finally

t 0 = ( x 1 x n ) 1 n , {\displaystyle t_{0}=({x_{1}\cdots x_{n}})^{\frac {1}{n}},}

which is the geometric mean of x1, . . . , xn. This is the only critical point of f. Since f′′(t) > 0 for all t > 0, the function f is strictly convex and has a strict global minimum at t0. Next we compute the value of the function at this global minimum:

f ( t 0 ) = x 1 + + x n + ( x 1 x n ) 1 / n n + 1 ( x 1 x n ) 1 n + 1 ( x 1 x n ) 1 n ( n + 1 ) = x 1 + + x n n + 1 + 1 n + 1 ( x 1 x n ) 1 n ( x 1 x n ) 1 n = x 1 + + x n n + 1 n n + 1 ( x 1 x n ) 1 n = n n + 1 ( x 1 + + x n n ( x 1 x n ) 1 n ) 0 , {\displaystyle {\begin{aligned}f(t_{0})&={\frac {x_{1}+\cdots +x_{n}+({x_{1}\cdots x_{n}})^{1/n}}{n+1}}-({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n(n+1)}}\\&={\frac {x_{1}+\cdots +x_{n}}{n+1}}+{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n}}-({x_{1}\cdots x_{n}})^{\frac {1}{n}}\\&={\frac {x_{1}+\cdots +x_{n}}{n+1}}-{\frac {n}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n}}\\&={\frac {n}{n+1}}{\Bigl (}{\frac {x_{1}+\cdots +x_{n}}{n}}-({x_{1}\cdots x_{n}})^{\frac {1}{n}}{\Bigr )}\\&\geq 0,\end{aligned}}}

where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when x1, . . . , xn are all equal. In this case, their geometric mean t0 has the same value. Hence, unless x1, . . . , xn, xn+1 are all equal, we have f(xn+1) > 0. This completes the proof.

This technique can be used in the same manner to prove the generalized AM–GM inequality and Cauchy–Schwarz inequality in Euclidean space Rⁿ.

Proof by Pólya using the exponential function

George Pólya provided a proof similar to what follows. Let f(x) = e^(x−1) − x for all real x, with first derivative f′(x) = e^(x−1) − 1 and second derivative f′′(x) = e^(x−1). Observe that f(1) = 0, f′(1) = 0 and f′′(x) > 0 for all real x, hence f is strictly convex with the absolute minimum at x = 1. Hence x ≤ e^(x−1) for all real x with equality only for x = 1.

Consider a list of non-negative real numbers x1, x2, . . . , xn. If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following that their arithmetic mean α is positive. By n-fold application of the above inequality, we obtain that

x 1 α x 2 α x n α e x 1 α 1 e x 2 α 1 e x n α 1 = exp ( x 1 α 1 + x 2 α 1 + + x n α 1 ) , ( ) {\displaystyle {\begin{aligned}{{\frac {x_{1}}{\alpha }}{\frac {x_{2}}{\alpha }}\cdots {\frac {x_{n}}{\alpha }}}&\leq {e^{{\frac {x_{1}}{\alpha }}-1}e^{{\frac {x_{2}}{\alpha }}-1}\cdots e^{{\frac {x_{n}}{\alpha }}-1}}\\&=\exp {\Bigl (}{\frac {x_{1}}{\alpha }}-1+{\frac {x_{2}}{\alpha }}-1+\cdots +{\frac {x_{n}}{\alpha }}-1{\Bigr )},\qquad (*)\end{aligned}}}

with equality if and only if xi = α for every i ∈ {1, . . . , n}. The argument of the exponential function can be simplified:

x 1 α 1 + x 2 α 1 + + x n α 1 = x 1 + x 2 + + x n α n = n α α n = 0. {\displaystyle {\begin{aligned}{\frac {x_{1}}{\alpha }}-1+{\frac {x_{2}}{\alpha }}-1+\cdots +{\frac {x_{n}}{\alpha }}-1&={\frac {x_{1}+x_{2}+\cdots +x_{n}}{\alpha }}-n\\&={\frac {n\alpha }{\alpha }}-n\\&=0.\end{aligned}}}

Returning to (*),

x 1 x 2 x n α n e 0 = 1 , {\displaystyle {\frac {x_{1}x_{2}\cdots x_{n}}{\alpha ^{n}}}\leq e^{0}=1,}

which produces x1 x2 · · · xn ≤ αⁿ, hence the result

{\displaystyle {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}\leq \alpha .}

Proof by Lagrangian multipliers

If any of the x i {\displaystyle x_{i}} are 0 {\displaystyle 0} , then there is nothing to prove. So we may assume all the x i {\displaystyle x_{i}} are strictly positive.

Because the arithmetic and geometric means are homogeneous of degree 1, without loss of generality assume that i = 1 n x i = 1 {\displaystyle \prod _{i=1}^{n}x_{i}=1} . Set G ( x 1 , x 2 , , x n ) = i = 1 n x i {\displaystyle G(x_{1},x_{2},\ldots ,x_{n})=\prod _{i=1}^{n}x_{i}} , and F ( x 1 , x 2 , , x n ) = 1 n i = 1 n x i {\displaystyle F(x_{1},x_{2},\ldots ,x_{n})={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} . The inequality will be proved (together with the equality case) if we can show that the minimum of F ( x 1 , x 2 , . . . , x n ) , {\displaystyle F(x_{1},x_{2},...,x_{n}),} subject to the constraint G ( x 1 , x 2 , , x n ) = 1 , {\displaystyle G(x_{1},x_{2},\ldots ,x_{n})=1,} is equal to 1 {\displaystyle 1} , and the minimum is only achieved when x 1 = x 2 = = x n = 1 {\displaystyle x_{1}=x_{2}=\cdots =x_{n}=1} . Let us first show that the constrained minimization problem has a global minimum.

Set K = { ( x 1 , x 2 , , x n ) : 0 x 1 , x 2 , , x n n } {\displaystyle K=\{(x_{1},x_{2},\ldots ,x_{n})\colon 0\leq x_{1},x_{2},\ldots ,x_{n}\leq n\}} . Since the intersection K { G = 1 } {\displaystyle K\cap \{G=1\}} is compact, the extreme value theorem guarantees that the minimum of F ( x 1 , x 2 , . . . , x n ) {\displaystyle F(x_{1},x_{2},...,x_{n})} subject to the constraints G ( x 1 , x 2 , , x n ) = 1 {\displaystyle G(x_{1},x_{2},\ldots ,x_{n})=1} and ( x 1 , x 2 , , x n ) K {\displaystyle (x_{1},x_{2},\ldots ,x_{n})\in K} is attained at some point inside K {\displaystyle K} . On the other hand, observe that if any of the x i > n {\displaystyle x_{i}>n} , then F ( x 1 , x 2 , , x n ) > 1 {\displaystyle F(x_{1},x_{2},\ldots ,x_{n})>1} , while F ( 1 , 1 , , 1 ) = 1 {\displaystyle F(1,1,\ldots ,1)=1} , and ( 1 , 1 , , 1 ) K { G = 1 } {\displaystyle (1,1,\ldots ,1)\in K\cap \{G=1\}} . This means that the minimum inside K { G = 1 } {\displaystyle K\cap \{G=1\}} is in fact a global minimum, since the value of F {\displaystyle F} at any point inside K { G = 1 } {\displaystyle K\cap \{G=1\}} is certainly no smaller than the minimum, and the value of F {\displaystyle F} at any point ( y 1 , y 2 , , y n ) {\displaystyle (y_{1},y_{2},\ldots ,y_{n})} not inside K {\displaystyle K} is strictly bigger than the value at ( 1 , 1 , , 1 ) {\displaystyle (1,1,\ldots ,1)} , which is no smaller than the minimum.

The method of Lagrange multipliers says that the global minimum is attained at a point ( x 1 , x 2 , , x n ) {\displaystyle (x_{1},x_{2},\ldots ,x_{n})} where the gradient of F ( x 1 , x 2 , , x n ) {\displaystyle F(x_{1},x_{2},\ldots ,x_{n})} is λ {\displaystyle \lambda } times the gradient of G ( x 1 , x 2 , , x n ) {\displaystyle G(x_{1},x_{2},\ldots ,x_{n})} , for some λ {\displaystyle \lambda } . We will show that the only point at which this happens is when x 1 = x 2 = = x n = 1 {\displaystyle x_{1}=x_{2}=\cdots =x_{n}=1} and F ( x 1 , x 2 , . . . , x n ) = 1. {\displaystyle F(x_{1},x_{2},...,x_{n})=1.}

Compute F x i = 1 n {\displaystyle {\frac {\partial F}{\partial x_{i}}}={\frac {1}{n}}} and

G x i = j i x j = G ( x 1 , x 2 , , x n ) x i = 1 x i {\displaystyle {\frac {\partial G}{\partial x_{i}}}=\prod _{j\neq i}x_{j}={\frac {G(x_{1},x_{2},\ldots ,x_{n})}{x_{i}}}={\frac {1}{x_{i}}}}

along the constraint. Setting the gradients proportional to one another therefore gives for each i {\displaystyle i} that 1 n = λ x i , {\displaystyle {\frac {1}{n}}={\frac {\lambda }{x_{i}}},} and so n λ = x i . {\displaystyle n\lambda =x_{i}.} Since the left-hand side does not depend on i {\displaystyle i} , it follows that x 1 = x 2 = = x n {\displaystyle x_{1}=x_{2}=\cdots =x_{n}} , and since G ( x 1 , x 2 , , x n ) = 1 {\displaystyle G(x_{1},x_{2},\ldots ,x_{n})=1} , it follows that x 1 = x 2 = = x n = 1 {\displaystyle x_{1}=x_{2}=\cdots =x_{n}=1} and F ( x 1 , x 2 , , x n ) = 1 {\displaystyle F(x_{1},x_{2},\ldots ,x_{n})=1} , as desired.
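
As a numerical sanity check of this constrained problem (not a proof), one can hand it to an off-the-shelf optimizer; the sketch below assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import minimize

n = 5
F = lambda x: x.sum() / n              # arithmetic mean (objective)
G = lambda x: np.prod(x) - 1.0         # constraint: the product of the x_i equals 1

x0 = np.array([0.5, 2.0, 1.0, 1.0, 1.0])   # feasible start, product = 1
res = minimize(F, x0, method="SLSQP",
               bounds=[(1e-6, None)] * n,
               constraints=[{"type": "eq", "fun": G}])
print(res.x)     # approximately (1, 1, ..., 1)
print(res.fun)   # approximately 1, matching the constrained minimum found above
```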

Generalizations

Weighted AM–GM inequality

There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers x1, x2, . . . , xn and the nonnegative weights w1, w2, . . . , wn be given. Set w = w1 + w2 + · · · + wn. If w > 0, then the inequality

{\displaystyle {\frac {w_{1}x_{1}+w_{2}x_{2}+\cdots +w_{n}x_{n}}{w}}\geq {\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}}

holds with equality if and only if all the xk with wk > 0 are equal. Here the convention 0⁰ = 1 is used.

If all wk = 1, this reduces to the above inequality of arithmetic and geometric means.

One stronger version of this, which also gives a strengthened version of the unweighted inequality, is due to Aldaz. Specifically, let the nonnegative numbers x1, x2, . . . , xn and the nonnegative weights w1, w2, . . . , wn be given. Assume further that the sum of the weights is 1. Then

i = 1 n w i x i i = 1 n x i w i + i = 1 n w i ( x i 1 2 i = 1 n w i x i 1 2 ) 2 {\displaystyle \sum _{i=1}^{n}w_{i}x_{i}\geq \prod _{i=1}^{n}x_{i}^{w_{i}}+\sum _{i=1}^{n}w_{i}\left(x_{i}^{\frac {1}{2}}-\sum _{i=1}^{n}w_{i}x_{i}^{\frac {1}{2}}\right)^{2}} .
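
Both the weighted inequality and Aldaz's refinement can be tested numerically; the sketch below (illustrative, using NumPy) checks them on random data with weights normalized to sum to 1.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = 4
    x = rng.uniform(0.0, 10.0, n)
    w = rng.uniform(0.0, 1.0, n)
    w = w / w.sum()                      # normalize so the weights sum to 1
    wam = np.dot(w, x)                   # weighted arithmetic mean
    wgm = np.prod(x ** w)                # weighted geometric mean
    aldaz = np.dot(w, (np.sqrt(x) - np.dot(w, np.sqrt(x))) ** 2)
    assert wam >= wgm - 1e-9             # weighted AM-GM
    assert wam >= wgm + aldaz - 1e-9     # Aldaz's refinement
```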

Proof using Jensen's inequality

Using the finite form of Jensen's inequality for the natural logarithm, we can prove the inequality between the weighted arithmetic mean and the weighted geometric mean stated above.

Since an xk with weight wk = 0 has no influence on the inequality, we may assume in the following that all weights are positive. If all xk are equal, then equality holds. Therefore, it remains to prove strict inequality if they are not all equal, which we will assume in the following, too. If at least one xk is zero (but not all), then the weighted geometric mean is zero, while the weighted arithmetic mean is positive, hence strict inequality holds. Therefore, we may assume also that all xk are positive.

Since the natural logarithm is strictly concave, the finite form of Jensen's inequality and the functional equations of the natural logarithm imply

{\displaystyle {\begin{aligned}\ln {\Bigl (}{\frac {w_{1}x_{1}+\cdots +w_{n}x_{n}}{w}}{\Bigr )}&>{\frac {w_{1}}{w}}\ln x_{1}+\cdots +{\frac {w_{n}}{w}}\ln x_{n}\\&=\ln {\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}.\end{aligned}}}

Since the natural logarithm is strictly increasing,

{\displaystyle {\frac {w_{1}x_{1}+\cdots +w_{n}x_{n}}{w}}>{\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}.}

Matrix arithmetic–geometric mean inequality

Most matrix generalizations of the arithmetic–geometric mean inequality apply on the level of unitarily invariant norms, since, even if the matrices {\displaystyle A} and {\displaystyle B} are positive semi-definite, the matrix {\displaystyle AB} may not be positive semi-definite and hence may not have a canonical square root. In 1990, Bhatia and Kittaneh proved that for any unitarily invariant norm {\displaystyle |||\cdot |||} and positive semi-definite matrices {\displaystyle A} and {\displaystyle B} it is the case that

| | | A B | | | 1 2 | | | A 2 + B 2 | | | {\displaystyle |||AB|||\leq {\frac {1}{2}}|||A^{2}+B^{2}|||}

Later, in 2000, the same authors proved the stronger inequality that

| | | A B | | | 1 4 | | | ( A + B ) 2 | | | {\displaystyle |||AB|||\leq {\frac {1}{4}}|||(A+B)^{2}|||}

Finally, it is known for dimension n = 2 {\displaystyle n=2} that the following strongest possible matrix generalization of the arithmetic-geometric mean inequality holds, and it is conjectured to hold for all n {\displaystyle n}

| | | ( A B ) 1 2 | | | 1 2 | | | A + B | | | {\displaystyle |||(AB)^{\frac {1}{2}}|||\leq {\frac {1}{2}}|||A+B|||}

This conjectured inequality was shown by Stephen Drury in 2012. Indeed, he proved

σ j ( A B ) 1 2 λ j ( A + B ) ,   j = 1 , , n . {\displaystyle {\sqrt {\sigma _{j}(AB)}}\leq {\frac {1}{2}}\lambda _{j}(A+B),\ j=1,\ldots ,n.}
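
This singular-value inequality can be spot-checked numerically. The sketch below (illustrative, using NumPy) draws random positive semi-definite matrices and compares the square roots of the sorted singular values of AB with half the sorted eigenvalues of A + B.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
# Random positive semi-definite matrices A and B.
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = X @ X.T, Y @ Y.T

sigma = np.linalg.svd(A @ B, compute_uv=False)       # singular values of AB, descending
lam = np.sort(np.linalg.eigvalsh(A + B))[::-1]       # eigenvalues of A + B, descending

# Drury's inequality: sqrt(sigma_j(AB)) <= lambda_j(A + B) / 2 for each j.
print(np.sqrt(sigma) <= lam / 2 + 1e-9)              # expect all True
```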

Finance: Link to geometric asset returns

In finance much research is concerned with accurately estimating the rate of return of an asset over multiple periods in the future. In the case of lognormal asset returns, there is an exact formula to compute the arithmetic asset return from the geometric asset return.

For simplicity, assume we are looking at yearly geometric returns r1, r2, ... , rN over a time horizon of N years, i.e.

r n = V n V n 1 V n 1 , {\displaystyle r_{n}={\frac {V_{n}-V_{n-1}}{V_{n-1}}},}

where:

V n {\displaystyle V_{n}} = value of the asset at time n {\displaystyle n} ,
V n 1 {\displaystyle V_{n-1}} = value of the asset at time n 1 {\displaystyle n-1} .

The geometric and arithmetic returns are respectively defined as

g N = ( n = 1 N ( 1 + r n ) ) 1 / N , {\displaystyle g_{N}=\left(\prod _{n=1}^{N}(1+r_{n})\right)^{1/N},}
a N = 1 N n = 1 N r n . {\displaystyle a_{N}={\frac {1}{N}}\sum _{n=1}^{N}r_{n}.}

When the yearly geometric asset returns are lognormally distributed, then the following formula can be used to convert the geometric average return to the arithmetic average return:

1 + g N = 1 + a N 1 + σ 2 ( 1 + a N ) 2 , {\displaystyle 1+g_{N}={\frac {1+a_{N}}{\sqrt {1+{\frac {\sigma ^{2}}{(1+a_{N})^{2}}}}}},}

where {\displaystyle \sigma ^{2}} is the variance of the observed asset returns. This implicit equation for aN can be solved exactly as follows. First, notice that by setting

z = ( 1 + a N ) 2 , {\displaystyle z=(1+a_{N})^{2},}

we obtain a polynomial equation of degree 2:

{\displaystyle z^{2}-(1+g_{N})^{2}z-(1+g_{N})^{2}\sigma ^{2}=0.}

Solving this equation for z and using the definition of z, we obtain 4 possible solutions for aN:

a N = ± 1 + g N 2 1 ± 1 + 4 σ 2 ( 1 + g N ) 2 1. {\displaystyle a_{N}=\pm {\frac {1+g_{N}}{\sqrt {2}}}{\sqrt {1\pm {\sqrt {1+{\frac {4\sigma ^{2}}{(1+g_{N})^{2}}}}}}}-1.}

However, notice that

1 + 4 σ 2 ( 1 + g N ) 2 1. {\displaystyle {\sqrt {1+{\frac {4\sigma ^{2}}{(1+g_{N})^{2}}}}}\geq 1.}

This implies that the only 2 possible solutions are (as asset returns are real numbers):

a N = ± 1 + g N 2 1 + 1 + 4 σ 2 ( 1 + g N ) 2 1. {\displaystyle a_{N}=\pm {\frac {1+g_{N}}{\sqrt {2}}}{\sqrt {1+{\sqrt {1+{\frac {4\sigma ^{2}}{(1+g_{N})^{2}}}}}}}-1.}

Finally, we expect the derivative of aN with respect to gN to be non-negative as an increase in the geometric return should never cause a decrease in the arithmetic return. Indeed, both measure the average growth of an asset's value and therefore should move in similar directions. This leaves us with one solution to the implicit equation for aN, namely

a N = 1 + g N 2 1 + 1 + 4 σ 2 ( 1 + g N ) 2 1. {\displaystyle a_{N}={\frac {1+g_{N}}{\sqrt {2}}}{\sqrt {1+{\sqrt {1+{\frac {4\sigma ^{2}}{(1+g_{N})^{2}}}}}}}-1.}

Therefore, under the assumption of lognormally distributed asset returns, the arithmetic asset return is fully determined by the geometric asset return.
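
The conversion is straightforward to implement; the sketch below is illustrative (the function name is chosen for this example). It computes the arithmetic average return from the geometric average return and the return variance, and verifies the round trip through the defining relation.

```python
import math

def arithmetic_from_geometric(g, sigma2):
    """Arithmetic average return implied by the geometric average return g
    and the return variance sigma2, under the lognormal model above."""
    ratio = 1.0 + 4.0 * sigma2 / (1.0 + g) ** 2
    return (1.0 + g) / math.sqrt(2.0) * math.sqrt(1.0 + math.sqrt(ratio)) - 1.0

g, sigma2 = 0.05, 0.04           # e.g. 5% geometric return, 20% return volatility
a = arithmetic_from_geometric(g, sigma2)
print(a)                          # a > g, consistent with the AM-GM inequality

# Round trip: plugging a back into 1+g = (1+a)/sqrt(1 + sigma2/(1+a)^2) recovers g.
print((1.0 + a) / math.sqrt(1.0 + sigma2 / (1.0 + a) ** 2) - 1.0)
```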

Other generalizations

Geometric proof without words that max (a,b) > root mean square (RMS) or quadratic mean (QM) > arithmetic mean (AM) > geometric mean (GM) > harmonic mean (HM) > min (a,b) of two distinct positive numbers a and b

Other generalizations of the inequality of arithmetic and geometric means include:

See also

Notes

  1. If AC = a and BC = b, then OC is the AM of a and b, and the radius r = QO = OG.
    Using Pythagoras' theorem, QC² = QO² + OC², so QC = √(QO² + OC²) = QM.
    Using Pythagoras' theorem, OC² = OG² + GC², so GC = √(OC² − OG²) = GM.
    Using similar triangles, HC/GC = GC/OC, so HC = GC²/OC = HM.

References

  1. Hoffman, D. G. (1981), "Packing problems and inequalities", in Klarner, David A. (ed.), The Mathematical Gardner, Springer, pp. 212–225, doi:10.1007/978-1-4684-6686-7_19, ISBN 978-1-4684-6688-1
  2. "Euclid's Elements, Book V, Proposition 25".
  3. Steele, J. Michael (2004). The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. MAA Problem Books Series. Cambridge University Press. ISBN 978-0-521-54677-5. OCLC 54079548.
  4. Motzkin, T. S. (1967). "The arithmetic-geometric inequality". Inequalities (Proc. Sympos. Wright-Patterson Air Force Base, Ohio, 1965). New York: Academic Press. pp. 205–224. MR 0223521.
  5. Aaron Potechin, Sum of Squares seminar, University of Chicago, "Lecture 5: SOS Proofs and the Motzkin Polynomial", slide 25
  6. Cauchy, Augustin-Louis (1821). Cours d'analyse de l'École Royale Polytechnique, première partie, Analyse algébrique, Paris. The proof of the inequality of arithmetic and geometric means can be found on pages 457ff.
  7. Arnold, Denise; Arnold, Graham (1993). Four unit mathematics. Hodder Arnold H&S. p. 242. ISBN 978-0-340-54335-1. OCLC 38328013.
  8. Aldaz, J.M. (2009). "Self-Improvement of the Inequality Between Arithmetic and Geometric Means". Journal of Mathematical Inequalities. 3 (2): 213–216. doi:10.7153/jmi-03-21. Retrieved 11 January 2023.
  9. Bhatia, Rajendra; Kittaneh, Fuad (1990). "On the singular values of a product of operators". SIAM Journal on Matrix Analysis and Applications. 11 (2): 272–277. doi:10.1137/0611018.
  10. Bhatia, Rajendra; Kittaneh, Fuad (2000). "Notes on matrix arithmetic-geometric mean inequalities". Linear Algebra and Its Applications. 308 (1–3): 203–211. doi:10.1016/S0024-3795(00)00048-3.
  11. Drury, S.W. (2012). "On a question of Bhatia and Kittaneh". Linear Algebra and Its Applications. 437: 1955–1960.
  12. Mindlin, Dimitry (2011). "On the Relationship between Arithmetic and Geometric Returns". SSRN Electronic Journal. doi:10.2139/ssrn.2083915. ISSN 1556-5068.
  13. cf. Iordanescu, R.; Nichita, F.F.; Pasarescu, O. Unification Theories: Means and Generalized Euler Formulas. Axioms 2020, 9, 144.
