Azuma's inequality


In probability theory, the Azuma–Hoeffding inequality (named after Kazuoki Azuma and Wassily Hoeffding) gives a concentration result for the values of martingales that have bounded differences.

Suppose $\{X_k : k = 0, 1, 2, 3, \dots\}$ is a martingale (or supermartingale) and

$$|X_k - X_{k-1}| \leq c_k,$$

almost surely. Then for all positive integers $N$ and all positive reals $\epsilon$,

$$\text{P}(X_N - X_0 \geq \epsilon) \leq \exp\left(\frac{-\epsilon^2}{2\sum_{k=1}^N c_k^2}\right).$$

And symmetrically (when $X_k$ is a submartingale):

$$\text{P}(X_N - X_0 \leq -\epsilon) \leq \exp\left(\frac{-\epsilon^2}{2\sum_{k=1}^N c_k^2}\right).$$

If $X$ is a martingale, using both inequalities above and applying the union bound allows one to obtain a two-sided bound:

$$\text{P}(|X_N - X_0| \geq \epsilon) \leq 2\exp\left(\frac{-\epsilon^2}{2\sum_{k=1}^N c_k^2}\right).$$
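This bound is easy to evaluate numerically. Below is a minimal Python sketch; the function name `azuma_bound` and its interface are illustrative assumptions, not part of any standard library:

```python
import math

def azuma_bound(eps, c, two_sided=False):
    # Upper bound on P(X_N - X_0 >= eps) for a martingale whose increments
    # satisfy |X_k - X_{k-1}| <= c[k-1]. With two_sided=True it bounds
    # P(|X_N - X_0| >= eps) instead, via the union bound (factor of 2).
    denom = 2.0 * sum(ck * ck for ck in c)
    bound = math.exp(-eps * eps / denom)
    return min(1.0, (2.0 if two_sided else 1.0) * bound)

# Example: N = 100 steps with c_k = 1 and deviation eps = 30.
print(azuma_bound(30.0, [1.0] * 100))                  # exp(-4.5) ~ 0.0111
print(azuma_bound(30.0, [1.0] * 100, two_sided=True))  # ~ 0.0222
```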

Proof

The proof shares the same idea as the proof of the general form of Azuma's inequality given below; in fact, it can be viewed as a direct corollary of that general form.

A general form of Azuma's inequality

Limitation of the vanilla Azuma's inequality

Note that the vanilla Azuma's inequality requires symmetric bounds on the martingale increments, i.e. $-c_t \leq X_t - X_{t-1} \leq c_t$. So if the known bound is asymmetric, e.g. $a_t \leq X_t - X_{t-1} \leq b_t$, then to use Azuma's inequality one needs to choose $c_t = \max(|a_t|, |b_t|)$, which may waste information on the boundedness of $X_t - X_{t-1}$. However, this issue can be resolved, and one can obtain a tighter probability bound with the following general form of Azuma's inequality.

Statement

Let $\{X_0, X_1, \dots\}$ be a martingale (or supermartingale) with respect to a filtration $\{\mathcal{F}_0, \mathcal{F}_1, \dots\}$. Assume there are predictable processes $\{A_0, A_1, \dots\}$ and $\{B_0, B_1, \dots\}$ with respect to $\{\mathcal{F}_0, \mathcal{F}_1, \dots\}$, i.e. for all $t$, $A_t$ and $B_t$ are $\mathcal{F}_{t-1}$-measurable, and constants $0 < c_1, c_2, \dots < \infty$ such that

$$A_t \leq X_t - X_{t-1} \leq B_t \quad \text{and} \quad B_t - A_t \leq c_t$$

almost surely. Then for all $\epsilon > 0$,

$$\text{P}(X_n - X_0 \geq \epsilon) \leq \exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n c_t^2}\right).$$

Since a submartingale is a supermartingale with signs reversed, if instead $\{X_0, X_1, \dots\}$ is a martingale (or submartingale), we have

$$\text{P}(X_n - X_0 \leq -\epsilon) \leq \exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n c_t^2}\right).$$

If $\{X_0, X_1, \dots\}$ is a martingale, then since it is both a supermartingale and a submartingale, applying the union bound to the two inequalities above yields the two-sided bound:

$$\text{P}(|X_n - X_0| \geq \epsilon) \leq 2\exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n c_t^2}\right).$$
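To see how much the general form can gain over the vanilla one, here is a minimal numerical sketch in Python. The asymmetric increment bounds $a_t = -0.1$ and $b_t = 1$ are assumptions chosen purely for illustration:

```python
import math

def vanilla_bound(eps, c):
    # Vanilla Azuma: P(X_n - X_0 >= eps) <= exp(-eps^2 / (2 sum c_t^2)),
    # valid when |X_t - X_{t-1}| <= c_t.
    return math.exp(-eps**2 / (2 * sum(ct**2 for ct in c)))

def general_bound(eps, spans):
    # General Azuma: P(X_n - X_0 >= eps) <= exp(-2 eps^2 / sum c_t^2),
    # valid when the increment span satisfies B_t - A_t <= c_t.
    return math.exp(-2 * eps**2 / sum(ct**2 for ct in spans))

# Increments satisfying -0.1 <= X_t - X_{t-1} <= 1 over n = 100 steps:
# vanilla must take c_t = max(|a_t|, |b_t|) = 1, while the general form
# only needs the span c_t = b_t - a_t = 1.1.
n, eps = 100, 20.0
print(vanilla_bound(eps, [1.0] * n))   # exp(-2)                   ~ 0.135
print(general_bound(eps, [1.1] * n))   # exp(-800/121) ~ exp(-6.6) ~ 0.0013
```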

Proof

We will prove the supermartingale case only, as the rest are self-evident. By the Doob decomposition, we can decompose the supermartingale $\{X_t\}$ as $X_t = Y_t + Z_t$, where $\{Y_t, \mathcal{F}_t\}$ is a martingale and $\{Z_t, \mathcal{F}_t\}$ is a nonincreasing predictable sequence (note that if $\{X_t\}$ itself is a martingale, then $Z_t = 0$). Since $Y_t - Y_{t-1} = (X_t - X_{t-1}) - (Z_t - Z_{t-1})$, the assumption $A_t \leq X_t - X_{t-1} \leq B_t$ gives

$$-(Z_t - Z_{t-1}) + A_t \leq Y_t - Y_{t-1} \leq -(Z_t - Z_{t-1}) + B_t.$$

Applying the Chernoff bound to $Y_n - Y_0$, we have for $\epsilon > 0$,

$$\begin{aligned} \text{P}(Y_n - Y_0 \geq \epsilon) &\leq \min_{s>0} \, e^{-s\epsilon} \, \mathbb{E}\left[e^{s(Y_n - Y_0)}\right] \\ &= \min_{s>0} \, e^{-s\epsilon} \, \mathbb{E}\left[\exp\left(s \sum_{t=1}^n (Y_t - Y_{t-1})\right)\right] \\ &= \min_{s>0} \, e^{-s\epsilon} \, \mathbb{E}\left[\exp\left(s \sum_{t=1}^{n-1} (Y_t - Y_{t-1})\right) \mathbb{E}\left[\exp\left(s (Y_n - Y_{n-1})\right) \,\middle|\, \mathcal{F}_{n-1}\right]\right], \end{aligned}$$

where the last equality follows from the tower property of conditional expectation.

For the inner expectation term, since

(i) $\mathbb{E}[Y_t - Y_{t-1} \mid \mathcal{F}_{t-1}] = 0$, as $\{Y_t\}$ is a martingale;

(ii) $-(Z_t - Z_{t-1}) + A_t \leq Y_t - Y_{t-1} \leq -(Z_t - Z_{t-1}) + B_t$;

(iii) $-(Z_t - Z_{t-1}) + A_t$ and $-(Z_t - Z_{t-1}) + B_t$ are both $\mathcal{F}_{t-1}$-measurable, as $\{Z_t\}$ is a predictable process;

(iv) $B_t - A_t \leq c_t$;

by applying Hoeffding's lemma, we have

$$\mathbb{E}\left[\exp\left(s (Y_t - Y_{t-1})\right) \,\middle|\, \mathcal{F}_{t-1}\right] \leq \exp\left(\frac{s^2 (B_t - A_t)^2}{8}\right) \leq \exp\left(\frac{s^2 c_t^2}{8}\right).$$
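For reference, the form of Hoeffding's lemma invoked here is the conditional one (see the note below): if $a \leq V \leq b$ almost surely, $\mathbb{E}[V \mid \mathcal{F}] = 0$, and $a, b$ are $\mathcal{F}$-measurable, then for all $s$,

$$\mathbb{E}\left[e^{sV} \,\middle|\, \mathcal{F}\right] \leq \exp\left(\frac{s^2 (b-a)^2}{8}\right).$$

It is applied with $V = Y_t - Y_{t-1}$, $a = -(Z_t - Z_{t-1}) + A_t$ and $b = -(Z_t - Z_{t-1}) + B_t$, so that $b - a = B_t - A_t \leq c_t$.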

Repeating this step for the remaining increments $t = n-1, \dots, 1$, one gets

$$\text{P}(Y_n - Y_0 \geq \epsilon) \leq \min_{s>0} \, e^{-s\epsilon} \exp\left(\frac{s^2 \sum_{t=1}^n c_t^2}{8}\right).$$

Note that the minimum is achieved at $s = \frac{4\epsilon}{\sum_{t=1}^n c_t^2}$, so we have

$$\text{P}(Y_n - Y_0 \geq \epsilon) \leq \exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n c_t^2}\right).$$
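To verify the minimization, write $S = \sum_{t=1}^n c_t^2$ and consider the exponent $f(s) = -s\epsilon + s^2 S / 8$, a convex quadratic in $s$. Setting $f'(s) = -\epsilon + sS/4 = 0$ gives $s = 4\epsilon / S$, and substituting back,

$$f\left(\frac{4\epsilon}{S}\right) = -\frac{4\epsilon^2}{S} + \frac{1}{8} \cdot \frac{16\epsilon^2}{S} = -\frac{2\epsilon^2}{S}.$$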

Finally, since $X_n - X_0 = (Y_n - Y_0) + (Z_n - Z_0)$ and $Z_n - Z_0 \leq 0$ (as $\{Z_n\}$ is nonincreasing), the event $\{X_n - X_0 \geq \epsilon\}$ implies $\{Y_n - Y_0 \geq \epsilon\}$, and therefore

$$\text{P}(X_n - X_0 \geq \epsilon) \leq \text{P}(Y_n - Y_0 \geq \epsilon) \leq \exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n c_t^2}\right). \qquad \square$$

Remark

Note that by setting $A_t = -c_t$, $B_t = c_t$, we obtain the vanilla Azuma's inequality.
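Indeed, with $A_t = -c_t$ and $B_t = c_t$ the increment span is $B_t - A_t = 2c_t$, so the general form applies with constants $2c_t$ and gives

$$\exp\left(-\frac{2\epsilon^2}{\sum_{t=1}^n (2c_t)^2}\right) = \exp\left(-\frac{\epsilon^2}{2\sum_{t=1}^n c_t^2}\right),$$

which is exactly the vanilla bound.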

Note that for either a submartingale or a supermartingale, only one side of Azuma's inequality holds. We cannot say much about how fast a submartingale with bounded increments rises (or a supermartingale falls).

This general form of Azuma's inequality, applied to the Doob martingale, gives McDiarmid's inequality, which is common in the analysis of randomized algorithms.


Simple example of Azuma's inequality for coin flips

Let $F_i$ be a sequence of independent and identically distributed random coin flips (i.e., let $F_i$ be equally likely to be −1 or +1, independent of the other values of $F_i$). Defining $X_i = \sum_{j=1}^i F_j$ yields a martingale with $|X_k - X_{k-1}| \leq 1$, allowing us to apply Azuma's inequality. Specifically, we get

$$\text{P}(X_n > t) \leq \exp\left(\frac{-t^2}{2n}\right).$$

For example, if we set $t$ proportional to $n$, this tells us that although the maximum possible value of $X_n$ scales linearly with $n$, the probability that the sum grows linearly in $n$ decreases exponentially fast in $n$.

If we set $t = \sqrt{2n\ln n}$, we get:

$$\text{P}\left(X_n > \sqrt{2n\ln n}\right) \leq \frac{1}{n},$$

which means that the probability of deviating more than $\sqrt{2n\ln n}$ approaches 0 as $n$ goes to infinity.
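This bound is easy to check empirically. The following Python sketch simulates the coin-flip martingale and compares the observed tail frequency with the $1/n$ bound; the trial count and seed are arbitrary choices for this illustration:

```python
import math
import random

def estimate_tail(n, trials=100_000, seed=0):
    # Estimate P(X_n > sqrt(2 n ln n)) for X_n a sum of n fair +/-1 flips,
    # and return it alongside the Azuma bound 1/n.
    rng = random.Random(seed)
    t = math.sqrt(2 * n * math.log(n))
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.choice((-1, 1)) for _ in range(n)) > t
    )
    return hits / trials, 1.0 / n

empirical, bound = estimate_tail(100)
print(f"empirical tail ~ {empirical:.5f}, Azuma bound = {bound:.5f}")
# The empirical frequency sits well below 1/n, as the bound is not tight.
```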

Remark

A similar inequality was proved under weaker assumptions by Sergei Bernstein in 1937.

Hoeffding proved this result for independent variables rather than martingale differences, and also observed that slight modifications of his argument establish the result for martingale differences (see page 9 of his 1963 paper).

Notes

  1. It is not a direct application of Hoeffding's lemma, though. The statement of Hoeffding's lemma handles the total expectation, but it also holds when the expectation is a conditional expectation and the bounds are measurable with respect to the sigma-field on which the conditional expectation is conditioned. The proof is the same as for the classical Hoeffding's lemma.
