Kingman's subadditive ergodic theorem

In mathematics, Kingman's subadditive ergodic theorem is one of several ergodic theorems. It can be seen as a generalization of Birkhoff's ergodic theorem. Intuitively, the subadditive ergodic theorem is a kind of random-variable version of Fekete's subadditive lemma (hence the word subadditive in its name). As a result, it can be rephrased in the language of probability, e.g. using a sequence of random variables and expected values. The theorem is named after John Kingman.

Statement of theorem

Let $T$ be a measure-preserving transformation on the probability space $(\Omega,\Sigma,\mu)$, and let $\{g_n\}_{n\in\mathbb{N}}$ be a sequence of $L^1$ functions such that $g_{n+m}(x) \le g_n(x) + g_m(T^n x)$ (the subadditivity relation). Then

$$\lim_{n\to\infty} \frac{g_n(x)}{n} =: g(x) \;\in\; [-\infty,\infty)$$

for $\mu$-a.e. $x$, where the limit function $g$ is $T$-invariant.

In particular, if T is ergodic, then g(x) is a constant.
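As a concrete illustration of the statement (an illustrative sketch, not part of the article): take $\Omega = \{0,1\}^{\mathbb{N}}$ with an i.i.d. Bernoulli($1/2$) measure, $T$ the left shift, and $g_n(x)$ the length of the longest run of 1s among the first $n$ coordinates. A run inside the first $n+m$ coordinates either lies in the first $n$ coordinates, lies in the last $m$, or splits across the boundary into two such runs, so $g_{n+m}(x) \le g_n(x) + g_m(T^n x)$, and Kingman's theorem gives a.s. convergence of $g_n/n$ (here to $0$, since the longest run grows only logarithmically). The Python sketch below checks the subadditivity numerically and shows the convergence:

    import random

    def longest_run(bits):
        """g_n: length of the longest run of 1s in the finite word `bits`."""
        best = cur = 0
        for b in bits:
            cur = cur + 1 if b else 0
            best = max(best, cur)
        return best

    random.seed(0)
    x = [random.randint(0, 1) for _ in range(200000)]  # one sample point of the Bernoulli shift

    # Subadditivity: g_{n+m}(x) <= g_n(x) + g_m(T^n x), where T^n x is the shifted word x[n:].
    n, m = 1000, 2000
    assert longest_run(x[:n + m]) <= longest_run(x[:n]) + longest_run(x[n:n + m])

    # Kingman: g_n(x)/n converges a.s. (here to 0, since the longest run is O(log n)).
    for n in [1000, 10000, 100000]:
        print(n, longest_run(x[:n]) / n)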

Equivalent statement

Given a family of real random variables $X(m,n)$, with $0 \le m < n \in \mathbb{N}$, that are subadditive in the sense that

$$X(m+1,n+1) = X(m,n)\circ T, \qquad X(0,n) \le X(0,m) + X(m,n),$$

there exists a random variable $Y$ such that $Y \in [-\infty,+\infty)$, $Y$ is invariant with respect to $T$, and $\lim_n \frac{1}{n} X(0,n) = Y$ a.s.

They are equivalent by setting

  • $g_n = X(0,n)$ with $n \ge 1$;
  • $X(m,m+n) = g_n \circ T^m$ with $m \ge 0$.
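For the second identification, the required properties follow directly from the subadditivity of $\{g_n\}$ (a short check, added for clarity): with $X(m,n) := g_{n-m}\circ T^m$,

$$X(m+1,n+1) = g_{n-m}\circ T^{m+1} = \bigl(g_{n-m}\circ T^{m}\bigr)\circ T = X(m,n)\circ T,$$
$$X(0,n) = g_n \le g_m + g_{n-m}\circ T^m = X(0,m) + X(m,n).$$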

Proof

The proof below is due to J. Michael Steele (1989).

Subadditivity by partition

Fix some $n \ge 1$. By subadditivity, for any $l \in \{1,\dots,n-1\}$,

$$g_n \le g_{n-l} + g_l \circ T^{n-l}.$$

We can picture this as starting with the set $\{0,1,\dots,n-1\}$ and then removing its tail of length $l$.

Repeating this construction until the set $\{0,1,\dots,n-1\}$ is exhausted, we obtain a one-to-one correspondence between upper bounds of $g_n$ of this form and partitions of $\{0,1,\dots,n-1\}$ into consecutive intervals.

Specifically, let $\{\{k_i,\dots,k_i+l_i-1\}\}_i$ be a partition of $\{0,1,\dots,n-1\}$ into consecutive intervals; then

$$g_n \le \sum_i g_{l_i} \circ T^{k_i}.$$
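A numerical sanity check of this inequality (illustration only, reusing the longest-run-of-1s example of a subadditive sequence from the sketch above, re-defined here so the snippet is self-contained):

    import random

    def longest_run(bits):
        """A concrete subadditive sequence: g_n(x) = longest run of 1s in the first n symbols."""
        best = cur = 0
        for b in bits:
            cur = cur + 1 if b else 0
            best = max(best, cur)
        return best

    random.seed(1)
    x = [random.randint(0, 1) for _ in range(1000)]
    n = 600

    # A partition of {0, ..., n-1} into consecutive intervals {k_i, ..., k_i + l_i - 1}.
    cuts = sorted(random.sample(range(1, n), 9))
    heads = [0] + cuts
    lengths = [b - a for a, b in zip(heads, cuts + [n])]

    # Check g_n(x) <= sum_i g_{l_i}(T^{k_i} x); here T^{k_i} x is the shifted word x[k_i:].
    lhs = longest_run(x[:n])
    rhs = sum(longest_run(x[k:k + l]) for k, l in zip(heads, lengths))
    assert lhs <= rhs
    print(lhs, "<=", rhs)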

Constructing g

Let $g := \liminf_n g_n/n$; we claim that $g$ is $T$-invariant.

By subadditivity,

$$\frac{g_{n+1}}{n+1} \le \frac{g_1 + g_n\circ T}{n+1}.$$

Taking the $n\to\infty$ limit (the $g_1$ term vanishes since $g_1$ is finite a.e.), we have

$$g \le g\circ T.$$

We can visualize $T$ as hill-climbing on the graph of $g$: if $T$ actually caused a nontrivial amount of hill-climbing, we would get a spatial contraction, and so $T$ would not preserve the measure. Therefore $g = g\circ T$ a.e. This is made precise as follows.

Let $c\in\mathbb{R}$. Then, up to a null set,

$$\{g \ge c\} \subset \{g\circ T \ge c\} = T^{-1}(\{g \ge c\}),$$

and since $T$ preserves measure, both sides have the same measure; hence they coincide up to a null set.

That is, $g(x) \ge c \iff g(Tx) \ge c$ for a.e. $x$.

Now apply this for all rational $c$.
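Spelled out (an added detail): for each rational $c$, the set $T^{-1}(\{g\ge c\})\setminus\{g\ge c\}$ is null by the above, and

$$\{g\circ T > g\} \ \subseteq\ \bigcup_{c\in\mathbb{Q}} \bigl(\{g\circ T \ge c\}\setminus\{g\ge c\}\bigr) = \bigcup_{c\in\mathbb{Q}} \bigl(T^{-1}(\{g\ge c\})\setminus\{g\ge c\}\bigr),$$

a countable union of null sets. Hence $g\circ T \le g$ a.e., which together with $g \le g\circ T$ a.e. gives $g = g\circ T$ a.e.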

Reducing to the case of gₙ ≤ 0

By subadditivity, using the partition of $\{0,1,\dots,n-1\}$ into singletons,

$$\begin{aligned} g_1 &\le g_1,\\ g_2 &\le g_1 + g_1\circ T,\\ g_3 &\le g_1 + g_1\circ T + g_1\circ T^2,\\ &\;\;\vdots \end{aligned}$$

Now construct the sequence

$$\begin{aligned} f_1 &= g_1 - g_1,\\ f_2 &= g_2 - (g_1 + g_1\circ T),\\ f_3 &= g_3 - (g_1 + g_1\circ T + g_1\circ T^2),\\ &\;\;\vdots \end{aligned}$$

which satisfies $f_n \le 0$ for all $n$.
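The new sequence is again subadditive, so the special case applies to it; here is the short check (added for completeness). Write $S_n := \sum_{j=0}^{n-1} g_1\circ T^j$, so that $f_n = g_n - S_n$ and $S_{n+m} = S_n + S_m\circ T^n$; then

$$f_{n+m} = g_{n+m} - S_{n+m} \le \bigl(g_n + g_m\circ T^n\bigr) - \bigl(S_n + S_m\circ T^n\bigr) = f_n + f_m\circ T^n.$$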

By the special case (the bound for non-positive sequences, proved in the remaining subsections), $f_n/n$ converges a.e. to a $T$-invariant function.

By Birkhoff's pointwise ergodic theorem, the running average

$$\frac{1}{n}\bigl(g_1 + g_1\circ T + g_1\circ T^2 + \cdots + g_1\circ T^{n-1}\bigr)$$

converges a.e. to a $T$-invariant function. Since $g_n/n$ is the sum of $f_n/n$ and this running average, it too converges a.e. to a $T$-invariant limit.

Bounding the truncation

From now on, assume the special case $g_n \le 0$ for all $n$. Fix arbitrary $\epsilon, M > 0$ and consider the truncated function, still $T$-invariant,

$$g' := \max(g, -M).$$

With these, it suffices to prove the a.e. upper bound

$$\limsup_n g_n/n \le g' + \epsilon,$$

since we could then take the limit along $\epsilon = 1/1, 1/2, 1/3, \dots$, and then along $M = 1, 2, 3, \dots$ (using $\max(g,-M)\downarrow g$ as $M\to\infty$), giving us, a.e.,

$$\limsup_n g_n/n \le \liminf_n g_n/n =: g,$$

and by squeezing, $g_n/n$ converges a.e. to $g$.

Define two families of sets, one shrinking to the empty set and one growing to the full set. For each "length" $L = 1,2,3,\dots$, define

$$B_L := \{x : g_l/l > g' + \epsilon \ \text{ for all } l\in\{1,2,\dots,L\}\},$$
$$A_L := B_L^c = \{x : g_l/l \le g' + \epsilon \ \text{ for some } l\in\{1,2,\dots,L\}\}.$$

Since $g' \ge \liminf_n g_n/n$, the family $\{B_L\}$ shrinks to the empty set, as checked below.
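To see that the $B_L$ shrink to the empty set (a one-line check, added for clarity): if $x\in\bigcap_L B_L$, then $g_l(x)/l > g'(x)+\epsilon$ for every $l\ge 1$, so

$$g(x) = \liminf_l \frac{g_l(x)}{l} \ \ge\ g'(x) + \epsilon \ \ge\ -M + \epsilon > -\infty;$$

since also $g(x) \le 0$, the value $g(x)$ is finite, and then $g(x) \ge g'(x)+\epsilon \ge g(x)+\epsilon$ is impossible. Hence $\bigcap_L B_L = \varnothing$.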


Fix $x\in X$. Fix $L\in\mathbb{N}$. Fix $n > L$. The ordering of these qualifiers is vitally important, because we will be removing them one by one in the reverse order.

To prove the a.e. upper bound, we must use the subadditivity, which means we must construct a partition of the set $\{0,1,\dots,n-1\}$. We do this inductively:

Take the smallest $k$ not already covered by a previously chosen block.

If $T^k x \in A_L$, then $g_l(T^k x)/l \le g'(x) + \epsilon$ for some $l\in\{1,2,\dots,L\}$ (here we use the $T$-invariance of $g'$, i.e. $g'(T^k x) = g'(x)$ a.e.). Take one such $l$; the choice does not matter.

If $k + l - 1 \le n - 1$, then we cut out the interval $\{k,\dots,k+l-1\}$; call these blocks "type 1". Otherwise, we cut out the singleton $\{k\}$; call these blocks "type 2".

Otherwise (that is, if $T^k x \notin A_L$), we cut out the singleton $\{k\}$; call these blocks "type 3".

Now convert this partition into an inequality:

$$g_n(x) \le \sum_i g_{l_i}(T^{k_i} x),$$

where the $k_i$ are the heads (left endpoints) of the blocks and the $l_i$ are their lengths.

Since all $g_n \le 0$, we can drop the blocks of types 2 and 3 from the sum:

$$g_n(x) \le \sum_{i:\,\text{type 1}} g_{l_i}(T^{k_i} x).$$

By construction, each $g_{l_i}(T^{k_i} x) \le l_i\,(g'(x)+\epsilon)$, thus (using $\sum_i l_i \le n$ for the $\epsilon$ term)

$$\frac{1}{n} g_n(x) \le g'(x)\,\frac{1}{n}\sum_{i:\,\text{type 1}} l_i + \epsilon.$$

It would now be tempting to continue with $g'(x)\,\frac{1}{n}\sum_{i:\,\text{type 1}} l_i \le g'(x)$, but unfortunately $g' \le 0$, so the inequality goes in the opposite direction. We must instead lower bound the sum $\sum_{i:\,\text{type 1}} l_i$.

The number of type 3 blocks is at most $\sum_{k=0}^{n-1} 1_{B_L}(T^k x)$. If a number $k$ is of type 2, then it must lie among the last $L-1$ elements of $\{0,1,\dots,n-1\}$, so the number of type 2 blocks is at most $L-1$. Since the blocks of all three types tile $\{0,1,\dots,n-1\}$, we get the lower bound

$$\frac{1}{n}\sum_{i:\,\text{type 1}} l_i \ \ge\ 1 - \frac{L-1}{n} - \frac{1}{n}\sum_{k=0}^{n-1} 1_{B_L}(T^k x).$$
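The construction above is a simple greedy algorithm; the Python sketch below makes its bookkeeping concrete (the oracles `in_A_L` and `witness_l` are hypothetical stand-ins for the statements "$T^k x\in A_L$" and "some $l\le L$ with $g_l(T^k x)/l \le g'(x)+\epsilon$"; the point is only the classification into types and the counting bound):

    import random

    def greedy_partition(n, L, in_A_L, witness_l):
        """Greedily partition {0, ..., n-1} into type-1 intervals and type-2/3 singletons,
        following the inductive construction above.

        in_A_L(k)    -- True iff T^k x lies in A_L (hypothetical oracle)
        witness_l(k) -- some l in {1, ..., L} with g_l(T^k x)/l <= g'(x) + eps;
                        only queried when in_A_L(k) is True (hypothetical oracle)
        """
        blocks = []          # list of (head k_i, length l_i, type)
        k = 0
        while k <= n - 1:    # take the smallest k not yet covered
            if in_A_L(k):
                l = witness_l(k)
                if k + l - 1 <= n - 1:
                    blocks.append((k, l, 1))   # type 1: the interval {k, ..., k + l - 1}
                    k += l
                else:
                    blocks.append((k, 1, 2))   # type 2: singleton near the right end
                    k += 1
            else:
                blocks.append((k, 1, 3))       # type 3: singleton with T^k x in B_L
                k += 1
        return blocks

    # Counting check: sum of type-1 lengths >= n - (L - 1) - #{k : T^k x in B_L}.
    random.seed(2)
    n, L = 50, 6
    in_B = [random.random() < 0.3 for _ in range(n)]   # stand-in for 1_{B_L}(T^k x)
    blocks = greedy_partition(n, L, lambda k: not in_B[k], lambda k: random.randint(1, L))
    type1_mass = sum(l for _, l, t in blocks if t == 1)
    assert type1_mass >= n - (L - 1) - sum(in_B)
    print(type1_mass, ">=", n - (L - 1) - sum(in_B))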

Peeling off the first qualifier

Remove the $n > L$ qualifier by taking the $n\to\infty$ limit.

By Birkhoff's pointwise ergodic theorem, there exists an a.e. pointwise limit

$$\lim_n \frac{1}{n}\sum_{k=0}^{n-1} 1_{B_L}(T^k x) =: \bar{1}_{B_L}(x)$$

satisfying

$$\int \bar{1}_{B_L}\,d\mu = \mu(B_L), \qquad \bar{1}_{B_L}(x)\in[0,1].$$

Combining the two bounds above (and using $g'\le 0$, so that multiplying the lower bound on $\frac{1}{n}\sum_{i:\,\text{type 1}} l_i$ by $g'$ reverses it) gives

$$\frac{g_n(x)}{n} \le g'(x)\left(1 - \frac{L-1}{n} - \frac{1}{n}\sum_{k=0}^{n-1} 1_{B_L}(T^k x)\right) + \epsilon.$$

Letting $n\to\infty$, we find that for a.e. $x\in X$ and every $L\in\mathbb{N}$,

$$\limsup_n \frac{g_n(x)}{n} \le g'(x)\,\bigl(1 - \bar{1}_{B_L}(x)\bigr) + \epsilon.$$

Peeling off the second qualifier

Remove the $L\in\mathbb{N}$ qualifier by taking the $L\to\infty$ limit.

Since $\int \bar{1}_{B_L}\,d\mu = \mu(B_L) \to 0$ and $\bar{1}_{B_L} \ge \bar{1}_{B_{L+1}} \ge \cdots$ (because $1_{B_L} \ge 1_{B_{L+1}} \ge \cdots$), we can apply the same argument used in proving Markov's inequality to obtain

$$\limsup_n \frac{g_n(x)}{n} \le g'(x) + \epsilon$$

for a.e. $x\in X$.


In detail, the argument is as follows: since $\bar{1}_{B_L} \ge \bar{1}_{B_{L+1}} \ge \cdots \ge 0$ and $\int \bar{1}_{B_L}\,d\mu \to 0$, for any small $\delta, \delta' > 0$, every large enough $L$ satisfies $\bar{1}_{B_L}(x) < \delta$ everywhere except on a set of measure at most $\delta'$. Thus,

$$\limsup_n \frac{g_n(x)}{n} \le g'(x)(1-\delta) + \epsilon$$

with probability at least $1-\delta'$ (here we use $g'\le 0$ again, so that $g'(1-\bar{1}_{B_L}) \le g'(1-\delta)$ when $\bar{1}_{B_L} < \delta$). Now take $\delta, \delta' \to 0$.
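The Markov-type step can be spelled out as follows (an added detail): choose $L$ so large that $\mu(B_L) \le \delta\delta'$; then

$$\mu\bigl(\{\bar{1}_{B_L} \ge \delta\}\bigr) \le \frac{1}{\delta}\int \bar{1}_{B_L}\,d\mu = \frac{\mu(B_L)}{\delta} \le \delta',$$

so $\bar{1}_{B_L}(x) < \delta$ outside a set of measure at most $\delta'$, as claimed.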

Applications

Taking $g_n(x) := \sum_{j=0}^{n-1} f(T^j x)$ recovers Birkhoff's pointwise ergodic theorem.
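In this case subadditivity holds with equality, since the sums are additive along the orbit:

$$g_{n+m}(x) = \sum_{j=0}^{n+m-1} f(T^j x) = \sum_{j=0}^{n-1} f(T^j x) + \sum_{j=0}^{m-1} f\bigl(T^j(T^n x)\bigr) = g_n(x) + g_m(T^n x).$$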

Taking all $g_n$ to be constant functions recovers Fekete's subadditive lemma.
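Concretely, if each $g_n$ is the constant $a_n$ and $a_{n+m} \le a_n + a_m$, the theorem yields the conclusion of Fekete's lemma:

$$\lim_{n\to\infty} \frac{a_n}{n} = \inf_{n\ge 1} \frac{a_n}{n} \in [-\infty,\infty).$$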

Kingman's subadditive ergodic theorem can be used to prove statements about Lyapunov exponents. It also has applications to percolation theory and to the longest increasing subsequence problem.
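As an illustration of the Lyapunov-exponent application (a numerical sketch under stated assumptions, not taken from the article): for i.i.d. random matrices $A_1, A_2, \dots$ and the shift $T$ on the matrix sequence, $g_n = \log\|A_n\cdots A_1\|$ is subadditive because the operator norm is submultiplicative, so $g_n/n$ converges a.s.; the limit is the top Lyapunov exponent. The $2\times 2$ Gaussian matrices below are an arbitrary choice for demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_max = 20000
    prod = np.eye(2)
    log_norm = 0.0        # running value of g_n = log||A_n ... A_1||
    estimates = []

    for n in range(1, n_max + 1):
        A = rng.normal(size=(2, 2))   # arbitrary illustrative matrix distribution
        prod = A @ prod
        # Pull the norm out of the running product to avoid overflow/underflow;
        # this keeps g_n = log_norm + log||prod|| with ||prod|| = 1.
        s = np.linalg.norm(prod, 2)   # spectral norm
        prod /= s
        log_norm += np.log(s)
        if n % 1000 == 0:
            estimates.append(log_norm / n)   # g_n / n

    # By Kingman (Furstenberg-Kesten), g_n / n converges a.s. to the top Lyapunov exponent.
    print([round(e, 4) for e in estimates[-3:]])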

Longest increasing subsequence

To study the longest increasing subsequence of a random permutation $\pi$, we generate it in an equivalent way: a uniformly random permutation of $\{1,\dots,n\}$ is obtained by sampling $n$ points uniformly in a square and reading off the relative order of their coordinates, and the longest increasing subsequence of the permutation corresponds to the largest subset of sampled points that is increasing in both coordinates.

Now consider the Poisson point process with density 1 on $[0,\infty)^2$, and define the random variables $M_k^*$ to be the length of the longest increasing subsequence (the largest chain of points increasing in both coordinates) in the square $[0,k)^2$. Define the measure-preserving transformation $T$ by shifting the plane by $(-1,-1)$, then chopping off the parts that have fallen out of $[0,\infty)^2$.

The process is superadditive: $M_{k+m}^* \ge M_k^* + M_m^*\circ T^k$, so Kingman's theorem applies to the subadditive process $-M_k^*$. To see the inequality, notice that the right-hand side constructs an increasing subsequence first in the square $[0,k)^2$, then in the square $[k,k+m)^2$, and finally concatenates them. This produces an increasing subsequence in $[0,k+m)^2$, but not necessarily the longest one.

Also, $T$ is ergodic, so by Kingman's theorem $M_k^*/k$ converges to a constant almost surely. Since the square $[0,k)^2$ contains roughly $n = k^2$ points of the process, this translates (after a de-Poissonization step) into $L_n^*/\sqrt{n}$ converging to a constant almost surely, where $L_n^*$ denotes the length of the longest increasing subsequence of a uniform random permutation of $\{1,\dots,n\}$.
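A quick numerical look at this scaling (an illustrative sketch, not part of the article): the script below computes the longest increasing subsequence of uniform random permutations by patience sorting and prints $L_n^*/\sqrt{n}$, which should settle near a constant (known to be $2$, although identifying the constant requires more than Kingman's theorem):

    import random
    from bisect import bisect_left

    def lis_length(seq):
        """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
        tails = []                 # tails[i] = smallest possible tail of an increasing subsequence of length i + 1
        for v in seq:
            i = bisect_left(tails, v)
            if i == len(tails):
                tails.append(v)
            else:
                tails[i] = v
        return len(tails)

    random.seed(0)
    for n in [1000, 10000, 100000]:
        perm = list(range(1, n + 1))
        random.shuffle(perm)       # uniform random permutation of 1..n
        print(n, lis_length(perm) / n ** 0.5)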

References

  1. Lalley, S. "Kingman's Subadditive Ergodic Theorem". Lecture notes, University of Chicago. http://galton.uchicago.edu/~lalley/Courses/Graz/Kingman.pdf
  2. Chen. "Subadditive ergodic theorems" (PDF). New York University.
  3. Steele, J. Michael (1989). "Kingman's subadditive ergodic theorem" (PDF). Annales de l'I.H.P. Probabilités et statistiques. 25 (1): 93–98. ISSN 1778-7017.
  4. Pitman. "Lecture 12: Subadditive ergodic theory". Lecture notes, UC Berkeley. http://www.stat.berkeley.edu/~pitman/s205s03/lecture12.pdf