Autocovariance


In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the process in question.

Auto-covariance of stochastic processes

Definition

With the usual notation $\operatorname{E}$ for the expectation operator, if the stochastic process $\left\{X_{t}\right\}$ has the mean function $\mu_{t} = \operatorname{E}[X_{t}]$, then the autocovariance is given by

$$\operatorname{K}_{XX}(t_{1},t_{2}) = \operatorname{cov}\left[X_{t_{1}}, X_{t_{2}}\right] = \operatorname{E}\left[(X_{t_{1}} - \mu_{t_{1}})(X_{t_{2}} - \mu_{t_{2}})\right] = \operatorname{E}\left[X_{t_{1}} X_{t_{2}}\right] - \mu_{t_{1}}\mu_{t_{2}} \qquad \text{(Eq.1)}$$

where $t_{1}$ and $t_{2}$ are two instances in time.
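
Below is a minimal numerical sketch of Eq.1 (an illustration, not part of the article): it estimates $\operatorname{K}_{XX}(t_1,t_2)$ from many independent realizations of a synthetic, nonstationary process; the process $X_t = tZ$ and all names are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonstationary process: X_t = t * Z with Z ~ N(0, 1),
# so the exact autocovariance is K_XX(t1, t2) = t1 * t2.
n_realizations, n_times = 100_000, 5
t = np.arange(1, n_times + 1)
X = t * rng.standard_normal((n_realizations, 1))  # one realization per row

t1, t2 = 2, 4
mu1 = X[:, t1 - 1].mean()  # estimate of mu_{t1}
mu2 = X[:, t2 - 1].mean()  # estimate of mu_{t2}
K = np.mean((X[:, t1 - 1] - mu1) * (X[:, t2 - 1] - mu2))  # Eq.1 as a sample average

print(K)  # close to the exact value t1 * t2 = 8
```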

Definition for weakly stationary process

If $\left\{X_{t}\right\}$ is a weakly stationary (WSS) process, then the following are true:

$$\mu_{t_{1}} = \mu_{t_{2}} \triangleq \mu \quad \text{for all } t_{1}, t_{2}$$

and

$$\operatorname{E}\left[|X_{t}|^{2}\right] < \infty \quad \text{for all } t$$

and

$$\operatorname{K}_{XX}(t_{1},t_{2}) = \operatorname{K}_{XX}(t_{2} - t_{1}, 0) \triangleq \operatorname{K}_{XX}(t_{2} - t_{1}) = \operatorname{K}_{XX}(\tau),$$

where $\tau = t_{2} - t_{1}$ is the lag time, or the amount of time by which the signal has been shifted.

The autocovariance function of a WSS process is therefore given by:

$$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_{t} - \mu_{t})(X_{t-\tau} - \mu_{t-\tau})\right] = \operatorname{E}\left[X_{t} X_{t-\tau}\right] - \mu_{t}\mu_{t-\tau} \qquad \text{(Eq.2)}$$

which is equivalent to

$$\operatorname{K}_{XX}(\tau) = \operatorname{E}\left[(X_{t+\tau} - \mu_{t+\tau})(X_{t} - \mu_{t})\right] = \operatorname{E}\left[X_{t+\tau} X_{t}\right] - \mu^{2}.$$
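
The following sketch (assumed code, not from the article) implements the usual sample estimate of $\operatorname{K}_{XX}(\tau)$ for a WSS series and checks it against the known autocovariance of an AR(1) process; the helper name sample_autocovariance is an invention of the sketch.

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Sample autocovariance K(0), ..., K(max_lag) of a 1-D series.

    Uses the 1/n divisor (rather than 1/(n - tau)), the common choice that
    keeps the estimated autocovariance sequence positive semidefinite.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()  # remove the constant mean mu of the WSS series
    return np.array([np.dot(xc[tau:], xc[:n - tau]) / n
                     for tau in range(max_lag + 1)])

# Check against an AR(1) process x_t = phi * x_{t-1} + e_t with unit-variance
# noise, for which K(tau) = phi**tau / (1 - phi**2).
rng = np.random.default_rng(1)
phi, n = 0.7, 200_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

print(sample_autocovariance(x, max_lag=3))        # empirical
print([phi**k / (1 - phi**2) for k in range(4)])  # theoretical
```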

Normalization

It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.

The definition of the normalized auto-correlation of a stochastic process is

$$\rho_{XX}(t_{1},t_{2}) = \frac{\operatorname{K}_{XX}(t_{1},t_{2})}{\sigma_{t_{1}}\sigma_{t_{2}}} = \frac{\operatorname{E}\left[(X_{t_{1}} - \mu_{t_{1}})(X_{t_{2}} - \mu_{t_{2}})\right]}{\sigma_{t_{1}}\sigma_{t_{2}}}.$$

If the function $\rho_{XX}$ is well-defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.

For a WSS process, the definition is

$$\rho_{XX}(\tau) = \frac{\operatorname{K}_{XX}(\tau)}{\sigma^{2}} = \frac{\operatorname{E}\left[(X_{t} - \mu)(X_{t+\tau} - \mu)\right]}{\sigma^{2}},$$

where

$$\operatorname{K}_{XX}(0) = \sigma^{2}.$$
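
As a brief illustration (assumed code, not from the article), the sketch below normalizes a sample autocovariance by $K(0) = \sigma^{2}$ for an MA(1) series, whose theoretical autocorrelation is $\rho(1) = \theta/(1+\theta^{2})$ and $\rho(k) = 0$ for $k > 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n = 0.5, 200_000
e = rng.standard_normal(n + 1)
x = e[1:] + theta * e[:-1]  # MA(1): rho(1) = theta / (1 + theta**2) = 0.4

xc = x - x.mean()
K = np.array([np.dot(xc[k:], xc[:n - k]) / n for k in range(3)])
rho = K / K[0]  # normalize by the variance K(0) = sigma**2
print(rho)      # approximately [1.0, 0.4, 0.0]
```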

Properties

Symmetry property

$$\operatorname{K}_{XX}(t_{1},t_{2}) = \overline{\operatorname{K}_{XX}(t_{2},t_{1})}$$

and, for a WSS process,

$$\operatorname{K}_{XX}(\tau) = \overline{\operatorname{K}_{XX}(-\tau)},$$

where the overline denotes complex conjugation (it can be dropped for real-valued processes).
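The sketch below (an illustration with assumed conventions, not article code) checks the symmetry property on a complex-valued series, computing the sample autocovariance with the convention $K(\tau) = \operatorname{E}[(X_{t+\tau}-\mu)\,\overline{(X_{t}-\mu)}]$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
z = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
x = z[1:] + (0.5 + 0.2j) * z[:-1]  # a complex MA(1) series with zero mean

def K(x, tau):
    """Sample autocovariance at a signed integer lag tau."""
    xc = x - x.mean()
    if tau >= 0:
        return np.mean(xc[tau:] * np.conj(xc[:len(xc) - tau]))
    return np.mean(xc[:tau] * np.conj(xc[-tau:]))

print(K(x, 1))            # K(1)
print(np.conj(K(x, -1)))  # conj(K(-1)): equal, as the symmetry property states
```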

Linear filtering

The autocovariance of a linearly filtered process $\left\{Y_{t}\right\}$

$$Y_{t} = \sum_{k=-\infty}^{\infty} a_{k} X_{t+k}$$

is

$$K_{YY}(\tau) = \sum_{k,l=-\infty}^{\infty} a_{k} a_{l} K_{XX}(\tau + k - l).$$
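
A short sketch of this identity (illustrative; the filter taps and white-noise input are assumptions) compares the theoretical $K_{YY}$ with an empirical estimate for a finite filter applied to unit-variance white noise, for which $K_{XX}(0) = 1$ and $K_{XX}(m) = 0$ otherwise:

```python
import numpy as np

a = np.array([0.5, 1.0, -0.25])  # finite filter taps a_0, a_1, a_2 (zero elsewhere)

def K_XX(m):
    return 1.0 if m == 0 else 0.0  # autocovariance of unit-variance white noise

def K_YY_theory(tau):
    # Double sum over the filter's finite support.
    return sum(a[k] * a[l] * K_XX(tau + k - l)
               for k in range(len(a)) for l in range(len(a)))

# Empirical check: y[t] = sum_k a[k] * x[t + k], realized as a reversed-kernel
# convolution.
rng = np.random.default_rng(4)
x = rng.standard_normal(500_000)
y = np.convolve(x, a[::-1], mode="valid")
yc = y - y.mean()
for tau in range(3):
    emp = np.dot(yc[tau:], yc[:len(yc) - tau]) / len(yc)
    print(tau, round(K_YY_theory(tau), 4), round(emp, 4))
```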

Calculating turbulent diffusivity

Autocovariance can be used to calculate turbulent diffusivity. Turbulence in a flow causes the velocity to fluctuate in space and time, so turbulence can be identified through the statistics of those fluctuations.

Reynolds decomposition is used to define the velocity fluctuations $u'(x,t)$ (assume we are now working with a 1D problem and that $U(x,t)$ is the velocity along the $x$ direction):

$$U(x,t) = \langle U(x,t) \rangle + u'(x,t),$$

where $U(x,t)$ is the true velocity and $\langle U(x,t) \rangle$ is the expected value of the velocity. If we choose a correct $\langle U(x,t) \rangle$, all of the stochastic components of the turbulent velocity will be included in $u'(x,t)$. To determine $\langle U(x,t) \rangle$, a set of velocity measurements assembled from points in space, moments in time, or repeated experiments is required.
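
A small sketch of this decomposition (the synthetic data and the use of ensemble averaging are assumptions of the example): the mean $\langle U \rangle$ is estimated by averaging repeated measurements at each position, and the fluctuation $u'$ is the residual.

```python
import numpy as np

rng = np.random.default_rng(5)
n_repeats, n_points = 500, 64
x = np.linspace(0.0, 1.0, n_points)

# Synthetic measurements: a smooth mean profile plus random fluctuations.
mean_profile = 1.0 + 0.5 * np.sin(2 * np.pi * x)
U = mean_profile + 0.2 * rng.standard_normal((n_repeats, n_points))

U_mean = U.mean(axis=0)  # <U(x)>: ensemble average over repeated experiments
u_prime = U - U_mean     # u'(x): stochastic fluctuation, one row per experiment

print(np.abs(u_prime.mean(axis=0)).max())  # ~0: fluctuations average out
```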

If we assume that the turbulent flux $\langle u'c' \rangle$ (where $c' = c - \langle c \rangle$, and $c$ is the concentration term) can be caused by a random walk, we can use Fick's laws of diffusion to express the turbulent flux term:

$$J_{\text{turbulence}_{x}} = \langle u'c' \rangle \approx D_{T_{x}} \frac{\partial \langle c \rangle}{\partial x}.$$

The velocity autocovariance is defined as

$$K_{XX} \equiv \langle u'(t_{0})\, u'(t_{0} + \tau) \rangle \quad \text{or} \quad K_{XX} \equiv \langle u'(x_{0})\, u'(x_{0} + r) \rangle,$$

where $\tau$ is the lag time and $r$ is the lag distance.

The turbulent diffusivity $D_{T_{x}}$ can be calculated using the following three methods (a numerical sketch of the first method appears after the list):

  1. If we have velocity data along a Lagrangian trajectory:
    $$D_{T_{x}} = \int_{\tau}^{\infty} u'(t_{0})\, u'(t_{0} + \tau)\, d\tau.$$
  2. If we have velocity data at one fixed (Eulerian) location:
    $$D_{T_{x}} \approx [0.3 \pm 0.1] \left[\frac{\langle u'u' \rangle + \langle u \rangle^{2}}{\langle u'u' \rangle}\right] \int_{\tau}^{\infty} u'(t_{0})\, u'(t_{0} + \tau)\, d\tau.$$
  3. If we have velocity information at two fixed (Eulerian) locations:
    $$D_{T_{x}} \approx [0.4 \pm 0.1] \left[\frac{1}{\langle u'u' \rangle}\right] \int_{r}^{\infty} u'(x_{0})\, u'(x_{0} + r)\, dr,$$
    where $r$ is the distance separating the two fixed locations.
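
A numerical sketch of the first method (illustrative only; the synthetic AR(1)/Ornstein-Uhlenbeck velocity model, the finite integration cutoff, and integration from lag 0 are all assumptions of the sketch, not the article's prescription):

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n = 0.01, 200_000
T_L = 0.5                         # assumed correlation time (s)
sigma_u = 0.3                     # assumed turbulent velocity scale (m/s)
phi = np.exp(-dt / T_L)
e = rng.standard_normal(n) * sigma_u * np.sqrt(1 - phi**2)
u = np.zeros(n)
for t in range(1, n):
    u[t] = phi * u[t - 1] + e[t]  # synthetic u'(t) along a trajectory

# Sample velocity autocovariance out to ~5 correlation times.
max_lag = int(5 * T_L / dt)
uc = u - u.mean()
K = np.array([np.dot(uc[k:], uc[:n - k]) / n for k in range(max_lag + 1)])

# D ~ integral of <u'(t0) u'(t0 + tau)> d tau, via the trapezoidal rule.
D = dt * (0.5 * K[0] + K[1:-1].sum() + 0.5 * K[-1])
print(D, sigma_u**2 * T_L)        # estimate vs. the model's exact value 0.045
```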

Auto-covariance of random vectors

Main article: Auto-covariance matrix
