Antithetic variates

In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Because the error of a Monte Carlo estimate decreases only as one over the square root of the number of sample paths, a very large number of sample paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.

Underlying principle

The antithetic variates technique consists, for every sample path obtained, in also taking its antithetic path: given a path $\{\varepsilon_1, \dots, \varepsilon_M\}$, one also takes $\{-\varepsilon_1, \dots, -\varepsilon_M\}$. The advantage of this technique is twofold: it reduces the number of normal samples that must be drawn to generate $N$ paths, and it reduces the variance of the sample paths, improving the precision.
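As a minimal illustration (not part of the original article), the following Python sketch pairs each simulated path of Gaussian increments with its antithetic counterpart; the path length M, the number of pairs N, and the random-walk construction of the paths are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    M = 252    # increments per path (illustrative choice)
    N = 1000   # number of antithetic pairs, i.e. 2*N paths in total

    # Draw the normal increments once for N paths of length M ...
    eps = rng.standard_normal((N, M))
    # ... and reuse them, negated, as the antithetic increments.
    eps_anti = -eps

    # Example construction: random-walk (Brownian-style) paths from the increments.
    paths = np.cumsum(eps, axis=1)
    antithetic_paths = np.cumsum(eps_anti, axis=1)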

Suppose that we would like to estimate

$\theta = \mathrm{E}(h(X)) = \mathrm{E}(Y).$

For that purpose, we have generated two samples

$Y_1$ and $Y_2$.

An unbiased estimate of $\theta$ is given by

$\hat{\theta} = \frac{Y_1 + Y_2}{2}.$

Its variance is

$\operatorname{Var}(\hat{\theta}) = \frac{\operatorname{Var}(Y_1) + \operatorname{Var}(Y_2) + 2\operatorname{Cov}(Y_1, Y_2)}{4},$

so the variance is reduced if $\operatorname{Cov}(Y_1, Y_2)$ is negative.
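To make the comparison with independent sampling explicit, suppose in addition (an assumption not stated above, introduced only for this illustration) that $\operatorname{Var}(Y_1) = \operatorname{Var}(Y_2) = \sigma^2$ and write $\rho = \operatorname{Corr}(Y_1, Y_2)$. Then

$\operatorname{Var}(\hat{\theta}) = \frac{2\sigma^2 + 2\rho\sigma^2}{4} = \frac{\sigma^2(1 + \rho)}{2},$

which is smaller than the value $\sigma^2/2$ obtained from two independent samples exactly when $\rho < 0$.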

Example 1

If the law of the variable $X$ follows a uniform distribution on $[0, 1]$, the first sample will be $u_1, \ldots, u_n$, where, for any given $i$, $u_i$ is obtained from $U(0, 1)$. The second sample is built from $u'_1, \ldots, u'_n$, where, for any given $i$, $u'_i = 1 - u_i$. If the $u_i$ are uniform on $[0, 1]$, so are the $u'_i$. Furthermore, the covariance between $u_i$ and $u'_i$ is negative, which allows for variance reduction.
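A short Python sketch (an illustration assuming NumPy; the sample size n is arbitrary) checks this numerically: the empirical covariance of $u_i$ and $u'_i = 1 - u_i$ is close to $-\operatorname{Var}(U) = -1/12$.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    u = rng.uniform(0.0, 1.0, size=n)   # first sample, drawn from U(0, 1)
    u_prime = 1.0 - u                   # antithetic sample, also U(0, 1)

    # Empirical covariance, close to -Var(U) = -1/12 ≈ -0.0833.
    print(np.cov(u, u_prime)[0, 1])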

Example 2: integral calculation

We would like to estimate

$I = \int_0^1 \frac{1}{1+x}\,\mathrm{d}x.$

The exact result is $I = \ln 2 \approx 0.69314718$. This integral can be seen as the expected value of $f(U)$, where

$f(x) = \frac{1}{1+x}$

and $U$ follows a uniform distribution on $[0, 1]$.

The following table compares the classical Monte Carlo estimate (sample size 2n, where n = 1500) with the antithetic variates estimate (sample size n, completed with the transformed sample $1 - u_i$):

                        Estimate    Standard error
Classical estimate      0.69365     0.00255
Antithetic variates     0.69399     0.00063

Using the antithetic variates method to estimate the result yields a markedly smaller standard error, i.e. a substantial variance reduction.
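A small Python sketch of this comparison (an illustration, not the code behind the table above; the seed and the use of NumPy are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        return 1.0 / (1.0 + x)

    n = 1500  # as in the table above

    # Classical Monte Carlo: 2n independent draws from U(0, 1).
    y_classical = f(rng.uniform(0.0, 1.0, size=2 * n))
    classical_est = y_classical.mean()
    classical_se = y_classical.std(ddof=1) / np.sqrt(2 * n)

    # Antithetic variates: n draws, each completed with its antithetic value 1 - u.
    u = rng.uniform(0.0, 1.0, size=n)
    pair_means = 0.5 * (f(u) + f(1.0 - u))
    antithetic_est = pair_means.mean()
    antithetic_se = pair_means.std(ddof=1) / np.sqrt(n)

    print(classical_est, classical_se)    # close to ln 2, larger standard error
    print(antithetic_est, antithetic_se)  # close to ln 2, smaller standard error

Each antithetic pair is averaged as $\tfrac{1}{2}(f(u_i) + f(1 - u_i))$, so the estimator remains unbiased while the negative covariance between the two terms lowers its variance.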
