Importance sampling

Importance sampling is a general technique for estimating the properties of a particular distribution, using only samples generated from a different distribution than the distribution of interest. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.

Basic theory

More formally, let X be a random variable in S. Let p be a probability measure on S, and f some function on S. Then the expectation of f under p can be written as

\mathbf{E}[f(X) \mid p] \equiv \int f(x)\,p(x)\,dx.

If we have random samples x_1, \ldots, x_n generated according to p, then an empirical estimate of p is

P_n(x) = \frac{1}{n} \sum_{i=1}^{n} \delta_{x_i}(x).

In that case, we can easily obtain the Monte Carlo empirical estimate of \mathbf{E}[f(X) \mid p]:

\hat{\mathbf{E}}_n[f] = \int f(x)\,P_n(x)\,dx = \frac{1}{n} \sum_{i=1}^{n} f(x_i).
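As a minimal sketch of this plain Monte Carlo estimate, assume for illustration that p is a standard normal distribution and f(x) = x^2 (neither choice is prescribed by the text):

```python
# Plain Monte Carlo estimate of E[f(X) | p].
# Illustrative assumptions: p = N(0, 1) and f(x) = x**2, so the true value is 1.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)   # samples x_1, ..., x_n drawn from p
estimate = np.mean(x**2)     # (1/n) * sum_i f(x_i)
print(estimate)              # close to 1
```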

Unfortunately, when the samples are generated from a different distribution than the one that we are interested in, we can no longer use this straightforward estimate. However, we may use the importance sampling technique, which consists of placing a different importance on each sample, depending on how likely it was to have been generated by the distribution that we are interested in, p, rather than the actual sampling distribution, q.

More formally, consider another probability measure q with the same support as p. From the definition of the expectation given above, we have

\mathbf{E}[f(X) \mid p] = \frac{\int f(x)\,w(x)\,q(x)\,dx}{\int w(x)\,q(x)\,dx},

where w(x) = p(x)/q(x) is known as the importance weight, and the distribution q is frequently referred to as the sampling or proposal distribution. Then, if we have random samples x_1, \ldots, x_n generated according to q, a Monte Carlo estimate of \mathbf{E}[f(X) \mid p] follows from the above equation by viewing the problem as that of estimating the expectations \mathbf{E}[f(X)\,w(X) \mid q] and \mathbf{E}[w(X) \mid q]:

\hat{\mathbf{E}}_{n,q}[f] = \frac{\frac{1}{n} \sum_{i=1}^{n} f(x_i)\,w(x_i)}{\frac{1}{n} \sum_{j=1}^{n} w(x_j)} = \sum_{i=1}^{n} f(x_i)\,v_i,

where v_i = w(x_i) / \sum_{j=1}^{n} w(x_j) are the normalised importance weights.
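A minimal sketch of this self-normalised estimator, assuming for illustration a target p = N(0, 1), a proposal q = N(0, 2), and f(x) = x^2 (none of these choices come from the text):

```python
# Self-normalised importance sampling estimate of E[f(X) | p] from samples of q.
# Illustrative assumptions: p = N(0, 1), q = N(0, 2), f(x) = x**2 (true value 1).
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 2.0, size=n)              # samples x_1, ..., x_n from q
w = normal_pdf(x) / normal_pdf(x, sigma=2.0)  # importance weights w(x_i) = p(x_i)/q(x_i)
v = w / np.sum(w)                             # normalised importance weights v_i
estimate = np.sum(x**2 * v)                   # sum_i f(x_i) v_i
print(estimate)                               # close to 1
```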

The technique is completely general, and the above analysis can be repeated essentially unchanged for other choices of p, for example when it represents a conditional distribution. Note that when p is the uniform distribution, we are just estimating the (scaled) integral of f over S, so the method can also be used for estimating simple integrals.

There are two main applications of importance sampling methods, which are naturally interrelated. While the aim of both applications is to estimate statistics of random variables, the field of probabilistic inference focuses more on the estimation of p or related statistics, while the field of simulation focuses more on the choice of the distribution q. Nevertheless, the basic theory and tools are identical.

Application to probabilistic inference

Such methods are frequently used to estimate posterior densities or expectations in state and/or parameter estimation problems in probabilistic models that are too hard to treat analytically.

Application to simulation

Importance sampling (IS) is a variance reduction technique that can be used in the Monte Carlo method. The idea behind IS is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the basic methodology in IS is to choose a distribution which "encourages" the important values. This use of "biased" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new IS estimator is unbiased. The weight is given by the likelihood ratio, that is, the Radon-Nikodym derivative of the true underlying distribution with respect to the biased simulation distribution.

The fundamental issue in implementing IS simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the "art" of IS. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling.

Mathematical approach

Consider estimating by simulation the probability p_t of an event \{X \geq t\}, where X is a random variable with distribution F and probability density function f(x) = F'(x), where the prime denotes the derivative. A K-length independent and identically distributed (i.i.d.) sequence X_i is generated from the distribution F, and the number k_t of random variables that lie above the threshold t is counted. The random variable k_t is characterized by the binomial distribution

P(k_t = k) = \binom{K}{k}\,p_t^{k}\,(1 - p_t)^{K-k}, \qquad k = 0, 1, \ldots, K.
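As a minimal sketch of this conventional counting estimate, assuming for illustration that X is standard normal and t = 3 (choices not prescribed by the text):

```python
# Conventional Monte Carlo estimate p_hat = k_t / K of the tail probability p_t.
# Illustrative assumptions: X ~ N(0, 1) and threshold t = 3.0.
import numpy as np

rng = np.random.default_rng(2)
t, K = 3.0, 100_000
x = rng.standard_normal(K)       # i.i.d. sequence X_1, ..., X_K drawn from F
k_t = np.count_nonzero(x >= t)   # number of samples at or above the threshold
print(k_t / K)                   # P(X >= 3) is about 1.35e-3
```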

Importance sampling is concerned with the determination and use of an alternate density function f_* (for X), usually referred to as a biasing density, for the simulation experiment. This density allows the event \{X \geq t\} to occur more frequently, so the sequence length K gets smaller for a given estimator variance. Alternatively, for a given K, use of the biasing density results in a variance smaller than that of the conventional Monte Carlo estimate. From the definition of p_t, we can introduce f_* as follows:

p_t = \mathbf{E}[1(X \geq t)]
    = \int 1(x \geq t)\,\frac{f(x)}{f_*(x)}\,f_*(x)\,dx
    = \mathbf{E}_*[1(X \geq t)\,W(X)]

where

W(\cdot) \equiv \frac{f(\cdot)}{f_*(\cdot)}

is a likelihood ratio and is referred to as the weighting function. The last equality in the above equation motivates the estimator

\hat{p}_t = \frac{1}{K} \sum_{i=1}^{K} 1(X_i \geq t)\,W(X_i), \qquad X_i \sim f_*.

This is the IS estimator of p_t and is unbiased. That is, the estimation procedure is to generate i.i.d. samples from f_*, and for each sample which exceeds t, the estimate is incremented by the weight W evaluated at the sample value. The results are averaged over K trials. The variance of the IS estimator is easily shown to be

\mathrm{var}_*\,\hat{p}_t = \frac{1}{K}\,\mathrm{var}_*[1(X \geq t)\,W(X)]
    = \frac{1}{K} \Big[ \mathbf{E}_*[1(X \geq t)^2\,W^2(X)] - p_t^2 \Big]
    = \frac{1}{K} \Big[ \mathbf{E}[1(X \geq t)^2\,W(X)] - p_t^2 \Big]
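A minimal sketch of the IS estimator above, assuming for illustration that X ~ N(0, 1), t = 4, and a biasing density f_* = N(t, 1) centred at the threshold (the biasing choice is illustrative, not prescribed here):

```python
# IS estimate of the tail probability p_t = P(X >= t) using a biasing density.
# Illustrative assumptions: f = N(0, 1), f_* = N(t, 1), t = 4.0.
import numpy as np

def normal_pdf(x, mu=0.0):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(3)
t, K = 4.0, 100_000
x = rng.normal(t, 1.0, size=K)          # i.i.d. samples X_i ~ f_*
w = normal_pdf(x) / normal_pdf(x, t)    # weighting function W(X_i) = f(X_i)/f_*(X_i)
p_hat = np.mean((x >= t) * w)           # unbiased IS estimate of p_t
print(p_hat)                            # P(X >= 4) is about 3.17e-5
```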

The IS problem then focuses on finding a biasing density f_* such that the variance of the IS estimator is less than the variance of the general Monte Carlo estimate. A biasing density function that minimizes the variance, and under certain conditions reduces it to zero, is called an optimal biasing density function.

Conventional biasing methods

Although there are many kinds of biasing methods, the following two methods are most widely used in the applications of IS.

Scaling

Shifting probability mass into the event region \{X \geq t\} by positive scaling of the random variable X with a number greater than unity has the effect of increasing the variance (and the mean) of the density function. This results in a heavier tail of the density, leading to an increase in the event probability. Scaling is probably one of the earliest biasing methods known and has been used extensively in practice. It is simple to implement and usually provides conservative simulation gains as compared to other methods.

In IS by scaling, the simulation density is chosen as the density function of the scaled random variable aX, where usually a > 1 for tail probability estimation. By transformation,

f_*(x) = \frac{1}{a}\,f\!\left(\frac{x}{a}\right)

and the weighting function is

W(x) = a\,\frac{f(x)}{f(x/a)}
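A minimal sketch of biasing by scaling, assuming for illustration a standard normal f, scale a = 2, and threshold t = 3 (all concrete values are illustrative):

```python
# IS by scaling: sample from f_*(x) = (1/a) f(x/a) and weight by W(x) = a f(x)/f(x/a).
# Illustrative assumptions: f = N(0, 1), a = 2.0, t = 3.0.
import numpy as np

def f(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(4)
t, K, a = 3.0, 100_000, 2.0
x = a * rng.standard_normal(K)   # samples of aX, i.e. draws from f_*
w = a * f(x) / f(x / a)          # weighting function W(x)
p_hat = np.mean((x >= t) * w)    # IS estimate of p_t
print(p_hat)                     # P(X >= 3) is about 1.35e-3
```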

While scaling shifts probability mass into the desired event region, it also pushes mass into the complementary region X < t, which is undesirable. If X is a sum of n random variables, the spreading of mass takes place in an n-dimensional space. The consequence of this is a decreasing IS gain for increasing n, and is called the dimensionality effect.

Translation

Another simple and effective biasing technique employs translation of the density function (and hence random variable) to place much of its probability mass in the rare event region. Translation does not suffer from a dimensionality effect and has been successfully used in several applications relating to simulation of digital communication systems. It often provides better simulation gains than scaling. In biasing by translation, the simulation density is given by

f_*(x) = f(x - c), \qquad c > 0,

where c is the amount of shift and is to be chosen to minimize the variance of the IS estimator.
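A minimal sketch of biasing by translation, comparing a few candidate shifts c empirically (a standard normal f, threshold t = 4, and the grid of shifts are all illustrative assumptions):

```python
# IS by translation: sample from f_*(x) = f(x - c) and weight by W(x) = f(x)/f(x - c).
# Illustrative assumptions: f = N(0, 1), t = 4.0, shifts c in {2, 3, 4, 5}.
import numpy as np

def f(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(5)
t, K = 4.0, 200_000
for c in [2.0, 3.0, 4.0, 5.0]:
    x = rng.standard_normal(K) + c     # samples from f_*(x) = f(x - c)
    z = (x >= t) * f(x) / f(x - c)     # per-sample IS contributions
    print(c, z.mean(), z.var() / K)    # estimate of p_t and estimator variance
```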

Effects of system complexity

The fundamental problem with IS is that designing good biased distributions becomes more complicated as the system complexity increases. Systems with long memory are considered complex here, since complex processing of a small number of inputs is much easier to handle. This dimensionality or memory can cause problems in several ways.

In principle, the IS ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is to break a simulation down into several smaller, more sharply defined subproblems. IS strategies are then used to target each of the simpler subproblems. Examples of techniques for breaking the simulation down are conditioning and error-event simulation (EES), and regenerative simulation.

Evaluation of IS

In order to identify successful IS techniques, it is useful to be able to quantify the run-time savings due to the use of the IS approach. The performance measure commonly used is \sigma^2_{MC} / \sigma^2_{IS}, which can be interpreted as the speed-up factor by which the IS estimator achieves the same precision as the MC estimator. This must be computed empirically, since the estimator variances are unlikely to be derivable analytically when the mean itself is intractable. Other useful concepts in quantifying an IS estimator are the variance bounds and the notion of asymptotic efficiency.
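A minimal sketch of estimating this speed-up factor empirically, reusing the translated-normal biasing from above (all concrete choices are illustrative assumptions):

```python
# Empirical estimate of the speed-up factor sigma^2_MC / sigma^2_IS.
# Illustrative assumptions: f = N(0, 1), t = 3.0, biasing density f_*(x) = f(x - t).
import numpy as np

def f(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(6)
t, K = 3.0, 1_000_000

# Plain Monte Carlo: per-sample contributions are the indicators 1(X >= t).
z_mc = (rng.standard_normal(K) >= t).astype(float)

# Importance sampling with the translated density f_*(x) = f(x - t).
x = rng.standard_normal(K) + t
z_is = (x >= t) * f(x) / f(x - t)

# The factor 1/K in each estimator variance cancels in the ratio.
print(z_mc.var() / z_is.var())   # empirical speed-up factor
```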

Variance cost function

Variance is not the only possible cost function for a simulation, and other cost functions, such as the mean absolute deviation, are used in various statistical applications. Nevertheless, the variance is the primary cost function addressed in the literature, probably due to the use of variances in confidence intervals and in the performance measure \sigma^2_{MC} / \sigma^2_{IS}.

An associated issue is that the ratio \sigma^2_{MC} / \sigma^2_{IS} overestimates the run-time savings due to IS, since it does not include the extra computing time required to compute the weight function. Hence, the net run-time improvement is often evaluated by other means. Perhaps a more serious overhead to IS is the time taken to devise and program the technique and to analytically derive the desired weight function.

References

* Rajan Srinivasan, Importance Sampling: Applications in Communications and Detection, Springer-Verlag, Berlin, 2002.
* B. D. Ripley, Stochastic Simulation, Wiley & Sons, 1987.
* A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice, Springer, 2001. ISBN 978-0387951461.
* James Antonio Bucklew, Introduction to Rare Event Simulation, Springer-Verlag, New York, 2004.
* P. J. Smith, M. Shafi, and H. Gao, "Quick simulation: A review of importance sampling techniques in communication systems," IEEE J. Select. Areas Commun., vol. 15, pp. 597-613, May 1997.
* M. Ferrari and S. Bellini, "Importance sampling simulation of turbo product codes," ICC 2001, IEEE International Conference on Communications, vol. 9, pp. 2773-2777, June 2001.
* Tommy Oberg, Modulation, Detection, and Coding, John Wiley & Sons, New York, 2001.
* Arouna, "Adaptative Monte Carlo method, a variance reduction technique," Monte Carlo Methods and Their Applications, 2004.
