Auxiliary particle filter


The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to address some deficiencies of the sequential importance resampling (SIR) algorithm when dealing with tailed observation densities.

Motivation

Particle filters approximate a continuous random variable by $M$ particles with discrete probability masses $\pi_t$, say $1/M$ for a uniform distribution. The randomly sampled particles can be used to approximate the probability density function of the continuous random variable as $M \rightarrow \infty$.

The empirical prediction density is produced as the weighted summation of these particles:

$$\widehat{f}(\alpha_{t+1}|Y_t) = \sum_{j=1}^{M} f(\alpha_{t+1}|\alpha_t^j)\,\pi_t^j,$$

and we can view it as the "prior" density. Note that the particles are assumed to have the same weight $\pi_t^j = \tfrac{1}{M}$.
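
As a quick illustration, the empirical prediction density is a mixture with one transition-density component per particle. Below is a minimal sketch that evaluates it, assuming (hypothetically, not from the source) a Gaussian random-walk transition; all names and parameter values are illustrative.

```python
import numpy as np

def gaussian_pdf(x, mean, sigma):
    # Density of N(mean, sigma^2) evaluated at x.
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def empirical_prediction_density(a_next, particles, sigma=1.0):
    # f_hat(a_{t+1} | Y_t) = sum_j f(a_{t+1} | alpha_t^j) * (1/M),
    # here with the assumed transition alpha_{t+1} = alpha_t + N(0, sigma^2).
    return np.mean(gaussian_pdf(a_next, particles, sigma))

particles = np.random.default_rng(0).normal(0.0, 1.0, size=1000)  # M = 1000
print(empirical_prediction_density(0.5, particles))
```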

Combining the prior density $\widehat{f}(\alpha_{t+1}|Y_t)$ and the likelihood $f(y_{t+1}|\alpha_{t+1})$, the empirical filtering density can be produced as:

$$\widehat{f}(\alpha_{t+1}|Y_{t+1}) = \frac{f(y_{t+1}|\alpha_{t+1})\,\widehat{f}(\alpha_{t+1}|Y_t)}{f(y_{t+1}|Y_t)} \propto f(y_{t+1}|\alpha_{t+1}) \sum_{j=1}^{M} f(\alpha_{t+1}|\alpha_t^j)\,\pi_t^j,$$

where $f(y_{t+1}|Y_t) = \int f(y_{t+1}|\alpha_{t+1})\,dF(\alpha_{t+1}|Y_t)$.

On the other hand, the true filtering density that we want to estimate is

$$f(\alpha_{t+1}|Y_{t+1}) = \frac{f(y_{t+1}|\alpha_{t+1})\,f(\alpha_{t+1}|Y_t)}{f(y_{t+1}|Y_t)}.$$

The prior density $\widehat{f}(\alpha_{t+1}|Y_t)$ can be used to approximate the true filtering density $f(\alpha_{t+1}|Y_{t+1})$:

  • The particle filter draws $R$ samples from the prior density $\widehat{f}(\alpha_{t+1}|Y_t)$. Each sample is drawn with equal probability.
  • Assign each sample the weight $\pi_j = \frac{\omega_j}{\sum_{i=1}^{R}\omega_i}$, where $\omega_j = f(y|\alpha^j)$. The weights represent the likelihood function $f(y_{t+1}|\alpha_{t+1})$.
  • As $R \rightarrow \infty$, the weighted samples converge to the desired true filtering density.
  • The $R$ samples are resampled to $M$ particles with the weights $\pi_j$, as in the sketch below.
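
Below is a minimal sketch of one SIR step under an assumed linear-Gaussian state-space model; the model, parameter values, and all names are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

# Assumed model (illustrative): alpha_{t+1} = phi * alpha_t + N(0, sigma_a^2),
# observation y_{t+1} = alpha_{t+1} + N(0, sigma_y^2).
rng = np.random.default_rng(0)
phi, sigma_a, sigma_y = 0.9, 1.0, 0.5
M, R = 500, 2000

def sir_step(particles, y_next):
    # Draw R samples from the prior density f_hat(alpha_{t+1} | Y_t):
    # choose a parent uniformly (equal weights 1/M), then propagate it.
    parents = rng.choice(particles, size=R)
    proposals = phi * parents + rng.normal(0.0, sigma_a, size=R)
    # Weight each sample by the likelihood f(y_{t+1} | alpha_{t+1}).
    w = np.exp(-0.5 * ((y_next - proposals) / sigma_y) ** 2)
    w /= w.sum()
    # Resample the R weighted samples down to M equally weighted particles.
    return rng.choice(proposals, size=M, p=w)

particles = rng.normal(0.0, 1.0, size=M)
particles = sir_step(particles, y_next=1.2)
```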

The main weakness of this particle filter is the following:

  • If the weights $\{\omega_j\}$ have a large variance, the sample size $R$ must be very large for the samples to approximate the empirical filtering density well. In other words, when the weights are widely dispersed, the SIR method is imprecise and hard to adapt (the effective-sample-size sketch below makes this concrete).
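
The effective sample size, a standard diagnostic not introduced in this article, quantifies the problem: it collapses as the normalized weights become uneven.

```python
import numpy as np

def effective_sample_size(weights):
    # ESS = 1 / sum_j pi_j^2 for normalized weights pi_j; it equals the
    # sample count for flat weights and approaches 1 when one weight dominates.
    pi = weights / weights.sum()
    return 1.0 / np.sum(pi ** 2)

rng = np.random.default_rng(1)
even = np.ones(1000)
skewed = np.exp(rng.normal(0.0, 5.0, size=1000))  # high-variance weights
print(effective_sample_size(even))    # 1000.0
print(effective_sample_size(skewed))  # far smaller
```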

Therefore, the auxiliary particle filter was proposed to solve this problem.

Auxiliary particle filter

Auxiliary variable

Compared with the empirical filtering density, which satisfies $\widehat{f}(\alpha_{t+1}|Y_{t+1}) \propto f(y_{t+1}|\alpha_{t+1}) \sum_{j=1}^{M} f(\alpha_{t+1}|\alpha_t^j)\,\pi_t^j$,

we now define $\widehat{f}(\alpha_{t+1},k|Y_{t+1}) \propto f(y_{t+1}|\alpha_{t+1})\,f(\alpha_{t+1}|\alpha_t^k)\,\pi^k$, where $k = 1, \dots, M$.

Since $\widehat{f}(\alpha_{t+1}|Y_{t+1})$ is formed by a summation over $M$ particles, the auxiliary variable $k$ indexes one specific particle. With the aid of $k$, we can form a set of samples with distribution $g(\alpha_{t+1},k|Y_{t+1})$. We then draw samples from $g(\alpha_{t+1},k|Y_{t+1})$ instead of directly from $\widehat{f}(\alpha_{t+1}|Y_{t+1})$; in other words, the samples are drawn from $\widehat{f}(\alpha_{t+1}|Y_{t+1})$ with different probabilities. The samples are ultimately used to approximate $f(\alpha_{t+1}|Y_{t+1})$.

Take the SIR method as an example:

  • The particle filter draws $R$ samples from $g(\alpha_{t+1},k|Y_{t+1})$.
  • Assign each sample the weight $\pi_j = \frac{\omega_j}{\sum_{i=1}^{R}\omega_i}$, where $\omega_j = \frac{f(y_{t+1}|\alpha_{t+1}^j)\,f(\alpha_{t+1}^j|\alpha_t^{k^j})}{g(\alpha_{t+1}^j,k^j|Y_{t+1})}$.
  • Because $g$ takes both $y_{t+1}$ and $\alpha_t^k$ into account, the weights come out more even.
  • As before, the $R$ samples are resampled to $M$ particles with the weights $\pi_j$.

The original particle filter draws samples from the prior density, while the auxiliary particle filter draws from the joint distribution of the prior density and the likelihood. In other words, the auxiliary particle filter avoids generating particles in regions of low likelihood. As a result, the samples approximate $f(\alpha_{t+1}|Y_{t+1})$ more precisely.

Selection of the auxiliary variable

The selection of the auxiliary variable affects $g(\alpha_{t+1},k|Y_{t+1})$ and controls the distribution of the samples. A possible choice of $g(\alpha_{t+1},k|Y_{t+1})$ is

$$g(\alpha_{t+1},k|Y_{t+1}) \propto f(y_{t+1}|\mu_{t+1}^k)\,f(\alpha_{t+1}|\alpha_t^k)\,\pi^k,$$

where $k = 1, \dots, M$ and $\mu_{t+1}^k$ is the mean (or some other likely value, such as a sample or the mode) associated with the transition density $f(\alpha_{t+1}|\alpha_t^k)$.

We sample from $g(\alpha_{t+1},k|Y_{t+1})$ to approximate $f(\alpha_{t+1}|Y_{t+1})$ by the following procedure:

  • First, we assign probabilities to the indexes $k$ of the transition densities $f(\alpha_{t+1}|\alpha_t^k)$. We call these probabilities the first-stage weights $\lambda_k$, which are proportional to $g(k|Y_{t+1}) \propto \pi^k f(y_{t+1}|\mu_{t+1}^k)$.
  • Then, we draw $R$ samples from $f(\alpha_{t+1}|\alpha_t^k)$, with the index $k$ of each sample drawn according to these weights. By doing so, we are actually drawing the samples from $g(\alpha_{t+1},k|Y_{t+1})$.
  • Next, we assign each of the $R$ samples the second-stage weight $\pi_j = \frac{\omega_j}{\sum_{i=1}^{R}\omega_i}$, where $\omega_j = \frac{f(y_{t+1}|\alpha_{t+1}^j)}{f(y_{t+1}|\mu_{t+1}^{k^j})}$. The weights compensate for the effect of the reference point $\mu_{t+1}^{k}$ used in the first stage.
  • Finally, the $R$ samples are resampled to $M$ particles with the weights $\pi_j$.

Following this procedure, we draw $R$ samples from $g(\alpha_{t+1},k|Y_{t+1})$. Since $g(\alpha_{t+1},k|Y_{t+1})$ concentrates on particles whose reference points $\mu_{t+1}^k$ have high conditional likelihood, the sampling procedure is more efficient and the value $R$ can be reduced, as in the sketch below.
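
The sketch below implements one step of this procedure for the same assumed linear-Gaussian model as in the SIR sketch above, with $\mu_{t+1}^k = \phi\,\alpha_t^k$ as the conditional mean of the transition; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_a, sigma_y = 0.9, 1.0, 0.5   # assumed model parameters
M, R = 500, 1000

def loglik(y, a):
    # Log-likelihood of y given state a, up to an additive constant.
    return -0.5 * ((y - a) / sigma_y) ** 2

def apf_step(particles, y_next):
    mu = phi * particles                      # reference points mu_{t+1}^k
    # First-stage weights lambda_k ∝ pi^k * f(y_{t+1} | mu_{t+1}^k),
    # assuming equal weights pi^k = 1/M after the previous resampling.
    lam = np.exp(loglik(y_next, mu))
    lam /= lam.sum()
    # Draw R parent indexes k with probabilities lambda_k, then propagate.
    k = rng.choice(M, size=R, p=lam)
    proposals = mu[k] + rng.normal(0.0, sigma_a, size=R)
    # Second-stage weights omega_j ∝ f(y|alpha^j) / f(y|mu^{k_j}) compensate
    # for having used mu instead of the actual sample in the first stage.
    w = np.exp(loglik(y_next, proposals) - loglik(y_next, mu[k]))
    w /= w.sum()
    # Resample the R samples down to M equally weighted particles.
    return rng.choice(proposals, size=M, p=w)

particles = rng.normal(0.0, 1.0, size=M)
particles = apf_step(particles, y_next=1.2)
```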

Another point of view

Assume that the filtered posterior is described by the following $M$ weighted samples:

$$p(x_t|z_{1:t}) \approx \sum_{i=1}^{M} \omega_t^{(i)}\,\delta\!\left(x_t - x_t^{(i)}\right).$$

Then, each step in the algorithm consists of first drawing a sample of the particle index $k$ which will be propagated from $t-1$ into the new step $t$. These indexes are auxiliary variables only used as an intermediary step, hence the name of the algorithm. The indexes are drawn according to the likelihood of some reference point $\mu_t^{(i)}$, which in some way is related to the transition model $x_t|x_{t-1}$ (for example, the mean, a sample, etc.):

$$k^{(i)} \sim P(i=k|z_t) \propto \omega_t^{(i)}\,p(z_t|\mu_t^{(i)})$$

This is repeated for $i = 1, 2, \dots, M$, and using these indexes we can now draw the conditional samples:

$$x_t^{(i)} \sim p(x|x_{t-1}^{k^{(i)}}).$$

Finally, the weights are updated to account for the mismatch between the likelihood at the actual sample and at the predicted point $\mu_t^{k^{(i)}}$:

$$\omega_t^{(i)} \propto \frac{p(z_t|x_t^{(i)})}{p(z_t|\mu_t^{k^{(i)}})}.$$
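
The same update can be written generically, with the transition sampler, reference point, and likelihood supplied by the caller. This is a sketch restating the equations above under illustrative names, not a canonical implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def apf_update(x, w, z, sample_transition, reference_point, likelihood):
    mu = reference_point(x)                  # mu_t^{(i)} for each particle
    lam = w * likelihood(z, mu)              # P(i = k | z_t), unnormalized
    k = rng.choice(len(x), size=len(x), p=lam / lam.sum())
    x_new = sample_transition(x[k])          # x_t^{(i)} ~ p(x | x_{t-1}^{k^(i)})
    # Reweight by the likelihood mismatch between sample and reference point.
    w_new = likelihood(z, x_new) / likelihood(z, mu[k])
    return x_new, w_new / w_new.sum()

# Example wiring for a linear-Gaussian model (illustrative values):
phi, sigma_a, sigma_y = 0.9, 1.0, 0.5
x = rng.normal(size=500)
w = np.full(500, 1.0 / 500)
x, w = apf_update(
    x, w, z=1.2,
    sample_transition=lambda xs: phi * xs + rng.normal(0.0, sigma_a, size=len(xs)),
    reference_point=lambda xs: phi * xs,
    likelihood=lambda z, a: np.exp(-0.5 * ((z - a) / sigma_y) ** 2),
)
```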

References

  1. Pitt, Michael K.; Shephard, Neil (1999). "Filtering via Simulation: Auxiliary Particle Filters". Journal of the American Statistical Association. 94 (446): 590–599.
