Imprecise Dirichlet process

Bayesian nonparametric model of probability distributions

In probability theory and statistics, the Dirichlet process (DP) is one of the most popular Bayesian nonparametric models. It was introduced by Thomas Ferguson as a prior over probability distributions.

A Dirichlet process $\mathrm{DP}(s, G_0)$ is completely defined by its parameters: $G_0$ (the base distribution or base measure) is an arbitrary distribution and $s$ (the concentration parameter) is a positive real number (it is often denoted $\alpha$). According to the Bayesian paradigm these parameters should be chosen based on the available prior information about the domain.

The question is: how should we choose the prior parameters $(s, G_0)$ of the DP, in particular the infinite-dimensional one $G_0$, in the case of lack of prior information?

To address this issue, the only prior that has been proposed so far is the limiting DP obtained for $s \rightarrow 0$, which was introduced under the name of Bayesian bootstrap by Rubin; in fact it can be proven that the Bayesian bootstrap is asymptotically equivalent to the frequentist bootstrap introduced by Bradley Efron. The limiting Dirichlet process $s \rightarrow 0$ has been criticized on diverse grounds. From an a priori point of view, the main criticism is that taking $s \rightarrow 0$ is far from leading to a noninformative prior. Moreover, a posteriori, it assigns zero probability to any set that does not include the observations.

The imprecise Dirichlet process has been proposed to overcome these issues. The basic idea is to fix $s > 0$ but not to choose any precise base measure $G_0$.

More precisely, the imprecise Dirichlet process (IDP) is defined as follows:

    $\mathrm{IDP}:\ \left\{\mathrm{DP}(s, G_0):\ G_0 \in \mathbb{P}\right\}$

where $\mathbb{P}$ is the set of all probability measures. In other words, the IDP is the set of all Dirichlet processes (with a fixed $s > 0$) obtained by letting the base measure $G_0$ span the set of all probability measures.

Inferences with the Imprecise Dirichlet Process

Let $P$ be a probability distribution on $(\mathbb{X}, \mathcal{B})$ (here $\mathbb{X}$ is a standard Borel space with Borel $\sigma$-field $\mathcal{B}$) and assume that $P \sim \mathrm{DP}(s, G_0)$. Then consider a real-valued bounded function $f$ defined on $(\mathbb{X}, \mathcal{B})$ and its expectation $E(f) = \int f \, dP$. It is well known that the expectation of $E(f)$ with respect to the Dirichlet process is

    $\mathcal{E}[E(f)] = \mathcal{E}\left[\int f \, dP\right] = \int f \, d\mathcal{E}[P] = \int f \, dG_0.$

One of the most remarkable properties of DP priors is that the posterior distribution of $P$ is again a DP. Let $X_1, \dots, X_n$ be an independent and identically distributed sample from $P$ and $P \sim \mathrm{DP}(s, G_0)$; then the posterior distribution of $P$ given the observations is

    $P \mid X_1, \dots, X_n \sim \mathrm{DP}\left(s + n,\ G_n\right), \quad \text{with} \quad G_n = \frac{s}{s+n} G_0 + \frac{1}{s+n} \sum_{i=1}^n \delta_{X_i},$

where $\delta_{X_i}$ is an atomic probability measure (Dirac's delta) centered at $X_i$. Hence, it follows that $\mathcal{E}[E(f) \mid X_1, \dots, X_n] = \int f \, dG_n$. Therefore, for any fixed $G_0$, we can exploit the previous equations to derive prior and posterior expectations.
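Expanding $G_n$ makes the structure of this posterior expectation explicit: it is a convex combination of the prior expectation under $G_0$ and the empirical mean of $f$,

    $\int f \, dG_n = \frac{s}{s+n} \int f \, dG_0 + \frac{n}{s+n} \cdot \frac{1}{n} \sum_{i=1}^n f(X_i).$

This decomposition is used repeatedly in what follows.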

In the IDP, $G_0$ can span the set of all distributions $\mathbb{P}$. This implies that we get a different prior and posterior expectation of $E(f)$ for each choice of $G_0$. A way to characterize inferences with the IDP is to compute lower and upper bounds for the expectation of $E(f)$ with respect to $G_0 \in \mathbb{P}$. A priori, these bounds are:

    $\underline{\mathcal{E}}[E(f)] = \inf_{G_0 \in \mathbb{P}} \int f \, dG_0 = \inf f, \qquad \overline{\mathcal{E}}[E(f)] = \sup_{G_0 \in \mathbb{P}} \int f \, dG_0 = \sup f,$

the lower (upper) bound is obtained by a probability measure that puts all its mass on the infimum (supremum) of $f$, i.e., $G_0 = \delta_{X_0}$ with $X_0 = \arg\inf f$ (respectively $X_0 = \arg\sup f$). From the above expressions of the lower and upper bounds, it can be observed that the range of $\mathcal{E}[E(f)]$ under the IDP is the same as the original range of $f$. In other words, by specifying the IDP we are not giving any prior information about the value of the expectation of $f$. A priori, the IDP is therefore a model of prior (near-)ignorance for $E(f)$.

A posteriori, the IDP can learn from data. The posterior lower and upper bounds for the expectation of $E(f)$ are in fact given by:

    $\underline{\mathcal{E}}[E(f) \mid X_1, \dots, X_n] = \inf_{G_0 \in \mathbb{P}} \int f \, dG_n = \frac{s}{s+n} \inf f + \frac{n}{s+n} \cdot \frac{\sum_{i=1}^n f(X_i)}{n},$
    $\overline{\mathcal{E}}[E(f) \mid X_1, \dots, X_n] = \sup_{G_0 \in \mathbb{P}} \int f \, dG_n = \frac{s}{s+n} \sup f + \frac{n}{s+n} \cdot \frac{\sum_{i=1}^n f(X_i)}{n}.$

It can be observed that the posterior inferences do not depend on $G_0$. To define the IDP, the modeler only has to choose $s$ (the concentration parameter). This explains the meaning of the adjective "near" in prior near-ignorance: the IDP requires the modeler to elicit one parameter. However, this is a simple elicitation problem for a nonparametric prior, since we only have to choose the value of a positive scalar (no infinite-dimensional parameter is left in the IDP model).
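These bounds are straightforward to compute. The following minimal sketch (in Python with NumPy; the function name is an illustrative choice, not part of the literature) evaluates them from the values of $f$ at the observations and the known infimum and supremum of $f$:

    import numpy as np

    def idp_posterior_bounds(f_values, f_inf, f_sup, s):
        # f_values: the values f(X_i) at the observations
        # f_inf, f_sup: infimum and supremum of the bounded function f
        # s: prior strength (concentration parameter) of the IDP
        f_values = np.asarray(f_values, dtype=float)
        n = f_values.size
        emp_mean = f_values.mean()
        lower = s / (s + n) * f_inf + n / (s + n) * emp_mean
        upper = s / (s + n) * f_sup + n / (s + n) * emp_mean
        return lower, upper

    # Example: f is the identity on [0, 1], six observations, s = 1.
    # As n grows, both bounds shrink towards the empirical mean of f.
    x = [0.12, 0.44, 0.57, 0.88, 0.61, 0.35]
    print(idp_posterior_bounds(x, f_inf=0.0, f_sup=1.0, s=1.0))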

Finally, observe that for $n \rightarrow \infty$ the IDP satisfies

    $\underline{\mathcal{E}}[E(f) \mid X_1, \dots, X_n],\ \overline{\mathcal{E}}[E(f) \mid X_1, \dots, X_n] \rightarrow S(f),$

where $S(f) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n f(X_i)$. In other words, the IDP is consistent.

Lower (red) and Upper (blue) cumulative distribution for the observations {−1.17, 0.44, 1.17, 3.28, 1.44, 1.98}

Choice of the prior strength $s$

The IDP is completely specified by $s$, which is the only parameter left in the prior model. Since the value of $s$ determines how quickly the lower and upper posterior expectations converge as the number of observations increases, $s$ can be chosen so as to match a certain convergence rate. The parameter $s$ can also be chosen so that the resulting inferences enjoy desirable frequentist properties (e.g., credible intervals that are calibrated frequentist intervals, hypothesis tests that are calibrated for Type I error, etc.); see the median test example below.

Example: estimate of the cumulative distribution

Let $X_1, \dots, X_n$ be i.i.d. real random variables with cumulative distribution function $F(x)$.

Since $F(x) = E\left(\mathbb{I}_{(-\infty, x]}\right)$, where $\mathbb{I}_{(-\infty, x]}$ is the indicator function, we can use the IDP to derive inferences about $F(x)$. The lower and upper posterior means of $F(x)$ are

    $\underline{\mathcal{E}}[F(x) \mid X_1, \dots, X_n] = \underline{\mathcal{E}}\left[E\left(\mathbb{I}_{(-\infty, x]}\right) \mid X_1, \dots, X_n\right] = \frac{n}{s+n} \cdot \frac{\sum_{i=1}^n \mathbb{I}_{(-\infty, x]}(X_i)}{n} = \frac{n}{s+n} \hat{F}(x),$
    $\overline{\mathcal{E}}[F(x) \mid X_1, \dots, X_n] = \overline{\mathcal{E}}\left[E\left(\mathbb{I}_{(-\infty, x]}\right) \mid X_1, \dots, X_n\right] = \frac{s}{s+n} + \frac{n}{s+n} \cdot \frac{\sum_{i=1}^n \mathbb{I}_{(-\infty, x]}(X_i)}{n} = \frac{s}{s+n} + \frac{n}{s+n} \hat{F}(x),$

where $\hat{F}(x)$ is the empirical distribution function. Here, to obtain the lower bound we have exploited the fact that $\inf \mathbb{I}_{(-\infty, x]} = 0$, and for the upper bound that $\sup \mathbb{I}_{(-\infty, x]} = 1$.
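These bounds can be evaluated on a grid of $x$ values to draw curves like those in the figure below. A minimal sketch (assuming only NumPy; the function name is ours):

    import numpy as np

    def idp_cdf_bounds(data, x_grid, s):
        # Lower/upper posterior means of F(x) under the IDP.
        data = np.sort(np.asarray(data, dtype=float))
        n = data.size
        # empirical CDF: fraction of observations <= x
        F_hat = np.searchsorted(data, x_grid, side="right") / n
        lower = n / (s + n) * F_hat
        upper = s / (s + n) + n / (s + n) * F_hat
        return lower, upper

    obs = [-1.17, 0.44, 1.17, 3.28, 1.44, 1.98]
    lower, upper = idp_cdf_bounds(obs, np.linspace(-3, 4, 701), s=1.0)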

Beta distributions for the lower (red) and upper (blue) probability corresponding to the observations {−1.17, 0.44, 1.17, 3.28, 1.44, 1.98}. The shaded area gives the lower (0.891) and the upper (0.9375) probability of the hypothesis "the median is greater than zero".

Note that, for any precise choice of $G_0$ (e.g., a normal distribution $\mathcal{N}(x; 0, 1)$), the posterior expectation of $F(x)$ will be included between the lower and upper bounds.
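This is easy to verify numerically. A sketch under the assumption $G_0 = \mathcal{N}(0, 1)$ (using SciPy for the normal CDF): the posterior mean of $F(x)$ is then $\frac{s}{s+n}\Phi(x) + \frac{n}{s+n}\hat{F}(x)$, which always lies between the two bounds:

    import numpy as np
    from scipy.stats import norm

    obs = np.array([-1.17, 0.44, 1.17, 3.28, 1.44, 1.98])
    s, n, x = 1.0, obs.size, 0.5
    F_hat = np.mean(obs <= x)                           # empirical CDF at x
    precise = s/(s+n) * norm.cdf(x) + n/(s+n) * F_hat   # G0 = N(0, 1)
    lower, upper = n/(s+n) * F_hat, s/(s+n) + n/(s+n) * F_hat
    assert lower <= precise <= upper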

Example: median test

The IDP can also be used for hypothesis testing, for instance to test the hypothesis $F(0) < 0.5$, i.e., that the median of $F$ is greater than zero. By considering the partition $(-\infty, 0], (0, \infty)$ and the fact that a Dirichlet process assigns Dirichlet-distributed probabilities to the elements of a measurable partition, it can be shown that the posterior distribution of $F(0)$ is

    $F(0) \sim \mathrm{Beta}(\alpha_0 + n_{<0},\ \beta_0 + n - n_{<0}),$

where $n_{<0}$ is the number of observations that are less than zero, and

    $\alpha_0 = s \int_{-\infty}^0 dG_0 \quad \text{and} \quad \beta_0 = s \int_0^\infty dG_0.$

By exploiting this property, it follows that

    $\underline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n] = \int_0^{0.5} \mathrm{Beta}(\theta;\, s + n_{<0},\, n - n_{<0}) \, d\theta = I_{1/2}(s + n_{<0},\, n - n_{<0}),$
    $\overline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n] = \int_0^{0.5} \mathrm{Beta}(\theta;\, n_{<0},\, s + n - n_{<0}) \, d\theta = I_{1/2}(n_{<0},\, s + n - n_{<0}),$

where $I_x(\alpha, \beta)$ is the regularized incomplete beta function. We can thus perform the hypothesis test

    $\underline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n] > 1 - \gamma, \qquad \overline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n] > 1 - \gamma,$

(with $1 - \gamma = 0.95$ for instance) and then:

  1. if both inequalities are satisfied, we can declare that $F(0) < 0.5$ with probability larger than $1 - \gamma$;
  2. if only one of the inequalities is satisfied (which must necessarily be the one for the upper probability), we are in an indeterminate situation, i.e., we cannot decide;
  3. if neither is satisfied, we can declare that the probability that $F(0) < 0.5$ is lower than the desired probability $1 - \gamma$.

The IDP returns an indeterminate decision when the decision is prior dependent, that is, when it would change with the choice of $G_0$. A minimal sketch of this decision rule is given below.
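The sketch assumes SciPy for the regularized incomplete beta function (scipy.special.betainc(a, b, x) computes $I_x(a, b)$); the function name idp_median_test is ours:

    import numpy as np
    from scipy.special import betainc   # betainc(a, b, x) = I_x(a, b)

    def idp_median_test(data, s=1.0, gamma=0.05):
        # Three-way IDP test of F(0) < 0.5 ("the median is greater than zero").
        # Assumes at least one observation on each side of zero.
        data = np.asarray(data, dtype=float)
        n, n_lt = data.size, int(np.sum(data < 0))
        lower = betainc(s + n_lt, n - n_lt, 0.5)   # lower probability
        upper = betainc(n_lt, s + n - n_lt, 0.5)   # upper probability
        if lower > 1 - gamma:
            decision = "median > 0"          # both bounds exceed 1 - gamma
        elif upper > 1 - gamma:
            decision = "indeterminate"       # only the upper bound exceeds it
        else:
            decision = "not enough evidence"
        return decision, lower, upper

    obs = [-1.17, 0.44, 1.17, 3.28, 1.44, 1.98]
    print(idp_median_test(obs))   # lower probability = 57/64 ≈ 0.891 for s = 1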

By exploiting the relationship between the cumulative distribution function of the Beta distribution and the cumulative distribution function of a random variable $Z$ from a binomial distribution, where the "probability of success" is $p$ and the sample size is $n$:

    $F(k; n, p) = \Pr(Z \le k) = I_{1-p}(n - k,\ k + 1) = 1 - I_p(k + 1,\ n - k),$

we can show that the median test derived with the IDP for any choice of $s \ge 1$ encompasses the one-sided frequentist sign test as a test for the median. In fact, it can be verified that for $s = 1$ the $p$-value of the sign test is equal to $1 - \underline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n]$. Thus, if $\underline{\mathcal{P}}[F(0) < 0.5 \mid X_1, \dots, X_n] > 0.95$ then the $p$-value is less than $0.05$, and the two tests have the same power.
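This identity is easy to check numerically; a sketch assuming SciPy, where the one-sided sign test $p$-value is computed directly from the binomial CDF:

    import numpy as np
    from scipy.special import betainc
    from scipy.stats import binom

    obs = np.array([-1.17, 0.44, 1.17, 3.28, 1.44, 1.98])
    n, n_lt = obs.size, int(np.sum(obs < 0))

    lower = betainc(1 + n_lt, n - n_lt, 0.5)   # IDP lower probability, s = 1
    # one-sided sign test for "median > 0": probability of observing
    # at most n_lt negative signs under Binomial(n, 1/2)
    p_value = binom.cdf(n_lt, n, 0.5)

    assert np.isclose(1 - lower, p_value)      # both equal 7/64 here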

Applications of the Imprecise Dirichlet Process

Dirichlet processes are frequently used in Bayesian nonparametric statistics. The Imprecise Dirichlet Process can be employed instead of the Dirichlet process in any application in which prior information is lacking and it is therefore important to model this state of prior ignorance.

In this respect, the Imprecise Dirichlet Process has been used for nonparametric hypothesis testing; see the Imprecise Dirichlet Process statistical package. Based on the Imprecise Dirichlet Process, Bayesian nonparametric near-ignorance versions of the following classical nonparametric tests have been derived: the Wilcoxon rank-sum test and the Wilcoxon signed-rank test.

A Bayesian nonparametric near-ignorance model offers several advantages over a traditional approach to hypothesis testing.

  1. The Bayesian approach allows us to formulate the hypothesis test as a decision problem. This means that we can assess the evidence in favor of the null hypothesis, not only reject it, and take decisions that minimize the expected loss.
  2. Because of the nonparametric prior near-ignorance, IDP-based tests allow us to start the hypothesis test with very weak prior assumptions, much in the direction of letting the data speak for themselves.
  3. Although the IDP test shares several similarities with a standard Bayesian approach, it embodies a significant change of paradigm when it comes to taking decisions. IDP-based tests have the advantage of producing an indeterminate outcome when the decision is prior-dependent. In other words, the IDP test suspends judgment when the option that minimizes the expected loss changes depending on the Dirichlet process base measure we focus on.
  4. It has been empirically verified that when the IDP test is indeterminate, frequentist tests behave virtually as random guessers. This surprising result has practical consequences for hypothesis testing. Assume that we are trying to compare the effects of two medical treatments ("Y is better than X") and that, given the available data, the IDP test is indeterminate. In such a situation the frequentist test always issues a determinate response (for instance, "Y is better than X"), but it turns out that its response is no better than a coin toss. The IDP test, on the other hand, acknowledges the impossibility of making a decision in these cases. Thus, by saying "I do not know", the IDP test provides richer information to the analyst, who could for instance use it to collect more data.

Categorical variables

For categorical variables, i.e., when $\mathbb{X}$ has a finite number of elements, it is known that the Dirichlet process reduces to a Dirichlet distribution. In this case, the Imprecise Dirichlet Process reduces to the imprecise Dirichlet model proposed by Walley as a model of prior (near-)ignorance for chances.
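In the categorical case the lower and upper posterior probabilities of each category take a particularly simple closed form: with observed count $n_j$ out of $N$ observations, the lower probability is $n_j/(N+s)$ and the upper probability is $(n_j+s)/(N+s)$. A minimal sketch (the function name is ours):

    import numpy as np

    def idm_bounds(counts, s=1.0):
        # Walley's imprecise Dirichlet model: lower/upper posterior
        # probabilities of each category given the observed counts.
        counts = np.asarray(counts, dtype=float)
        total = counts.sum()
        lower = counts / (total + s)          # base measure puts no mass on the category
        upper = (counts + s) / (total + s)    # base measure puts all its mass there
        return lower, upper

    # Example: three categories observed 2, 5 and 3 times, s = 2
    print(idm_bounds([2, 5, 3], s=2.0))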

See also

Imprecise probability

Robust Bayesian analysis

References

  1. Ferguson, Thomas (1973). "A Bayesian analysis of some nonparametric problems". Annals of Statistics. 1 (2): 209–230. doi:10.1214/aos/1176342360. MR 0350949.
  2. Rubin, Donald (1981). "The Bayesian bootstrap". Annals of Statistics. 9 (1): 130–134.
  3. Efron, Bradley (1979). "Bootstrap methods: Another look at the jackknife". Annals of Statistics. 7 (1): 1–26.
  4. Sethuraman, J.; Tiwari, R. C. (1981). "Convergence of Dirichlet measures and the interpretation of their parameter". Defense Technical Information Center.
  5. Benavoli, Alessio; Mangili, Francesca; Ruggeri, Fabrizio; Zaffalon, Marco (2014). "Imprecise Dirichlet Process with application to the hypothesis test on the probability that X < Y". arXiv:1402.2755.
  6. Benavoli, Alessio; Mangili, Francesca; Corani, Giorgio; Ruggeri, Fabrizio; Zaffalon, Marco (2014). "A Bayesian Wilcoxon signed-rank test based on the Dirichlet process". Proceedings of the 31st International Conference on Machine Learning (ICML 2014).
  7. Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall. ISBN 0-412-28660-2.
