Randomised decision rule


In statistical decision theory, a randomised decision rule or mixed decision rule is a decision rule that associates probabilities with deterministic decision rules. In finite decision problems, randomised decision rules define a risk set which is the convex hull of the risk points of the nonrandomised decision rules.

As nonrandomised alternatives always exist to randomised Bayes rules, randomisation is not needed in Bayesian statistics. Frequentist statistical theory, however, sometimes requires randomised rules to satisfy optimality conditions such as minimaxity, most notably when deriving confidence intervals and hypothesis tests for discrete probability distributions.

A statistical test that makes use of a randomised decision rule is called a randomised test.

Definition and interpretation

Let $\mathcal{D} = \{d_1, d_2, \ldots, d_h\}$ be a set of nonrandomised decision rules with associated probabilities $p_1, p_2, \ldots, p_h$. Then the randomised decision rule $d^*$ is defined as $d^* = \sum_{i=1}^{h} p_i d_i$, and its associated risk function is $R(\theta, d^*) = \sum_{i=1}^{h} p_i R(\theta, d_i)$. This rule can be treated as a random experiment in which the decision rules $d_1, \ldots, d_h \in \mathcal{D}$ are selected with probabilities $p_1, \ldots, p_h$ respectively.
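For concreteness, here is a minimal Python sketch of this definition; the rules, probabilities and risk values are hypothetical, not taken from the cited texts:

```python
import random

# Hypothetical example: two deterministic rules d1, d2 (always choose action
# 0 or 1) mixed with probabilities p1 = 0.3, p2 = 0.7.
def d1(x):
    return 0

def d2(x):
    return 1

rules, probs = [d1, d2], [0.3, 0.7]

def randomised_rule(x):
    # Random experiment: select one deterministic rule, then apply it to x.
    d = random.choices(rules, weights=probs)[0]
    return d(x)

# The risk of the mixture is the probability-weighted sum of the rules' risks:
# R(theta, d*) = sum_i p_i * R(theta, d_i).
def mixture_risk(probs, risks):
    return sum(p * r for p, r in zip(probs, risks))

print(randomised_rule(x=None))          # 0 or 1 at random
print(mixture_risk(probs, [1.0, 4.0]))  # 0.3*1.0 + 0.7*4.0 = 3.1
```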

Alternatively, a randomised decision rule may assign probabilities directly to elements of the action space $\mathcal{A}$ for each member of the sample space. More formally, $d^*(x, a)$ denotes the probability that action $a \in \mathcal{A}$ is chosen given data $x$. Under this approach, the loss function is also defined directly, as $\int_{a \in \mathcal{A}} d^*(x, a) L(\theta, a)\, da$.
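With a finite action space the integral above reduces to a sum; a small sketch under that assumption, again with made-up numbers:

```python
# Hypothetical behavioural rule: given data x, return a probability for each
# action in the finite action space A = {0, 1, 2}.
def d_star(x):
    return [0.2, 0.5, 0.3]  # probabilities over actions, summing to 1

def expected_loss(x, losses):
    # Finite-action analogue of the integral of d*(x, a) L(theta, a) over A.
    return sum(p * L for p, L in zip(d_star(x), losses))

print(expected_loss(x=None, losses=[0.0, 1.0, 4.0]))  # 0.5*1.0 + 0.3*4.0 = 1.7
```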

The introduction of randomised decision rules thus creates a larger decision space from which the statistician may choose a decision. As nonrandomised decision rules are the special case of randomised decision rules in which one decision or action has probability 1, the original decision space $\mathcal{D}$ is a proper subset of the new decision space $\mathcal{D}^*$.

Selection of randomised decision rules

[Figure: a risk set in the plane. The extreme points of the risk set, denoted by empty circles, correspond to nonrandomised decision rules, whereas the thick lines denote the admissible decision rules.]

As with nonrandomised decision rules, randomised decision rules may satisfy favourable properties such as admissibility, minimaxity and Bayes optimality. This is illustrated below for a finite decision problem, i.e. a problem where the parameter space is a finite set of, say, $k$ elements. The risk set, henceforth denoted $\mathcal{S}$, is the set of all vectors whose entries are the values of the risk function associated with a randomised decision rule at each parameter: it contains all vectors of the form $(R(\theta_1, d^*), \ldots, R(\theta_k, d^*))$, $d^* \in \mathcal{D}^*$. By the definition of the randomised decision rule, the risk set is the convex hull of the risk points $(R(\theta_1, d), \ldots, R(\theta_k, d))$, $d \in \mathcal{D}$.

In the case where the parameter space has only two elements $\theta_1$ and $\theta_2$, the risk set is a subset of $\mathbb{R}^2$, so it may be drawn with respect to coordinate axes $R_1$ and $R_2$ corresponding to the risks under $\theta_1$ and $\theta_2$ respectively, as in the figure above.
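A sketch of constructing such a risk set numerically, assuming scipy is available and using hypothetical risk points:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical risk points (R(theta_1, d), R(theta_2, d)) of four
# nonrandomised rules; the risk set S is their convex hull.
risk_points = np.array([[1.0, 5.0], [2.0, 2.0], [4.0, 1.0], [5.0, 4.0]])

hull = ConvexHull(risk_points)
# Vertices of the hull are risk points of nonrandomised rules; every other
# point of the hull is attained by some randomised mixture of them.
print("extreme points:", risk_points[hull.vertices])
```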

Admissibility

An admissible decision rule is one that is not dominated by any other decision rule, i.e. there is no decision rule with risk equal to or lower than its risk for all parameters and strictly lower risk for some parameter. In a finite decision problem with two parameters, a rule with risk point $(a, b)$ is admissible exactly when no other point of the risk set lies weakly to its lower left, i.e. when $\{(R_1, R_2) : R_1 \leq a, R_2 \leq b\} \cap \mathcal{S} = \{(a, b)\}$. Thus the lower-left boundary of the risk set is the set of risk points of the admissible decision rules, as the sketch below illustrates.
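The domination check can be written directly; a sketch over hypothetical risk points, checking domination among the listed pure rules only rather than against all mixtures:

```python
def admissible(points):
    # Keep the points not dominated by any other listed point: q dominates p
    # if q <= p in both coordinates and q != p (so at least one is strict).
    return [
        p for p in points
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
    ]

points = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (5.0, 4.0)]
print(admissible(points))  # (5.0, 4.0) is dominated by (2.0, 2.0)
```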

Minimax

A minimax rule is one that minimises the supremum risk $\sup_{\theta \in \Theta} R(\theta, d^*)$ among all decision rules in $\mathcal{D}^*$. Sometimes a randomised decision rule performs better in this regard than every nonrandomised decision rule.

In a finite decision problem with two possible parameters, the minimax rule can be found by considering the family of squares $Q(c) = \{(R_1, R_2) : 0 \leq R_1 \leq c,\ 0 \leq R_2 \leq c\}$. The value of $c$ for the smallest such square that touches $\mathcal{S}$ is the minimax risk, and the corresponding point or points of the risk set are the minimax rules; a numerical sketch follows the list below.

If the risk set intersects the line $R_1 = R_2$, then an admissible decision rule lying on that line is minimax. If $R_2 > R_1$ or $R_1 > R_2$ holds for every point of the risk set, then the minimax rule can be either an extreme point (i.e. a nonrandomised decision rule) or a segment connecting two extreme points (a family of randomised rules mixing two nonrandomised decision rules). Three situations are possible:

  • The minimax rule is the randomised decision rule $(1-p)d_1 + p\,d_2$.
  • The minimax rule is $d_2$.
  • The minimax rules are all rules of the form $(1-p)d_1 + p\,d_2$, $0 \leq p \leq 1$.
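As referenced above, here is a numerical sketch of finding the minimax mixture of two rules, with hypothetical risk points; a grid search over $p$ stands in for the geometric argument:

```python
import numpy as np

# Hypothetical risk points: d1 has risks (1.0, 4.0) and d2 has risks
# (3.0, 1.0) under theta_1 and theta_2 respectively.
r1, r2 = np.array([1.0, 4.0]), np.array([3.0, 1.0])

# The risk of (1-p) d1 + p d2 is linear in p, so its maximum over theta is
# piecewise linear; a fine grid search suffices for a sketch.
ps = np.linspace(0.0, 1.0, 100001)
risks = np.outer(1 - ps, r1) + np.outer(ps, r2)  # rows: p, columns: theta
worst = risks.max(axis=1)                        # sup over theta
p_star = ps[worst.argmin()]
print(p_star, worst.min())  # equalising the two risks gives p = 0.6, risk 2.2
```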

Bayes

A randomised Bayes rule is one that attains the infimum Bayes risk $r(\pi, d^*)$ among all decision rules. In the special case where the parameter space has two elements, the line $\pi_1 R_1 + (1 - \pi_1) R_2 = c$, where $\pi_1$ and $\pi_2 = 1 - \pi_1$ denote the prior probabilities of $\theta_1$ and $\theta_2$ respectively, is a family of points with Bayes risk $c$. The minimum Bayes risk for the decision problem is therefore the smallest $c$ such that the line touches the risk set. This line may touch only one extreme point of the risk set, i.e. correspond to a nonrandomised decision rule, or overlap with an entire side of the risk set, i.e. correspond to two nonrandomised decision rules and the randomised decision rules combining the two. This is illustrated by the three situations below (a numerical sketch follows the list):

  • The Bayes rules are the set of decision rules of the form $(1-p)d_1 + p\,d_2$, $0 \leq p \leq 1$.
  • The Bayes rule is $d_1$.
  • The Bayes rule is $d_2$.
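As referenced above, a sketch of locating the Bayes rule for a given prior over two parameters, with hypothetical risk points:

```python
import numpy as np

# Hypothetical risk points (R(theta_1, d), R(theta_2, d)) and prior pi_1.
risk_points = np.array([[1.0, 4.0], [3.0, 1.0], [2.0, 2.0]])
pi1 = 0.4

# Bayes risk of each nonrandomised rule: pi_1 R_1 + (1 - pi_1) R_2.  The
# objective is linear, so its minimum over the convex risk set is attained
# at an extreme point; ties correspond to randomised Bayes rules.
bayes = pi1 * risk_points[:, 0] + (1 - pi1) * risk_points[:, 1]
print(risk_points[bayes.argmin()], bayes.min())  # [3. 1.] with Bayes risk 1.8
```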

As different priors result in different slopes, the set of all rules that are Bayes with respect to some prior is the same as the set of admissible rules.

Note that no situation is possible in which a nonrandomised Bayes rule does not exist but a randomised Bayes rule does: the existence of a randomised Bayes rule implies the existence of a nonrandomised Bayes rule. This is also true in the general case, even with infinite parameter space or infinite Bayes risk, and regardless of whether the infimum Bayes risk can be attained. This supports the intuitive notion that the statistician need not utilise randomisation to arrive at statistical decisions.

In practice

As randomised Bayes rules always have nonrandomised alternatives, they are unnecessary in Bayesian statistics. In frequentist statistics, however, randomised rules are theoretically necessary in certain situations, and were thought to be useful in practice when they were first introduced: Egon Pearson forecast that they 'will not meet with strong objection'. Nevertheless, few statisticians actually implement them today.

Randomised test

Not to be confused with Randomness test or Permutation test.

In the usual formulation of the likelihood ratio test, the null hypothesis is rejected whenever the likelihood ratio $\Lambda$ is smaller than some constant $K$, and accepted otherwise. However, this is problematic when $\Lambda$ is discrete under the null hypothesis, since the boundary case $\Lambda = K$ can occur with positive probability and no choice of $K$ need then produce a test of exactly the desired size.

A solution is to define a test function $\phi(x)$, whose value is the probability with which the null hypothesis is rejected:

$$\phi(x) = \begin{cases} 1 & \text{if } \Lambda < K \\ p(x) & \text{if } \Lambda = K \\ 0 & \text{if } \Lambda > K \end{cases}$$

This can be interpreted as flipping a biased coin with a probability $p(x)$ of returning heads whenever $\Lambda = K$, and rejecting the null hypothesis if a heads turns up.
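A minimal sketch of this test function as a procedure; the names and values are illustrative, not from the cited texts:

```python
import random

def randomised_test(lam, K, p_x):
    # Returns True when the null hypothesis is rejected.  Following the
    # convention above: reject for small likelihood ratios, and flip a
    # p(x)-biased coin on the boundary Lambda = K.
    if lam < K:
        return True
    if lam > K:
        return False
    return random.random() < p_x  # "heads" with probability p(x): reject

print(randomised_test(lam=0.2, K=0.5, p_x=0.3))  # True, since 0.2 < 0.5
```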

A generalised form of the Neyman–Pearson lemma states that this test has maximum power among all tests at the same significance level $\alpha$, that such a test must exist for any significance level $\alpha$, and that the test is essentially unique.

As an example, consider the case where the underlying distribution is Bernoulli with success probability $p$, and we would like to test the null hypothesis $p \leq \lambda$ against the alternative hypothesis $p > \lambda$. It is natural to choose some $k$ such that $P(\hat{p} > k \mid H_0) = \alpha$ and reject the null whenever $\hat{p} > k$, where $\hat{p}$ is the test statistic; but because $\hat{p}$ is discrete, such a $k$ generally does not exist. To take into account the boundary cases where $\hat{p} = k$, we define the test function:

$$\phi(x) = \begin{cases} 1 & \text{if } \hat{p} > k \\ \gamma & \text{if } \hat{p} = k \\ 0 & \text{if } \hat{p} < k \end{cases}$$

where $\gamma$ is chosen such that $P(\hat{p} > k \mid H_0) + \gamma P(\hat{p} = k \mid H_0) = \alpha$.
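A sketch of computing $k$ and $\gamma$ with scipy, working with the success count $X = n\hat{p}$; the helper name and numbers are illustrative:

```python
from scipy.stats import binom

def randomised_binomial_test(n, lam, alpha):
    # Work with the count X = n * p_hat, which is Binomial(n, p).  Choose the
    # smallest k with P(X > k | p = lambda) <= alpha, then pick gamma so the
    # rejection probability at the boundary null p = lambda is exactly alpha.
    k = 0
    while binom.sf(k, n, lam) > alpha:  # sf(k) = P(X > k)
        k += 1
    gamma = (alpha - binom.sf(k, n, lam)) / binom.pmf(k, n, lam)
    return k, gamma

k, gamma = randomised_binomial_test(n=20, lam=0.5, alpha=0.05)
# Reject if X > k; if X == k, reject with probability gamma.
print(k, gamma)  # k = 14, gamma ~ 0.79
```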

Randomised confidence intervals

An analogous problem arises in the construction of confidence intervals. For instance, the Clopper–Pearson interval is always conservative because of the discrete nature of the binomial distribution. An alternative is to find the upper and lower confidence limits $U$ and $L$ by solving the following equations:

$$\begin{cases} P(\hat{p} < k \mid p = U) + \gamma P(\hat{p} = k \mid p = U) = \alpha/2 \\ P(\hat{p} > k \mid p = L) + \gamma P(\hat{p} = k \mid p = L) = \alpha/2 \end{cases}$$

where $k$ is the observed value of $\hat{p}$ and $\gamma$ is a single draw of a uniform random variable on (0, 1).
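A sketch of solving these equations numerically with scipy, again in terms of the observed count $x = n\hat{p}$; the function name is illustrative and the code assumes $0 < x < n$:

```python
import random
from scipy.optimize import brentq
from scipy.stats import binom

def randomised_binomial_ci(x, n, alpha, gamma):
    # Solve the two equations above for U and L in terms of the count
    # X = n * p_hat:  P(X < x | p) = cdf(x - 1, n, p), P(X > x | p) = sf(x, n, p).
    def upper(p):
        return binom.cdf(x - 1, n, p) + gamma * binom.pmf(x, n, p) - alpha / 2
    def lower(p):
        return binom.sf(x, n, p) + gamma * binom.pmf(x, n, p) - alpha / 2
    U = brentq(upper, 1e-9, 1 - 1e-9)  # left side decreases in p: root is U
    L = brentq(lower, 1e-9, 1 - 1e-9)  # left side increases in p: root is L
    return L, U

gamma = random.random()  # one uniform draw on (0, 1), shared by both limits
print(randomised_binomial_ci(x=7, n=20, alpha=0.05, gamma=gamma))
```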

Footnotes

  1. Young and Smith, p. 11
  2. Bickel and Doksum, p. 28
  3. Parmigiani, p. 132
  4. DeGroot, pp. 128–129
  5. Bickel and Doksum, p. 29
  6. Young and Smith, p. 12
  7. Bickel and Doksum, p. 32
  8. Bickel and Doksum, p. 30
  9. Young and Smith, pp. 14–16
  10. Young and Smith, p. 13
  11. Bickel and Doksum, pp. 29–30
  12. Bickel and Doksum, p. 31
  13. Robert, p. 66
  14. Agresti and Gottard, p. 367
  15. Bickel and Doksum, p. 224
  16. Onghena, Patrick (2017), "Randomization Tests or Permutation Tests? A Historical and Terminological Clarification", in Berger, Vance W. (ed.), Randomization, Masking, and Allocation Concealment (1st ed.), Boca Raton, FL: Chapman and Hall/CRC, pp. 209–228, doi:10.1201/9781315305110-14, ISBN 978-1-315-30511-0.
  17. Young and Smith, p. 68
  18. Robert, p. 243
  19. Young and Smith, p. 68

Bibliography

  • Agresti, Alan; Gottard, Anna (2005). "Comment: Randomized Confidence Intervals and the Mid-P Approach" (PDF). Statistical Science. 20 (4): 367–371. doi:10.1214/088342305000000403.
  • Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical Statistics: Basic Ideas and Selected Topics (2nd ed.). Upper Saddle River, NJ: Prentice-Hall. ISBN 978-0138503635.
  • DeGroot, Morris H. (2004). Optimal Statistical Decisions. Hoboken, NJ: Wiley-Interscience. ISBN 978-0471680291.
  • Parmigiani, Giovanni; Inoue, Lurdes Y. T. (2009). Decision Theory: Principles and Approaches. Chichester, West Sussex: John Wiley and Sons. ISBN 9780470746684.
  • Robert, Christian P. (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. New York: Springer. ISBN 9780387715988.
  • Young, G. A.; Smith, R. L. (2005). Essentials of Statistical Inference. Cambridge: Cambridge University Press. ISBN 9780521548663.