
Wilks' theorem

Statistical theorem

In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test.

Statistical tests (such as hypothesis testing) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine.

A convenient result by Samuel S. Wilks says that as the sample size approaches ∞, the distribution of the test statistic −2 log(Λ) asymptotically approaches the chi-squared (χ²) distribution under the null hypothesis H₀. Here Λ denotes the likelihood ratio, and the χ² distribution has degrees of freedom equal to the difference in dimensionality of Θ and Θ₀, where Θ is the full parameter space and Θ₀ is the subset of the parameter space associated with H₀. This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio Λ for the data and compare −2 log(Λ) to the χ² value corresponding to a desired statistical significance as an approximate statistical test.
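For illustration, a minimal sketch in Python (using SciPy) of this recipe for a single made-up dataset, testing whether a coin is fair; the flip counts and the binomial model are assumptions added here for the example, not part of the article:

```python
from scipy.stats import binom, chi2

# Made-up data: 100 flips of a single coin, 62 of them heads.
n, k = 100, 62

# H0: p = 0.5 (Theta_0 is a single point); alternative: p free in [0, 1].
p0 = 0.5
p_hat = k / n                                   # MLE over the full space Theta

# Lambda = sup over Theta_0 of L divided by sup over Theta of L, on the log scale.
log_lambda = binom.logpmf(k, n, p0) - binom.logpmf(k, n, p_hat)
stat = -2 * log_lambda                          # Wilks' statistic

# dim(Theta) - dim(Theta_0) = 1 - 0 = 1 degree of freedom
p_value = chi2.sf(stat, df=1)
print(f"-2 log Lambda = {stat:.3f}, approximate p-value = {p_value:.4f}")
```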

The theorem no longer applies when the true value of the parameter lies on the boundary of the parameter space: Wilks’ theorem assumes that the ‘true’ but unknown values of the estimated parameters lie in the interior of the supported parameter space. In practice, one will notice the problem if the estimate lies on that boundary. In that event, the likelihood-ratio statistic is still a sensible test statistic and even possesses some asymptotic optimality properties, but the significance (the p-value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. In some cases, the asymptotic null-hypothesis distribution of the statistic is a mixture of chi-squared distributions with different numbers of degrees of freedom.

Use

Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D) is twice the difference in the log-likelihoods, i.e., minus twice the logarithm of the likelihood ratio:

{\displaystyle {\begin{aligned}D&=-2\ln \left({\frac {\text{likelihood for null model}}{\text{likelihood for alternative model}}}\right)\\&=2\ln \left({\frac {\text{likelihood for alternative model}}{\text{likelihood for null model}}}\right)\\&=2\times \left[\ln({\text{likelihood for alternative model}})-\ln({\text{likelihood for null model}})\right]\end{aligned}}}

The model with more parameters (here, the alternative) will always fit at least as well, i.e., have the same or greater log-likelihood, than the model with fewer parameters (here, the null). Whether the fit is significantly better, and the larger model should thus be preferred, is determined by deriving how likely (the p-value) it would be to observe such a difference D by chance alone if the model with fewer parameters were true. When the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to df_alt − df_null, where df_alt and df_null are the numbers of free parameters of the alternative and null models, respectively.

For example: if the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the difference corresponds to a chi-squared value of 2 × (−8012 − (−8024)) = 24 with 3 − 1 = 2 degrees of freedom, and the probability of a difference this large arising by chance is about 6 × 10⁻⁶. Certain assumptions must be met for the statistic to follow a chi-squared distribution, but empirical p-values may also be computed if those conditions are not met.
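This p-value can be checked numerically; a quick sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

D = 2 * (-8012 - (-8024))   # twice the difference in log-likelihoods = 24
df = 3 - 1                  # difference in number of free parameters = 2
print(chi2.sf(D, df))       # survival function gives ~6.1e-06, i.e. about 6 x 10^-6
```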

Examples

Coin tossing

An example of Pearson's test is a comparison of two coins to determine whether they have the same probability of coming up heads. The observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observations X.

{\displaystyle {\begin{array}{c|cc}X&{\text{Heads}}&{\text{Tails}}\\\hline {\text{Coin 1}}&k_{\mathrm {1H} }&k_{\mathrm {1T} }\\{\text{Coin 2}}&k_{\mathrm {2H} }&k_{\mathrm {2T} }\end{array}}}

Here Θ consists of the possible combinations of values of the parameters p_{1H}, p_{1T}, p_{2H}, and p_{2T}, which are the probabilities that coins 1 and 2 come up heads or tails. In what follows, i = 1, 2 and j = H, T. The hypothesis space H is constrained by the usual constraints on a probability distribution, 0 ≤ p_{ij} ≤ 1 and p_{iH} + p_{iT} = 1. The space of the null hypothesis H₀ is the subspace where p_{1j} = p_{2j}. The dimensionality of the full parameter space Θ is 2 (either of the p_{1j} and either of the p_{2j} may be treated as free parameters under the hypothesis H), and the dimensionality of Θ₀ is 1 (only one of the p_{ij} may be considered a free parameter under the null hypothesis H₀).

Writing n_{ij} for the best estimates of p_{ij} under the hypothesis H, the maximum likelihood estimates are given by

{\displaystyle n_{ij}={\frac {k_{ij}}{k_{i\mathrm {H} }+k_{i\mathrm {T} }}}\,.}

Similarly, the maximum likelihood estimates of p_{ij} under the null hypothesis H₀ are given by

{\displaystyle m_{ij}={\frac {k_{1j}+k_{2j}}{k_{\mathrm {1H} }+k_{\mathrm {2H} }+k_{\mathrm {1T} }+k_{\mathrm {2T} }}}\,,}

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H₀, the asymptotic distribution for the test will be χ²(1), the χ² distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

{\displaystyle -2\log \Lambda =2\sum _{i,j}k_{ij}\log {\frac {n_{ij}}{m_{ij}}}\,.}
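A small sketch of this computation in Python (NumPy and SciPy); the counts in the table are made up for illustration:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical observations X: rows are coins 1 and 2, columns are heads, tails.
k = np.array([[43.0, 57.0],
              [62.0, 38.0]])

n = k / k.sum(axis=1, keepdims=True)   # n_ij: per-coin MLEs under the hypothesis H
m = k.sum(axis=0) / k.sum()            # m_j: pooled MLEs under the null hypothesis H0

stat = 2 * np.sum(k * np.log(n / m))   # -2 log Lambda
p_value = chi2.sf(stat, df=1)          # chi-squared with one degree of freedom
print(stat, p_value)
```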

Invalidity for random or mixed effects models

Wilks’ theorem assumes that the true but unknown values of the estimated parameters are in the interior of the parameter space. This is commonly violated in random or mixed effects models, for example when one of the variance components is negligible relative to the others. In such cases the estimated variance component can be effectively zero, which puts it on the boundary of its parameter space, or the models may not be properly nested.

To be clear: These limitations on Wilks’ theorem do not negate any power properties of a particular likelihood ratio test. The only issue is that a χ² distribution is sometimes a poor choice for estimating the statistical significance of the result.

Bad examples

Pinheiro and Bates (2000) showed that the true distribution of this likelihood ratio chi-square statistic could be substantially different from the naïve χ² – often dramatically so. The naïve assumptions could give significance probabilities (p-values) that are, on average, far too large in some cases and far too small in others.

In general, to test random effects, they recommend using restricted maximum likelihood (REML). For fixed-effects testing, they say, “a likelihood ratio test for REML fits is not feasible”, because changing the fixed effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model. As a demonstration, they set either one or two random effects variances to zero in simulated tests. In those particular examples, the simulated p-values with k restrictions most closely matched a 50–50 mixture of χ²(k) and χ²(k − 1). (With k = 1, χ²(0) is 0 with probability 1. This means that a good approximation was 0.5 χ²(1).)
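For a single restriction (k = 1), that mixture approximation amounts to halving the naive chi-squared p-value; a minimal sketch with a made-up value of the statistic:

```python
from scipy.stats import chi2

# Hypothetical -2 log Lambda observed when testing whether one variance
# component is zero (the parameter sits on the boundary, k = 1 restriction).
stat = 3.2

p_naive = chi2.sf(stat, df=1)           # Wilks' chi2(1) p-value
p_mixture = 0.5 * chi2.sf(stat, df=1)   # 50-50 mixture of chi2(0) and chi2(1)
print(p_naive, p_mixture)
```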

Pinheiro and Bates also simulated tests of different fixed effects. In one test of a factor with 4 levels (degrees of freedom = 3), they found that a 50–50 mixture of χ²(3) and χ²(4) was a good match for actual p-values obtained by simulation – and the error in using the naïve χ²(3) “may not be too alarming.”

However, in another test of a factor with 15 levels, they found a reasonable match to χ²(18) – 4 more degrees of freedom than the 14 that one would get from a naïve (inappropriate) application of Wilks’ theorem – and the simulated p-value was several times the naïve χ²(14). They conclude that for testing fixed effects, “it's wise to use simulation.”
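Pinheiro and Bates did this with the simulate.lme function for mixed models; the same general idea, an empirical null distribution obtained by refitting data simulated under the fitted null model, can be sketched for the simpler coin example above. This is only an illustration of the simulation approach, not their method, and the counts, seed, and number of replications are made up:

```python
import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

rng = np.random.default_rng(0)

def lr_stat(k):
    """-2 log Lambda for a 2x2 coin table k (rows: coins, columns: heads, tails)."""
    n = k / k.sum(axis=1, keepdims=True)          # per-coin MLEs (hypothesis H)
    m = k.sum(axis=0) / k.sum()                   # pooled MLEs (null hypothesis H0)
    return 2 * np.sum(xlogy(k, n) - xlogy(k, m))  # xlogy handles zero counts safely

observed = np.array([[43.0, 57.0], [62.0, 38.0]])  # made-up observed table
d_obs = lr_stat(observed)

# Parametric bootstrap: simulate tables from the fitted null model (pooled
# heads probability), recompute the statistic, and take the empirical p-value.
p_pooled = observed[:, 0].sum() / observed.sum()
flips = observed.sum(axis=1).astype(int)
sims = np.empty(10_000)
for b in range(sims.size):
    heads = rng.binomial(flips, p_pooled)
    sims[b] = lr_stat(np.column_stack([heads, flips - heads]).astype(float))

print("simulated p-value:", np.mean(sims >= d_obs))
print("Wilks chi2(1) p-value:", chi2.sf(d_obs, df=1))
```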


Notes

  1. Pinheiro and Bates (2000) provided a simulate.lme function in their nlme package for S-PLUS and R to support REML simulation; see ref.

References

  1. Wilks, Samuel S. (1938). "The large-sample distribution of the likelihood ratio for testing composite hypotheses". The Annals of Mathematical Statistics. 9 (1): 60–62. doi:10.1214/aoms/1177732360.
  2. Huelsenbeck, J. P.; Crandall, K. A. (1997). "Phylogeny Estimation and Hypothesis Testing Using Maximum Likelihood". Annual Review of Ecology and Systematics. 28: 437–466. doi:10.1146/annurev.ecolsys.28.1.437.
  3. Neyman, Jerzy; Pearson, Egon S. (1933). "On the problem of the most efficient tests of statistical hypotheses" (PDF). Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 231 (694–706): 289–337. Bibcode:1933RSPTA.231..289N. doi:10.1098/rsta.1933.0009. JSTOR 91247.
  4. Pinheiro, José C.; Bates, Douglas M. (2000). Mixed-Effects Models in S and S-PLUS. Springer-Verlag. pp. 82–93. ISBN 0-387-98957-9.
  5. "Simulate results from lme models" (PDF). R-project.org (software documentation). Package nlme. 12 May 2019. pp. 281–282. Retrieved 8 June 2019.

