Shapiro–Wilk test

The Shapiro–Wilk test is a test of normality in frequentist statistics. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.

Theory

The Shapiro–Wilk test tests the null hypothesis that a sample x_1, ..., x_n came from a normally distributed population. The test statistic is

    W = \frac{\left(\sum_{i=1}^{n} a_i x_{(i)}\right)^{2}}{\sum_{i=1}^{n} \left(x_i - \overline{x}\right)^{2}},

where

  • x_{(i)} (with parentheses enclosing the subscript index i; not to be confused with x_i) is the ith order statistic, i.e., the ith-smallest number in the sample;
  • \overline{x} = (x_1 + \cdots + x_n)/n is the sample mean;
  • the constants a_i are given by

    (a_1, \dots, a_n) = \frac{m^{\mathsf{T}} V^{-1}}{\left(m^{\mathsf{T}} V^{-1} V^{-1} m\right)^{1/2}},

where

    m = (m_1, \dots, m_n)^{\mathsf{T}}

and m_1, \dots, m_n are the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution, and V is the covariance matrix of those order statistics.
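
As an illustrative sketch (not part of the original formulation), the statistic and its p-value can be computed in Python with SciPy's scipy.stats.shapiro, which, per its documentation, uses an approximation to the coefficients a_i rather than the exact tabulated values:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=50)           # sample to be tested

    # shapiro returns the test statistic W and its p-value
    W, p = stats.shapiro(x)
    print(f"W = {W:.4f}, p = {p:.4f}")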

Interpretation

The null-hypothesis of this test is that the population is normally distributed. Thus, on the one hand, if the p-value is less than the chosen alpha level, then the null hypothesis is rejected and there is evidence that the data tested are not from a normally distributed population. On the other hand, if the p-value is greater than the chosen alpha level, then the null hypothesis that the data came from a normally distributed population cannot be rejected (e.g., for an alpha level of 0.05, a data set with a p-value of less than 0.05 rejects the null hypothesis that the data are from a normally distributed population). Like most statistical significance tests, if the sample size is sufficiently large this test may detect even trivial departures from the null hypothesis (i.e., although there may be some statistically significant effect, it may be too small to be of any practical significance); thus, additional investigation of the effect size is typically advisable, e.g., a Q–Q plot in this case.
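
A minimal sketch of this decision rule in Python, using scipy.stats.shapiro and a Q–Q plot via scipy.stats.probplot; the exponential sample is an arbitrary non-normal choice for demonstration:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.exponential(size=100)   # deliberately non-normal sample

    alpha = 0.05
    W, p = stats.shapiro(data)
    if p < alpha:
        print(f"p = {p:.4g} < {alpha}: reject the null of normality")
    else:
        print(f"p = {p:.4g} >= {alpha}: cannot reject the null of normality")

    # A Q-Q plot helps judge whether a statistically significant
    # departure is large enough to matter in practice
    stats.probplot(data, dist="norm", plot=plt)
    plt.show()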

Power analysis

A Monte Carlo simulation comparing the Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors, and Anderson–Darling tests found that Shapiro–Wilk has the best power for a given significance level, followed closely by Anderson–Darling.
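
A sketch of how such a power comparison can be set up by Monte Carlo simulation. The alternative distribution, sample size, and the standardized Kolmogorov–Smirnov stand-in for Lilliefors are illustrative assumptions here, not the protocol of the cited study:

    import numpy as np
    from scipy import stats

    def rejection_rate(sampler, test, n=30, alpha=0.05, n_sim=2000, seed=0):
        """Estimate power: the fraction of simulated samples that `test` rejects."""
        rng = np.random.default_rng(seed)
        return np.mean([test(sampler(rng, n))[1] < alpha for _ in range(n_sim)])

    # Exponential alternative: an arbitrary non-normal distribution for the demo
    sampler = lambda rng, n: rng.exponential(size=n)

    # K-S on standardized data, a rough stand-in for Lilliefors (its p-values
    # are only approximate once the parameters are estimated from the data)
    ks = lambda x: stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")

    print("Shapiro-Wilk power:", rejection_rate(sampler, stats.shapiro))
    print("K-S (standardized) power:", rejection_rate(sampler, ks))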

Approximation

Royston proposed an alternative method of calculating the coefficient vector by providing an algorithm for computing its values, which extended the usable sample size to 2,000. This technique is used in several software packages, including Stata, SPSS and SAS. Rahman and Govindarajulu extended the sample size further, up to 5,000.
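
For illustration, SciPy's scipy.stats.shapiro is, per its documentation, based on Royston's approximation, and at the time of writing it warns that the p-value may be inaccurate for samples larger than 5,000, consistent with the extensions described above:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    for n in (50, 500, 5000):          # sample sizes within the extended range
        x = rng.normal(size=n)
        W, p = stats.shapiro(x)
        print(f"n = {n:5d}: W = {W:.4f}, p = {p:.4f}")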

References

  1. Shapiro, S. S.; Wilk, M. B. (1965). "An analysis of variance test for normality (complete samples)". Biometrika. 52 (3–4): 591–611. doi:10.1093/biomet/52.3-4.591. JSTOR 2333709. MR 0205384. p. 593.
  2. "How do I interpret the Shapiro–Wilk test for normality?". JMP. 2004. Retrieved March 24, 2012.
  3. Field, Andy (2009). Discovering statistics using SPSS (3rd ed.). Los Angeles: SAGE Publications. p. 143. ISBN 978-1-84787-906-6.
  4. Razali, Nornadiah; Wah, Yap Bee (2011). "Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests" (PDF). Journal of Statistical Modeling and Analytics. 2 (1): 21–33. Retrieved 30 March 2017.
  5. Royston, Patrick (September 1992). "Approximating the Shapiro–Wilk W-test for non-normality". Statistics and Computing. 2 (3): 117–119. doi:10.1007/BF01891203.
  6. Royston, Patrick. "Shapiro–Wilk and Shapiro–Francia Tests". Stata Technical Bulletin, StataCorp LP. 1 (3).
  7. Shapiro–Wilk and Shapiro–Francia tests for normality
  8. Park, Hun Myoung (2002–2008). "Univariate Analysis and Normality Test Using SAS, Stata, and SPSS" (PDF). Retrieved 26 February 2014.
  9. Rahman, M. M.; Govindarajulu, Z. (1997). "A modification of the test of Shapiro and Wilk for normality". Journal of Applied Statistics. 24 (2): 219–236. doi:10.1080/02664769723828.
