
68–95–99.7 rule

Figure: For an approximately normal data set, values within one standard deviation of the mean account for about 68% of the set, values within two standard deviations for about 95%, and values within three standard deviations for about 99.7%. The percentages shown are rounded theoretical probabilities, intended only to approximate the empirical data derived from a normal population.

Figure: Prediction interval (on the y-axis) given from the standard score (on the x-axis). The y-axis is logarithmically scaled (but the values on it are not modified).

In statistics, the 68–95–99.7 rule, also known as the empirical rule, and sometimes abbreviated 3sr, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: approximately 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.

In mathematical notation, these facts can be expressed as follows, where Pr() is the probability function, X is an observation from a normally distributed random variable, μ (mu) is the mean of the distribution, and σ (sigma) is its standard deviation:

$$\begin{aligned}
\Pr(\mu - 1\sigma \leq X \leq \mu + 1\sigma) &\approx 68.27\% \\
\Pr(\mu - 2\sigma \leq X \leq \mu + 2\sigma) &\approx 95.45\% \\
\Pr(\mu - 3\sigma \leq X \leq \mu + 3\sigma) &\approx 99.73\%
\end{aligned}$$
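
These percentages follow from the identity Pr(μ − nσ ≤ X ≤ μ + nσ) = erf(n/√2), so they are easy to check numerically; a minimal Python sketch:

    # Probability that a normal observation lies within n standard
    # deviations of the mean: erf(n / sqrt(2)).
    from math import erf, sqrt

    for n in (1, 2, 3):
        print(f"within {n} sigma: {erf(n / sqrt(2)):.4%}")
    # within 1 sigma: 68.2689%
    # within 2 sigma: 95.4500%
    # within 3 sigma: 99.7300%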

The usefulness of this heuristic depends especially on the question under consideration.

In the empirical sciences, the so-called three-sigma rule of thumb (or 3σ rule) expresses a conventional heuristic that nearly all values are taken to lie within three standard deviations of the mean, and thus it is empirically useful to treat 99.7% probability as near certainty.

In the social sciences, a result may be considered statistically significant if its confidence level is of the order of a two-sigma effect (95%), while in particle physics and astrophysics, there is a convention of requiring statistical significance of a five-sigma effect (99.99994% confidence) to qualify as a discovery.

A weaker three-sigma rule can be derived from Chebyshev's inequality, stating that even for non-normally distributed variables, at least 88.8% of cases should fall within properly calculated three-sigma intervals. For unimodal distributions, the probability of being within the interval is at least 95% by the Vysochanskij–Petunin inequality. There may be certain assumptions for a distribution that force this probability to be at least 98%.
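
Both distribution-free bounds are quick to evaluate; a small sketch (note that the Vysochanskij–Petunin bound 1 − 4/(9k²) applies to unimodal distributions for k > √(8/3)):

    # Lower bounds on Pr(|X - mu| <= k*sigma) without assuming normality:
    # Chebyshev's inequality gives 1 - 1/k^2; Vysochanskij-Petunin
    # (for unimodal distributions) gives 1 - 4 / (9 * k^2).
    for k in (2, 3):
        print(k, round(1 - 1 / k**2, 4), round(1 - 4 / (9 * k**2), 4))
    # 2 0.75 0.8889
    # 3 0.8889 0.9506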

Proof

We have that

$$\Pr(\mu - n\sigma \leq X \leq \mu + n\sigma) = \int_{\mu - n\sigma}^{\mu + n\sigma} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^{2}}\,dx.$$

Changing the variable to the standard score $z = \frac{x - \mu}{\sigma}$ turns this into

$$\frac{1}{\sqrt{2\pi}} \int_{-n}^{n} e^{-\frac{z^{2}}{2}}\,dz,$$

which is independent of $\mu$ and $\sigma$. We only need to calculate the integral for the cases $n = 1, 2, 3$:

$$\begin{aligned}
\Pr(\mu - 1\sigma \leq X \leq \mu + 1\sigma) &= \frac{1}{\sqrt{2\pi}} \int_{-1}^{1} e^{-\frac{z^{2}}{2}}\,dz \approx 0.6827 \\
\Pr(\mu - 2\sigma \leq X \leq \mu + 2\sigma) &= \frac{1}{\sqrt{2\pi}} \int_{-2}^{2} e^{-\frac{z^{2}}{2}}\,dz \approx 0.9545 \\
\Pr(\mu - 3\sigma \leq X \leq \mu + 3\sigma) &= \frac{1}{\sqrt{2\pi}} \int_{-3}^{3} e^{-\frac{z^{2}}{2}}\,dz \approx 0.9973.
\end{aligned}$$
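
The integrand has no elementary antiderivative, but the integral is easy to verify numerically; a sketch using composite Simpson's rule:

    # Numerically verify the proof's integral after the substitution
    # z = (x - mu) / sigma, using composite Simpson's rule.
    from math import exp, pi, sqrt

    def density(z):
        # standard normal density
        return exp(-z * z / 2) / sqrt(2 * pi)

    def simpson(f, a, b, steps=10_000):  # steps must be even
        h = (b - a) / steps
        odd = sum(f(a + i * h) for i in range(1, steps, 2))
        even = sum(f(a + i * h) for i in range(2, steps, 2))
        return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

    for n in (1, 2, 3):
        print(n, round(simpson(density, -n, n), 4))
    # 1 0.6827
    # 2 0.9545
    # 3 0.9973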

Cumulative distribution function

Main article: Prediction interval § Known mean, known variance
Figure: The cumulative distribution function for the normal distribution with mean (μ) 0 and variance (σ²) 1.

These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution.

The prediction interval for any standard score z corresponds numerically to $(1 - (1 - \Phi(z)) \cdot 2)$, where $\Phi$ is the cumulative distribution function of the standard normal distribution.

For example, Φ(2) ≈ 0.9772, or Pr(X ≤ μ + 2σ) ≈ 0.9772, corresponding to a prediction interval of (1 − (1 − 0.97725) · 2) = 0.9545 = 95.45%. Note that this is not a symmetric interval: 0.9772 is merely the probability that an observation is less than μ + 2σ. To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding):

$$\Pr(\mu - 2\sigma \leq X \leq \mu + 2\sigma) = \Phi(2) - \Phi(-2) \approx 0.9772 - (1 - 0.9772) \approx 0.9545$$
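
The same arithmetic in code, using the Python standard library's normal CDF (a minimal sketch):

    # Two-sided probability from the standard normal CDF.
    from statistics import NormalDist

    Phi = NormalDist().cdf              # standard normal CDF (mu=0, sigma=1)
    print(round(Phi(2), 5))             # 0.97725
    print(round(Phi(2) - Phi(-2), 5))   # 0.9545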

This is related to the confidence interval as used in statistics: $\bar{X} \pm 2\frac{\sigma}{\sqrt{n}}$ is approximately a 95% confidence interval when $\bar{X}$ is the average of a sample of size $n$.
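
For illustration, with hypothetical summary values (sample size n = 100, σ = 1, sample mean 0.03), the interval works out as follows:

    # Approximate 95% confidence interval: xbar +/- 2 * sigma / sqrt(n).
    # The numbers below are hypothetical, for illustration only.
    from math import sqrt

    n, sigma, xbar = 100, 1.0, 0.03
    half_width = 2 * sigma / sqrt(n)
    print(round(xbar - half_width, 3), round(xbar + half_width, 3))  # -0.17 0.23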

Normality tests

Main article: Normality test

The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. It is also used as a simple test for outliers if the population is assumed normal, and as a normality test if the population is potentially not normal.

To pass from a sample to a number of standard deviations, one first computes the deviation: the error if the population mean is known, or the residual if it is only estimated. The next step is standardizing (dividing by the population standard deviation) if the population parameters are known, or studentizing (dividing by an estimate of the standard deviation) if the parameters are unknown and only estimated.
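
A minimal sketch of the two cases, using a hypothetical sample `data`:

    # Standardize when the population parameters are known; studentize when
    # they must be estimated from the sample itself. `data` is hypothetical.
    from statistics import mean, stdev

    data = [2.1, 1.9, 2.4, 2.0, 2.2, 5.0]

    # Studentizing: divide by the sample estimate of the standard deviation.
    m, s = mean(data), stdev(data)
    studentized = [(x - m) / s for x in data]

    # Standardizing: divide by the known population sigma (assumed here).
    mu, sigma = 2.0, 0.2
    standardized = [(x - mu) / sigma for x in data]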

To use as a test for outliers or a normality test, one computes the size of deviations in terms of standard deviations and compares this to the expected frequency. Given a sample set, one can compute the studentized residuals and compare these to the expected frequency: points that fall more than 3 standard deviations from the mean are likely outliers (unless the sample size is so large that an observation this extreme is to be expected), and if there are many points more than 3 standard deviations from the mean, one likely has reason to question the assumed normality of the distribution. This holds ever more strongly for moves of 4 or more standard deviations.

One can compute more precisely, approximating the number of extreme moves of a given magnitude or greater by a Poisson distribution; but, put simply, if one has multiple 4 standard deviation moves in a sample of size 1,000, one has strong reason to consider these outliers or to question the assumed normality of the distribution.
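
A sketch of that calculation: under normality, the count of moves beyond 4 standard deviations in 1,000 draws is approximately Poisson, and seeing two or more of them is very unlikely:

    # Expected count of 4 sigma moves in n = 1000 normal draws, and the
    # Poisson probability of seeing at least two of them.
    from math import erf, exp, sqrt

    n = 1000
    p = 1 - erf(4 / sqrt(2))          # two-sided tail beyond 4 sigma, ~6.3e-5
    lam = n * p                       # expected count, ~0.063
    p_at_least_two = 1 - exp(-lam) * (1 + lam)
    print(round(lam, 3), round(p_at_least_two, 4))   # 0.063 0.0019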

For example, a 6σ event corresponds to a chance of about two parts per billion. For illustration, if events are taken to occur daily, this would correspond to an event expected every 1.4 million years. This gives a simple normality test: if one witnesses a 6σ event in daily data and significantly fewer than 1 million years have passed, then a normal distribution most likely does not provide a good model for the magnitude or frequency of large deviations in this respect.
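
The waiting-time arithmetic behind that figure:

    # Mean waiting time between 6 sigma events for a daily observation.
    from math import erf, sqrt

    p = 1 - erf(6 / sqrt(2))   # ~1.97e-9 chance per day
    print(1 / p / 365.25)      # ~1.39e6 years between events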

In The Black Swan, Nassim Nicholas Taleb gives the example of risk models according to which the Black Monday crash would correspond to a 36-σ event: the occurrence of such an event should instantly suggest that the model is flawed, i.e. that the process under consideration is not satisfactorily modeled by a normal distribution. Refined models should then be considered, e.g. by the introduction of stochastic volatility. In such discussions it is important to be aware of the problem of the gambler's fallacy, which states that a single observation of a rare event does not contradict that the event is in fact rare. It is the observation of a plurality of purportedly rare events that increasingly undermines the hypothesis that they are rare, i.e. the validity of the assumed model. A proper modelling of this process of gradual loss of confidence in a hypothesis would involve the designation of prior probability not just to the hypothesis itself but to all possible alternative hypotheses. For this reason, statistical hypothesis testing works not so much by confirming a hypothesis considered to be likely, but by refuting hypotheses considered unlikely.

Table of numerical values

Because of the exponentially decreasing tails of the normal distribution, the odds of higher deviations decrease very quickly. The table below lists, for each range around the mean, the expected fraction of the population inside and outside the range, and the approximate frequency with which a daily event would fall outside it:

| Range | Expected fraction of population inside range | Expected fraction of population outside range | Approx. expected frequency outside range | Approx. frequency outside range for daily event |
|---|---|---|---|---|
| μ ± 0.5σ | 0.382924922548026 | 0.6171 = 61.71% | 3 in 5 | Four or five times a week |
| μ ± σ | 0.682689492137086 | 0.3173 = 31.73% | 1 in 3 | Twice or thrice a week |
| μ ± 1.5σ | 0.866385597462284 | 0.1336 = 13.36% | 2 in 15 | Weekly |
| μ ± 2σ | 0.954499736103642 | 0.04550 = 4.550% | 1 in 22 | Every three weeks |
| μ ± 2.5σ | 0.987580669348448 | 0.01242 = 1.242% | 1 in 81 | Quarterly |
| μ ± 3σ | 0.997300203936740 | 0.002700 = 0.270% = 2.700‰ | 1 in 370 | Yearly |
| μ ± 3.5σ | 0.999534741841929 | 0.0004653 = 0.04653% = 465.3 ppm | 1 in 2149 | Every 6 years |
| μ ± 4σ | 0.999936657516334 | 6.334×10⁻⁵ = 63.34 ppm | 1 in 15787 | Every 43 years (twice in a lifetime) |
| μ ± 4.5σ | 0.999993204653751 | 6.795×10⁻⁶ = 6.795 ppm | 1 in 147160 | Every 403 years (once in the modern era) |
| μ ± 5σ | 0.999999426696856 | 5.733×10⁻⁷ = 0.5733 ppm = 573.3 ppb | 1 in 1744278 | Every 4776 years (once in recorded history) |
| μ ± 5.5σ | 0.999999962020875 | 3.798×10⁻⁸ = 37.98 ppb | 1 in 26330254 | Every 72090 years (thrice in history of modern humankind) |
| μ ± 6σ | 0.999999998026825 | 1.973×10⁻⁹ = 1.973 ppb | 1 in 506797346 | Every 1.38 million years (twice in history of humankind) |
| μ ± 6.5σ | 0.999999999919680 | 8.032×10⁻¹¹ = 0.08032 ppb = 80.32 ppt | 1 in 12450197393 | Every 34 million years (twice since the extinction of dinosaurs) |
| μ ± 7σ | 0.999999999997440 | 2.560×10⁻¹² = 2.560 ppt | 1 in 390682215445 | Every 1.07 billion years (four occurrences in history of Earth) |
| μ ± 7.5σ | 0.999999999999936 | 6.382×10⁻¹⁴ = 63.82 ppq | 1 in 15669601204101 | Once every 43 billion years (never in the history of the Universe, twice in the future of the Local Group before its merger) |
| μ ± 8σ | 0.999999999999999 | 1.244×10⁻¹⁵ = 1.244 ppq | 1 in 803734397655348 | Once every 2.2 trillion years (never in the history of the Universe, once during the life of a red dwarf) |
| μ ± xσ | $\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)$ | $1 - \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)$ | 1 in $\frac{1}{1 - \operatorname{erf}(x/\sqrt{2})}$ | Every $\frac{1}{1 - \operatorname{erf}(x/\sqrt{2})}$ days |
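
Every row follows from the closed form in the last line; a short sketch that regenerates the numeric columns (double-precision floats lose accuracy in the most extreme rows):

    # Regenerate the table's fractions from erf(x / sqrt(2)).
    from math import erf, sqrt

    for x in [0.5 * k for k in range(1, 17)]:   # 0.5, 1.0, ..., 8.0
        inside = erf(x / sqrt(2))
        print(f"mu +/- {x}sigma: inside {inside:.15f}, 1 in {1 / (1 - inside):,.1f}")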

