
Studentized residual

For broader coverage of this topic, see Studentization.

In statistics, a studentized residual is the dimensionless ratio resulting from the division of a residual by an estimate of its standard deviation, both expressed in the same units. It is a form of Student's t-statistic, with the estimate of error varying between points.

This is an important technique in the detection of outliers. It is among several named in honor of William Sealy Gosset, who wrote under the pseudonym "Student" (e.g., Student's t-distribution). Dividing a statistic by a sample standard deviation is called studentizing, in analogy with standardizing and normalizing.

Motivation

See also: Errors and residuals in statistics

The key reason for studentizing is that, in regression analysis of a multivariate distribution, the variances of the residuals at different input variable values may differ, even if the variances of the errors at these different input variable values are equal. The issue is the difference between errors and residuals in statistics, particularly the behavior of residuals in regressions.

Consider the simple linear regression model

Y = \alpha_0 + \alpha_1 X + \varepsilon.

Given a random sample (X_i, Y_i), i = 1, ..., n, each pair (X_i, Y_i) satisfies

Y_i = \alpha_0 + \alpha_1 X_i + \varepsilon_i,

where the errors ε_i are independent and all have the same variance σ². The residuals are not the true errors, but estimates, based on the observable data. When the method of least squares is used to estimate α_0 and α_1, the residuals ε̂_i, unlike the errors ε_i, cannot be independent, since they satisfy the two constraints

\sum_{i=1}^{n} \widehat{\varepsilon}_i = 0

and

\sum_{i=1}^{n} \widehat{\varepsilon}_i x_i = 0.

(Here ε_i is the ith error, and ε̂_i is the ith residual.)

The residuals, unlike the errors, do not all have the same variance: the variance decreases as the corresponding x-value gets farther from the average x-value. This is not a feature of the data itself, but of the regression fitting values at the ends of the domain more closely. It is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence. It can likewise be seen from the fact that the residuals at endpoints depend greatly on the slope of the fitted line, while the residuals at the middle are relatively insensitive to the slope. The fact that the variances of the residuals differ, even though the variances of the true errors are all equal to each other, is the principal reason for the need for studentization.
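This effect can be illustrated numerically. The following minimal sketch (Python/NumPy, with made-up design points and coefficients) simulates many datasets at fixed x-values and shows the empirical residual variance shrinking toward the ends of the domain, even though the true error variance is constant:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.array([0., 1., 2., 3., 4.])             # fixed design points (made up)
    X = np.column_stack([np.ones_like(x), x])      # design matrix [1, x_i]

    reps = 50_000
    res = np.empty((reps, len(x)))
    for k in range(reps):
        y = 2.0 + 0.5 * x + rng.standard_normal(len(x))   # true sigma = 1
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        res[k] = y - X @ beta                      # residuals for this replication

    # Empirical residual variances: smallest at x = 0 and x = 4, largest in the middle
    print(res.var(axis=0))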

It is not simply a matter of the population parameters (mean and standard deviation) being unknown – it is that regressions yield different residual distributions at different data points, unlike point estimators of univariate distributions, which share a common distribution for residuals.

Background

For this simple model, the design matrix is

X = \begin{bmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix}

and the hat matrix H is the matrix of the orthogonal projection onto the column space of the design matrix:

H = X (X^{T} X)^{-1} X^{T}.

The leverage h_ii is the ith diagonal entry of the hat matrix. The variance of the ith residual is

\operatorname{var}(\widehat{\varepsilon}_i) = \sigma^2 (1 - h_{ii}).

When the design matrix X has only two columns (as in the example above), this is equal to

\operatorname{var}(\widehat{\varepsilon}_i) = \sigma^2 \left( 1 - \frac{1}{n} - \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2} \right).

In the case of an arithmetic mean, the design matrix X has only one column (a vector of ones), and this is simply:

\operatorname{var}(\widehat{\varepsilon}_i) = \sigma^2 \left( 1 - \frac{1}{n} \right).
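These formulas are straightforward to check numerically. The sketch below (Python/NumPy, with arbitrary illustrative x-values) builds the design matrix and hat matrix for the simple linear model and verifies the two-column closed form for the leverages:

    import numpy as np

    x = np.array([1., 2., 3., 4., 10.])            # arbitrary design points
    n = len(x)
    X = np.column_stack([np.ones(n), x])           # design matrix with columns [1, x_i]

    # Hat matrix: orthogonal projection onto the column space of X
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                                 # leverages h_ii

    # Closed form for the two-column case
    h_closed = 1/n + (x - x.mean())**2 / ((x - x.mean())**2).sum()
    assert np.allclose(h, h_closed)

    sigma2 = 1.0                                   # suppose unit error variance
    print("residual variances:", sigma2 * (1 - h)) # sigma^2 * (1 - h_ii)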

Calculation

Given the definitions above, the studentized residual is then

t_i = \frac{\widehat{\varepsilon}_i}{\widehat{\sigma} \sqrt{1 - h_{ii}}},

where h_ii is the leverage, and σ̂ is an appropriate estimate of σ (see below).

In the case of a mean, this is equal to:

t_i = \frac{\widehat{\varepsilon}_i}{\widehat{\sigma} \sqrt{(n-1)/n}}.
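Putting the pieces together, a minimal implementation might look like the following sketch (the function name is hypothetical, and σ̂ is the usual all-residuals estimate defined in the next section):

    import numpy as np

    def internally_studentized(X, y):
        """t_i = eps_hat_i / (sigma_hat * sqrt(1 - h_ii)), with sigma_hat
        estimated from all n residuals (the internal estimate below)."""
        n, m = X.shape
        H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
        h = np.diag(H)                             # leverages h_ii
        r = y - H @ y                              # residuals (I - H) y
        sigma2 = (r @ r) / (n - m)                 # usual estimate of sigma^2
        return r / np.sqrt(sigma2 * (1 - h))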

Internal and external studentization

The usual estimate of σ², based on all n residuals, is

\widehat{\sigma}^{2} = \frac{1}{n-m} \sum_{j=1}^{n} \widehat{\varepsilon}_j^{\,2},

where m is the number of parameters in the model (2 in our example).

But if the ith case is suspected of being improbably large, then it would also not be normally distributed. Hence it is prudent to exclude the ith observation from the process of estimating the variance when one is considering whether the ith case may be an outlier, and instead use the variance estimate

\widehat{\sigma}_{(i)}^{2} = \frac{1}{n-m-1} \sum_{\substack{j=1 \\ j \neq i}}^{n} \widehat{\varepsilon}_j^{\,2},

which is based on all the residuals except the suspect ith residual. The subscript (i) emphasizes that, for suspect i, the squared residuals ε̂_j² (j ≠ i) are computed with the ith case excluded.

If the estimate of σ² includes the ith case, the result is called the internally studentized residual, t_i (also known as the standardized residual). If the estimate σ̂_(i)² is used instead, excluding the ith case, it is called the externally studentized residual, t_{i(i)}.
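A sketch of the external version, following the formula above (again with a hypothetical function name; the leave-one-out variance estimates can be computed in one pass from the full-fit residuals):

    import numpy as np

    def externally_studentized(X, y):
        """Externally studentized residuals: sigma_hat_(i)^2 omits the i-th
        squared residual, as in the formula above."""
        n, m = X.shape
        H = X @ np.linalg.inv(X.T @ X) @ X.T
        h = np.diag(H)
        r = y - H @ y                              # full-fit residuals
        rss = (r ** 2).sum()
        sigma2_i = (rss - r ** 2) / (n - m - 1)    # leave the i-th term out
        return r / np.sqrt(sigma2_i * (1 - h))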

Distribution

"Tau distribution" redirects here. Not to be confused with Tau coefficient.

If the errors are independent and normally distributed with expected value 0 and variance σ², then the probability distribution of the ith externally studentized residual t_{i(i)} is a Student's t-distribution with n − m − 1 degrees of freedom, and can range from −∞ to +∞.

On the other hand, the internally studentized residuals are in the range 0 ± √ν, where ν = n − m is the number of residual degrees of freedom. If t_i represents the internally studentized residual, and again assuming that the errors are independent identically distributed Gaussian variables, then

t_i \sim \sqrt{\nu} \, \frac{t}{\sqrt{t^2 + \nu - 1}},

where t is a random variable distributed as Student's t-distribution with ν − 1 degrees of freedom. In fact, this implies that t_i²/ν follows the beta distribution B(1/2, (ν − 1)/2). The distribution above is sometimes referred to as the tau distribution; it was first derived by Thompson in 1935.[3]
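This relationship can be checked by simulation. The sketch below (Python/NumPy, with a made-up design) compares the empirical quantiles of one internally studentized residual against draws of √ν · t/√(t² + ν − 1) with t drawn from a Student's t-distribution with ν − 1 degrees of freedom; the two columns should agree closely:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 8, 2
    nu = n - m                                     # residual degrees of freedom
    x = np.linspace(0., 1., n)
    X = np.column_stack([np.ones(n), x])
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages

    reps = 20_000
    ts = np.empty(reps)
    for k in range(reps):
        y = 1.0 + 2.0 * x + rng.standard_normal(n)
        r = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        sigma2 = (r @ r) / nu
        ts[k] = r[0] / np.sqrt(sigma2 * (1 - h[0]))  # internally studentized, case i = 0

    # Transform t_{nu-1} draws through the stated relationship
    t = rng.standard_t(nu - 1, size=reps)
    tau = np.sqrt(nu) * t / np.sqrt(t ** 2 + nu - 1)

    for q in (0.05, 0.25, 0.5, 0.75, 0.95):
        print(q, np.quantile(ts, q), np.quantile(tau, q))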

When ν = 3, the internally studentized residuals are uniformly distributed between −√3 and +√3. If there is only one residual degree of freedom, the above formula for the distribution of internally studentized residuals doesn't apply. In this case, the t_i are all either +1 or −1, with 50% chance for each.

The standard deviation of the distribution of internally studentized residuals is always 1, but this does not imply that the standard deviation of all the t_i of a particular experiment is 1. For instance, the internally studentized residuals when fitting a straight line going through (0, 0) to the points (1, 4), (2, −1), (2, −1) are √2, −√5/5, −√5/5, and the standard deviation of these is not 1.
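The numbers in this example can be reproduced directly (a minimal NumPy check; the model is a line through the origin, so the design matrix has the single column x and m = 1):

    import numpy as np

    x = np.array([1., 2., 2.])
    y = np.array([4., -1., -1.])
    X = x[:, None]                                 # single-column design matrix

    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)                                 # leverages: 1/9, 4/9, 4/9
    r = y - H @ y                                  # fitted slope is 0, so r = y
    sigma2 = (r @ r) / (len(x) - 1)                # n - m with m = 1
    t = r / np.sqrt(sigma2 * (1 - h))

    print(t)        # [ 1.4142..., -0.4472..., -0.4472...]  i.e. sqrt(2), -sqrt(5)/5
    print(t.std())  # not equal to 1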

Note that any pair of studentized residuals t_i and t_j (where i ≠ j) have the same distribution but are not independent, and hence are not i.i.d.: the residuals are constrained to sum to 0 and to be orthogonal to the design matrix.

Software implementations

Many programs and statistics packages, such as R and Python, include implementations of studentized residuals.

Language/Program   Function                Notes
R                  rstandard(model, ...)   internally studentized; see [1]
R                  rstudent(model, ...)    externally studentized; see [1]
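In Python, studentized residuals are available through the statsmodels influence diagnostics. A short usage sketch with made-up data is below; the attribute names resid_studentized_internal and resid_studentized_external are as documented for statsmodels' OLSInfluence, but versions may vary:

    import numpy as np
    import statsmodels.api as sm

    x = np.array([1., 2., 3., 4., 10.])
    y = np.array([1.1, 2.3, 2.8, 4.2, 9.5])       # made-up data
    X = sm.add_constant(x)                         # adds the intercept column

    results = sm.OLS(y, X).fit()
    infl = results.get_influence()

    print(infl.resid_studentized_internal)         # internally studentized
    print(infl.resid_studentized_external)         # externally studentized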



References

  1. "Regression Deletion Diagnostics". R documentation.
  2. Pope, Allen J. (1976). "The Statistics of Residuals and the Detection of Outliers". U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey, Geodetic Research and Development Laboratory. 136 pages, eq. (6).
  3. Thompson, William R. (1935). "On a Criterion for the Rejection of Observations and the Distribution of the Ratio of Deviation to Sample Standard Deviation". The Annals of Mathematical Statistics. 6 (4): 214–219. doi:10.1214/aoms/1177732567.
