
Cook's distance

Measure of the influence of a data point in regression analysis

In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis. In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be desirable to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.

Definition

Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis.

For the algebraic expression, first define

$$\underset{n\times 1}{\mathbf{y}} = \underset{n\times p}{\mathbf{X}}\;\underset{p\times 1}{\boldsymbol{\beta}} \;+\; \underset{n\times 1}{\boldsymbol{\varepsilon}}$$

where $\boldsymbol{\varepsilon} \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$ is the error term, $\boldsymbol{\beta} = [\beta_0 \; \beta_1 \; \cdots \; \beta_{p-1}]^{\mathsf{T}}$ is the coefficient vector, $p$ is the number of covariates or predictors for each observation, and $\mathbf{X}$ is the design matrix including a constant. The least squares estimator is then $\mathbf{b} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$, and consequently the fitted (predicted) values for the mean of $\mathbf{y}$ are

$$\widehat{\mathbf{y}} = \mathbf{X}\mathbf{b} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y} = \mathbf{H}\mathbf{y}$$

where $\mathbf{H} \equiv \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$ is the projection matrix (or hat matrix). The $i$-th diagonal element of $\mathbf{H}$, given by $h_{ii} \equiv \mathbf{x}_i^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{x}_i$, is known as the leverage of the $i$-th observation. Similarly, the $i$-th element of the residual vector $\mathbf{e} = \mathbf{y} - \widehat{\mathbf{y}} = (\mathbf{I} - \mathbf{H})\mathbf{y}$ is denoted by $e_i$.

Cook's distance $D_i$ of observation $i$ (for $i = 1, \dots, n$) is defined as the sum of all the changes in the regression model when observation $i$ is removed from it:

$$D_i = \frac{\sum_{j=1}^{n}\left(\widehat{y}_j - \widehat{y}_{j(i)}\right)^2}{p\,s^2}$$

where $p$ is the rank of the model (i.e., the number of independent variables in the design matrix), $\widehat{y}_{j(i)}$ is the fitted response value obtained when excluding observation $i$, and $s^2 = \frac{\mathbf{e}^{\mathsf{T}}\mathbf{e}}{n-p}$ is the mean squared error of the regression model.

Equivalently, it can be expressed using the leverage $h_{ii}$:

$$D_i = \frac{e_i^2}{p\,s^2} \cdot \frac{h_{ii}}{(1 - h_{ii})^2}.$$
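To make the definition concrete, the following minimal NumPy sketch (all data and variable names are illustrative, not taken from any particular source) computes $D_i$ both from the definition, by refitting with each observation deleted, and from the closed-form leverage expression above; the two agree to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                                   # n observations, p columns incl. intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# Full-sample least squares fit
b = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ b
e = y - y_hat
s2 = e @ e / (n - p)                           # mean squared error
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat (projection) matrix
h = np.diag(H)                                 # leverages h_ii

# Definition: refit without observation i and compare all fitted values
D_def = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.solve(X[keep].T @ X[keep], X[keep].T @ y[keep])
    D_def[i] = np.sum((y_hat - X @ b_i) ** 2) / (p * s2)

# Closed form: D_i = e_i^2 / (p s^2) * h_ii / (1 - h_ii)^2
D_closed = e**2 / (p * s2) * h / (1 - h) ** 2

assert np.allclose(D_def, D_closed)
```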

Detecting highly influential observations

There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an $F$ distribution with $p$ and $n-p$ degrees of freedom (as defined for the design matrix $\mathbf{X}$ above), the median point (i.e., $F_{0.5}(p, n-p)$) can be used as a cut-off. Since this value is close to 1 for large $n$, a simple operational guideline of $D_i > 1$ has been suggested.
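Continuing the NumPy sketch above, both cut-offs can be applied as follows (SciPy is assumed for the $F$ quantile):

```python
from scipy.stats import f

cutoff_f = f.ppf(0.5, p, n - p)                # median of F(p, n - p)
flagged_f = np.where(D_closed > cutoff_f)[0]   # points above the F-median cut-off
flagged_1 = np.where(D_closed > 1.0)[0]        # simple operational rule D_i > 1
```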

The $p$-dimensional random vector $\mathbf{b} - \mathbf{b}_{(i)}$, which is the change of $\mathbf{b}$ due to deletion of the $i$-th observation, has a covariance matrix of rank one and is therefore distributed entirely over a one-dimensional subspace (a line, say $L$) of the $p$-dimensional space. This distributional property implies that information about the influence of the $i$-th observation provided by $\mathbf{b} - \mathbf{b}_{(i)}$ should be obtained not from outside of the line $L$ but from the line $L$ itself. However, in the introduction of Cook's distance, a scaling matrix of full rank $p$ is chosen, and as a result $\mathbf{b} - \mathbf{b}_{(i)}$ is treated as if it were a random vector distributed over the whole $p$-dimensional space. Hence the information about the influence of the $i$-th observation conveyed by Cook's distance comes from the whole $p$-dimensional space, and the measure is likely to distort the real influence of observations, hindering the correct identification of influential observations.


Relationship to other influence measures (and interpretation)

$D_i$ can be expressed using the leverage ($0 \leq h_{ii} \leq 1$) and the square of the internally Studentized residual ($0 \leq t_i^2$), as follows:

$$D_i = \frac{e_i^2}{p\,s^2} \cdot \frac{h_{ii}}{(1-h_{ii})^2} = \frac{1}{p} \cdot \frac{e_i^2}{\frac{1}{n-p}\sum_{j=1}^{n}\widehat{\varepsilon}_j^{\,2}\,(1-h_{ii})} \cdot \frac{h_{ii}}{1-h_{ii}} = \frac{1}{p} \cdot t_i^2 \cdot \frac{h_{ii}}{1-h_{ii}}.$$

The benefit of the last formulation is that it clearly shows the relationship of $t_i^2$ and $h_{ii}$ to $D_i$ (while $p$ and $n$ are the same for all observations). If $t_i^2$ is large, then (for non-extreme values of $h_{ii}$) it will increase $D_i$. If $h_{ii}$ is close to 0, then $D_i$ will be small, while if $h_{ii}$ is close to 1, then $D_i$ will become very large (as long as $t_i^2 > 0$, i.e., observation $i$ is not exactly on the regression line fitted without observation $i$).
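As a hedged illustration, this decomposition can be checked against statsmodels' influence diagnostics, reusing the illustrative X and y from the earlier sketch; the attribute names below are those of its OLSInfluence class.

```python
import numpy as np
import statsmodels.api as sm

res = sm.OLS(y, X).fit()
infl = res.get_influence()

t2 = infl.resid_studentized_internal ** 2      # squared internally studentized residuals
h = infl.hat_matrix_diag                       # leverages h_ii
D_decomp = t2 * h / (1 - h) / X.shape[1]       # (1/p) * t_i^2 * h_ii / (1 - h_ii)

# cooks_distance returns (distances, p-values); compare against the decomposition
assert np.allclose(D_decomp, infl.cooks_distance[0])
```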

$D_i$ is related to DFFITS through the following relationship (note that $\frac{\widehat{\sigma}}{\widehat{\sigma}_{(i)}} t_i = t_{i(i)}$ is the externally studentized residual, where $\widehat{\sigma}$ is the full-sample estimate of the residual standard deviation and $\widehat{\sigma}_{(i)}$ is the corresponding estimate with the $i$-th observation deleted):

$$D_i = \frac{1}{p} \cdot t_i^2 \cdot \frac{h_{ii}}{1-h_{ii}} = \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \frac{\widehat{\sigma}^2}{\widehat{\sigma}_{(i)}^2} \cdot t_i^2 \cdot \frac{h_{ii}}{1-h_{ii}} = \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \left(t_{i(i)}\sqrt{\frac{h_{ii}}{1-h_{ii}}}\right)^2 = \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \text{DFFITS}^2$$
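A short NumPy check of this identity, continuing the earlier sketch; here $\widehat{\sigma}_{(i)}^2$ is obtained from the standard leave-one-out variance identity rather than by refitting.

```python
# Leave-one-out residual variance: remove obs i's contribution from (n - p) s^2
sigma2 = s2
sigma2_i = ((n - p) * sigma2 - e**2 / (1 - h)) / (n - p - 1)

t_ext = e / np.sqrt(sigma2_i * (1 - h))        # externally studentized residual t_{i(i)}
dffits = t_ext * np.sqrt(h / (1 - h))          # DFFITS

D_from_dffits = (sigma2_i / sigma2) * dffits**2 / p
assert np.allclose(D_from_dffits, D_closed)
```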

$D_i$ can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters. This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases where the particular observation is either included in or excluded from the regression analysis.

An alternative to $D_i$ has been proposed. Instead of considering the influence a single observation has on the overall model, the statistic $S_i$ serves as a measure of how sensitive the prediction of the $i$-th observation is to the deletion of each observation in the original data set. It can be formulated as a weighted linear combination of the $D_j$'s of all data points. Again, the projection matrix is involved in the calculation to obtain the required weights:

$$S_i = \frac{\sum_{j=1}^{n}\left(\widehat{y}_i - \widehat{y}_{i(j)}\right)^2}{p\,s^2\,h_{ii}} = \sum_{j=1}^{n}\frac{h_{ij}^2 \cdot D_j}{h_{ii}\cdot h_{jj}} = \sum_{j=1}^{n}\rho_{ij}^2 \cdot D_j$$

In this context, $\rho_{ij}$ ($\leq 1$) resembles the correlation between the predictions $\widehat{y}_i$ and $\widehat{y}_j$.
In contrast to $D_i$, the distribution of $S_i$ is asymptotically normal for large sample sizes and models with many predictors. In the absence of outliers the expected value of $S_i$ is approximately $p^{-1}$. An influential observation can be identified if

$$\left|S_i - \operatorname{med}(S)\right| \geq 4.5 \cdot \operatorname{MAD}(S)$$

with $\operatorname{med}(S)$ as the median and $\operatorname{MAD}(S)$ as the median absolute deviation of all $S$-values within the original data set, i.e., a robust measure of location and a robust measure of scale for the distribution of $S_i$. The factor 4.5 covers approximately 3 standard deviations of $S$ around its centre.
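A minimal sketch of this rule, reusing the hat matrix, leverages and Cook's distances from the earlier NumPy sketch; $S_i$ is built from the weighted combination of the $D_j$'s and the median/MAD cut-off is then applied.

```python
# S_i = sum_j rho_ij^2 D_j, with rho_ij^2 = h_ij^2 / (h_ii h_jj)
rho2 = H**2 / np.outer(h, h)
S = rho2 @ D_closed

med = np.median(S)
mad = np.median(np.abs(S - med))               # median absolute deviation
influential = np.where(np.abs(S - med) >= 4.5 * mad)[0]
```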
When compared to Cook's distance, $S_i$ was found to perform well for high- and intermediate-leverage outliers, even in the presence of masking effects for which $D_i$ failed.
$D_i$ and $S_i$ are closely related because both can be expressed in terms of the matrix $\mathbf{T}$, which holds the effects of the deletion of the $j$-th data point on the $i$-th prediction:

$$\mathbf{T} = \begin{bmatrix} \widehat{y}_1 - \widehat{y}_{1(1)} & \widehat{y}_1 - \widehat{y}_{1(2)} & \cdots & \widehat{y}_1 - \widehat{y}_{1(n)} \\ \widehat{y}_2 - \widehat{y}_{2(1)} & \widehat{y}_2 - \widehat{y}_{2(2)} & \cdots & \widehat{y}_2 - \widehat{y}_{2(n)} \\ \vdots & \vdots & \ddots & \vdots \\ \widehat{y}_n - \widehat{y}_{n(1)} & \widehat{y}_n - \widehat{y}_{n(2)} & \cdots & \widehat{y}_n - \widehat{y}_{n(n)} \end{bmatrix} = \mathbf{H}\mathbf{E}\mathbf{G} = \mathbf{H}\begin{bmatrix} e_1 & 0 & \cdots & 0 \\ 0 & e_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e_n \end{bmatrix}\begin{bmatrix} \frac{1}{1-h_{11}} & 0 & \cdots & 0 \\ 0 & \frac{1}{1-h_{22}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{1-h_{nn}} \end{bmatrix}$$

With $\mathbf{T}$ at hand, $\mathbf{D}$ is given by:

$$\mathbf{D} = \begin{bmatrix} D_1 \\ D_2 \\ \vdots \\ D_n \end{bmatrix} = \frac{1}{p\,s^2}\operatorname{diag}\left(\mathbf{T}^{\mathsf{T}}\mathbf{T}\right) = \frac{1}{p\,s^2}\operatorname{diag}\left(\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{E}\mathbf{G}\right) = \operatorname{diag}(\mathbf{M})$$

where $\mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{H}$ if $\mathbf{H}$ is symmetric and idempotent, which is not necessarily the case. In contrast, $\mathbf{S}$ can be calculated as:

$$\mathbf{S} = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_n \end{bmatrix} = \frac{1}{p\,s^2}\mathbf{F}\operatorname{diag}\left(\mathbf{T}\mathbf{T}^{\mathsf{T}}\right) = \frac{1}{p\,s^2}\begin{bmatrix} \frac{1}{h_{11}} & 0 & \cdots & 0 \\ 0 & \frac{1}{h_{22}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{h_{nn}} \end{bmatrix}\operatorname{diag}\left(\mathbf{T}\mathbf{T}^{\mathsf{T}}\right) = \frac{1}{p\,s^2}\mathbf{F}\operatorname{diag}\left(\mathbf{H}\mathbf{E}\mathbf{G}\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}\right) = \mathbf{F}\operatorname{diag}(\mathbf{P})$$

where $\operatorname{diag}(\mathbf{A})$ extracts the main diagonal of a square matrix $\mathbf{A}$. In this context, $\mathbf{M} = p^{-1}s^{-2}\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{E}\mathbf{G}$ is referred to as the influence matrix, whereas $\mathbf{P} = p^{-1}s^{-2}\mathbf{H}\mathbf{E}\mathbf{G}\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}$ resembles the so-called sensitivity matrix. An eigenvector analysis of $\mathbf{M}$ and $\mathbf{P}$, which both share the same eigenvalues, serves as a tool in outlier detection, although the eigenvectors of the sensitivity matrix are more powerful.
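The matrix route can be sketched directly with the quantities from the earlier NumPy example: $\mathbf{T} = \mathbf{H}\mathbf{E}\mathbf{G}$ is formed explicitly and $\mathbf{D}$ and $\mathbf{S}$ are recovered from the diagonals described above.

```python
# T holds the effect of deleting observation j on prediction i
E = np.diag(e)                                 # residuals on the diagonal
G = np.diag(1.0 / (1.0 - h))                   # 1/(1 - h_jj) on the diagonal
T = H @ E @ G

D_mat = np.diag(T.T @ T) / (p * s2)            # Cook's distances from diag(T'T)
F = np.diag(1.0 / h)                           # 1/h_ii on the diagonal
S_mat = F @ np.diag(T @ T.T) / (p * s2)        # sensitivity statistic from diag(TT')

assert np.allclose(D_mat, D_closed)
assert np.allclose(S_mat, S)
```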

Software implementations

Many programs and statistics packages, such as R, Python, Julia, etc., include implementations of Cook's distance.

Language/Program   Function                       Notes
Stata              predict, cooksd                See
R                  cooks.distance(model, ...)     See
Python             CooksDistance().fit(X, y)      See
Julia              cooksdistance(model, ...)      See
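For example, in Python one common route is statsmodels, whose OLS influence object exposes Cook's distances directly; the table's CooksDistance().fit(X, y) entry comes from a separate package. A minimal sketch with illustrative data:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data; any fitted OLS model works the same way
rng = np.random.default_rng(1)
x = rng.normal(size=100)
X = sm.add_constant(x)
y = 2.0 + 3.0 * x + rng.normal(size=100)

model = sm.OLS(y, X).fit()
cooks_d, pvals = model.get_influence().cooks_distance
print(np.argsort(cooks_d)[-5:])                # indices of the five most influential points
```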

Extensions

The high-dimensional influence measure (HIM) is an alternative to Cook's distance for when $p > n$ (i.e., when there are more predictors than observations). While Cook's distance quantifies an individual observation's influence on the least squares regression coefficient estimate, the HIM measures the influence of an observation on the marginal correlations.


Notes

  1. The indices $i$ and $j$ are often interchanged in the original publication, as the projection matrix $\mathbf{H}$ is symmetric in ordinary linear regression, i.e., $h_{ij} = h_{ji}$. Since this is not always the case, e.g., in weighted linear regression, the indices have been written consistently here to account for potential asymmetry and thus allow for direct usage.

References

  1. Mendenhall, William; Sincich, Terry (1996). A Second Course in Statistics: Regression Analysis (5th ed.). Upper Saddle River, NJ: Prentice-Hall. p. 422. ISBN 0-13-396821-9. A measure of overall influence an outlying observation has on the estimated $\beta$ coefficients was proposed by R. D. Cook (1979). Cook's distance, Di, is calculated...
  2. Cook, R. Dennis (February 1977). "Detection of Influential Observations in Linear Regression". Technometrics. 19 (1). American Statistical Association: 15–18. doi:10.2307/1268249. JSTOR 1268249. MR 0436478.
  3. Cook, R. Dennis (March 1979). "Influential Observations in Linear Regression". Journal of the American Statistical Association. 74 (365). American Statistical Association: 169–174. doi:10.2307/2286747. hdl:11299/199280. JSTOR 2286747. MR 0529533.
  4. Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 21–23. ISBN 1400823838.
  5. "Cook's Distance".
  6. "Statistics 512: Applied Linear Models" (PDF). Purdue University. Archived from the original (PDF) on 2016-11-30. Retrieved 2016-03-25.
  7. Bollen, Kenneth A.; Jackman, Robert W. (1990). "Regression Diagnostics: An Expository Treatment of Outliers and Influential Cases". In Fox, John; Long, J. Scott (eds.). Modern Methods of Data Analysis. Newbury Park, CA: Sage. pp. 266. ISBN 0-8039-3366-5.
  8. Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression. New York, NY: Chapman & Hall. hdl:11299/37076. ISBN 0-412-24280-X.
  9. Kim, Myung Geun (31 May 2017). "A cautionary note on the use of Cook's distance". Communications for Statistical Applications and Methods. 24 (3): 317–324. doi:10.5351/csam.2017.24.3.317. ISSN 2383-4757.
  10. On deletion diagnostic statistic in regression
  11. Peña 2005, p. 2.
  12. Peña, Daniel (2005). "A New Statistic for Influence in Linear Regression". Technometrics. 47 (1). American Society for Quality and the American Statistical Association: 1–12. doi:10.1198/004017004000000662. S2CID 1802937.
  13. Peña, Daniel (2006). Pham, Hoang (ed.). Springer Handbook of Engineering Statistics. Springer London. pp. 523–536. doi:10.1007/978-1-84628-288-1. ISBN 978-1-84628-288-1. S2CID 60460007.
  14. High-dimensional influence measure

