
Mills ratio


In probability theory, the Mills ratio (or Mills's ratio) of a continuous random variable $X$ is the function

$$m(x) := \frac{\bar{F}(x)}{f(x)},$$

where $f(x)$ is the probability density function, and

$$\bar{F}(x) := \Pr[X > x] = \int_{x}^{+\infty} f(u)\,du$$

is the complementary cumulative distribution function (also called the survival function). The concept is named after John P. Mills. The Mills ratio is related to the hazard rate $h(x)$, which is defined as

$$h(x) := \lim_{\delta \to 0} \frac{1}{\delta} \Pr[\, x < X \le x + \delta \mid X > x \,]$$

by

$$m(x) = \frac{1}{h(x)}.$$
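
As a concrete illustration, the following Python sketch evaluates $m(x)$ and $h(x)$ for a standard normal variable. It assumes SciPy is available; the helper names mills_ratio and hazard_rate are ours, not from the article.

```python
# Mills ratio m(x) = Fbar(x)/f(x) and hazard h(x) = 1/m(x) for the
# standard normal distribution; a minimal sketch assuming SciPy.
from scipy.stats import norm

def mills_ratio(x):
    """Survival function over density: m(x) = sf(x) / pdf(x)."""
    return norm.sf(x) / norm.pdf(x)

def hazard_rate(x):
    """Hazard rate, the reciprocal of the Mills ratio."""
    return 1.0 / mills_ratio(x)

for x in (0.0, 1.0, 2.0):
    print(f"x = {x}: m(x) = {mills_ratio(x):.6f}, h(x) = {hazard_rate(x):.6f}")
```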

Upper and lower bounds

When $X$ has a standard normal distribution, the following bounds hold for $x > 0$:

$$\frac{x}{x^{2}+1} < m(x) < \frac{1}{x}.$$
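
These bounds are easy to verify numerically; the sketch below (again assuming NumPy and SciPy) checks them on a grid of positive $x$:

```python
# Numerical check of x/(x^2 + 1) < m(x) < 1/x for the standard
# normal Mills ratio on a grid of positive x (NumPy/SciPy assumed).
import numpy as np
from scipy.stats import norm

xs = np.linspace(0.5, 5.0, 10)
m = norm.sf(xs) / norm.pdf(xs)            # Mills ratio values
lower, upper = xs / (xs**2 + 1), 1 / xs   # the two bounds
assert np.all((lower < m) & (m < upper))  # holds at every grid point
```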


Example

If $X$ has a standard normal distribution, then

$$m(x) \sim 1/x,$$

where the sign $\sim$ means that the quotient of the two functions converges to 1 as $x \to +\infty$; see Q-function for details. More precise asymptotics can be given.
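
The convergence can also be seen numerically: under the same SciPy assumption as above, $x \cdot m(x)$ approaches 1 as $x$ grows.

```python
# Check that x * m(x) -> 1 as x -> +infinity (SciPy assumed).
from scipy.stats import norm

for x in (2.0, 5.0, 10.0, 20.0):
    m = norm.sf(x) / norm.pdf(x)
    print(f"x = {x:5.1f}: x*m(x) = {x * m:.6f}")  # tends to 1
```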

Inverse Mills ratio

The inverse Mills ratio is the ratio of the probability density function to the complementary cumulative distribution function of a distribution. Its use is often motivated by the following property of the truncated normal distribution. If $X$ is a random variable having a normal distribution with mean $\mu$ and variance $\sigma^{2}$, then

$$\begin{aligned} \operatorname{E}[X \mid X > \alpha] &= \mu + \sigma \frac{\phi\big(\tfrac{\alpha - \mu}{\sigma}\big)}{1 - \Phi\big(\tfrac{\alpha - \mu}{\sigma}\big)}, \\ \operatorname{E}[X \mid X < \alpha] &= \mu - \sigma \frac{\phi\big(\tfrac{\alpha - \mu}{\sigma}\big)}{\Phi\big(\tfrac{\alpha - \mu}{\sigma}\big)}, \end{aligned}$$

where $\alpha$ is a constant, $\phi$ denotes the standard normal density function, and $\Phi$ is the standard normal cumulative distribution function. The two fractions are the inverse Mills ratios.
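
These identities can be checked against SciPy's truncated normal distribution. In the sketch below, the names mu, sigma, and alpha mirror the symbols in the text, and SciPy is assumed to be available.

```python
# Verify the truncated-mean formulas against scipy.stats.truncnorm.
from scipy.stats import norm, truncnorm

mu, sigma, alpha = 1.0, 2.0, 0.5
z = (alpha - mu) / sigma                  # standardized truncation point

# E[X | X > alpha] via the inverse Mills ratio, and via truncnorm
upper_imr = mu + sigma * norm.pdf(z) / (1 - norm.cdf(z))
upper_ref = truncnorm.mean(z, float("inf"), loc=mu, scale=sigma)

# E[X | X < alpha] likewise
lower_imr = mu - sigma * norm.pdf(z) / norm.cdf(z)
lower_ref = truncnorm.mean(float("-inf"), z, loc=mu, scale=sigma)

print(upper_imr, upper_ref)               # the two values agree
print(lower_imr, lower_ref)
```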

Use in regression

A common application of the inverse Mills ratio (sometimes also called the "non-selection hazard") arises in regression analysis to account for a possible selection bias. If a dependent variable is censored (i.e., a positive outcome is not observed for all observations), the observations concentrate at zero. This problem was first acknowledged by Tobin (1958), who showed that if this is not taken into account in the estimation procedure, ordinary least squares estimation will produce biased parameter estimates. With censored dependent variables there is a violation of the Gauss–Markov assumption of zero correlation between the independent variables and the error term.

James Heckman proposed a two-stage estimation procedure using the inverse Mills ratio to correct for the selection bias. In the first step, the probability of observing a positive outcome of the dependent variable is modeled with a probit model. The inverse Mills ratio must be generated from the estimation of a probit model; a logit model cannot be used, because the probit model assumes that the error term follows a standard normal distribution. The estimated parameters are used to calculate the inverse Mills ratio, which is then included as an additional explanatory variable in the second-stage OLS estimation.
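
A minimal sketch of this two-step procedure on simulated data follows. It assumes NumPy, SciPy, and statsmodels are available; all variable names (w, x, y, selected) and the simulated coefficients are illustrative, not taken from the text.

```python
# Heckman two-step on simulated data: probit selection equation,
# inverse Mills ratio, then OLS with the IMR as an extra regressor.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=n)                          # selection covariate
x = rng.normal(size=n)                          # outcome covariate
e_sel = rng.normal(size=n)                      # selection error
e_out = 0.5 * e_sel + rng.normal(scale=0.5, size=n)  # correlated outcome error
selected = (0.5 + w + e_sel) > 0                # which units are observed
y = 1.0 + 2.0 * x + e_out                       # outcome (used only if selected)

# Step 1: probit for selection, then the inverse Mills ratio
W = sm.add_constant(w)
probit = sm.Probit(selected.astype(float), W).fit(disp=0)
z = W @ probit.params                           # estimated linear index
imr = norm.pdf(z) / norm.cdf(z)                 # inverse Mills ratio

# Step 2: OLS on the selected sample with the IMR appended
X = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], X).fit()
print(ols.params)  # intercept, slope on x, coefficient on the IMR
```

A significant coefficient on the inverse Mills ratio term is commonly read as evidence of selection bias in the uncorrected OLS regression.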

References

  1. Grimmett, G.; Stirzaker, D. (2001). Probability and Random Processes (3rd ed.). Oxford: Oxford University Press. p. 98. ISBN 0-19-857223-9.
  2. Mills, John P. (1926). "Table of the Ratio: Area to Bounding Ordinate, for Any Portion of Normal Curve". Biometrika. 18 (3/4): 395–400. doi:10.1093/biomet/18.3-4.395. JSTOR 2331957.
  3. Klein, J. P.; Moeschberger, M. L. (2003). Survival Analysis: Techniques for Censored and Truncated Data. New York: Springer. p. 27. ISBN 0-387-95399-X.
  4. "Upper & lower bounds for the normal distribution function". www.johndcook.com. 2018-06-02. Retrieved 2023-12-20.
  5. Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge: Cambridge University Press. doi:10.1017/9781108627771.
  6. Small, Christopher G. (2010). Expansions and Asymptotics for Statistics. Monographs on Statistics & Applied Probability. Vol. 115. CRC Press. pp. 48, 50–51, 88–90. ISBN 978-1-4200-1102-9.
  7. Greene, W. H. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. p. 759. ISBN 0-13-066189-9.
  8. Tobin, J. (1958). "Estimation of relationships for limited dependent variables" (PDF). Econometrica. 26 (1): 24–36. doi:10.2307/1907382. JSTOR 1907382.
  9. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 366–368. ISBN 0-674-00560-0.
  10. Heckman, J. J. (1979). "Sample Selection as a Specification Error". Econometrica. 47 (1): 153–161. doi:10.2307/1912352. JSTOR 1912352.
  11. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 368–373. ISBN 0-674-00560-0.
  12. Heckman, J. J. (1976). "The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models". Annals of Economic and Social Measurement. 5 (4): 475–492.
