Berndt–Hall–Hall–Hausman algorithm


The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. This approximation is based on the information matrix equality and is therefore valid only when maximizing a likelihood function. The BHHH algorithm is named after its four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman.
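
To see why the outer product of the gradient can stand in for the negative Hessian, recall the information matrix equality. In standard maximum-likelihood notation (the per-observation likelihood contribution $\ell_i$ used here is illustrative shorthand, not notation taken from the sources below), the Fisher information can be written in two equivalent ways under the usual regularity conditions:

$$-\operatorname{E}\!\left[\frac{\partial^2 \ln \ell_i(\beta)}{\partial \beta \, \partial \beta'}\right] = \operatorname{E}\!\left[\frac{\partial \ln \ell_i(\beta)}{\partial \beta}\,\frac{\partial \ln \ell_i(\beta)}{\partial \beta'}\right],$$

both evaluated at the true parameter value. Summing score outer products over the sample therefore approximates the negative Hessian of the log-likelihood, which is why the approximation is tied to a correctly specified likelihood rather than to an arbitrary objective function.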

Usage

If a nonlinear model is fitted to data, its coefficients usually have to be estimated by numerical optimization. A number of optimization algorithms share the following general structure. Suppose that the function to be optimized is Q(β). The algorithms are iterative, defining a sequence of approximations $\beta_k$ given by

$$\beta_{k+1} = \beta_k + \lambda_k A_k \frac{\partial Q}{\partial \beta}(\beta_k),$$

where $\beta_k$ is the parameter estimate at step k, and $\lambda_k$ is a parameter (called the step size) which partly determines the particular algorithm. For the BHHH algorithm, $\lambda_k$ is determined by calculations within a given iterative step, involving a line search until a point $\beta_{k+1}$ is found satisfying certain criteria (a simple version of such a search appears in the sketch below). In addition, for the BHHH algorithm, Q has the form

$$Q = \sum_{i=1}^{N} Q_i$$

and $A_k$ is calculated using

$$A_k = \left[ \sum_{i=1}^{N} \frac{\partial \ln Q_i}{\partial \beta}(\beta_k) \, \frac{\partial \ln Q_i}{\partial \beta}(\beta_k)' \right]^{-1}.$$

In other cases, e.g. Newton–Raphson, $A_k$ can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.
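
As a concrete illustration, here is a minimal sketch of the iteration in Python, assuming the user supplies the N × p matrix of per-observation scores $\partial \ln Q_i / \partial \beta$ and the objective Q itself. The names bhhh, scores, and loglik, the Poisson example, and the simple backtracking line search are illustrative choices, not the exact acceptance criteria of the original paper:

    import numpy as np

    def bhhh(scores, loglik, beta0, max_iter=100, tol=1e-8):
        # scores(beta) -> (N, p) array of per-observation score vectors
        #                 d ln Q_i / d beta, evaluated at beta.
        # loglik(beta) -> scalar value of Q, used by the line search.
        beta = np.asarray(beta0, dtype=float)
        for _ in range(max_iter):
            S = scores(beta)
            g = S.sum(axis=0)              # gradient of Q = sum of scores
            A = np.linalg.inv(S.T @ S)     # A_k: inverse outer-product matrix
            direction = A @ g              # ascent direction A_k * gradient
            lam, q0 = 1.0, loglik(beta)
            # Backtrack until the step actually increases Q (a stand-in
            # for the acceptance criteria of the original algorithm).
            while loglik(beta + lam * direction) <= q0 and lam > 1e-10:
                lam *= 0.5
            step = lam * direction
            beta = beta + step
            if np.linalg.norm(step) < tol:
                break
        return beta

    # Illustrative use: maximum likelihood for a Poisson rate, with
    # b = log(rate), so the score of observation i is y_i - exp(b).
    rng = np.random.default_rng(0)
    y = rng.poisson(3.0, size=500)
    scores = lambda b: (y - np.exp(b[0])).reshape(-1, 1)
    loglik = lambda b: np.sum(y * b[0] - np.exp(b[0]))  # up to a constant
    print(np.exp(bhhh(scores, loglik, [0.0])))  # close to the mean of y

Because the outer-product matrix only approximates the negative Hessian, a full step can overshoot when far from the optimum; the backtracking loop accepts a step only if it increases the objective, which is the role the line search plays in the description above.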

References

  1. Henningsen, A.; Toomet, O. (2011). "maxLik: A package for maximum likelihood estimation in R". Computational Statistics. 26 (3): 443–458. doi:10.1007/s00180-010-0217-1.
  2. Berndt, E.; Hall, B.; Hall, R.; Hausman, J. (1974). "Estimation and Inference in Nonlinear Structural Models" (PDF). Annals of Economic and Social Measurement. 3 (4): 653–665.

Further reading

  • Martin, V.; Hurn, S.; Harris, D. (2015). Econometric Modelling with Time Series. Cambridge: Cambridge University Press. Chapter 3, 'Numerical Estimation Methods'.
  • Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 137–138. ISBN 0-674-00560-0.
  • Gill, P.; Murray, W.; Wright, M. (1981). Practical Optimization. London: Harcourt Brace.
  • Gourieroux, Christian; Monfort, Alain (1995). "Gradient Methods and ML Estimation". Statistics and Econometric Models. New York: Cambridge University Press. pp. 452–458. ISBN 0-521-40551-3.
  • Harvey, A. C. (1990). The Econometric Analysis of Time Series (Second ed.). Cambridge: MIT Press. pp. 137–138. ISBN 0-262-08189-X.