
Projection pursuit regression

Method for nonparametric multiple regression

In statistics, projection pursuit regression (PPR) is a statistical model developed by Jerome H. Friedman and Werner Stuetzle that extends additive models. It adapts additive models in that the data matrix of explanatory variables is first projected onto optimally chosen directions before smoothing functions are applied to the resulting projections.

Model overview

The model consists of linear combinations of ridge functions: non-linear transformations of linear combinations of the explanatory variables. The basic model takes the form

y_i = \beta_0 + \sum_{j=1}^{r} f_j(\beta_j^{\mathrm{T}} x_i) + \varepsilon_i,

where x_i is a 1 × p row of the design matrix containing the explanatory variables for example i, y_i is a 1 × 1 prediction, {β_j} is a collection of r unit vectors of length p that contain the unknown parameters, {f_j} is a collection of r initially unknown smooth functions that map from ℝ to ℝ, and r is a hyperparameter. Good values for r can be determined through cross-validation or by a forward stage-wise strategy that stops when the model fit can no longer be significantly improved. As r approaches infinity and with an appropriate set of functions {f_j}, the PPR model is a universal approximator, as it can approximate any continuous function on ℝ^p.
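
As an illustration of the model form (not part of the original article), here is a minimal numpy sketch of how a fitted PPR model produces predictions; the function name ppr_predict and the way the parameters are stored are assumptions made for the example:

    import numpy as np

    def ppr_predict(X, beta0, betas, ridge_functions):
        """Evaluate y_hat = beta0 + sum_j f_j(beta_j^T x) for every row x of X.

        X               : (n, p) array of explanatory variables
        beta0           : scalar intercept
        betas           : (p, r) array whose columns are the unit projection vectors beta_j
        ridge_functions : list of r callables f_j, each mapping a 1-D array to a 1-D array
        """
        projections = X @ betas                      # (n, r): one scalar projection per ridge term
        y_hat = np.full(X.shape[0], beta0, dtype=float)
        for j, f_j in enumerate(ridge_functions):
            y_hat += f_j(projections[:, j])          # add the j-th ridge function's contribution
        return y_hat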

Model estimation

For a given set of data {(y_i, x_i)}_{i=1}^{n}, the goal is to minimize the error function

S = \sum_{i=1}^{n} \left[ y_i - \sum_{j=1}^{r} f_j(\beta_j^{\mathrm{T}} x_i) \right]^2

over the functions f_j and vectors β_j. No method exists for solving over all variables at once, but the problem can be solved via alternating optimization. First, consider each (f_j, β_j) pair individually: hold all other parameters fixed and form a "residual", the part of the output not accounted for by those other parameters, given by

r_i = y_i - \sum_{l \neq j} f_l(\beta_l^{\mathrm{T}} x_i)

The task of minimizing the error function now reduces to solving

\min_{f_j, \beta_j} S' = \min_{f_j, \beta_j} \sum_{i=1}^{n} \left[ r_i - f_j(\beta_j^{\mathrm{T}} x_i) \right]^2

for each j in turn. Typically, new (f_j, β_j) pairs are added to the model in a forward stage-wise fashion.
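
As a concrete sketch of this forward stage-wise strategy (an illustration, not from the article), the loop below grows the model one term at a time; fit_single_term is a hypothetical helper standing in for the per-term optimization described later in this section:

    import numpy as np

    def fit_ppr_forward(X, y, r, fit_single_term):
        """Grow a PPR model one (f_j, beta_j) pair at a time.

        fit_single_term(X, residual) is assumed to return (beta_j, f_j), with f_j
        a smooth function fitted to the scatter of residual vs. X @ beta_j.
        """
        beta0 = y.mean()                            # intercept: mean of the response
        residual = y - beta0
        betas, funcs = [], []
        for _ in range(r):
            beta_j, f_j = fit_single_term(X, residual)
            betas.append(beta_j)
            funcs.append(f_j)
            residual = residual - f_j(X @ beta_j)   # remove the part this term explains
        return beta0, betas, funcs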

Aside: previously fitted pairs can be readjusted after new ones are added, using an algorithm known as backfitting. Backfitting entails reconsidering a previous pair, recalculating its residual given how the other pairs have changed, refitting it to account for that new information, and then cycling through all fitted pairs in this way until the parameters converge. This process typically yields a model that performs better with fewer pairs, though it takes longer to train, and it is usually possible to achieve the same performance by skipping backfitting and simply adding more terms to the model (increasing r).
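
A minimal sketch of the backfitting cycle described above, under the same assumption about the hypothetical fit_single_term helper as in the previous sketch:

    import numpy as np

    def backfit(X, y, beta0, betas, funcs, fit_single_term, n_cycles=10, tol=1e-6):
        """Cycle through previously fitted (f_j, beta_j) pairs, refitting each one
        against the partial residual that excludes its own contribution."""
        contributions = [f(X @ b) for f, b in zip(funcs, betas)]
        for _ in range(n_cycles):
            max_change = 0.0
            for j in range(len(funcs)):
                partial_residual = y - beta0 - (sum(contributions) - contributions[j])
                betas[j], funcs[j] = fit_single_term(X, partial_residual)
                new_contribution = funcs[j](X @ betas[j])
                max_change = max(max_change, float(np.max(np.abs(new_contribution - contributions[j]))))
                contributions[j] = new_contribution
            if max_change < tol:                    # stop once all pairs have stabilized
                break
        return betas, funcs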

Solving the simplified error function to determine an (f_j, β_j) pair can itself be done with alternating optimization. First, a random β_j is used to project X into a one-dimensional space, and the optimal f_j describing the relationship between that projection and the residuals is found with any one-dimensional scatter-plot regression (smoothing) method. Then, with f_j held constant and assumed to be once differentiable, the optimal updated weights β_j can be found via the Gauss–Newton method, a quasi-Newton method in which the part of the Hessian involving the second derivative is discarded. To derive the update, first Taylor expand

f_j(\beta_j^{\mathrm{T}} x_i) \approx f_j(\beta_{j,\mathrm{old}}^{\mathrm{T}} x_i) + \dot{f}_j(\beta_{j,\mathrm{old}}^{\mathrm{T}} x_i)\,(\beta_j^{\mathrm{T}} x_i - \beta_{j,\mathrm{old}}^{\mathrm{T}} x_i),

then substitute the expansion back into the simplified error function S' and rearrange it into the form

\min_{\beta_j} S' \approx \min_{\beta_j} \sum_{i=1}^{n} \underbrace{\dot{f}_j(\beta_{j,\mathrm{old}}^{\mathrm{T}} x_i)^2}_{w} \Bigg[ \underbrace{\left( \beta_{j,\mathrm{old}}^{\mathrm{T}} x_i + \frac{r_i - f_j(\beta_{j,\mathrm{old}}^{\mathrm{T}} x_i)}{\dot{f}_j(\beta_{j,\mathrm{old}}^{\mathrm{T}} x_i)} \right)}_{\hat{b}} - \beta_j^{\mathrm{T}} x_i \Bigg]^2

This is a weighted least squares problem. If we compute all the weights w and put them in a diagonal matrix W, stack all the new targets b̂ into a vector, and use the full data matrix X instead of a single example x_i, then the optimal β_j is given by the closed-form solution

\underset{\beta_j}{\operatorname{arg\,min}} \left\| \vec{\hat{b}} - X \beta_j \right\|_W^2 = (X^{\mathrm{T}} W X)^{-1} X^{\mathrm{T}} W \vec{\hat{b}}
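
A numpy sketch of this closed-form update (an illustration that assumes a derivative f_j' of the fitted smoother is available; the small eps guard and the renormalization of β_j to unit length are additions for numerical safety, not part of the formula above):

    import numpy as np

    def gauss_newton_beta_update(X, residual, f_j, f_j_prime, beta_old, eps=1e-12):
        """One Gauss-Newton step for beta_j with f_j held fixed.

        Builds the weights w_i = f_j'(beta_old^T x_i)^2 and the targets
        b_i = beta_old^T x_i + (r_i - f_j(beta_old^T x_i)) / f_j'(beta_old^T x_i),
        then solves the weighted least squares problem in closed form.
        """
        z = X @ beta_old                              # current 1-D projection
        deriv = f_j_prime(z)
        w = deriv ** 2                                # weights from the squared derivative
        b = z + (residual - f_j(z)) / (deriv + eps)   # linearized targets
        XtW = X.T * w                                 # X^T W with W = diag(w)
        beta_new = np.linalg.solve(XtW @ X, XtW @ b)  # (X^T W X)^{-1} X^T W b
        return beta_new / np.linalg.norm(beta_new)    # keep beta_j a unit vector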

Use this updated β_j to compute a new projection of X and refit f_j to the new scatter plot. Then use that new f_j to update β_j by re-solving the above, and continue alternating in this way until (f_j, β_j) converges.
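
Putting the pieces together, here is a sketch of the alternation for a single term. It reuses gauss_newton_beta_update from the previous sketch, and fit_smoother is a hypothetical stand-in for any one-dimensional scatter-plot regression that returns both the fitted function and its derivative:

    import numpy as np

    def fit_single_term(X, residual, fit_smoother, n_iter=20, tol=1e-6, rng=None):
        """Alternate between refitting the ridge function f_j and the direction beta_j.

        fit_smoother(z, residual) is assumed to return (f_j, f_j_prime): a smooth
        1-D fit to the scatter of residual vs. z, together with its derivative.
        Requires gauss_newton_beta_update from the sketch above.
        """
        rng = np.random.default_rng() if rng is None else rng
        beta = rng.normal(size=X.shape[1])
        beta /= np.linalg.norm(beta)                  # random unit starting direction
        for _ in range(n_iter):
            f_j, f_j_prime = fit_smoother(X @ beta, residual)   # refit f_j on the projection
            beta_new = gauss_newton_beta_update(X, residual, f_j, f_j_prime, beta)
            converged = np.linalg.norm(beta_new - beta) < tol
            beta = beta_new
            if converged:                             # stop once the direction stabilizes
                break
        return beta, f_j

With fit_smoother fixed (for example via functools.partial), this has the (X, residual) -> (beta_j, f_j) signature assumed by the forward stage-wise and backfitting sketches above.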

It has been shown that the convergence rate, the bias and the variance are affected by the estimation of β_j and f_j.

Discussion

The PPR model takes the form of a basic additive model but with the additional β_j component, so during training each f_j is fitted to a scatter plot of the projection β_j^T X^T versus the residual (unexplained variance), rather than to the raw inputs themselves. This reduces the problem of finding each f_j to one dimension, making it solvable with common least squares or spline fitting methods and sidestepping the curse of dimensionality during training. Because f_j is applied to a projection of X, the result looks like a "ridge" orthogonal to the projection direction, so the {f_j} are often called "ridge functions". The directions β_j are chosen to optimize the fit of their corresponding ridge functions.

Because PPR attempts to fit projections of the data, it can be difficult to interpret the fitted model as a whole, since each input variable has been accounted for in a complex and multifaceted way. This can make the model more useful for prediction than for understanding the data, though visualizing the individual ridge functions and considering which projections the model discovers can yield some insight.

Advantages of PPR estimation

  • It uses univariate regression functions instead of their multivariate form, thus effectively dealing with the curse of dimensionality.
  • Univariate regression allows for simple and efficient estimation.
  • Relative to generalized additive models, PPR can estimate a much richer class of functions.
  • Unlike local averaging methods (such as k-nearest neighbors), PPR can ignore variables with low explanatory power.

Disadvantages of PPR estimation

  • PPR requires examining a p-dimensional parameter space in order to estimate each β_j.
  • One must select the smoothing parameter for f_j.
  • The model is often difficult to interpret.

Extensions of PPR

  • Alternate smoothers, such as the radial function, harmonic function and additive function, have been suggested; their performance varies depending on the data sets used.
  • Alternate optimization criteria have been used as well, such as standard absolute deviations and mean absolute deviations.
  • Ordinary least squares can be used to simplify calculations, as the data often does not have strong non-linearities.
  • Sliced Inverse Regression (SIR) has been used to choose the direction vectors for PPR.
  • Generalized PPR combines regular PPR with iteratively reweighted least squares (IRLS) and a link function to estimate binary data.

PPR vs neural networks (NN)

Both projection pursuit regression and fully connected neural networks with a single hidden layer project the input vector onto one-dimensional directions, apply a nonlinear transformation to each projection, and then add the results in a linear fashion. Thus both follow the same steps to overcome the curse of dimensionality. The main difference is that the functions f_j fitted in PPR can be different for each projection and are estimated one at a time, then updated together with their weights, whereas in a neural network the nonlinearities are specified up front and all the weights are estimated simultaneously.

Thus, in PPR the transformations of the variables are data driven, whereas in a single-layer neural network these transformations are fixed.
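
To make the structural parallel concrete, here is a small illustrative sketch (not from the article) of the two prediction formulas side by side; in the network the nonlinearity sigma is a fixed activation applied to every hidden unit, while in PPR each f_j is a separate, data-driven smooth function:

    import numpy as np

    def nn_one_hidden_layer(X, W, v, b0, sigma=np.tanh):
        """Single-hidden-layer network: y_hat = b0 + sum_j v_j * sigma(w_j^T x).
        The same fixed nonlinearity sigma is used for every hidden unit."""
        return b0 + sigma(X @ W) @ v

    def ppr_model(X, betas, funcs, b0):
        """PPR: y_hat = b0 + sum_j f_j(beta_j^T x).
        Each f_j is a different smooth function estimated from the data."""
        return b0 + sum(f(X @ b) for f, b in zip(funcs, betas))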
