
Functional principal component analysis

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.
Statistical method for investigating the dominant modes of variation of functional data

Functional principal component analysis (FPCA) is a statistical method for investigating the dominant modes of variation of functional data. Using this method, a random function is represented in the eigenbasis, which is an orthonormal basis of the Hilbert space L²(𝒯) that consists of the eigenfunctions of the autocovariance operator. FPCA represents functional data in the most parsimonious way, in the sense that when using a fixed number of basis functions, the eigenfunction basis explains more variation than any other basis expansion. FPCA can be applied for representing random functions, or in functional regression and classification.

Formulation

For a square-integrable stochastic process X(t), t ∈ 𝒯, let

\mu(t) = \operatorname{E}(X(t))

and

G(s,t) = \operatorname{Cov}(X(s), X(t)) = \sum_{k=1}^{\infty} \lambda_k \varphi_k(s) \varphi_k(t),

where λ₁ ≥ λ₂ ≥ … ≥ 0 are the eigenvalues and φ₁, φ₂, … are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator

G: L^2(\mathcal{T}) \to L^2(\mathcal{T}), \qquad G(f) = \int_{\mathcal{T}} G(s,t) f(s)\, ds.

By the Karhunen–Loève theorem, one can express the centered process in the eigenbasis,

X(t) - \mu(t) = \sum_{k=1}^{\infty} \xi_k \varphi_k(t),

where

\xi_k = \int_{\mathcal{T}} (X(t) - \mu(t)) \varphi_k(t)\, dt

is the principal component associated with the k-th eigenfunction φ_k, with the properties

\operatorname{E}(\xi_k) = 0, \quad \operatorname{Var}(\xi_k) = \lambda_k \quad \text{and} \quad \operatorname{E}(\xi_k \xi_l) = 0 \ \text{for } k \neq l.

The centered process is then equivalent to the sequence of scores ξ₁, ξ₂, …. A common assumption is that X can be represented using only the first few eigenfunctions (after subtracting the mean function), i.e.

X(t) \approx X_m(t) = \mu(t) + \sum_{k=1}^{m} \xi_k \varphi_k(t),

where

\operatorname{E}\left( \int_{\mathcal{T}} \left( X(t) - X_m(t) \right)^2 dt \right) = \sum_{j>m} \lambda_j \to 0 \quad \text{as } m \to \infty.
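As a numerical illustration of this truncation property, the following minimal sketch (the Brownian-motion simulation, grid, and variable names are assumptions, not part of the original article) estimates the eigendecomposition on a dense grid and checks that the mean integrated squared error of the truncated expansion is close to the sum of the omitted eigenvalues:

```python
import numpy as np

# Minimal sketch (assumed simulation and names): the truncated expansion's
# mean integrated squared error should match the sum of omitted eigenvalues.
rng = np.random.default_rng(0)
n, p = 2000, 101                        # number of curves, grid points
t = np.linspace(0, 1, p)
dt = t[1] - t[0]

# Brownian motion: cumulative sums of independent Gaussian increments.
X = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, p)), axis=1)

mu = X.mean(axis=0)
Xc = X - mu                             # centered curves
G = (Xc.T @ Xc) / n                     # discretized covariance surface G(s, t)

# Eigendecomposition: the factor dt is the quadrature weight approximating
# the integral operator; dividing eigenvectors by sqrt(dt) gives unit L^2 norm.
evals, evecs = np.linalg.eigh(G * dt)
lam = evals[::-1]
phi = evecs[:, ::-1] / np.sqrt(dt)

m = 3
xi = Xc @ phi[:, :m] * dt               # FPC scores by numerical integration
Xm = mu + xi @ phi[:, :m].T             # truncated expansion X_m(t)

mise = np.mean(np.sum((X - Xm) ** 2, axis=1) * dt)
print(mise, lam[m:].sum())              # the two numbers are close
```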

Interpretation of eigenfunctions

The first eigenfunction φ₁ depicts the dominant mode of variation of X:

\varphi_1 = \underset{\Vert \varphi \Vert = 1}{\operatorname{arg\,max}} \left\{ \operatorname{Var}\left( \int_{\mathcal{T}} (X(t) - \mu(t)) \varphi(t)\, dt \right) \right\},

where

\Vert \varphi \Vert = \left( \int_{\mathcal{T}} \varphi(t)^2\, dt \right)^{1/2}.

The k-th eigenfunction φ_k is the dominant mode of variation orthogonal to φ₁, φ₂, …, φ_{k−1},

\varphi_k = \underset{\Vert \varphi \Vert = 1,\ \langle \varphi, \varphi_j \rangle = 0 \ \text{for } j = 1, \dots, k-1}{\operatorname{arg\,max}} \left\{ \operatorname{Var}\left( \int_{\mathcal{T}} (X(t) - \mu(t)) \varphi(t)\, dt \right) \right\},

where

\langle \varphi, \varphi_j \rangle = \int_{\mathcal{T}} \varphi(t) \varphi_j(t)\, dt, \quad \text{for } j = 1, \dots, k-1.
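This variational characterization can be checked numerically. In the sketch below (the Brownian-motion simulation and names are assumptions), the projection variance onto the estimated first eigenfunction exceeds that onto a random unit-norm competitor:

```python
import numpy as np

# Sketch (assumed simulation and names): among unit L^2-norm functions, the
# first eigenfunction maximizes the variance of the projection <X - mu, phi>.
rng = np.random.default_rng(1)
n, p = 2000, 101
t = np.linspace(0, 1, p)
dt = t[1] - t[0]
Xc = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, p)), axis=1)
Xc -= Xc.mean(axis=0)                   # centered curves

evals, evecs = np.linalg.eigh((Xc.T @ Xc) / n * dt)
phi1 = evecs[:, -1] / np.sqrt(dt)       # leading eigenfunction, unit L^2 norm

def proj_var(phi):
    """Sample variance of the L^2 inner product of the centered curves with phi."""
    return np.var(Xc @ phi * dt)

psi = rng.normal(size=p)
psi /= np.sqrt(np.sum(psi ** 2) * dt)   # a random competitor with unit L^2 norm
print(proj_var(phi1), proj_var(psi))    # the first value is larger
```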

Estimation

Let Yij = Xi(tij) + εij be the observations made at locations (usually time points) tij, where Xi is the i-th realization of the smooth stochastic process that generates the data, and the εij are independent and identically distributed normal random variables with mean 0 and variance σ², j = 1, 2, ..., mi. To obtain an estimate of the mean function μ(tij), if a dense sample on a regular grid is available, one may take the average at each location tij:

\hat{\mu}(t_{ij}) = \frac{1}{n} \sum_{i=1}^{n} Y_{ij}.

If the observations are sparse, one needs to smooth the data pooled from all observations to obtain the mean estimate, using smoothing methods like local linear smoothing or spline smoothing.
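Both cases can be illustrated with a short sketch (the function names, Gaussian kernel, and bandwidth below are illustrative assumptions, not the article's prescription):

```python
import numpy as np

# Dense, regular-grid case: cross-sectional average at each grid point.
def mean_dense(Y):
    """Y: (n_curves, n_gridpoints) observations on a common grid."""
    return Y.mean(axis=0)

# Sparse case: pool all (t_ij, Y_ij) pairs and apply a local linear smoother.
def mean_sparse(t_obs, y_obs, t_out, h=0.1):
    """t_obs, y_obs: 1-D arrays of pooled times and measurements;
    t_out: output grid; h: bandwidth (would in practice be chosen by CV)."""
    mu_hat = np.empty_like(t_out, dtype=float)
    for k, t0 in enumerate(t_out):
        w = np.sqrt(np.exp(-0.5 * ((t_obs - t0) / h) ** 2))  # sqrt kernel weights
        A = np.column_stack([np.ones_like(t_obs), t_obs - t0])
        # Weighted least squares; the local intercept is the fit at t0.
        beta, *_ = np.linalg.lstsq(A * w[:, None], y_obs * w, rcond=None)
        mu_hat[k] = beta[0]
    return mu_hat
```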

Then the estimate of the covariance function Ĝ(s, t) is obtained by averaging (in the dense case) or smoothing (in the sparse case) the raw covariances

G_i(t_{ij}, t_{il}) = (Y_{ij} - \hat{\mu}(t_{ij}))(Y_{il} - \hat{\mu}(t_{il})), \quad j \neq l, \ i = 1, \dots, n.

Note that the diagonal elements of Gi should be removed because they contain measurement error.

In practice, Ĝ(s, t) is discretized to an equally spaced dense grid, and the estimation of the eigenvalues λk and eigenvectors vk is carried out by numerical linear algebra. The eigenfunction estimates φ̂k can then be obtained by interpolating the eigenvectors v̂k.
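A minimal sketch of this discretize, eigendecompose, and interpolate step (the function names and the use of linear interpolation are assumptions):

```python
import numpy as np

def fpca_from_covariance(G_hat, grid):
    """Eigendecompose a discretized covariance estimate G_hat (p x p) on an
    equally spaced grid (p,), returning eigenvalue and eigenfunction estimates."""
    dt = grid[1] - grid[0]
    # The quadrature weight dt turns the matrix eigenproblem into a discrete
    # approximation of the integral-operator eigenproblem.
    evals, evecs = np.linalg.eigh(G_hat * dt)
    lam_hat = evals[::-1]                    # descending order
    phi_hat = evecs[:, ::-1] / np.sqrt(dt)   # unit L^2-norm eigenfunctions
    return lam_hat, phi_hat

def eigenfunction_at(phi_col, grid, t_new):
    """Evaluate one estimated eigenfunction at new points by interpolation."""
    return np.interp(t_new, grid, phi_col)
```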

The fitted covariance should be symmetric and positive semi-definite, and is then obtained as

\tilde{G}(s,t) = \sum_{\hat{\lambda}_k > 0} \hat{\lambda}_k \hat{\varphi}_k(s) \hat{\varphi}_k(t).

Let V̂(t) be a smoothed version of the diagonal elements Gi(tij, tij) of the raw covariance matrices. Then V̂(t) is an estimate of G(t, t) + σ². An estimate of σ² is obtained by

\hat{\sigma}^2 = \frac{2}{|\mathcal{T}|} \int_{\mathcal{T}} \left( \hat{V}(t) - \tilde{G}(t,t) \right) dt \quad \text{if } \hat{\sigma}^2 > 0, \text{ and } \hat{\sigma}^2 = 0 \text{ otherwise.}
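A sketch of this estimate, with the integral replaced by a Riemann sum on the grid (names and inputs are assumed):

```python
import numpy as np

def noise_variance(V_hat_diag, G_tilde_diag, grid):
    """V_hat_diag: smoothed raw diagonal; G_tilde_diag: fitted covariance
    diagonal; both evaluated on the equally spaced grid."""
    dt = grid[1] - grid[0]
    T_len = grid[-1] - grid[0]
    sigma2 = 2.0 / T_len * np.sum(V_hat_diag - G_tilde_diag) * dt
    return max(sigma2, 0.0)              # set to zero if negative, as above
```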

If the observations Yij, j = 1, 2, ..., mi, are dense in 𝒯, then the k-th FPC ξk can be estimated by numerical integration, implementing

\hat{\xi}_k = \langle X - \hat{\mu}, \hat{\varphi}_k \rangle.
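In the dense case the inner product can be approximated by a Riemann sum on the observation grid, for example (a sketch with assumed inputs and names):

```python
import numpy as np

def fpc_scores_dense(Y, mu_hat, phi_hat, grid):
    """Y: (n, p) dense observations; mu_hat: (p,) mean estimate;
    phi_hat: (p, m) eigenfunction estimates; grid: (p,) equally spaced points."""
    dt = grid[1] - grid[0]
    # Riemann-sum approximation of the inner product <X - mu, phi_k>.
    return (Y - mu_hat) @ phi_hat * dt
```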

However, if the observations are sparse, this method will not work. Instead, one can use best linear unbiased predictors, yielding

\hat{\xi}_k = \hat{\lambda}_k \hat{\varphi}_k^{T} \hat{\Sigma}_{Y_i}^{-1} (Y_i - \hat{\mu}),

where

\hat{\Sigma}_{Y_i} = \tilde{G} + \hat{\sigma}^2 \mathbf{I}_{m_i},

and G̃ is evaluated at the grid points generated by tij, j = 1, 2, ..., mi. The algorithm, PACE (Principal Analysis by Conditional Expectation), is available as a Matlab package and as the R package fdapace.
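The formula above can be sketched for a single sparsely observed curve as follows (an illustrative sketch, not the PACE implementation itself; names and inputs are assumed, and the fitted covariance is evaluated through its eigen-expansion):

```python
import numpy as np

def fpc_scores_sparse(t_i, y_i, grid, mu_hat, phi_hat, lam_hat, sigma2):
    """t_i, y_i: observation times and measurements of one curve;
    phi_hat: (p, m) eigenfunctions on `grid`; lam_hat: (m,) eigenvalues."""
    # Evaluate the mean and eigenfunction estimates at the curve's own times.
    mu_i = np.interp(t_i, grid, mu_hat)
    phi_i = np.column_stack([np.interp(t_i, grid, phi_hat[:, k])
                             for k in range(phi_hat.shape[1])])
    # Fitted covariance of the observations: G_tilde at (t_ij, t_il) plus noise.
    Sigma_i = phi_i @ np.diag(lam_hat) @ phi_i.T + sigma2 * np.eye(len(t_i))
    resid = y_i - mu_i
    # xi_k = lambda_k * phi_k^T Sigma^{-1} (Y_i - mu), for all k at once.
    return lam_hat * (phi_i.T @ np.linalg.solve(Sigma_i, resid))
```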

Asymptotic convergence properties of these estimates have been investigated.

Applications

FPCA can be applied for displaying the modes of functional variation, in scatterplots of FPCs against each other or of responses against FPCs, for modeling sparse longitudinal data, and for functional regression and classification (e.g., functional linear regression). Scree plots and other methods can be used to determine the number of components to include. Functional principal component analysis has varied applications in time series analysis. At present, the method is being adapted from traditional multivariate techniques to analyze financial data sets such as stock market indices and to generate implied volatility graphs. A good example of the advantages of the functional approach is the smoothed FPCA (SPCA), developed by Silverman and studied by Pezzulli and Silverman, which combines FPCA with a general smoothing approach and thereby makes it possible to use the information stored in certain linear differential operators. An important application of FPCA, already known from multivariate PCA, is motivated by the Karhunen–Loève decomposition of a random function into a set of functional parameters: factor functions and corresponding factor loadings (scalar random variables). This application is more important than in standard multivariate PCA, since the distribution of a random function is in general too complex to be analyzed directly, and the Karhunen–Loève decomposition reduces the analysis to the interpretation of the factor functions and of the distribution of the scalar random variables. Due to its dimensionality reduction and its accuracy in representing data, there is wide scope for further developments of functional principal component techniques in the financial field.
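One common way to choose the number of components mentioned above is the fraction-of-variance-explained rule applied to the estimated eigenvalues; a minimal sketch (the threshold and names are illustrative assumptions):

```python
import numpy as np

def choose_num_components(lam_hat, threshold=0.95):
    """Smallest m such that the first m eigenvalues explain at least
    `threshold` of the total variance (fraction-of-variance-explained rule)."""
    lam = np.clip(lam_hat, 0, None)          # drop negative numerical noise
    fve = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(fve, threshold) + 1)
```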

PCA has also been applied in automotive engineering, for example to vehicle acceleration profiles, brake pedal characteristics, and idle noise.

Connection with principal component analysis

The following table shows a comparison of various elements of principal component analysis (PCA) and FPCA. The two methods are both used for dimensionality reduction. In implementations, FPCA uses a PCA step.

However, PCA and FPCA differ in some critical aspects. First, the order of multivariate data in PCA can be permuted, which has no effect on the analysis, but the order of functional data carries time or space information and cannot be reordered. Second, the spacing of observations in FPCA matters, while there is no spacing issue in PCA. Third, regular PCA does not work for high-dimensional data without regularization, while FPCA has a built-in regularization due to the smoothness of the functional data and the truncation to a finite number of included components.

Element | In PCA | In FPCA
Data | X ∈ ℝ^p | X ∈ L²(𝒯)
Dimension | p < ∞ | ∞
Mean | μ = E(X) | μ(t) = E(X(t))
Covariance | Cov(X) = Σ_{p×p} | Cov(X(s), X(t)) = G(s, t)
Eigenvalues | λ₁, λ₂, …, λ_p | λ₁, λ₂, …
Eigenvectors/Eigenfunctions | v₁, v₂, …, v_p | φ₁(t), φ₂(t), …
Inner product | ⟨X, Y⟩ = Σ_{k=1}^{p} X_k Y_k | ⟨X, Y⟩ = ∫_𝒯 X(t) Y(t) dt
Principal components | z_k = ⟨X − μ, v_k⟩, k = 1, 2, …, p | ξ_k = ⟨X − μ, φ_k⟩, k = 1, 2, …
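The correspondence in the table can be made concrete for curves densely observed on a common, equally spaced grid: there the FPCA implementation is a PCA step followed by a quadrature rescaling. A minimal sketch (the simulated data and names are assumptions):

```python
import numpy as np

# Sketch: for dense curves on a common equally spaced grid, FPCA reduces to
# PCA of the discretized curves plus a rescaling by the grid spacing dt.
rng = np.random.default_rng(2)
n, p = 500, 60
t = np.linspace(0, 1, p)
dt = t[1] - t[0]
X = np.sin(2 * np.pi * np.outer(rng.normal(1.0, 0.2, n), t)) \
    + rng.normal(scale=0.05, size=(n, p))
Xc = X - X.mean(axis=0)

# Multivariate PCA step: eigendecomposition of the sample covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
evals, evecs = evals[::-1], evecs[:, ::-1]

# FPCA rescaling: eigenvalues gain a factor dt, eigenvectors become unit
# L^2-norm eigenfunctions, and scores use the L^2 inner product.
lam_fpca = evals * dt
phi_fpca = evecs / np.sqrt(dt)
xi = Xc @ phi_fpca * dt                   # FPC scores
z = Xc @ evecs                            # PCA scores
print(np.allclose(xi, z * np.sqrt(dt)))   # True: same scores up to sqrt(dt)
```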


Notes

  1. Jones, M. C.; Rice, J. A. (1992). "Displaying the Important Features of Large Collections of Similar Curves". The American Statistician. 46 (2): 140. doi:10.1080/00031305.1992.10475870.
  2. Yao, F.; Müller, H. G.; Wang, J. L. (2005). "Functional linear regression analysis for longitudinal data". The Annals of Statistics. 33 (6): 2873. arXiv:math/0603132. doi:10.1214/009053605000000660.
  3. Yao, F.; Müller, H. G.; Wang, J. L. (2005). "Functional Data Analysis for Sparse Longitudinal Data". Journal of the American Statistical Association. 100 (470): 577. doi:10.1198/016214504000001745.
  4. Staniswalis, J. G.; Lee, J. J. (1998). "Nonparametric Regression Analysis of Longitudinal Data". Journal of the American Statistical Association. 93 (444): 1403. doi:10.1080/01621459.1998.10473801.
  5. Rice, John; Silverman, B. (1991). "Estimating the Mean and Covariance Structure Nonparametrically When the Data are Curves". Journal of the Royal Statistical Society. Series B (Methodological). 53 (1): 233–243. doi:10.1111/j.2517-6161.1991.tb01821.x.
  6. "PACE: Principal Analysis by Conditional Expectation".
  7. "fdapace: Functional Data Analysis and Empirical Dynamics". 2018-02-25.
  8. Hall, P.; Müller, H. G.; Wang, J. L. (2006). "Properties of principal component methods for functional and longitudinal data analysis". The Annals of Statistics. 34 (3): 1493. arXiv:math/0608022. doi:10.1214/009053606000000272.
  9. Li, Y.; Hsing, T. (2010). "Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data". The Annals of Statistics. 38 (6): 3321. arXiv:1211.2137. doi:10.1214/10-AOS813.
  10. Madrigal, Pedro; Krajewski, Paweł (2015). "Uncovering correlated variability in epigenomic datasets using the Karhunen-Loeve transform". BioData Mining. 8: 20. doi:10.1186/s13040-015-0051-7. PMC 4488123. PMID 26140054.
  11. Functional Data Analysis with Applications in Finance by Michal Benko
  12. Lee, Sangdon (2012). "Variation modes of vehicle acceleration and development of ideal vehicle acceleration". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering. 226 (9): 1185–1201. doi:10.1177/0954407012442775.
  13. Lee, Sangdon (2010). "Characterization and Development of the Ideal Pedal Force, Pedal Travel, and Response Time in the Brake System for the Translation of the Voice of the Customer to Engineering Specifications". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering. 224 (11): 1433–1450. doi:10.1243/09544070JAUTO1585.
  14. Lee, Sangdon (2008). "Principal component analysis of vehicle acceleration gain and translation of voice of the customer". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering. 222 (2): 191–203. doi:10.1243/09544070JAUTO351.
  15. Lee, Sangdon (2006). "Multivariate statistical analyses of idle noise and vehicle positioning". International Journal of Vehicle Noise and Vibration. 2 (2): 156–175. doi:10.1504/IJVNV.2006.011052.

