
Generalized linear array model


In statistics, the generalized linear array model (GLAM) is used for analyzing data sets with array structures. It is based on the generalized linear model with the design matrix written as a Kronecker product.

Overview

The generalized linear array model or GLAM was introduced in 2006. Such models provide a structure and a computational procedure for fitting generalized linear models or GLMs whose model matrix can be written as a Kronecker product and whose data can be written as an array. In a large GLM, the GLAM approach gives very substantial savings in both storage and computational time over the usual GLM algorithm.

Suppose that the data $\mathbf{Y}$ is arranged in a $d$-dimensional array of size $n_1 \times n_2 \times \dots \times n_d$; thus, the corresponding data vector $\mathbf{y} = \operatorname{vec}(\mathbf{Y})$ has size $n_1 n_2 n_3 \cdots n_d$. Suppose also that the design matrix is of the form

$$\mathbf{X} = \mathbf{X}_d \otimes \mathbf{X}_{d-1} \otimes \dots \otimes \mathbf{X}_1.$$
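
A minimal NumPy sketch of this structure, with made-up factor sizes (all names and dimensions here are illustrative, not from the source), showing why one would rather not form $\mathbf{X}$ explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Kronecker factors of the design matrix (sizes made up for illustration):
# X_k has n_k rows and c_k columns, one factor per array dimension.
X1 = rng.normal(size=(4, 2))   # n1 = 4, c1 = 2
X2 = rng.normal(size=(5, 3))   # n2 = 5, c2 = 3

# Full design matrix X = X2 kron X1, acting on vec(Y) of length n1 * n2.
X = np.kron(X2, X1)
print(X.shape)                 # (20, 6) = (n1 * n2, c1 * c2)

# Storage: the factors hold n1*c1 + n2*c2 numbers, the full matrix
# n1*n2 * c1*c2 -- the gap grows rapidly with the size of the array.
```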

The standard analysis of a GLM with data vector $\mathbf{y}$ and design matrix $\mathbf{X}$ proceeds by repeated evaluation of the scoring algorithm

$$\mathbf{X}' \tilde{\mathbf{W}}_\delta \mathbf{X} \hat{\boldsymbol{\theta}} = \mathbf{X}' \tilde{\mathbf{W}}_\delta \tilde{\mathbf{z}},$$

where $\tilde{\boldsymbol{\theta}}$ is the current approximation to the solution $\boldsymbol{\theta}$, and $\hat{\boldsymbol{\theta}}$ is its improved value (a tilde denotes evaluation at $\tilde{\boldsymbol{\theta}}$); $\mathbf{W}_\delta$ is the diagonal weight matrix with elements

$$w_{ii}^{-1} = \left( \frac{\partial \eta_i}{\partial \mu_i} \right)^2 \operatorname{var}(y_i),$$

and

$$\mathbf{z} = \boldsymbol{\eta} + \mathbf{W}_\delta^{-1} (\mathbf{y} - \boldsymbol{\mu})$$

is the working variable.
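
To make the scoring step concrete, here is a minimal sketch of the usual (non-array) algorithm for a Poisson GLM with log link, written in NumPy; the data, sizes, and fixed iteration count are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = rng.normal(size=(n, p))        # explicit design matrix
y = rng.poisson(lam=2.0, size=n)   # Poisson responses
theta = np.zeros(p)                # current approximation theta~

for _ in range(25):                # fixed iteration count for brevity
    eta = X @ theta                # linear predictor
    mu = np.exp(eta)               # inverse log link
    # Poisson with log link: d(eta)/d(mu) = 1/mu and var(y_i) = mu_i,
    # so the diagonal weights are w_ii = mu_i.
    w = mu
    z = eta + (y - mu) / w         # working variable
    # Scoring equations X' W X theta^ = X' W z, with diag(w) never formed.
    XtW = X.T * w
    theta = np.linalg.solve(XtW @ X, XtW @ z)

print(theta)
```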

Computationally, GLAM provides array algorithms to calculate the linear predictor,

$$\boldsymbol{\eta} = \mathbf{X} \boldsymbol{\theta}$$

and the weighted inner product

$$\mathbf{X}' \tilde{\mathbf{W}}_\delta \mathbf{X}$$

without evaluation of the model matrix $\mathbf{X}$.
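
In two dimensions this can be checked directly: by the identity $\operatorname{vec}(\mathbf{A}\mathbf{B}\mathbf{C}) = (\mathbf{C}' \otimes \mathbf{A})\operatorname{vec}(\mathbf{B})$, the array form $\mathbf{X}_1 \boldsymbol{\Theta} \mathbf{X}_2'$ of the linear predictor (see the Example below) agrees with $(\mathbf{X}_2 \otimes \mathbf{X}_1)\operatorname{vec}(\boldsymbol{\Theta})$. A minimal NumPy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.normal(size=(4, 2))            # n1 x c1
X2 = rng.normal(size=(5, 3))            # n2 x c2
Theta = rng.normal(size=(2, 3))         # c1 x c2 coefficient matrix

# GLAM array form of the linear predictor: an n1 x n2 array.
eta_array = X1 @ Theta @ X2.T

# Conventional GLM form: (X2 kron X1) @ vec(Theta).
theta = Theta.reshape(-1, order="F")    # vec() stacks columns
eta_vec = np.kron(X2, X1) @ theta

assert np.allclose(eta_array.reshape(-1, order="F"), eta_vec)
```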

Example

In 2 dimensions, let $\mathbf{X} = \mathbf{X}_2 \otimes \mathbf{X}_1$. Then the linear predictor is written $\mathbf{X}_1 \boldsymbol{\Theta} \mathbf{X}_2'$, where $\boldsymbol{\Theta}$ is the matrix of coefficients, and the weighted inner product is obtained from $G(\mathbf{X}_1)' \mathbf{W} G(\mathbf{X}_2)$, where $\mathbf{W}$ is the matrix of weights. Here $G(\mathbf{M})$ is the row tensor function of the $r \times c$ matrix $\mathbf{M}$, given by

$$G(\mathbf{M}) = (\mathbf{M} \otimes \mathbf{1}') \circ (\mathbf{1}' \otimes \mathbf{M}),$$

where $\circ$ denotes element-by-element multiplication and $\mathbf{1}$ is a vector of 1's of length $c$.
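
A sketch of the row tensor function and of the two-dimensional inner product identity it supports: $G(\mathbf{X}_1)'\mathbf{W}G(\mathbf{X}_2)$ is a $c_1^2 \times c_2^2$ matrix whose entries, suitably rearranged, give $\mathbf{X}'\mathbf{W}_\delta\mathbf{X}$. The sizes and the index bookkeeping below are illustrative assumptions:

```python
import numpy as np

def row_tensor(M):
    """G(M) = (M kron 1') o (1' kron M): row i of the result is the
    Kronecker product of row i of M with itself (shape r x c**2)."""
    one = np.ones((1, M.shape[1]))
    return np.kron(M, one) * np.kron(one, M)

rng = np.random.default_rng(3)
X1 = rng.normal(size=(4, 2))                 # n1 x c1
X2 = rng.normal(size=(5, 3))                 # n2 x c2
W = rng.uniform(1.0, 2.0, size=(4, 5))       # n1 x n2 array of weights

# GLAM form: a c1^2 x c2^2 matrix computed from the factors alone ...
A = row_tensor(X1).T @ W @ row_tensor(X2)

# ... rearranged into the usual (c1*c2) x (c1*c2) inner product.
c1, c2 = X1.shape[1], X2.shape[1]
B = A.reshape(c1, c1, c2, c2).transpose(2, 0, 3, 1).reshape(c1*c2, c1*c2)

# Check against the explicit computation with X = X2 kron X1 and
# W_delta = diag(vec(W)).
X = np.kron(X2, X1)
W_delta = np.diag(W.reshape(-1, order="F"))
assert np.allclose(B, X.T @ W_delta @ X)
```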

The row tensor function $G(\mathbf{M})$ of the $r \times c$ matrix $\mathbf{M}$ is an example of the face-splitting product of matrices, which was proposed by Vadym Slyusar in 1996:

$$\mathbf{M} \bullet \mathbf{M} = \left( \mathbf{M} \otimes \mathbf{1}^{\mathsf{T}} \right) \circ \left( \mathbf{1}^{\mathsf{T}} \otimes \mathbf{M} \right),$$

where $\bullet$ denotes the face-splitting product.
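
For comparison, a small sketch of the general face-splitting (row-wise Kronecker) product, under the same illustrative conventions as above; applied to a matrix with itself it reproduces the row tensor $G(\mathbf{M})$:

```python
import numpy as np

def face_split(A, B):
    """Face-splitting product: row i of the result is kron(A[i], B[i]).
    A and B must have the same number of rows."""
    return np.einsum("ij,ik->ijk", A, B).reshape(A.shape[0], -1)

rng = np.random.default_rng(4)
M = rng.normal(size=(4, 3))

# M bullet M coincides with the row tensor function G(M).
one = np.ones((1, M.shape[1]))
G = np.kron(M, one) * np.kron(one, M)
assert np.allclose(face_split(M, M), G)
```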

These low-storage, high-speed formulae extend to $d$ dimensions.

Applications

GLAM is designed to be used in $d$-dimensional smoothing problems where the data are arranged in an array and the smoothing matrix is constructed as a Kronecker product of $d$ one-dimensional smoothing matrices.
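
As a minimal end-to-end sketch, the two-dimensional identities above can be combined into one unweighted least-squares smoothing step for normal data with the identity link; the bases here are random stand-ins for one-dimensional B-spline bases, and the roughness penalty of a real smoother is omitted, so everything below is an illustrative assumption rather than the method of the source:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, c1, c2 = 30, 40, 5, 6
B1 = rng.uniform(size=(n1, c1))        # stand-in for a 1-d basis
B2 = rng.uniform(size=(n2, c2))        # stand-in for a 1-d basis
Y = rng.normal(size=(n1, n2))          # data array
W = np.ones((n1, n2))                  # unit weights for simplicity

def row_tensor(M):
    one = np.ones((1, M.shape[1]))
    return np.kron(M, one) * np.kron(one, M)

# Inner product B' W_delta B via the array formula, rearranged.
A = row_tensor(B1).T @ W @ row_tensor(B2)
BtWB = A.reshape(c1, c1, c2, c2).transpose(2, 0, 3, 1).reshape(c1*c2, c1*c2)

# Right-hand side B' W_delta vec(Y) = vec(B1' (W o Y) B2).
rhs = (B1.T @ (W * Y) @ B2).reshape(-1, order="F")

theta = np.linalg.solve(BtWB, rhs)     # vec of the coefficient array
Theta = theta.reshape(c1, c2, order="F")
fitted = B1 @ Theta @ B2.T             # smooth n1 x n2 surface
```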

References

  1. Currie, I. D.; Durban, M.; Eilers, P. H. C. (2006). "Generalized linear array models with applications to multidimensional smoothing". Journal of the Royal Statistical Society, Series B. 68 (2): 259–280. doi:10.1111/j.1467-9868.2006.00543.x. S2CID 10261944.
  2. Slyusar, V. I. (1996). "End products in matrices in radar applications". Radioelectronics and Communications Systems. 41 (3): 50–53.
  3. Slyusar, V. I. (1997). "Analytical model of the digital antenna array on a basis of face-splitting matrix products". Proc. ICATT-97, Kyiv: 108–109.
  4. Slyusar, V. I. (1997). "New operations of matrices product for applications of radars". Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv: 73–74.
  5. Slyusar, V. I. (1999). "A family of face products of matrices and its properties". Cybernetics and Systems Analysis. 35 (3): 379–384. doi:10.1007/BF02733426. S2CID 119661450.