Two-way analysis of variance

Statistical test examining influence of two categorical variables on one continuous variable

In statistics, the two-way analysis of variance (ANOVA) is an extension of the one-way ANOVA that examines the influence of two different categorical independent variables on one continuous dependent variable. The two-way ANOVA aims not only at assessing the main effect of each independent variable but also at determining whether there is any interaction between them.

History

In 1925, Ronald Fisher mentioned the two-way ANOVA in his celebrated book, Statistical Methods for Research Workers (chapters 7 and 8). In 1934, Frank Yates published procedures for the unbalanced case. Since then, an extensive literature has been produced. The topic was reviewed in 1993 by Yasunori Fujikoshi. In 2005, Andrew Gelman proposed a different approach to ANOVA, viewed as a multilevel model.

Data set

Let us imagine a data set for which a dependent variable may be influenced by two factors which are potential sources of variation. The first factor has $I$ levels ($i \in \{1, \ldots, I\}$) and the second has $J$ levels ($j \in \{1, \ldots, J\}$). Each combination $(i, j)$ defines a treatment, for a total of $I \times J$ treatments. We represent the number of replicates for treatment $(i, j)$ by $n_{ij}$, and let $k$ be the index of the replicate in this treatment ($k \in \{1, \ldots, n_{ij}\}$).

From these data, we can build a contingency table, where $n_{i+} = \sum_{j=1}^{J} n_{ij}$ and $n_{+j} = \sum_{i=1}^{I} n_{ij}$, and the total number of replicates is equal to $n = \sum_{i,j} n_{ij} = \sum_{i} n_{i+} = \sum_{j} n_{+j}$.

The experimental design is balanced if each treatment has the same number of replicates, $K$. In such a case, the design is also said to be orthogonal, making it possible to fully distinguish the effects of both factors. We can hence write $\forall i,j\; n_{ij} = K$, and $\forall i,j\; n_{ij} = \frac{n_{i+} \cdot n_{+j}}{n}$.
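
As a minimal illustration, the following Python sketch (the array `counts` and its values are ours, not from the article) builds a table of replicate counts $n_{ij}$, computes its margins, and checks the balance and orthogonality conditions:

```python
import numpy as np

# Hypothetical replicate counts n_ij for I = 2 row levels and J = 3 column levels.
counts = np.array([[3, 3, 3],
                   [3, 3, 3]])

n_i_plus = counts.sum(axis=1)   # row margins n_{i+}
n_plus_j = counts.sum(axis=0)   # column margins n_{+j}
n = counts.sum()                # total number of replicates

balanced = np.all(counts == counts.flat[0])
orthogonal = np.allclose(counts, np.outer(n_i_plus, n_plus_j) / n)
print(balanced, orthogonal)     # True True for this design
```

A balanced design is always orthogonal, but the converse does not hold: an unbalanced design remains orthogonal as long as its cell counts are proportional to the products of the margins.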

Model

Upon observing variation among all $n$ data points, for instance via a histogram, "probability may be used to describe such variation". Let us hence denote by $Y_{ijk}$ the random variable whose observed value $y_{ijk}$ is the $k$-th measure for treatment $(i, j)$. The two-way ANOVA models all these variables as varying independently and normally around a mean, $\mu_{ij}$, with a constant variance, $\sigma^2$ (homoscedasticity):

$Y_{ijk} \mid \mu_{ij}, \sigma^2 \;\overset{\mathrm{i.i.d.}}{\sim}\; \mathcal{N}(\mu_{ij}, \sigma^2)$.

Specifically, the mean of the response variable is modeled as a linear combination of the explanatory variables:

$\mu_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij}$,

where $\mu$ is the grand mean, $\alpha_i$ is the additive main effect of level $i$ of the first factor (the $i$-th row of the contingency table), $\beta_j$ is the additive main effect of level $j$ of the second factor (the $j$-th column), and $\gamma_{ij}$ is the non-additive interaction effect of treatment $(i, j)$ (the cell at row $i$ and column $j$), shared by all samples $k = 1, \ldots, n_{ij}$ in that cell.

Another equivalent way of describing the two-way ANOVA is by mentioning that, besides the variation explained by the factors, there remains some statistical noise. This amount of unexplained variation is handled via the introduction of one random variable per data point, $\epsilon_{ijk}$, called error. These $n$ random variables are seen as deviations from the means, and are assumed to be independent and normally distributed:

$Y_{ijk} = \mu_{ij} + \epsilon_{ijk}$ with $\epsilon_{ijk} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$.
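
The model is straightforward to simulate, which helps make the notation concrete. Below is a minimal sketch; every parameter value is an illustrative choice of ours (and happens to satisfy the sum-to-zero constraints introduced under Parameter estimation):

```python
import numpy as np

rng = np.random.default_rng(0)

I, J, K = 2, 3, 5          # factor levels and replicates per treatment (balanced)
mu, sigma = 10.0, 1.5      # grand mean and error standard deviation

# Illustrative main effects and interaction effects.
alpha = np.array([-1.0, 1.0])             # first factor
beta = np.array([-0.5, 0.0, 0.5])         # second factor
gamma = np.array([[0.3, -0.1, -0.2],      # interaction
                  [-0.3, 0.1, 0.2]])

# Cell means mu_ij and observations Y_ijk = mu_ij + eps_ijk.
mu_ij = mu + alpha[:, None] + beta[None, :] + gamma
y = mu_ij[:, :, None] + rng.normal(0.0, sigma, size=(I, J, K))
```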

Assumptions

Following Gelman and Hill, the assumptions of the ANOVA, and more generally of the general linear model, are, in decreasing order of importance (a diagnostic sketch for the error assumptions follows the list):

  1. the data points are relevant with respect to the scientific question under investigation;
  2. the mean of the response variable is influenced additively (if there is no interaction term) and linearly by the factors;
  3. the errors are independent;
  4. the errors have the same variance;
  5. the errors are normally distributed.
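
Independence of the errors (assumption 3) is mainly a matter of experimental design and cannot be fully verified from the data, but assumptions 4 and 5 are commonly checked on the residuals of a fitted model. A minimal sketch using scipy, with toy residual values standing in for real ones:

```python
import numpy as np
from scipy import stats

# `groups` is assumed to hold the residuals of each treatment cell after fitting.
groups = [np.array([0.2, -0.1, 0.3]),
          np.array([-0.4, 0.1, -0.1])]    # toy values for two cells

resid = np.concatenate(groups)
print(stats.shapiro(resid))    # normality of the errors (assumption 5)
print(stats.levene(*groups))   # equal variance across cells (assumption 4)
```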

Parameter estimation

To ensure identifiability of parameters, we can add the following "sum-to-zero" constraints:

$\sum_i \alpha_i = 0$, $\sum_j \beta_j = 0$, $\sum_i \gamma_{ij} = 0$ for all $j$, and $\sum_j \gamma_{ij} = 0$ for all $i$.
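
Under these constraints and a balanced design, the least-squares estimates reduce to simple contrasts of cell, row, column, and grand means. A minimal sketch, assuming `y` is an $(I, J, K)$ array of responses such as the one simulated in the Model section:

```python
# Least-squares estimates under the sum-to-zero constraints (balanced design).
cell_means = y.mean(axis=2)                # estimates of the cell means mu_ij
mu_hat = y.mean()                          # grand mean
alpha_hat = y.mean(axis=(1, 2)) - mu_hat   # row means minus grand mean
beta_hat = y.mean(axis=(0, 2)) - mu_hat    # column means minus grand mean
gamma_hat = cell_means - mu_hat - alpha_hat[:, None] - beta_hat[None, :]
```

By construction these estimates satisfy the same sum-to-zero constraints. With unbalanced data the estimates are no longer plain averages and are usually obtained by fitting a linear regression.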

Hypothesis testing

In the classical approach, testing the null hypotheses (that the factors have no effect) is achieved via F-tests of their significance, which requires calculating sums of squares.

Testing if the interaction term is significant can be difficult because of the potentially large number of degrees of freedom.
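
In practice the sums of squares and the associated F-tests are obtained from software. A minimal sketch using statsmodels (the data frame and its column names y, f1, f2 are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical balanced data: response y, factor f1 (2 levels), factor f2 (3 levels).
df = pd.DataFrame({
    "y":  [5.1, 4.8, 6.0, 5.5, 7.1, 6.8, 4.2, 4.4, 5.9, 6.1, 7.5, 7.2],
    "f1": ["a"] * 6 + ["b"] * 6,
    "f2": ["x", "x", "y", "y", "z", "z"] * 2,
})

model = ols("y ~ C(f1) * C(f2)", data=df).fit()  # main effects plus interaction
print(sm.stats.anova_lm(model, typ=2))           # F-tests built from sums of squares
```

With a balanced design the different sum-of-squares types (typ=1, 2 or 3) agree; with unbalanced data they generally differ, which is the complication Yates (1934) addressed.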

Example

The following hypothetical example gives the yields of 15 plants subject to two different environmental conditions and three different fertilisers.

Fertiliser | Extra CO2 | Extra humidity
No fertiliser | 7, 2, 1 | 7, 6
Nitrate | 11, 6 | 10, 7, 3
Phosphate | 5, 3, 4 | 11, 4

Five sums of squares are calculated:

Factor | Calculation | Sum | N
Individual | $7^2+2^2+1^2+7^2+6^2+11^2+6^2+10^2+7^2+3^2+5^2+3^2+4^2+11^2+4^2$ | 641 | 15
Fertiliser × Environment | $\frac{(7+2+1)^2}{3}+\frac{(7+6)^2}{2}+\frac{(11+6)^2}{2}+\frac{(10+7+3)^2}{3}+\frac{(5+3+4)^2}{3}+\frac{(11+4)^2}{2}$ | 556.1667 | 6
Fertiliser | $\frac{(7+2+1+7+6)^2}{5}+\frac{(11+6+10+7+3)^2}{5}+\frac{(5+3+4+11+4)^2}{5}$ | 525.4 | 3
Environment | $\frac{(7+2+1+11+6+5+3+4)^2}{8}+\frac{(7+6+10+7+3+11+4)^2}{7}$ | 519.2679 | 2
Composite | $\frac{(7+2+1+11+6+5+3+4+7+6+10+7+3+11+4)^2}{15}$ | 504.6 | 1

Finally, the sums of squared deviations required for the analysis of variance can be calculated; each is the signed combination of the five sums above given by the ±1 coefficients in the corresponding column below.

Factor | Sum | N | Total | Environment | Fertiliser | Fertiliser × Environment | Residual
Individual | 641 | 15 | 1 | | | | 1
Fertiliser × Environment | 556.1667 | 6 | | | | 1 | −1
Fertiliser | 525.4 | 3 | | | 1 | −1 |
Environment | 519.2679 | 2 | | 1 | | −1 |
Composite (correction factor) | 504.6 | 1 | −1 | −1 | −1 | 1 |
Squared deviations | | | 136.4 | 14.668 | 20.8 | 16.099 | 84.833
Degrees of freedom | | | 14 | 1 | 2 | 2 | 9
Mean square | | | | 14.668 | 10.4 | 8.0495 | 9.426
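
As a check on the arithmetic, a short Python sketch that reproduces the five raw sums and their signed combinations (a transcription of this specific hand calculation, not a general routine):

```python
# Cells of the example, indexed [fertiliser][environment]; replicate counts vary.
cells = [[[7, 2, 1], [7, 6]],      # no fertiliser: extra CO2, extra humidity
         [[11, 6], [10, 7, 3]],    # nitrate
         [[5, 3, 4], [11, 4]]]     # phosphate

flat = [y for row in cells for cell in row for y in cell]

ss_individual = sum(y**2 for y in flat)                           # 641
ss_cells = sum(sum(c)**2 / len(c) for row in cells for c in row)  # 556.1667
ss_fert = sum(sum(sum(c) for c in row)**2
              / sum(len(c) for c in row) for row in cells)        # 525.4
cols = list(zip(*cells))
ss_env = sum(sum(sum(c) for c in col)**2
             / sum(len(c) for c in col) for col in cols)          # 519.2679
ss_comp = sum(flat)**2 / len(flat)                                # 504.6

print(ss_individual - ss_comp)                # total:       136.4
print(ss_env - ss_comp)                       # environment:  14.668
print(ss_fert - ss_comp)                      # fertiliser:   20.8
print(ss_cells - ss_fert - ss_env + ss_comp)  # interaction:  16.099
print(ss_individual - ss_cells)               # residual:     84.833
```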

Notes

  1. Yates, Frank (March 1934). "The analysis of multiple classifications with unequal numbers in the different classes". Journal of the American Statistical Association. 29 (185): 51–66. doi:10.1080/01621459.1934.10502686. JSTOR 2278459.
  2. Fujikoshi, Yasunori (1993). "Two-way ANOVA models with unbalanced data". Discrete Mathematics. 116 (1): 315–334. doi:10.1016/0012-365X(93)90410-U.
  3. Gelman, Andrew (February 2005). "Analysis of variance – why it is more important than ever". The Annals of Statistics. 33 (1): 1–53. arXiv:math/0504499. doi:10.1214/009053604000001048. S2CID 125025956.
  4. Kass, Robert E (1 February 2011). "Statistical inference: The big picture". Statistical Science. 26 (1): 1–9. arXiv:1106.2895. doi:10.1214/10-sts337. PMC 3153074. PMID 21841892.
  5. Gelman, Andrew; Hill, Jennifer (18 December 2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press. pp. 45–46. ISBN 978-0521867061.
  6. Yi-An Ko; et al. (September 2013). "Novel Likelihood Ratio Tests for Screening Gene-Gene and Gene-Environment Interactions with Unbalanced Repeated-Measures Data". Genetic Epidemiology. 37 (6): 581–591. doi:10.1002/gepi.21744. PMC 4009698. PMID 23798480.
  7. Mecklin, Christopher (20 October 2020). "Chapter 7: ANOVA with Interaction". STA 265 Notes (Methods of Statistics and Data Science). Retrieved 6 December 2024 – via bookdown.org.
  8. Moore, Ken; Mowers, Ron; Harbur, M.L.; Merrick, Laura; Mahama, Anthony Assibi (2023). "Chapter 8: The Analysis of Variance (ANOVA)". In Suza, W.P.; Lamkey, K.R. (eds.). Quantitative Methods for Plant Breeding. Iowa State University Digital Press. Retrieved 6 December 2024.
