Expected mean squares


In statistics, expected mean squares (EMS) are the expected values of certain statistics arising in partitions of sums of squares in the analysis of variance (ANOVA). They can be used for ascertaining which statistic should appear in the denominator in an F-test for testing a null hypothesis that a particular effect is absent.

Definition

When the total corrected sum of squares in an ANOVA is partitioned into several components, each attributed to the effect of a particular predictor variable, each of the sums of squares in that partition is a random variable that has an expected value. That expected value divided by the corresponding number of degrees of freedom is the expected mean square for that predictor variable.
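
As a concrete illustration (not part of the source article), the following minimal sketch estimates an expected mean square by Monte Carlo for a one-way ANOVA; the group count, group size, and variance are arbitrary assumptions. Averaged over replications, the error mean square should approach the error variance:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, sigma = 4, 10, 2.0                 # groups, obs per group, error sd (assumed)
mu = np.array([1.0, 3.0, 0.0, 2.0])      # arbitrary group means

ms_error = []
for _ in range(5000):
    y = mu[:, None] + sigma * rng.standard_normal((k, m))
    ss_error = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()
    ms_error.append(ss_error / (k * (m - 1)))   # mean square = SS / its df

print(np.mean(ms_error))   # close to sigma**2 = 4.0, the expected mean square
```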

Example

The following example is from Longitudinal Data Analysis by Donald Hedeker and Robert D. Gibbons.

Each of $s$ treatments (one of which may be a placebo) is administered to a sample of $N$ randomly chosen patients, on whom certain measurements $Y_{hij}$ are observed at each of $n$ specified times, for $h=1,\ldots,s$ and $i=1,\ldots,N_h$ (thus the numbers of patients receiving different treatments may differ), and $j=1,\ldots,n$. We assume the sets of patients receiving different treatments are disjoint, so patients are nested within treatments and not crossed with treatments. We have

$$Y_{hij} = \mu + \gamma_h + \tau_j + (\gamma\tau)_{hj} + \pi_{i(h)} + \varepsilon_{hij}$$

where

  • $\mu$ = grand mean (fixed),
  • $\gamma_h$ = effect of treatment $h$ (fixed),
  • $\tau_j$ = effect of time $j$ (fixed),
  • $(\gamma\tau)_{hj}$ = interaction effect of treatment $h$ and time $j$ (fixed),
  • $\pi_{i(h)}$ = individual difference effect for patient $i$ nested within treatment $h$ (random),
  • $\varepsilon_{hij}$ = error for patient $i$ in treatment $h$ at time $j$ (random),
  • $\sigma_\pi^2$ = variance of the random effect of patients nested within treatments,
  • $\sigma_\varepsilon^2$ = error variance.
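
To make the model concrete, here is a minimal simulation sketch of data drawn from it. It is not from the source; the values of $s$, $n$, $N_h$, the fixed effects, and the two variance components are all arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 3, 4                      # treatments and time points (assumed)
N_h = [5, 6, 7]                  # patients per treatment; sizes may differ
mu = 10.0                        # grand mean (assumed)
gamma = [0.0, 1.0, -0.5]         # fixed treatment effects (assumed)
tau = np.array([0.0, 0.5, 1.0, 1.5])      # fixed time effects (assumed)
gt = rng.normal(scale=0.5, size=(s, n))   # fixed interaction, arbitrary values
sigma_pi, sigma_eps = 1.5, 1.0   # sd of patient effect and of error (assumed)

# One (N_h x n) array of responses Y_hij per treatment h
Y = []
for h in range(s):
    pi_ih = sigma_pi * rng.standard_normal((N_h[h], 1))   # random patient effects
    eps = sigma_eps * rng.standard_normal((N_h[h], n))    # random errors
    Y.append(mu + gamma[h] + tau + gt[h] + pi_ih + eps)
```

The layout (a list of per-treatment arrays) matches the nesting of patients within treatments and feeds directly into the sums-of-squares sketch below.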

The total corrected sum of squares is

$$\sum_{hij} \left(Y_{hij} - \overline{Y}\right)^2 \quad \text{where} \quad \overline{Y} = \frac{1}{nN} \sum_{hij} Y_{hij}$$

The ANOVA table below partitions the total corrected sum of squares (where $N = \sum_h N_h$):

| source of variability | degrees of freedom | sum of squares | mean square | expected mean square |
|---|---|---|---|---|
| treatment | $s-1$ | $\text{SS}_{\text{Tr}} = n\sum_{h=1}^{s} N_h \left(\overline{Y}_{h\cdot\cdot} - \overline{Y}_{\cdot\cdot\cdot}\right)^2$ | $\dfrac{\text{SS}_{\text{Tr}}}{s-1}$ | $\sigma_\varepsilon^2 + n\sigma_\pi^2 + D_{\text{Tr}}$ |
| time | $n-1$ | $\text{SS}_{\text{T}} = N\sum_{j=1}^{n} \left(\overline{Y}_{\cdot\cdot j} - \overline{Y}_{\cdot\cdot\cdot}\right)^2$ | $\dfrac{\text{SS}_{\text{T}}}{n-1}$ | $\sigma_\varepsilon^2 + D_{\text{T}}$ |
| treatment × time | $(s-1)(n-1)$ | $\text{SS}_{\text{Tr T}} = \sum_{h=1}^{s}\sum_{j=1}^{n} N_h \left(\overline{Y}_{h\cdot j} - \overline{Y}_{h\cdot\cdot} - \overline{Y}_{\cdot\cdot j} + \overline{Y}_{\cdot\cdot\cdot}\right)^2$ | $\dfrac{\text{SS}_{\text{Tr T}}}{(s-1)(n-1)}$ | $\sigma_\varepsilon^2 + D_{\text{Tr T}}$ |
| patients within treatments | $N-s$ | $\text{SS}_{\text{S(Tr)}} = n\sum_{h=1}^{s}\sum_{i=1}^{N_h} \left(\overline{Y}_{hi\cdot} - \overline{Y}_{h\cdot\cdot}\right)^2$ | $\dfrac{\text{SS}_{\text{S(Tr)}}}{N-s}$ | $\sigma_\varepsilon^2 + n\sigma_\pi^2$ |
| error | $(N-s)(n-1)$ | $\text{SS}_{\text{E}} = \sum_{h=1}^{s}\sum_{i=1}^{N_h}\sum_{j=1}^{n} \left(Y_{hij} - \overline{Y}_{h\cdot j} - \overline{Y}_{hi\cdot} + \overline{Y}_{h\cdot\cdot}\right)^2$ | $\dfrac{\text{SS}_{\text{E}}}{(N-s)(n-1)}$ | $\sigma_\varepsilon^2$ |
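
The sums of squares in the table translate directly into code. The following sketch (the function name and data layout are my own, matching the simulated `Y` from the sketch above) computes each one; averaging the resulting mean squares over many simulated data sets should reproduce the expected-mean-square column:

```python
import numpy as np

def anova_ss(Y):
    """Sums of squares from the table above.
    Y[h] is an (N_h x n) array with Y[h][i, j] = Y_hij."""
    s, n = len(Y), Y[0].shape[1]
    N_h = np.array([y.shape[0] for y in Y])
    N = N_h.sum()
    grand = sum(y.sum() for y in Y) / (n * N)        # overall mean Ybar_...
    m_h = np.array([y.mean() for y in Y])            # treatment means Ybar_h..
    m_j = sum(y.sum(axis=0) for y in Y) / N          # time means Ybar_..j
    ss_tr = n * (N_h * (m_h - grand) ** 2).sum()
    ss_t = N * ((m_j - grand) ** 2).sum()
    ss_trt = sum(N_h[h] * ((Y[h].mean(axis=0) - m_h[h] - m_j + grand) ** 2).sum()
                 for h in range(s))
    ss_str = n * sum(((Y[h].mean(axis=1) - m_h[h]) ** 2).sum() for h in range(s))
    ss_e = sum(((Y[h] - Y[h].mean(axis=0) - Y[h].mean(axis=1, keepdims=True)
                 + m_h[h]) ** 2).sum() for h in range(s))
    return ss_tr, ss_t, ss_trt, ss_str, ss_e
```

For instance, with the values assumed in the simulation sketch, the average of $\text{SS}_{\text{S(Tr)}}/(N-s)$ over repeated simulations should settle near $\sigma_\varepsilon^2 + n\sigma_\pi^2 = 1 + 4 \times 2.25 = 10$.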

Use in F-tests

A null hypothesis of interest is that there is no difference between the effects of different treatments, and thus no difference among treatment means. This may be expressed by saying $D_{\text{Tr}} = 0$ (with the notation as used in the table above). Under this null hypothesis, the expected mean square for the effect of treatments is $\sigma_\varepsilon^2 + n\sigma_\pi^2$.

The numerator of the F-statistic for testing this hypothesis is the mean square due to differences among treatments, i.e. $\text{SS}_{\text{Tr}}/(s-1)$. The denominator, however, is not $\text{SS}_{\text{E}}/\big((N-s)(n-1)\big)$. The reason is that the random variable below, although it has an F-distribution under the null hypothesis, is not observable (it is not a statistic) because its value depends on the unobservable parameters $\sigma_\pi^2$ and $\sigma_\varepsilon^2$:

$$\frac{\left.\dfrac{\text{SS}_{\text{Tr}}}{\sigma_\varepsilon^2 + n\sigma_\pi^2}\right/ (s-1)}{\left.\dfrac{\text{SS}_{\text{E}}}{\sigma_\varepsilon^2}\right/ \big((N-s)(n-1)\big)} \neq \frac{\text{SS}_{\text{Tr}}/(s-1)}{\text{SS}_{\text{E}}/\big((N-s)(n-1)\big)}$$
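
A quick simulation (reusing the hypothetical `anova_ss` sketch above, with assumed variance values) shows what goes wrong if one uses $\text{SS}_{\text{E}}$ anyway: because $\sigma_\varepsilon^2 + n\sigma_\pi^2$ exceeds $\sigma_\varepsilon^2$ whenever $\sigma_\pi^2 > 0$, the naive ratio rejects far too often even under the null hypothesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
s, n, N_h = 3, 4, [5, 6, 7]          # same assumed design as above
N = sum(N_h)
rejections = 0
for _ in range(2000):
    # Data under the null hypothesis (all fixed effects zero),
    # with sigma_pi = 1.5 and sigma_eps = 1.0 as before
    Y = [1.5 * rng.standard_normal((m, 1)) + rng.standard_normal((m, n))
         for m in N_h]
    ss_tr, _, _, _, ss_e = anova_ss(Y)
    F_naive = (ss_tr / (s - 1)) / (ss_e / ((N - s) * (n - 1)))
    rejections += F_naive > stats.f.ppf(0.95, s - 1, (N - s) * (n - 1))
print(rejections / 2000)   # far above the nominal 0.05 level
```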

Instead, one uses as the test statistic the following random variable, which is not defined in terms of $\text{SS}_{\text{E}}$:

$$F = \frac{\left.\dfrac{\text{SS}_{\text{Tr}}}{\sigma_\varepsilon^2 + n\sigma_\pi^2}\right/ (s-1)}{\left.\dfrac{\text{SS}_{\text{S(Tr)}}}{\sigma_\varepsilon^2 + n\sigma_\pi^2}\right/ (N-s)} = \frac{\text{SS}_{\text{Tr}}/(s-1)}{\text{SS}_{\text{S(Tr)}}/(N-s)}$$
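
In code, the test reduces to the last expression; here is a sketch (again using the hypothetical `anova_ss` helper, with the p-value taken from SciPy's F distribution):

```python
from scipy import stats

def f_test_treatment(Y):
    """F-test of no treatment effect (D_Tr = 0), with patients-within-treatments
    as the denominator mean square, as derived above."""
    s, n = len(Y), Y[0].shape[1]
    N = sum(y.shape[0] for y in Y)
    ss_tr, _, _, ss_str, _ = anova_ss(Y)
    F = (ss_tr / (s - 1)) / (ss_str / (N - s))
    p = stats.f.sf(F, s - 1, N - s)      # upper tail of F with (s-1, N-s) df
    return F, p
```

Under the null hypothesis both mean squares estimate $\sigma_\varepsilon^2 + n\sigma_\pi^2$, so the unknown variance components cancel and $F$ follows an $F_{s-1,\,N-s}$ distribution.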

Notes and references

  1. Donald Hedeker, Robert D. Gibbons (2006). Longitudinal Data Analysis. Wiley-Interscience. pp. 21–24.