In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.
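Binning on a confounder can be illustrated with a small simulation (a hypothetical sketch, not taken from the article): a binary confounder Z raises both the chance of treatment X and the outcome Y, so the naive treatment-control difference is biased, while averaging the within-stratum differences recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Binary confounder Z affects both treatment X and outcome Y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)           # Z makes treatment more likely
y = 2.0 * x + 3.0 * z + rng.normal(0, 1, n)  # true treatment effect is 2.0

# Naive comparison ignores Z and is biased upward.
naive = y[x == 1].mean() - y[x == 0].mean()

# Control for Z by binning: estimate the effect within each stratum of Z,
# then average the strata weighted by their share of the sample.
strata, weights = [], []
for value in (0, 1):
    mask = z == value
    strata.append(y[mask & (x == 1)].mean() - y[mask & (x == 0)].mean())
    weights.append(mask.mean())
adjusted = np.average(strata, weights=weights)

print(f"naive: {naive:.2f}, stratified: {adjusted:.2f}")
```

With these (assumed) parameters the naive difference comes out near 3.8 while the stratified estimate is close to the true effect of 2.0.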
When estimating the effect of explanatory variables on an outcome by regression, controlled-for variables are included as inputs in order to separate their effects from the explanatory variables.
A limitation of controlling for variables is that a causal model is needed to identify the important confounders (the backdoor criterion is used for this identification). Without one, a confounder may go unnoticed. A related problem is that controlling for a variable that is not a genuine confounder can turn other variables (possibly ones not taken into account) into confounders when they were not confounders before. In other cases, controlling for a non-confounding variable causes underestimation of the true causal effect of the explanatory variables on the outcome (e.g., when controlling for a mediator or its descendant). Counterfactual reasoning mitigates the influence of confounders without this drawback.
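The danger of controlling for a non-confounder can be shown with a hypothetical collider example (a sketch under assumed parameters, not from the article): X and Y are independent, but both cause C. Conditioning on C, for instance by restricting the sample to one of its bins, manufactures a spurious association between X and Y.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# X and Y are independent; C is a collider, caused by both.
x = rng.normal(0, 1, n)
y = rng.normal(0, 1, n)
c = x + y + rng.normal(0, 1, n)

# Unconditionally, X and Y are (nearly) uncorrelated.
r_marginal = np.corrcoef(x, y)[0, 1]

# "Controlling" for the collider by restricting to one of its bins
# induces a spurious negative association between X and Y.
mask = c > 1.0
r_conditional = np.corrcoef(x[mask], y[mask])[0, 1]

print(f"marginal r = {r_marginal:.3f}, conditional r = {r_conditional:.3f}")
```

The marginal correlation hovers near zero while the within-bin correlation is clearly negative, even though no causal link between X and Y exists.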
Experiments
Experiments attempt to assess the effect of manipulating one or more independent variables on one or more dependent variables. To ensure the measured effect is not influenced by external factors, other variables must be held constant. The variables made to remain constant during an experiment are referred to as control variables.
For example, if an outdoor experiment were to be conducted to compare how different wing designs of a paper airplane (the independent variable) affect how far it can fly (the dependent variable), one would want to ensure that the experiment is conducted at times when the weather is the same, because one would not want weather to affect the experiment. In this case, the control variables may be wind speed, direction and precipitation. If the experiment were conducted when it was sunny with no wind, but the weather changed, one would want to postpone the completion of the experiment until the control variables (the wind and precipitation level) were the same as when the experiment began.
In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect.
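Random assignment can be sketched as a shuffle-then-split (an illustrative helper, not a procedure from the article): shuffling the participant pool before splitting makes group membership independent of every participant trait, which is what neutralizes irrelevant variables on average.

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into treatment and control groups.

    Hypothetical helper: shuffling before splitting makes group
    membership independent of any participant characteristic.
    """
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)

treatment, control = randomize(range(10))
print(treatment, control)
```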
Observational studies
In an observational study, researchers have no control over the values of the independent variables, such as who receives the treatment. Instead, they must control for variables using statistics.
Observational studies are used when controlled experiments may be unethical or impractical. For instance, if a researcher wished to study the effect of unemployment (the independent variable) on health (the dependent variable), it would be considered unethical by institutional review boards to randomly assign some participants to have jobs and some not to. Instead, the researcher will have to create a sample which includes some employed people and some unemployed people. However, there could be factors that affect both whether someone is employed and how healthy he or she is. Part of any observed association between the independent variable (employment status) and the dependent variable (health) could be due to these outside, spurious factors rather than indicating a true link between them. This can be problematic even in a true random sample. By controlling for the extraneous variables, the researcher can come closer to understanding the true effect of the independent variable on the dependent variable.
In this context the extraneous variables can be controlled for by using multiple regression. The regression uses as independent variables not only the one or ones whose effects on the dependent variable are being studied, but also any potential confounding variables, thus avoiding omitted variable bias. "Confounding variables" in this context means other factors that not only influence the dependent variable (the outcome) but also influence the main independent variable.
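A minimal sketch of this use of multiple regression (with assumed, made-up coefficients): a confounder Z drives both the exposure X and the outcome Y, so regressing Y on X alone suffers omitted variable bias, while adding Z as a regressor controls for it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Confounder Z drives both the exposure X and the outcome Y.
z = rng.normal(0, 1, n)
x = 0.8 * z + rng.normal(0, 1, n)
y = 1.5 * x + 2.0 * z + rng.normal(0, 1, n)  # true effect of X is 1.5

# Simple regression of Y on X alone: omitted variable bias.
X_naive = np.column_stack([np.ones(n), x])
b_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]

# Including Z as a regressor controls for it and removes the bias.
X_full = np.column_stack([np.ones(n), x, z])
b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

print(f"naive slope: {b_naive[1]:.2f}, controlled slope: {b_full[1]:.2f}")
```

The naive slope absorbs part of Z's effect (here roughly 2.5), while the controlled slope is close to the true 1.5.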
OLS regressions and control variables
The simplest examples of control variables in regression analysis come from ordinary least squares (OLS) estimators. The OLS framework assumes the following:
- Linear relationship - OLS statistical models are linear. Hence the relationship between explanatory variables and the mean of Y must be linear.
- Homoscedasticity - This requires homogeneity of variances, that is, equal or similar error variances across the data.
- Independence/No autocorrelation - The error term of one observation cannot be influenced by the error terms of other observations.
- Normality of errors - The errors are jointly normal and uncorrelated, which implies that the error terms are independently and identically distributed (i.i.d.). In particular, the unobservables for different groups or observations are independent.
- No multicollinearity - Independent variables must not be highly correlated with each other. In matrix notation, the design matrix must have full column rank, so that the matrix of cross-products is invertible.
Accordingly, a control variable can be interpreted as a linear explanatory variable that affects the mean value of Y (assumption 1), but that is not itself the primary variable of investigation, and that also satisfies the other assumptions above.
Example
Consider a study about whether getting older affects someone's life satisfaction. (Some researchers perceive a "u-shape": life satisfaction appears to decline first and then rise after middle age.) To identify the control variables needed here, one could ask what other variables determine not only someone's life satisfaction but also their age. Many other variables determine life satisfaction. But no other variable determines how old someone is (as long as they remain alive). (All people keep getting older, at the same rate, no matter what their other characteristics.) So, no control variables are needed here.
To determine the needed control variables, it can be useful to construct a directed acyclic graph.
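A directed acyclic graph can be sketched in code as a parent map (a hypothetical toy graph, with made-up variable names; a full treatment would apply the backdoor criterion): a candidate confounder must be a common cause of both the treatment and the outcome.

```python
# Toy causal DAG as a map from each variable to its direct causes (parents).
dag = {
    "life_satisfaction": ["age", "income", "health"],
    "income": ["age"],
    "health": ["age"],
    "age": [],
}

def ancestors(graph, node):
    """All variables with a directed path into `node`."""
    seen = set()
    stack = list(graph.get(node, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(graph.get(parent, []))
    return seen

# Age has no ancestors in this graph, so no variable is a common cause
# of age and life satisfaction: no control variables are needed.
print(ancestors(dag, "age"))  # set()
```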
References
- Frost, Jim. "A Tribute to Regression Analysis | Minitab". Retrieved 2015-08-04.
- Streiner, David L (February 2016). "Control or overcontrol for covariates?". Evid Based Ment Health. 19 (1): 4–5. doi:10.1136/eb-2015-102294. PMC 10699339. PMID 26755716. S2CID 11155639.
- Pearl, Judea; Mackenzie, Dana (2018). The Book of Why: The New Science of Cause and Effect. London: Allen Lane. ISBN 978-0-241-24263-6.
- Weisberg, Sanford (2021). Applied Linear Regression. John Wiley. ISBN 978-1-119-58014-0. OCLC 1225621417.
- Blanchflower, D.; Oswald, A. (2008). "Is well-being U-shaped over the life cycle?" (PDF). Social Science & Medicine. 66 (8): 1733–1749. doi:10.1016/j.socscimed.2008.01.030. PMID 18316146.
- Bartram, D. (2020). "Age and Life Satisfaction: Getting Control Variables under Control". Sociology. 55 (2): 421–437. doi:10.1177/0038038520926871.
Further reading
- Freedman, David; Pisani, Robert; Purves, Roger (2007). Statistics. W. W. Norton & Company. ISBN 978-0393929720.