
Structural equation modeling: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Revision as of 03:41, 25 October 2022 by PJohnson120 · Latest revision as of 14:24, 19 December 2024 by BunnysBot (119 intermediate revisions by 46 users not shown)
{{short description|Form of causal modeling that fits networks of constructs to data}}
{{about|the general structural modeling|the use of structural models in econometrics|Structural estimation|the journal|Structural Equation Modeling (journal)}}




'''Structural equation modeling''' ('''SEM''') is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology,<ref name="BM08">{{cite book | doi=10.4135/9781412953948.n443 | chapter=Structural Equation Modeling | title=Encyclopedia of Epidemiology | date=2008 | isbn=978-1-4129-2816-8 }}</ref> business,<ref name="Shelley06">{{cite book | doi=10.4135/9781412939584.n544 | chapter=Structural Equation Modeling | title=Encyclopedia of Educational Leadership and Administration | date=2006 | isbn=978-0-7619-3087-7 }}</ref> and other fields. A common definition of SEM is "...a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model".<ref>{{Cite web |title=Structural Equation Modeling - an overview {{!}} ScienceDirect Topics |url=https://www.sciencedirect.com/topics/neuroscience/structural-equation-modeling#:~:text=Structural%20equation%20modeling%20can%20be,underlying%20conceptual%20or%20theoretical%20model. |access-date=2024-11-15 |website=www.sciencedirect.com}}</ref>


SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using ''equations'', but the postulated structuring can also be presented using diagrams containing arrows, as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.<ref name="Pearl09">Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Second edition. New York: Cambridge University Press.</ref>
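As a minimal numeric sketch of this logic (the variables and the coefficient below are invented for illustration, not taken from the figures): a single postulated effect from x to y with coefficient β implies cov(x, y) = β·var(x), so the pattern among observed values identifies the magnitude of the postulated effect.

```python
import numpy as np

# Invented illustration: a postulated effect x -> y with slope beta implies
# cov(x, y) = beta * var(x), so the observed covariance pattern lets the
# postulated effect be estimated from data.
rng = np.random.default_rng(0)

beta = 0.6                     # postulated causal effect of x on y
n = 100_000
x = rng.normal(0.0, 1.0, n)    # exogenous cause
e = rng.normal(0.0, 1.0, n)    # residual: all other causes of y
y = beta * x + e               # structural equation

# Recover the effect from the observed covariance pattern.
beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(round(beta_hat, 2))      # prints a value close to 0.6
```

With many observations the estimate converges on the postulated coefficient; a real SEM program does the analogous calculation for all modeled effects simultaneously.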


The boundary between what is and is not a structural equation model is not always clear, but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.<ref name="kline_2016">{{Cite book|last=Kline|first=Rex B. |title=Principles and practice of structural equation modeling|date=2016 |isbn=978-1-4625-2334-4|edition=4th |location=New York|oclc=934184322}}</ref><ref name="Hayduk87">Hayduk, L. (1987) Structural Equation Modeling with LISREL: Essentials and Advances. Baltimore, Johns Hopkins University Press. ISBN 0-8018-3478-3</ref><ref>{{Cite book |last=Bollen |first=Kenneth A. |title=Structural equations with latent variables |date=1989 |publisher=Wiley |isbn=0-471-01171-1 |location=New York |oclc=18834634}}</ref><ref>{{Cite book |last=Kaplan |first=David |title=Structural equation modeling: foundations and extensions |date=2009 |publisher=SAGE |isbn=978-1-4129-1624-0 |edition=2nd |location=Los Angeles |oclc=225852466}}</ref><ref>{{Cite journal|last=Curran|first=Patrick J.|date=2003-10-01|title=Have Multilevel Models Been Structural Equation Models All Along?|journal=Multivariate Behavioral Research|volume=38|issue=4|pages=529–569|doi=10.1207/s15327906mbr3804_5|issn=0027-3171|pmid=26777445|s2cid=7384127}}</ref>


SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.<ref>{{cite journal |last1=Tarka |first1=Piotr |year=2017 |title=An overview of structural equation modeling: Its beginnings, historical development, usefulness and controversies in the social sciences |journal=Quality & Quantity |volume=52 |issue=1 |pages=313–54 |doi=10.1007/s11135-017-0469-8 |pmc=5794813 |pmid=29416184}}</ref>


A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.{{sfn|MacCallum|Austin|2000|p=209}}


== History ==


Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables.<ref name="Wright21">Wright, Sewall. (1921) "Correlation and causation". Journal of Agricultural Research. 20: 557-585.</ref><ref name="Wright34">{{cite journal | doi=10.1214/aoms/1177732676 | title=The Method of Path Coefficients | date=1934 | last1=Wright | first1=Sewall | journal=The Annals of Mathematical Statistics | volume=5 | issue=3 | pages=161–215 }}</ref><ref name="Wolfle99">Wolfle, L.M. (1999) "Sewall Wright on the method of path coefficients: An annotated bibliography" Structural Equation Modeling: 6(3):280-291.</ref> The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book<ref name="Duncan75">Duncan, Otis Dudley. (1975). Introduction to Structural Equation Models. New York: Academic Press. ISBN 0-12-224150-9.</ref> and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk<ref name="Hayduk87"/> provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).<ref name="Bollen89">Bollen, K. (1989). Structural Equations with Latent Variables. New York, Wiley. ISBN 0-471-01171-1.</ref>


Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from the economics of transportation and optimal routing, with maximum likelihood estimation, and closed form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Services, LISREL<ref name="JGvT70">Jöreskog, Karl; Gruvaeus, Gunnar T.; van Thillo, Marielle. (1970) ACOVS: A General Computer Program for Analysis of Covariance Structures. Princeton, N.J.; Educational Testing Services.</ref><ref name=":0">{{Cite journal|last1=Jöreskog|first1=Karl Gustav|last2=van Thillo|first2=Mariella|date=1972|title=LISREL: A General Computer Program for Estimating a Linear Structural Equation System Involving Multiple Indicators of Unmeasured Variables|url=https://files.eric.ed.gov/fulltext/ED073122.pdf|journal=Research Bulletin: Office of Education|volume=ETS-RB-72-56|via=US Government}}</ref><ref name="JS76">Jöreskog, Karl; Sorbom, Dag. (1976) LISREL III: Estimation of Linear Structural Equation Systems by Maximum Likelihood Methods. Chicago: National Educational Resources, Inc.</ref> embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors, which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
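The measurement-error adjustment made possible by the factor-structured portion of the model can be sketched with simulated data. This is a hypothetical one-indicator illustration (all numbers invented): regressing on an error-laden indicator attenuates the latent effect, and a known reliability permits adjusting the estimate back.

```python
import numpy as np

# Invented illustration of measurement-error adjustment: an error-laden
# indicator attenuates the estimated effect of the latent variable, and the
# attenuation factor (the indicator's reliability) lets us adjust for it.
rng = np.random.default_rng(1)
n = 200_000

latent = rng.normal(0.0, 1.0, n)                  # true-score variable
indicator = latent + rng.normal(0.0, 0.5, n)      # observed with measurement error
y = 0.7 * latent + rng.normal(0.0, 1.0, n)        # latent effect to recover

naive = np.cov(indicator, y)[0, 1] / np.var(indicator, ddof=1)
reliability = 1.0 / (1.0 + 0.5**2)                # var(latent) / var(indicator) = 0.8
adjusted = naive / reliability

print(round(naive, 2), round(adjusted, 2))        # attenuated vs. adjusted estimate
```

The naive slope comes out near 0.7 × 0.8 = 0.56, while the reliability-adjusted estimate recovers the postulated 0.7; SEM programs perform this kind of adjustment as part of the simultaneous estimation rather than as a separate correction step.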


Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and whether measurement should precede or accompany structural estimates.<ref name="HG00a">Hayduk, L.; Glaser, D.N. (2000) "Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun". Structural Equation Modeling. 7 (1): 1-35.</ref><ref name="HG00b">Hayduk, L.; Glaser, D.N. (2000) "Doing the Four-Step, Right-2-3, Wrong-2-3: A Brief Reply to Mulaik and Millsap; Bollen; Bentler; and Herting and Costner". Structural Equation Modeling. 7 (1): 111-123.</ref> Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with the path analytic appreciation for testing postulated causal connections, where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.


Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain.<ref name="Westland15">Westland, J.C. (2015). Structural Equation Modeling: From Paths to Networks. New York, Springer.</ref><ref>{{Cite journal|last=Christ|first=Carl F.|date=1994|title=The Cowles Commission's Contributions to Econometrics at Chicago, 1939-1955|url=https://www.jstor.org/stable/2728422|journal=Journal of Economic Literature|volume=32|issue=1|pages=30–59|jstor=2728422|issn=0022-0515}}</ref> Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs).<ref name="Pearl09"/> Discussions comparing and contrasting various SEM approaches are available,<ref name="Imbens20">Imbens, G.W. (2020). "Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics". Journal of Economic Literature. 58 (4): 1129-1179.</ref><ref name="BP13">{{cite book | doi=10.1007/978-94-007-6094-3_15 | chapter=Eight Myths About Causality and Structural Equation Models | title=Handbook of Causal Analysis for Social Research | series=Handbooks of Sociology and Social Research | date=2013 | last1=Bollen | first1=Kenneth A. | last2=Pearl | first2=Judea | pages=301–328 | isbn=978-94-007-6093-6 }}</ref> highlighting disciplinary differences in data structures and the concerns motivating economic models.


Judea Pearl<ref name="Pearl09" /> extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.<ref name="BP13" />
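In the linear special case the decomposition is simple arithmetic: the indirect effect through a mediator is the product of the path coefficients along the mediated route, and the total effect is the sum of the direct and indirect effects. A hypothetical mediation sketch with invented coefficients:

```python
# Invented linear mediation sketch: x -> m (a), m -> y (b), plus a direct
# path x -> y (c). In the linear case the effects decompose additively.
a, b, c = 0.5, 0.4, 0.3

direct = c               # effect of x on y holding the mediator m fixed
indirect = a * b         # effect transmitted through the mediator m
total = direct + indirect

print(direct, indirect, total)   # 0.3 0.2 0.5
```

Pearl's nonparametric formulation generalizes exactly this decomposition to settings where the product-and-sum rules no longer hold, such as categorical variables with nonlinear interactions.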


SEM analyses are popular in the social sciences because these analytic techniques help researchers break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest.<ref>{{Citation |last1=Bollen |first1=Kenneth A. |last2=Pearl |first2=Judea |title=Eight Myths About Causality and Structural Equation Models |date=2013 |work=Handbooks of Sociology and Social Research |pages=301–328 |url=https://doi.org/10.1007/978-94-007-6094-3_15 |access-date=2024-12-11 |place=Dordrecht |publisher=Springer Netherlands |isbn=978-94-007-6093-6}}</ref> The use of experimental designs may address some of these doubts.<ref>{{Cite journal |last1=Ng |first1=Ted Kheng Siang |last2=Gan |first2=Daniel R.Y. |last3=Mahendran |first3=Rathi |last4=Kua |first4=Ee Heok |last5=Ho |first5=Roger C-M |date=September 2021 |title=Social connectedness as a mediator for horticultural therapy's biological effect on community-dwelling older adults: Secondary analyses of a randomized controlled trial |url=https://doi.org/10.1016/j.socscimed.2021.114191 |journal=Social Science & Medicine |volume=284 |pages=114191 |doi=10.1016/j.socscimed.2021.114191 |issn=0277-9536}}</ref>




== General steps and considerations ==
The following considerations apply to the construction and assessment of many structural equation models.


=== Model specification ===


Building or specifying a model requires attending to:
* the set of variables to be employed,
* what is known about the variables,
* what is theorized or hypothesized about the variables' causal connections and disconnections,
* what the researcher seeks to learn from the modeling, and
* the instances of missing values and/or the need for imputation.


Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
* which effects and/or correlations/covariances are to be included and estimated,
* which effects and other coefficients are forbidden or presumed unnecessary,
* and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
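Computationally, such a specification amounts to bookkeeping over candidate parameters: each is declared free (to be estimated), fixed at a chosen value (e.g. a loading fixed at 1.0 to give a latent variable its measurement scale), or fixed at zero when the effect is forbidden or presumed unnecessary. A minimal hypothetical sketch (all names invented):

```python
# Invented sketch of a specification table: each candidate parameter is a
# (from, to) pair with a status of "free" (to be estimated) or "fixed"
# (held at a stated value during estimation).
spec = {
    ("factor", "item1"):   ("fixed", 1.0),   # loading fixed to set the latent scale
    ("factor", "item2"):   ("free", None),   # loading to be estimated
    ("factor", "item3"):   ("free", None),   # loading to be estimated
    ("factor", "outcome"): ("free", None),   # structural effect to be estimated
    ("item1", "item2"):    ("fixed", 0.0),   # error covariance presumed unnecessary
}

n_free = sum(1 for status, _ in spec.values() if status == "free")
n_fixed = sum(1 for status, _ in spec.values() if status == "fixed")
print(n_free, n_fixed)   # 3 2
```

An estimation routine would search only over the free parameters while honoring the fixed values, which is why an incomplete specification silently changes what the program estimates.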


The latent level of a model is composed of endogenous and exogenous latent variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.<ref name="BMvH03">{{cite journal | doi=10.1037/0033-295X.110.2.203 | title=The theoretical status of latent variables | date=2003 | last1=Borsboom | first1=Denny | last2=Mellenbergh | first2=Gideon J. | last3=Van Heerden | first3=Jaap | journal=Psychological Review | volume=110 | issue=2 | pages=203–219 | pmid=12747522 }}</ref>
SEM path analysis methods are popular in the social sciences because of their accessibility; packaged computer programs allow researchers to obtain results without the inconvenience of understanding experimental design and control, effect and sample sizes, and numerous other factors that are part of good research design. Supporters say that this reflects a holistic, and less blatantly causal, interpretation of many real world phenomena – especially in psychology and social interaction – than may be adopted in the natural sciences; detractors suggest that many flawed conclusions have been drawn because of this lack of experimental control.


The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations.<ref name="JS76"/><ref name="Hayduk87"/><ref name="Bollen89"/><ref name="Kline16" >Kline, Rex. (2016) Principles and Practice of Structural Equation Modeling (4th ed). New York, Guilford Press. ISBN 978-1-4625-2334-4</ref> Texts and programs "simplifying" model specification via diagrams, or via equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components, leaving unrecognized issues lurking within the model's structure and underlying matrices.
Direction in the directed network models of SEM arises from presumed cause-effect assumptions made about reality. Social interactions and artifacts are often epiphenomena – secondary phenomena that are difficult to directly link to causal factors. An example of a physiological epiphenomenon is time to complete a 100-meter sprint. A person may be able to improve their sprint speed from 12 seconds to 11 seconds, but it will be difficult to attribute that improvement to any direct causal factors, like diet, attitude, weather, etc. The 1 second improvement in sprint time is an epiphenomenon – the holistic product of the interaction of many individual factors.
Two main components of models are distinguished in SEM: the ''structural model'' showing potential causal dependencies between ], and the ''measurement model'' showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory ] models, for example, focus on the causal measurement connections, while ] more closely correspond to SEM's latent structural connections.


Modelers specify each coefficient in a model as being ''free'' to be estimated, or ''fixed'' at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used.<ref name="Kline16"/><ref name="Hayduk87"/><ref name="Bollen89"/> The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
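The free/fixed distinction can be made concrete with a small sketch. The following pure-Python example (illustrative only; the names and numbers are hypothetical and not tied to any SEM package) computes the covariance matrix implied by the one-path model y = b·x + e, where the effect b may be freed for estimation or fixed at 0.0 to assert a causal disconnection:

```python
# Illustrative sketch of model specification: a single path x -> y.
# Free coefficients: effect b, Var(x) = phi, residual Var(e) = psi.
# Fixing a coefficient (e.g. b = 0.0) asserts a causal disconnection.

def implied_covariance(b, phi, psi):
    """Model-implied covariance matrix [[Var(x), Cov(x,y)], [Cov(x,y), Var(y)]]
    for the linear model y = b*x + e, with x and e independent."""
    var_x = phi
    cov_xy = b * phi           # the effect of x on y scales their covariance
    var_y = b * b * phi + psi  # variance transmitted by x plus residual variance
    return [[var_x, cov_xy], [cov_xy, var_y]]

# Freeing b lets estimation choose its value; fixing b = 0.0 asserts "no effect".
free_model = implied_covariance(b=0.7, phi=1.0, psi=0.51)
null_model = implied_covariance(b=0.0, phi=1.0, psi=1.0)
```

Fixing b at 0.0 forces the implied covariance of x and y to zero, which is exactly the kind of testable restriction that fixed coefficients contribute when assessing the overall model structure.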
== General approach to SEM ==
Although each technique in the SEM family is different, the following aspects are common to many SEM methods. They can be summarized as a 4E framework, used by SEM scholars such as ]: 1) Equation (model or equation specification), 2) Estimation of free parameters, 3) Evaluation of models and model fit, and 4) Explanation and communication, as well as execution of results.


There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects, and other causal loops, may also interfere with estimation.<ref name="Rigdon95">Rigdon, E. (1995). "A necessary and sufficient identification rule for structural models estimated in practice." Multivariate Behavioral Research. 30 (3): 359-383.</ref><ref name="Hayduk96">Hayduk, L. (1996) LISREL Issues, Debates, and Strategies. Baltimore, Johns Hopkins University Press. ISBN 0-8018-5336-2</ref><ref name="Kline16"/>
=== Model specification ===


=== Estimation of free model coefficients ===


Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).


A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model were correctly specified, namely if all the model's estimated features correspond to real worldly features.
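As a toy illustration of this discrepancy minimization (a deliberately simplified sketch, not any package's algorithm), the following code grid-searches for the effect value that minimizes the squared differences between a hypothetical sample covariance matrix and the covariances implied by the one-path model y = b·x + e:

```python
# Toy least-squares estimation for the model y = b*x + e (x, e independent):
# choose b minimizing the squared gap between sample and implied covariances.

def implied_cov(b, var_x, var_e):
    return [[var_x, b * var_x], [b * var_x, b * b * var_x + var_e]]

def squared_discrepancy(sample, implied):
    return sum((sample[i][j] - implied[i][j]) ** 2
               for i in range(2) for j in range(2))

# Hypothetical sample covariance matrix: var_x = 1, cov_xy = 0.5, var_y = 1.
S = [[1.0, 0.5], [0.5, 1.0]]

# Grid search over candidate effect sizes in [-1, 1]; the residual variance
# is crudely set to 1 - b*b so the implied Var(y) matches exactly.
best_b = min((b / 1000 for b in range(-1000, 1001)),
             key=lambda b: squared_discrepancy(S, implied_cov(b, 1.0, 1.0 - b * b)))
```

Here the closed-form answer is cov(x,y)/var(x) = 0.5, and the grid search recovers it; real SEM programs instead iteratively adjust all free coefficients at once under ML, OLS, or WLS criteria.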
A modeler will often specify a set of theoretically plausible models in order to assess whether the model proposed is the best of the set of possible models. Not only must the modeler account for the theoretical reasons for building the model as it is, but the modeler must also take into account the number of data points and the number of parameters that the model must estimate to identify the model.


The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.<ref name="Kline16"/>
An identified model is a model where a specific parameter value uniquely identifies the model (]), and no other equivalent formulation can be given by a different parameter value. A ] is a variable with observed scores, like a variable containing the scores on a question or the number of times respondents buy a car. The parameter is the value of interest, which might be a regression coefficient between the exogenous and the endogenous variable or the factor loading (regression coefficient between an indicator and its factor). If there are fewer data points than the number of estimated parameters, the resulting model is "unidentified", since there are too few reference points to account for all the variance in the model. The solution is to constrain one of the paths to zero, which means that it is no longer part of the model.


One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the other effect being stronger than the one, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate,<ref name="Hayduk96" /> but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification.<ref name="Rigdon95" /> Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly.<ref name="Rigdon95"/> Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. 
Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
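A necessary (though not sufficient) condition for identification is the counting rule sometimes called the t-rule: the number of free coefficients cannot exceed the number of distinct observed variances and covariances. A minimal sketch:

```python
# Counting rule ("t-rule"): with p observed variables the data supply
# p*(p+1)/2 distinct variances and covariances.  A model with t free
# coefficients then has df = p*(p+1)/2 - t.  df < 0 guarantees
# underidentification; df >= 0 is necessary but NOT sufficient.

def model_degrees_of_freedom(p, t):
    return p * (p + 1) // 2 - t

# Two observed variables with a reciprocal pair of effects and two
# residual variances: t = 4 free coefficients but only 3 data moments.
df_reciprocal = model_degrees_of_freedom(p=2, t=4)  # negative: underidentified
```

A two-variable model with a reciprocal pair of effects plus the needed variances has more free coefficients than the three available data moments, which is one way of seeing why reciprocal effects remain underidentified without additional model or data constraints.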
Parameter estimation is done by comparing the actual ] representing the relationships between variables and the estimated covariance matrices of the best fitting model. This is obtained through numerical maximization via ] of a ''fit criterion'' as provided by ] estimation, ] estimation, ] or asymptotically distribution-free methods. This is often accomplished by using a specialized SEM analysis program, of which several exist.


=== Model assessment ===
{{summary style|date=March 2024}}
Having estimated a model, analysts will want to interpret the model. Estimated paths may be tabulated and/or presented graphically as a path model. The impact of variables is assessed using ] (see ]).


Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
* '''whether the data contain reasonable measurements of appropriate variables''',
* '''whether the modeled cases are causally homogeneous''', (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
* '''whether the model appropriately represents the theory or features of interest''', (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
* '''whether the estimates are statistically justifiable''', (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.)
* '''the substantive reasonableness of the estimates''', (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
* '''the remaining consistency, or inconsistency, between the model and data'''. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)


Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a {{math|χ<sup>2</sup>}} (]) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small {{math|χ<sup>2</sup>}} probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
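The χ² statistic referenced here is commonly computed as T = (N − 1)·F_ML, where F_ML is the maximum-likelihood discrepancy between the sample covariance matrix S and the model-implied matrix Σ. A minimal two-variable sketch with hypothetical numbers (pure Python, hard-coded for 2×2 matrices):

```python
import math

# ML discrepancy for 2x2 covariance matrices:
# F_ML = ln|Sigma| + tr(S * Sigma^{-1}) - ln|S| - p,  with p = 2 variables.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def trace_prod(a, b):
    # trace of the matrix product a @ b
    return sum(a[i][k] * b[k][i] for i in range(2) for k in range(2))

def f_ml(S, Sigma, p=2):
    return math.log(det2(Sigma)) + trace_prod(S, inv2(Sigma)) - math.log(det2(S)) - p

S     = [[1.0, 0.5], [0.5, 1.0]]  # hypothetical sample covariances
Sigma = [[1.0, 0.3], [0.3, 1.0]]  # covariances implied by a misfitting model

T = (200 - 1) * f_ml(S, Sigma)    # chi-square statistic for N = 200
# T is referred to a chi-square distribution on
# df = p*(p+1)/2 - (number of free coefficients) degrees of freedom.
```

With Σ equal to S the discrepancy is zero; the more the model-implied covariances diverge from the observed ones, and the larger the sample, the larger T becomes relative to its χ² reference distribution.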
Formal statistical tests and fit indices have been developed for these purposes. Individual parameters of the model can also be examined within the estimated model in order to see how well the proposed model fits the driving theory. Most, though not all, estimation methods make such tests of the model possible.


If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model {{math|χ<sup>2</sup>}} test).<ref name="Hayduk14b"/> Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Of course as in all ], SEM model tests are based on the assumption that the correct and complete relevant data have been modeled. In the SEM literature, discussion of fit has led to a variety of different recommendations on the precise application of the various fit indices and hypothesis tests.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.


Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
There are differing approaches to assessing fit. Traditional approaches to modeling start from a ], rewarding more parsimonious models (i.e. those with fewer free parameters); others, such as ], focus on how little the fitted values deviate from a saturated model{{Citation needed|date=November 2009}} (i.e. how well they reproduce the measured values), taking into account the number of free parameters used. Because different measures of fit capture different elements of the fit of the model, it is appropriate to report a selection of different fit measures. Guidelines (i.e., "cutoff scores") for interpreting fit measures, including the ones listed below, are the subject of much debate among SEM researchers.{{sfn|MacCallum|Austin|2000|p=218-219}}


A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution."<ref name="MacCallum1986" /> Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.<ref name="HL12">{{cite journal | doi=10.1186/1471-2288-12-159 | doi-access=free | title=Should researchers use single indicators, best indicators, or multiple indicators in structural equation models? | date=2012 | last1=Hayduk | first1=Leslie A. | last2=Littvay | first2=Levente | journal=BMC Medical Research Methodology | volume=12 | page=159 | pmid=23088287 | pmc=3506474 }}</ref>

"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the {{math|χ<sup>2</sup>}} test can have (though it does not always have) considerable power to detect model misspecification.<ref name="BMKAG02">Browne, M.W.; MacCallum, R.C.; Kim, C.T.; Andersen, B.L.; Glaser, R. (2002) "When fit indices and residuals are incompatible." Psychological Methods. 7: 403-421.</ref> The probability accompanying a {{math|χ<sup>2</sup>}} test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small {{math|χ<sup>2</sup>}} probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to {{math|χ<sup>2</sup>}}. The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres<ref name="HP-RCLB05">{{cite journal |doi=10.1186/1471-2288-5-1|doi-access=free |title=Structural equation model testing and the quality of natural killer cell activity measurements |date=2005 |last1=Hayduk |first1=Leslie A. |last2=Pazderka-Robinson |first2=Hannah |last3=Cummings |first3=Greta G. |last4=Levers |first4=Merry-Jo D. |last5=Beres |first5=Melanie A. |journal=BMC Medical Research Methodology |volume=5 |page=1 |pmid=15636638 |pmc=546216 }} Note the correction of .922 to .992, and the correction of .944 to .994 in the Hayduk, et al. 
Table 1.</ref> who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of {{math|χ<sup>2</sup>}} testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.<ref name="Hayduk14a">{{cite journal | doi=10.1177/0013164414527449 | title=Seeing Perfectly Fitting Factor Models That Are Causally Misspecified | date=2014 | last1=Hayduk | first1=Leslie | journal=Educational and Psychological Measurement | volume=74 | issue=6 | pages=905–926 }}</ref>

Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that {{math|χ<sup>2</sup>}} increases (and hence {{math|χ<sup>2</sup>}} probability decreases) with increasing sample size (N). There are two mistakes in discounting {{math|χ<sup>2</sup>}} on this basis. First, for proper models, {{math|χ<sup>2</sup>}} does not increase with increasing N,<ref name="Hayduk14b"/> so if {{math|χ<sup>2</sup>}} increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, increasing N provides the good news of increasing statistical power to detect model misspecification (namely, power to avoid a Type II error). Some kinds of important misspecifications cannot be detected by {{math|χ<sup>2</sup>}},<ref name="Hayduk14a"/> so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration.<ref name="Barrett07"/><ref name="Hayduk14b"/> The {{math|χ<sup>2</sup>}} model test, possibly adjusted,<ref name="SB94">Satorra, A.; and Bentler, P. M. (1994) "Corrections to test statistics and standard errors in covariance structure analysis". In A. von Eye and C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage.</ref> is the strongest available structural equation model test.

Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency.<ref name="Hayduk14a"/> Models with different causal structures which fit the data identically well, have been called equivalent models.<ref name="Kline16"/> Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment.

This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data,<ref name="Barrett07"/> but several forces continue to propagate fit-index use. For example, Dag Sorbom reported that when someone asked Karl Joreskog, the developer of the first structural equation modeling program, "Why have you then added GFI to your LISREL program?", Joreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose."<ref name="Sorbom">Sorbom, D. "xxxxx" in Cudeck, R; du Toit R.; Sorbom, D. (editors) (2001) Structural Equation Modeling: Present and Future: Festschrift in Honor of Karl Joreskog. Scientific Software International: Lincolnwood, IL.</ref> The {{math|χ<sup>2</sup>}} evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems to be no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.

Whether or not researchers are committed to seeking the world's structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline's substance. The discipline ends up paying real costs for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.<ref name="HL12"/>

The considerations relevant to using fit indices include checking:
# whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
# whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured);
# whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
# whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
# whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
# whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler<ref name="HB99">Hu, L.; Bentler, P.M. (1999) "Cutoff criteria for fit indices in covariance structure analysis: Conventional criteria versus new alternatives." Structural Equation Modeling. 6: 1-55.</ref> report that some common indices function inappropriately unless they are assessed together.);
# whether a model test is, or is not, available. (A {{math|χ<sup>2</sup>}} value, degrees of freedom, and probability will be available for models reporting indices based on {{math|χ<sup>2</sup>}}.)
# and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (E.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ in the context of medical, business, social and psychological contexts.).

Some of the more commonly used fit statistics include
* ]
** A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.<ref name="Hayduk14b">{{cite journal | doi=10.1186/1471-2288-14-124 | doi-access=free | title=Shame for disrespecting evidence: The personal consequences of insufficient respect for structural equation model testing | date=2014 | last1=Hayduk | first1=Leslie A. | journal=BMC Medical Research Methodology | volume=14 | page=124 }}</ref>
* ] (AIC)
** An index of relative model fit: The preferred model is the one with the lowest AIC value.
** <math>\mathit{AIC} = 2k - 2\ln(L)\,</math>
** where ''k'' is the number of ]s in the ], and ''L'' is the maximized value of the ] of the model.
* ] (RMSEA)
**Fit index where a value of zero indicates the best fit.{{sfn|Kline|2011|p=205}} Guidelines for determining a "close fit" using RMSEA are highly contested.{{sfn|Kline|2011|p=206}}
* ] (SRMR)
** The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.{{sfn|Hu|Bentler|1999|p=27}}
* ] (CFI)
**In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.{{sfn|Hu|Bentler|1999|p=27}}
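The AIC comparison described above can be sketched numerically; the two candidate models, their parameter counts, and log-likelihoods below are hypothetical:

```python
def aic(k: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*ln(L),
    where ln(L) is the maximized log-likelihood."""
    return 2 * k - 2 * log_likelihood

# Hypothetical candidate models fit to the same data:
aic_a = aic(12, -1400.0)  # model A: 12 free parameters -> 2824.0
aic_b = aic(15, -1399.0)  # model B: 15 free parameters -> 2828.0

# The model with the lowest AIC value is preferred.
preferred = "A" if aic_a < aic_b else "B"
```

Here model B's three extra parameters buy too little likelihood improvement, so the simpler model wins the comparison.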


The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions.<ref name="Kline16"/> For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.<ref name="Hayduk14b" />
{| class="wikitable"
|+ '''Features of Fit Indices'''
!
!RMSEA
!SRMR
!CFI
|-
|Index Name
|Root Mean Square Error of Approximation
|Standardized Root Mean Squared Residual
|Comparative Fit Index
|-
| Formula
|<math>\text{RMSEA} = \sqrt{(\chi^2 - d)/(d(N-1))}</math>
|
|
|-
|Basic References
|<ref name="SL80"/><ref name="S90"/><ref name="BC92"/>
|
|
|-
|'''Factor Model''' proposed wording
for critical values
| .06 or smaller<ref name="HB99"/>
|
|
|-
|'''NON-Factor Model''' proposed wording
for critical values
|
|
|
|-
|References proposing revised/changed,<br/>
disagreements over critical values
|<ref name="HB99"/>
|<ref name="HB99"/>
|<ref name="HB99"/>
|-
|References indicating two-index or paired-index
criteria are required
|<ref name="HB99"/>
|<ref name="HB99"/>
|<ref name="HB99"/>
|-
|Index based on {{math|χ<sup>2</sup>}}
|Yes
|No
|Yes
|-
|References recommending against use
of this index
|<ref name="Barrett07"/>
|<ref name="Barrett07"/>
|<ref name="Barrett07"/>
|}
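The RMSEA formula given in the table can be computed directly. The chi-square, degrees of freedom, and sample size below are hypothetical; flooring the numerator at zero, when chi-square falls below its degrees of freedom, is a common convention rather than part of the table's formula:

```python
import math

def rmsea(chi_square: float, d: int, n: int) -> float:
    """RMSEA = sqrt((chi2 - d) / (d * (N - 1))), where d is the model's
    degrees of freedom and N the sample size; negative numerators are
    floored at zero (a common convention)."""
    return math.sqrt(max(chi_square - d, 0.0) / (d * (n - 1)))

# Hypothetical model: chi-square = 90 on 60 degrees of freedom, N = 301.
value = rmsea(90.0, 60, 301)  # sqrt(30 / 18000), roughly 0.041
```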


=== Sample size, power, and estimation ===


Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients.<ref name="Kline16"/> Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances.<ref name="HL12"/> Overall, for moderate-sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.


The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, and avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.


=== Interpretation ===


Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.<ref name="HC00">Herting, R.H.; Costner, H.L. (2000) “Another perspective on “The proper number of factors” and the appropriate number of steps.” Structural Equation Modeling. 7 (1): 92-110.</ref>


Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.<ref name="BMvH03"/>
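The product rule for indirect effects described above can be sketched with hypothetical path coefficients for a chain X → M → Y:

```python
# Hypothetical direct effects (in the variables' own units):
b_xm = 0.50  # X -> M
b_my = 0.40  # M -> Y
b_xy = 0.25  # X -> Y (direct)

# A specific indirect effect equals the product of the direct effects
# along its path: a unit increase in X changes M by 0.50, and that
# change in M in turn changes Y by 0.50 * 0.40.
indirect = b_xm * b_my   # 0.20

# The total effect of X on Y sums the direct and indirect effects.
total = b_xy + indirect  # 0.45
```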


SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes.<ref name="Duncan75"/> The meaning and interpretation of specific estimates should be contextualized in the full model.


SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happen remains unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two affected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause.<ref name="Duncan75"/> (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance.<ref name="Hayduk87p20">Hayduk, L. (1987) Structural Equation Modeling with LISREL: Essentials and Advances. Baltimore: Johns Hopkins University Press. ISBN 0-8018-3478-3. Page 20.</ref> Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled.<ref name="Pearl09"/><ref name="HCSNGDGP-R03">Hayduk, L. A.; Cummings, G.; Stratkotter, R.; Nimmo, M.; Grugoryev, K.; Dosman, D.; Gillespie, M.; Pazderka-Robinson, H. (2003) “Pearl’s D-separation: One more step into causal thinking.” Structural Equation Modeling. 10 (2): 289-311.</ref> As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.<ref name="Duncan75"/><ref name="Bollen89"/><ref name="Hayduk87"/><ref name="Hayduk96"/>
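The common-cause point above can be illustrated with path-tracing arithmetic; the standardized coefficients are hypothetical:

```python
# Hypothetical standardized model: a common cause C with two effects,
#   C -> Y1 : 0.70 and C -> Y2 : 0.60,
# and no other connection between Y1 and Y2.
b1 = 0.70
b2 = 0.60

# With all variables standardized to variance 1.0, the model-implied
# correlation between Y1 and Y2 is the product of the two effects:
implied_corr = b1 * b2   # 0.42

# Variance in each effect explained by the common cause alone:
r2_y1 = b1 ** 2          # 0.49
r2_y2 = b2 ** 2          # 0.36
```

If C's value rises one standard deviation, both Y1 and Y2 are expected to rise, which is what produces the positive correlation even though neither Y causes the other.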


The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by ''R''<sup>2</sup>, though the Blocked-Error ''R''<sup>2</sup> should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.<ref name="Hayduk06">Hayduk, L.A. (2006) “Blocked-Error-R2: A conceptually improved definition of the proportion of explained variance in models containing loops or correlated residuals.” Quality and Quantity. 40: 629-649.</ref>
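As a minimal sketch of the ordinary ''R''<sup>2</sup> mentioned above (the Blocked-Error ''R''<sup>2</sup> requires model-specific calculations not shown here), with hypothetical variances:

```python
def r_squared(explained_variance: float, total_variance: float) -> float:
    """Proportion of a dependent variable's variance explained by the
    modeled causes: R^2 = explained / total = 1 - residual / total."""
    return explained_variance / total_variance

# Hypothetical dependent variable: total variance 10.0, of which the
# modeled causes account for 6.5 and the error variable for 3.5.
r2 = r_squared(6.5, 10.0)   # 0.65
r2_check = 1 - 3.5 / 10.0   # same value, computed from the residual side
```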


The caution appearing in the Model Assessment section warrants repeating here. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.


Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency.<ref name="Hayduk96"/> The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.<ref name="Hayduk96"/>

Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables.<ref name="Kline16"/> <!-- For interpretations of coefficients in models containing interactions, see { reference needed }, for multilevel models see { reference needed }, for longitudinal models see, { reference needed }, and for models containing categoric variables see { reference needed }. --> Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.<ref name="Hayduk87"/><ref name="Hayduk96"/>
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients.<ref name="Millsap07">Millsap, R.E. (2007) “Structural equation modeling made difficult.” Personality and Individual Differences. 42: 875-881.</ref> Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.<ref name="HP-RCLB05"/><ref name="EHR82">Entwisle, D.R.; Hayduk, L.A.; Reilly, T.W. (1982) Early Schooling: Cognitive and Affective Outcomes. Baltimore: Johns Hopkins University Press.</ref><ref name="Hayduk94">Hayduk, L.A. (1994). “Personal space: Understanding the simplex model.” Journal of Nonverbal Behavior., 18 (3): 245-260.</ref><ref name="HSR97">Hayduk, L.A.; Stratkotter, R.; Rovers, M.W. (1997) “Sexual Orientation and the Willingness of Catholic Seminary Students to Conform to Church Teachings.” Journal for the Scientific Study of Religion. 36 (3): 455-467.</ref>

The multiple ways of conceptualizing PLS models<ref name="RSR17">{{cite journal | doi=10.15358/0344-1369-2017-3-4 | title=On Comparing Results from CB-SEM and PLS-SEM: Five Perspectives and Five Recommendations | date=2017 | last1=Rigdon | first1=Edward E. | last2=Sarstedt | first2=Marko | last3=Ringle | first3=Christian M. | journal=Marketing ZFP | volume=39 | issue=3 | pages=4–16 | doi-access=free }}</ref> complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on ''R''<sup>2</sup> or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle<ref name="RSR17"/> point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.

Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term ''causal model'' must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.<ref name="Pearl09" />

=== Controversies and movements ===
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether each latent simultaneously appropriately coordinates its indicators with the indicators of theorized causes and/or consequences of that latent.<ref name="Hayduk96"/> If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and any scale or factor-scores purporting to measure that latent are questioned. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser<ref name="HG00a"/> followed by several comments and a rejoinder,<ref name="HG00b"/> all made freely available, thanks to the efforts of George Marcoulides.

These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett<ref name="Barrett07"/> who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”” (page 821).<ref name="Barrett07"/> Barrett’s article was also accompanied by commentary from both perspectives.<ref name="Millsap07"/><ref name="HCBP-RB07">Hayduk, L.A.; Cummings, G.; Boadu, K.; Pazderka-Robinson, H.; Boulianne, S. (2007) “Testing! testing! one, two, three – Testing the theory in structural equation models!” Personality and Individual Differences. 42 (5): 841-850</ref>

The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports.<ref name="Hayduk14b"/> The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity” – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models.<ref name="Mulaik09">Mulaik, S.A. (2009) Foundations of Factor Analysis (second edition). Chapman and Hall/CRC. Boca Raton, pages 130-131.</ref> The comments by Bollen and Pearl regarding myths about causality in the context of SEM<ref name="BP13" /> reinforced the centrality of causal thinking in the context of SEM.

A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007),<ref name="LH07">Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66.</ref> for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)<ref name="Kline16"/> remain disturbingly weak in their presentation of model testing.<ref name="Hayduk18">{{cite journal | doi=10.25336/csp29397 | title=Review essay on Rex B. Kline's Principles and Practice of Structural Equation Modeling: Encouraging a fifth edition | date=2018 | last1=Hayduk | first1=Leslie | journal=Canadian Studies in Population | volume=45 | issue=3–4 | page=154 | doi-access=free }}</ref> Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.

An additional controversy that touched the fringes of the previous controversies awaits ignition.{{citation needed|date=March 2024}} Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012)<ref name="HL12"/> discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time,<ref name="EHR82"/> but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.

== Extensions, modeling alternatives, and statistical kin ==

* Categorical dependent variables {{citation needed|date=July 2023}}
* Categorical intervening variables {{citation needed|date=July 2023}}
* Copulas {{citation needed|date=March 2024}}
* Deep Path Modelling <ref name="Ing2024"/>
* Exploratory Structural Equation Modeling <ref>{{Cite journal |last1=Marsh |first1=Herbert W. |last2=Morin |first2=Alexandre J.S. |last3=Parker |first3=Philip D. |last4=Kaur |first4=Gurvinder |date=2014-03-28 |title=Exploratory Structural Equation Modeling: An Integration of the Best Features of Exploratory and Confirmatory Factor Analysis |url=https://www.annualreviews.org/doi/10.1146/annurev-clinpsy-032813-153700 |journal=Annual Review of Clinical Psychology |language=en |volume=10 |issue=1 |pages=85–110 |doi=10.1146/annurev-clinpsy-032813-153700 |pmid=24313568 |issn=1548-5943}}</ref>
* Fusion validity models<ref name="HEH19">{{doi|10.3389/fpsyg.2019.01139}}</ref>
* ] models {{citation needed|date=July 2023}}
* ] {{citation needed|date=July 2023}}
* ] {{citation needed|date=July 2023}}
* Link functions {{citation needed|date=July 2023}}
* Longitudinal models <ref>{{Cite journal |last1=Zyphur |first1=Michael J. |last2=Allison |first2=Paul D. |last3=Tay |first3=Louis |last4=Voelkle |first4=Manuel C. |last5=Preacher |first5=Kristopher J. |last6=Zhang |first6=Zhen |last7=Hamaker |first7=Ellen L. |last8=Shamsollahi |first8=Ali |last9=Pierides |first9=Dean C. |last10=Koval |first10=Peter |last11=Diener |first11=Ed |date=October 2020 |title=From Data to Causes I: Building A General Cross-Lagged Panel Model (GCLM) |journal=Organizational Research Methods |language=en |volume=23 |issue=4 |pages=651–687 |doi=10.1177/1094428119847278 |s2cid=181878548 |issn=1094-4281|doi-access=free |hdl=11343/247887 |hdl-access=free }}</ref>
* ] models <ref>{{Cite journal |last1=Leitgöb |first1=Heinz |last2=Seddig |first2=Daniel |last3=Asparouhov |first3=Tihomir |last4=Behr |first4=Dorothée |last5=Davidov |first5=Eldad |last6=De Roover |first6=Kim |last7=Jak |first7=Suzanne |last8=Meitinger |first8=Katharina |last9=Menold |first9=Natalja |last10=Muthén |first10=Bengt |last11=Rudnev |first11=Maksim |last12=Schmidt |first12=Peter |last13=van de Schoot |first13=Rens |date=February 2023 |title=Measurement invariance in the social sciences: Historical development, methodological challenges, state of the art, and future perspectives |url=https://linkinghub.elsevier.com/retrieve/pii/S0049089X22001168 |journal=Social Science Research |language=en |volume=110 |pages=102805 |doi=10.1016/j.ssresearch.2022.102805|pmid=36796989 |hdl=1874/431763 |s2cid=253343751 |hdl-access=free }}</ref>
* ] {{citation needed|date=July 2023}}
* ], hierarchical models (e.g. people nested in groups) <ref>{{Citation |last1=Sadikaj |first1=Gentiana |title=Multilevel structural equation modeling for intensive longitudinal data: A practical guide for personality researchers |date=2021 |url=https://linkinghub.elsevier.com/retrieve/pii/B9780128139950000339 |work=The Handbook of Personality Dynamics and Processes |pages=855–885 |access-date=2023-11-03 |publisher=Elsevier |language=en |doi=10.1016/b978-0-12-813995-0.00033-9 |isbn=978-0-12-813995-0 |last2=Wright |first2=Aidan G.C. |last3=Dunkley |first3=David M. |last4=Zuroff |first4=David C. |last5=Moskowitz |first5=D.S.}}</ref>
* Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.) {{citation needed|date=July 2023}}
* Multi-method multi-trait models {{citation needed|date=July 2023}}
* Random intercepts models {{citation needed|date=July 2023}}
* Structural Equation Model Trees {{citation needed|date=July 2023}}
* Structural Equation ]<ref>{{Cite journal |last1=Vera |first1=José Fernando |last2=Mair |first2=Patrick |date=2019-09-03 |title=SEMDS: An R Package for Structural Equation Multidimensional Scaling |url=https://www.tandfonline.com/doi/full/10.1080/10705511.2018.1561292 |journal=Structural Equation Modeling |language=en |volume=26 |issue=5 |pages=803–818 |doi=10.1080/10705511.2018.1561292 |issn=1070-5511}}</ref>

== Software ==
Structural equation modeling programs differ widely in their capabilities and user requirements.<ref>{{Cite journal |last=Narayanan |first=A. |date=2012-05-01 |title=A Review of Eight Software Packages for Structural Equation Modeling |url=https://doi.org/10.1080/00031305.2012.708641 |journal=The American Statistician |volume=66 |issue=2 |pages=129–138 |doi=10.1080/00031305.2012.708641 |s2cid=59460771 |issn=0003-1305}}</ref>


== See also ==
* {{Annotated link|Causal model}}
* {{Annotated link|Graphical model}}
* ]
* {{Annotated link|Multivariate statistics}}
* {{Annotated link|Partial least squares path modeling}}
* {{Annotated link|Partial least squares regression}}
* {{Annotated link|Simultaneous equations model}}
* '']''
* {{Annotated link|Causal map}}
* {{Annotated link|Bayesian Network}}


== References ==
{{Reflist|30em|refs=

<ref name="lavaan">{{cite journal |last1=Rosseel |first1=Yves |title=lavaan: An R Package for Structural Equation Modeling. |journal=Journal of Statistical Software |date=2012-05-24 |volume=48 |issue=2 |pages=1–36 |doi=10.18637/jss.v048.i02 |url=https://www.jstatsoft.org/article/view/v048i02 |access-date=27 January 2021|doi-access=free }}</ref>
<ref name="Barrett07">Barrett, P. (2007). "Structural equation modeling: Adjudging model fit." Personality and Individual Differences. 42 (5): 815–824.</ref>
<ref name="BC92">Browne, M.W.; Cudeck, R. (1992) "Alternative ways of assessing model fit." Sociological Methods and Research. 21 (2): 230–258.</ref>
<ref name="S90">Steiger, J. H. (1990) "Structural Model Evaluation and Modification: An Interval Estimation Approach". Multivariate Behavioral Research. 25: 173–180.</ref>
<ref name="SL80">Steiger, J. H.; and Lind, J. (1980) "Statistically Based Tests for the Number of Common Factors." Paper presented at the annual meeting of the Psychometric Society, Iowa City.</ref>
<ref name="MacCallum1986">{{cite journal |doi=10.1037/0033-2909.100.1.107 |title=Specification searches in covariance structure modeling |journal=Psychological Bulletin |volume=100 |pages=107–120 |year=1986 |last1=MacCallum |first1=Robert }}</ref>
<ref name="Salkind2007">{{cite book |doi=10.4135/9781412952644.n220 |chapter=Intelligence Tests |title=Encyclopedia of Measurement and Statistics |year=2007 |isbn=978-1-4129-1611-0 |last1=Salkind |first1=Neil J. }}</ref>

<!--
<ref name="Pearl">{{Cite book | first = Judea | last = Pearl | author-link = Judea Pearl | title = Causality: Models, Reasoning, and Inference | publisher = ] | year = 2000 | isbn = 978-0-521-77362-1 | url-access = registration | url = https://archive.org/details/causalitymodelsr0000pear}}</ref> <ref name="Pearl">{{Cite book | first = Judea | last = Pearl | author-link = Judea Pearl | title = Causality: Models, Reasoning, and Inference | publisher = ] | year = 2000 | isbn = 978-0-521-77362-1 | url-access = registration | url = https://archive.org/details/causalitymodelsr0000pear}}</ref>
<ref name="Westland">{{cite journal <ref name="Westland">{{cite journal
| pages = 476–487 | pages = 476–487
}}</ref> }}</ref>
-->
<!--
<ref name="MacCallum1996">{{cite journal |doi=10.1037/1082-989X.1.2.130 |title=Power analysis and determination of sample size for covariance structure modeling |journal=Psychological Methods |volume=1 |issue=2 |pages=130–49 |year=1996 |last1=MacCallum |first1=Robert C |last2=Browne |first2=Michael W |last3=Sugawara |first3=Hazuki M }}</ref> <ref name="MacCallum1996">{{cite journal |doi=10.1037/1082-989X.1.2.130 |title=Power analysis and determination of sample size for covariance structure modeling |journal=Psychological Methods |volume=1 |issue=2 |pages=130–49 |year=1996 |last1=MacCallum |first1=Robert C |last2=Browne |first2=Michael W |last3=Sugawara |first3=Hazuki M }}</ref>
<ref name="Bentler2016">{{cite journal |doi=10.1177/0049124187016001004 |title=Practical Issues in Structural Modeling |journal=Sociological Methods & Research |volume=16 |issue=1 |pages=78–117 |year=2016 |last1=Bentler |first1=P. M |last2=Chou |first2=Chih-Ping |s2cid=62548269 }}</ref> <ref name="Bentler2016">{{cite journal |doi=10.1177/0049124187016001004 |title=Practical Issues in Structural Modeling |journal=Sociological Methods & Research |volume=16 |issue=1 |pages=78–117 |year=2016 |last1=Bentler |first1=P. M |last2=Chou |first2=Chih-Ping |s2cid=62548269 }}</ref>
<ref name="Browne1993">{{cite book|last1=Browne|first1=M. W.|last2=Cudeck|first2=R.|editor1-last=Bollen|editor1-first=K. A.|editor2-last=Long|editor2-first=J. S.|title=Testing structural equation models|date=1993|publisher=Sage|location=Newbury Park, CA|chapter=Alternative ways of assessing model fit}}</ref> <ref name="Browne1993">{{cite book|last1=Browne|first1=M. W.|last2=Cudeck|first2=R.|editor1-last=Bollen|editor1-first=K. A.|editor2-last=Long|editor2-first=J. S.|title=Testing structural equation models|date=1993|publisher=Sage|location=Newbury Park, CA|chapter=Alternative ways of assessing model fit}}</ref>
<ref name="Loehlin2004">Loehlin, J. C. (2004). ''Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis''. Psychology Press.</ref> <ref name="Loehlin2004">Loehlin, J. C. (2004). ''Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis''. Psychology Press.</ref>

<ref name="Chou1995">{{cite book|last1=Chou|first1=C. P.|last2=Bentler|first2=Peter|editor1-last=Hoyle|editor1-first=Rick|editor1-link=H|title=Structural equation modeling: Concepts, issues, and applications|date=1995|publisher=Sage|location=Thousand Oaks, CA|pages=37–55|chapter=Estimates and tests in structural equation modeling}}</ref> <ref name="Chou1995">{{cite book|last1=Chou|first1=C. P.|last2=Bentler|first2=Peter|editor1-last=Hoyle|editor1-first=Rick|editor1-link=H|title=Structural equation modeling: Concepts, issues, and applications|date=1995|publisher=Sage|location=Thousand Oaks, CA|pages=37–55|chapter=Estimates and tests in structural equation modeling}}</ref>

<ref name=bollen-pearl2013>{{cite book |doi=10.1007/978-94-007-6094-3_15 |chapter=Eight Myths About Causality and Structural Equation Models |title=Handbook of Causal Analysis for Social Research |pages=301–28 |series=Handbooks of Sociology and Social Research |year=2013 |last1=Bollen |first1=Kenneth A |last2=Pearl |first2=Judea |isbn=978-94-007-6093-6 }}</ref> <ref name=bollen-pearl2013>{{cite book |doi=10.1007/978-94-007-6094-3_15 |chapter=Eight Myths About Causality and Structural Equation Models |title=Handbook of Causal Analysis for Social Research |pages=301–28 |series=Handbooks of Sociology and Social Research |year=2013 |last1=Bollen |first1=Kenneth A |last2=Pearl |first2=Judea |isbn=978-94-007-6093-6 }}</ref>
<ref name="Westland2015">{{Cite book|title = Structural Equation Modeling: From Paths to Networks|last = Westland|first = J. Christopher|publisher = Springer|year = 2015|location = New York}}</ref> <ref name="Westland2015">{{Cite book|title = Structural Equation Modeling: From Paths to Networks|last = Westland|first = J. Christopher|publisher = Springer|year = 2015|location = New York}}</ref>
-->

<ref name="Boslaugh2008">{{cite book |doi=10.4135/9781412953948.n443 |chapter=Structural Equation Modeling |title=Encyclopedia of Epidemiology |year=2008 |isbn=978-1-4129-2816-8 |last1=Boslaugh |first1=Sarah |last2=McNutt |first2=Louise-Anne |hdl=2022/21973 }}</ref>

<ref name="Ing2024">{{cite journal |title=Integrating Multi-Modal Cancer Data Using Deep Latent Variable Path Modelling |author=Alex James Ing, Alvaro Andrades, Marco Raffaele Cosenza, Jan Oliver Korbel |journal=bioRxiv |date=2024-06-13 |url=https://www.biorxiv.org/content/10.1101/2024.06.13.598616v1 |doi=10.1101/2024.06.13.598616 }}</ref>

}}


== Bibliography ==


*{{cite journal |doi=10.1080/10705519909540118 |title=Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives |journal=Structural Equation Modeling |volume=6 |pages=1–55 |year=1999 |last1=Hu |first1=Li-tze |last2=Bentler |first2=Peter M |hdl=2027.42/139911 |hdl-access=free }}
*{{cite book|last=Kaplan |first=D. |year=2008 |title=Structural Equation Modeling: Foundations and Extensions |publisher=SAGE |edition=2nd |isbn=978-1412916240 }}
*{{cite book|last1=Kline|first1=Rex|title=Principles and Practice of Structural Equation Modeling| publisher=Guilford| isbn=978-1-60623-876-9|date=2011 |edition=Third}}
*{{cite journal|last1=MacCallum|first1=Robert|last2=Austin|first2=James|title=Applications of Structural Equation Modeling in Psychological Research|date=2000|journal=Annual Review of Psychology|volume=51|pages=201–226|doi=10.1146/annurev.psych.51.1.201|pmid=10751970|access-date=25 January 2015|url=http://www-psychology.concordia.ca/fac/kline/sem/qicss/maccallum.pdf|archive-date=28 January 2015|archive-url=https://web.archive.org/web/20150128132931/http://www-psychology.concordia.ca/fac/kline/sem/qicss/maccallum.pdf|url-status=dead}}
*{{cite journal|last1=Quintana|first1=Stephen M.|last2=Maxwell|first2=Scott E.|date=1999|title=Implications of Recent Developments in Structural Equation Modeling for Counseling Psychology|journal=The Counseling Psychologist|volume=27|issue=4|pages=485–527|doi=10.1177/0011000099274002|s2cid=145586057}}



Latest revision as of 14:24, 19 December 2024

Figure 1. An example structural equation model after estimation. Latent variables are sometimes indicated with ovals while observed variables are shown in rectangles. Residuals and variances are sometimes drawn as double-headed arrows (shown here) or single arrows and a circle (as in Figure 2). The latent IQ variance is fixed at 1 to provide scale to the model. Figure 1 depicts measurement errors influencing each indicator of latent intelligence and each indicator of latent achievement. Neither the indicators nor the measurement errors of the indicators are modeled as influencing the latent variables.

Figure 2. An example structural equation model before estimation. Similar to Figure 1 but without standardized values and fewer items. Because intelligence and academic performance are merely imagined or theory-postulated variables, their precise scale values are unknown, though the model specifies that each latent variable's values must fall somewhere along the observable scale possessed by one of the indicators. The 1.0 effect connecting a latent to an indicator specifies that each real unit increase or decrease in the latent variable's value results in a corresponding unit increase or decrease in the indicator's value. It is hoped a good indicator has been chosen for each latent, but the 1.0 values do not signal perfect measurement because this model also postulates that there are other unspecified entities causally impacting the observed indicator measurements, thereby introducing measurement error. This model postulates that separate measurement errors influence each of the two indicators of latent intelligence, and each indicator of latent achievement. The unlabeled arrow pointing to academic performance acknowledges that things other than intelligence can also influence academic performance.

Structural equation modeling (SEM) is a diverse set of methods used by scientists for both observational and experimental research. SEM is used mostly in the social and behavioral science fields, but it is also used in epidemiology, business, and other fields. A common definition of SEM is "...a class of methodologies that seeks to represent hypotheses about the means, variances, and covariances of observed data in terms of a smaller number of 'structural' parameters defined by a hypothesized underlying conceptual or theoretical model".

SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.
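The idea that a causal structure implies specific patterns among the observed variables can be made concrete with a small numeric sketch. Assuming a hypothetical one-latent, three-indicator model (the loadings and variances below are illustrative, not taken from the figures), the model-implied covariance matrix of the indicators follows directly from the postulated effects:

```python
import numpy as np

# Hypothetical model: each indicator = loading * latent + error.
loadings = np.array([1.0, 0.8, 0.6])    # effects of the latent on the indicators
latent_var = 2.0                        # variance of the latent variable
error_vars = np.array([0.5, 0.4, 0.3])  # measurement-error variances

# Implied covariance matrix: Sigma = psi * lambda lambda' + Theta.
# Diagonal: 2.5, 1.68, 1.02; e.g. cov(ind1, ind2) = 2.0 * 1.0 * 0.8 = 1.6.
implied = latent_var * np.outer(loadings, loadings) + np.diag(error_vars)
print(implied)
```

Because the covariances among indicators are forced to be products of the loadings and the latent variance, observed data whose covariances depart from this product structure provide evidence against the postulated model.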

The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.

SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods include disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.

A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.

History

Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).

Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation and closed-form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.

Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models, and as continuing disagreements over model testing and whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between the factor analytic and path analytic traditions continues to surface in the literature.

Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. Disciplinary differences in approaches can be seen in SEMNET discussions of endogeneity, and in discussions on causality via directed acyclic graphs (DAGs). Discussions comparing and contrasting various SEM approaches are available highlighting disciplinary differences in data structures and the concerns motivating economic models.

Judea Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.

SEM analyses are popular in the social sciences because these analytic techniques help researchers break down complex concepts and understand causal processes, but the complexity of the models can introduce substantial variability in the results depending on the presence or absence of conventional control variables, the sample size, and the variables of interest. The use of experimental designs may address some of these concerns.

Today, SEM forms the basis of machine learning and (interpretable) neural networks. Exploratory and confirmatory factor analyses in classical statistics mirror unsupervised and supervised machine learning.

General steps and considerations

The following considerations apply to the construction and assessment of many structural equation models.

Model specification

Building or specifying a model requires attending to:

  • the set of variables to be employed,
  • what is known about the variables,
  • what is theorized or hypothesized about the variables' causal connections and disconnections,
  • what the researcher seeks to learn from the modeling, and
  • the instances of missing values and/or the need for imputation.

Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:

  • which effects and/or correlations/covariances are to be included and estimated,
  • which effects and other coefficients are forbidden or presumed unnecessary,
  • and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).

The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.

The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
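The LISREL-style matrix representation mentioned above is conventionally written as three equations, one structural and two measurement (standard notation; here Φ, Ψ, and Θ denote the covariance matrices of ξ, ζ, and the measurement errors respectively):

```latex
\begin{aligned}
\boldsymbol{\eta} &= \mathbf{B}\,\boldsymbol{\eta} + \boldsymbol{\Gamma}\,\boldsymbol{\xi} + \boldsymbol{\zeta}
  && \text{structural model (endogenous latent variables)} \\
\mathbf{y} &= \boldsymbol{\Lambda}_{y}\,\boldsymbol{\eta} + \boldsymbol{\varepsilon}
  && \text{measurement model for indicators of } \boldsymbol{\eta} \\
\mathbf{x} &= \boldsymbol{\Lambda}_{x}\,\boldsymbol{\xi} + \boldsymbol{\delta}
  && \text{measurement model for indicators of } \boldsymbol{\xi}
\end{aligned}
```

Programs that accept diagrams or named-variable equations translate the user's input back into matrices of this kind, which is why default "assumptions" about unmentioned matrix entries can silently shape the model.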

Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEM's latent structural connections.

Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
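The free-versus-fixed distinction can be sketched as a pattern matrix. The layout below is a hedged illustration rather than any particular program's input format: NaN marks a coefficient left free for estimation, 1.0 sets a latent's scale (as with the 1.0 values in Figure 2), and 0.0 asserts a causal disconnection (no arrow):

```python
import numpy as np

FREE = np.nan  # marker for a coefficient to be estimated

# Hypothetical loadings of two latents (columns) on four indicators
# (rows): the first indicator of each latent is fixed at 1.0 to give
# the latent a scale; 0.0 entries forbid cross-loadings.
loading_pattern = np.array([
    [1.0,  0.0],
    [FREE, 0.0],
    [0.0,  1.0],
    [0.0,  FREE],
])

n_free = int(np.isnan(loading_pattern).sum())
print(n_free)  # 2 free loadings to be estimated
```

Counting the free entries across all of a model's matrices gives the number of coefficients the estimation step must determine, which matters for the identification limit discussed below.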

There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects and other causal loops may also interfere with estimation.
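The "fewer data points than estimated coefficients" condition is the counting side of identification, often called the t-rule: with p observed variables, the covariance matrix supplies p(p+1)/2 distinct variances and covariances. A minimal sketch (the function name is illustrative):

```python
def t_rule(p_observed, n_free_params):
    """Necessary (but not sufficient) counting condition for model
    identification: the number of free coefficients must not exceed
    the number of distinct variances and covariances among the
    p observed variables."""
    data_points = p_observed * (p_observed + 1) // 2
    return n_free_params <= data_points, data_points

# Four indicators supply 4*5/2 = 10 variances/covariances, so a model
# with 12 free coefficients cannot be identified:
print(t_rule(4, 12))  # (False, 10)
print(t_rule(4, 8))   # (True, 10)
```

Passing the t-rule does not guarantee identification, because particular arrangements of effects (such as the reciprocal effects discussed below) can leave individual coefficients underidentified even when the overall count is satisfied.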

Estimation of free model coefficients

Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on: a) the coefficients' locations in the model (e.g. which variables are connected/disconnected), b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear), c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables), and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).

A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model were correctly specified, namely if all the model's estimated features correspond to real worldly features.
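For covariance data, the maximum likelihood adjustment described above minimizes the standard ML discrepancy F_ML = ln|Sigma(theta)| + tr(S Sigma(theta)^-1) - ln|S| - p between the sample covariance matrix S and the model-implied Sigma(theta). A minimal sketch for a one-factor, three-indicator model, using scipy's general-purpose optimizer (the covariance matrix and start values are illustrative, not from any real study):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sample covariance matrix for three indicators.
S = np.array([[2.5, 1.6, 1.2],
              [1.6, 1.68, 0.96],
              [1.2, 0.96, 1.02]])
p = S.shape[0]

def implied_cov(theta):
    # One-factor model: first loading fixed at 1.0 to set the latent's
    # scale; loadings l2, l3, latent variance psi, and three error
    # variances are free.
    l2, l3, psi, e1, e2, e3 = theta
    lam = np.array([1.0, l2, l3])
    return psi * np.outer(lam, lam) + np.diag([e1, e2, e3])

def f_ml(theta):
    # ML discrepancy: ln|Sigma| + tr(S Sigma^-1) - ln|S| - p.
    sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:
        return np.inf  # keep the search inside positive-definite Sigma
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

start = [0.5, 0.5, 1.0, 1.0, 1.0, 1.0]
result = minimize(f_ml, start, method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
print(np.round(result.x, 2))  # approx. loadings 0.8, 0.6; psi 2.0; errors 0.5, 0.4, 0.3
```

Because this S was built exactly from those generating values, the discrepancy reaches (essentially) zero at the solution; with real data the minimized F_ML remains positive and feeds the model chi-square test. Dedicated SEM software uses more specialized algorithms, but the quantity being minimized is the same.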

The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two-stage least squares.

One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other, by the reverse, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables.
Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
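The identification problem for reciprocal effects can be illustrated numerically. In the hypothetical sketch below, a single observed correlation of 0.5 is fit by a two-variable reciprocal-effect model; fixing the "return" effect at two different values both permits an exact fit, yet yields clearly different estimates of the forward effect, so the data alone cannot single out a unique estimate:

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # one observed correlation between two variables

def implied(b1, b2, psi1, psi2):
    # Reciprocal effects: x1 = b1*x2 + e1 and x2 = b2*x1 + e2,
    # so Sigma = (I - B)^-1 Psi (I - B)^-T.
    B = np.array([[0.0, b1], [b2, 0.0]])
    A = np.linalg.inv(np.eye(2) - B)
    return A @ np.diag([psi1, psi2]) @ A.T

def discrepancy(free, b2):
    b1, psi1, psi2 = free
    return np.sum((implied(b1, b2, psi1, psi2) - S) ** 2)

estimates = {}
for b2 in (0.0, 0.3):                      # two different assumptions about the return effect
    res = minimize(discrepancy, x0=[0.1, 1.0, 1.0], args=(b2,),
                   method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 10000})
    estimates[b2] = (res.x[0], res.fun)    # forward-effect estimate and achieved misfit
```

Both fixed values of the return effect achieve an essentially exact fit, but with different forward-effect estimates, which is exactly what underidentification means.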

Model assessment

Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:

  • whether the data contain reasonable measurements of appropriate variables,
  • whether the modeled cases are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
  • whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
  • whether the estimates are statistically justifiable, (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.)
  • the substantive reasonableness of the estimates, (Negative variances, and correlations larger than 1.0 in absolute value, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
  • the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)

Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ² (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ² probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
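The χ² test computation can be sketched from typical SEM output. In the hypothetical numbers below, the test statistic is the minimized maximum-likelihood fit-function value scaled by (N − 1), referred to a chi-squared distribution whose degrees of freedom equal the number of observed moments minus the number of free parameters:

```python
from scipy.stats import chi2

# Hypothetical output from an SEM run: sample size N, minimized ML
# fit-function value F_ml, and model degrees of freedom df.
N, F_ml, df = 300, 0.062, 8

chi_square = (N - 1) * F_ml        # conventional ML test statistic, here 18.538
p_value = chi2.sf(chi_square, df)  # probability of at least this much misfit under the model
```

With these invented numbers the p-value falls between .01 and .05, which would signal beyond-chance model-data inconsistency at conventional thresholds.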

If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ² test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification. Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.

Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.

A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.

"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the χ test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ. The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.

Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ² increases (and hence χ² probability decreases) with increasing sample size (N). There are two mistakes in discounting χ² on this basis. First, for proper models, χ² does not increase with increasing N, so if χ² increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, the increase of χ² with N provides the good news of increasing statistical power to detect model misspecification (namely power to avoid Type II error). Some kinds of important misspecifications cannot be detected by χ², so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ² model test, possibly adjusted, is the strongest available structural equation model test.

Numerous fit indices quantify how closely a model fits the data, but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications, and so constitute a greater research impediment.
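The equivalent-models point can be made concrete with three hypothetical two-variable structures that imply exactly the same covariance matrix, so no fit measure can distinguish among them:

```python
import numpy as np

# Three causally different models, all implying the same covariance matrix
# for two observed variables X and Y (all numbers invented for illustration).

# Model A: X -> Y with effect 0.5; var(X) = 1, Y error variance 0.75
sigma_a = np.array([[1.0, 0.5],
                    [0.5, 0.5 ** 2 + 0.75]])

# Model B: Y -> X with effect 0.5; var(Y) = 1, X error variance 0.75
sigma_b = np.array([[0.5 ** 2 + 0.75, 0.5],
                    [0.5, 1.0]])

# Model C: no direct effect at all; a common cause Z (variance 1) with
# loading sqrt(0.5) on each variable, and error variance 0.5 on each
a = np.sqrt(0.5)
sigma_c = np.array([[a * a + 0.5, a * a],
                    [a * a, a * a + 0.5]])

# All three implied matrices equal [[1, 0.5], [0.5, 1]], so the data
# cannot adjudicate among the three causal structures.
```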

This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sörbom reported that when someone asked Karl Jöreskog, the developer of the first structural equation modeling program, why he had added GFI to his LISREL program, Jöreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The χ² evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.

Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline’s substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.

The considerations relevant to using fit indices include checking:

  1. whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
  2. whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criterion based on factor structured models are only appropriate if the researcher's model actually is factor structured);
  3. whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criterion are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
  4. whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
  5. whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
  6. whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together.);
  7. whether a model test is, or is not, available. (A χ² value, degrees of freedom, and probability will be available for models reporting indices based on χ².)
  8. and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. the "tolerable" amount of model-data inconsistency is likely to differ across medical, business, social, and psychological contexts).

Some of the more commonly used fit statistics include

  • Chi-square
    • A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
  • Akaike information criterion (AIC)
    • An index of relative model fit: The preferred model is the one with the lowest AIC value.
    • AIC = 2k − 2ln(L)
    • where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
  • Root Mean Square Error of Approximation (RMSEA)
    • Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
  • Standardized Root Mean Squared Residual (SRMR)
    • The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
  • Comparative Fit Index (CFI)
    • In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.
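Using the definitions above, the sketch below computes the AIC directly from a model's log-likelihood and parameter count, and the RMSEA from a model's χ², degrees of freedom, and sample size (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical fit results: chi-square statistic, degrees of freedom,
# sample size, number of free parameters, and maximized log-likelihood.
chi_square, df, N, k, log_l = 18.54, 8, 300, 12, -412.3

aic = 2 * k - 2 * log_l   # AIC = 2k - 2 ln(L); lower values are preferred

# RMSEA = sqrt((chi-square - df) / (df * (N - 1))), conventionally
# truncated at zero when chi-square falls below df.
rmsea = np.sqrt(max(chi_square - df, 0.0) / (df * (N - 1)))
```

With these invented numbers the RMSEA comes out near .066, above the .06 region often quoted (and contested) as a close-fit guideline.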

The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.

Features of Fit Indices

                  RMSEA                                      SRMR                                       CFI
Index Name        Root Mean Square Error of Approximation    Standardized Root Mean Squared Residual    Comparative Fit Index
Formula           √((χ² − d)/(d(N − 1)))                     –                                          –
Based on χ²       Yes                                        No                                         Yes

Sample size, power, and estimation

Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.

The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, and wanting to avoid the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.

Interpretation

Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.

Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.
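The product rule for indirect effects can be sketched with hypothetical estimates from a three-variable model in which X affects Y both directly and through an intervening variable M:

```python
# Hypothetical path estimates: X -> M, M -> Y, and a direct X -> Y effect.
b_xm, b_my, b_xy = 0.6, 0.5, 0.2

indirect_xy = b_xm * b_my        # indirect effect of X on Y through M: 0.6 * 0.5
total_xy = b_xy + indirect_xy    # total effect combining the direct and indirect paths
```

Each unit increase in X is thus modeled as producing a 0.2-unit direct change in Y plus a 0.3-unit change carried through M, for a total effect of 0.5, under the usual assumption that the rest of the model stays fixed.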

SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.

SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happen remain unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.

Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two affected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.
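The final point, a zero covariance coexisting with a nonzero direct effect, can be verified by path-tracing arithmetic on a hypothetical three-variable structure in which a common cause Z is given a loading that exactly cancels the direct effect of X on Y:

```python
# Hypothetical structure: Z -> X (loading a1), Z -> Y (loading a2), and a
# direct effect X -> Y (b), with var(Z) = 1 and X error variance ve1.
a1, ve1, b = 0.6, 0.64, 0.5
var_x = a1 ** 2 + ve1            # model-implied var(X) = 1.0

# Choose the Z -> Y loading so the common-cause path cancels the direct effect.
a2 = -b * var_x / a1

# Path-tracing: cov(X, Y) = direct path (b * var_x) + common-cause path (a2 * a1).
cov_xy = b * var_x + a2 * a1     # exactly zero despite the nonzero direct effect b
```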

The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes is provided by R², though the Blocked-Error R² should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.
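The R² computation can be sketched for a hypothetical standardized equation with two correlated causes, where the explained proportion is one minus the error variance's share of the dependent variable's model-implied variance:

```python
# Hypothetical standardized solution: y = b1*x1 + b2*x2 + e,
# with var(x1) = var(x2) = 1, cov(x1, x2) = c12, and error variance ve.
b1, b2, c12, ve = 0.6, 0.4, 0.3, 0.4

# Model-implied variance of y sums the causes' contributions, the
# covariance-carried contribution, and the error variance.
var_y = b1 ** 2 + b2 ** 2 + 2 * b1 * b2 * c12 + ve

r_squared = 1 - ve / var_y       # proportion of y's variance explained by the modeled causes
```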

The caution appearing in the Model Assessment section bears repeating. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.

Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.

Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.

Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.

The multiple ways of conceptualizing PLS models complicate their interpretation. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R² or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.

Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions – maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.

Controversies and movements

Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process: the initial measurement step provides scales or factor-scores that are used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent-level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether each latent factor appropriately coordinates its indicators, it also checks whether that same latent simultaneously coordinates its indicators with the indicators of theorized causes and/or consequences of that latent. If a latent cannot do both these styles of coordination, the validity of that latent is questioned, as are any scales or factor-scores purporting to measure it. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser, followed by several comments and a rejoinder, all made freely available thanks to the efforts of George Marcoulides.

These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussion. Scholars with path-modeling histories tended to defend careful model testing, while those with factor-analytic histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett, who said: "In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model 'acceptability' or 'degree of misfit'" (page 821). Barrett's article was accompanied by commentary from both perspectives.

The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence merely because they dislike what the evidence reports. The requirement of attending to evidence of model misspecification underpins more recent concern for addressing "endogeneity", a style of model misspecification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models, and the comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking.

A briefer controversy focused on competing models. Comparing competing models can be very helpful, but there are fundamental issues that cannot be resolved by creating two models and retaining the better-fitting one. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the terrible model merely because some index reports it as fitting better than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that structural equation modeling can make depend on careful and detailed model assessment, even if a failing model happens to be the best available.

An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to the factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but even three or two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor-analytic perspective.
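The single-indicator adjustment Hayduk and Littvay discuss amounts to fixing, rather than estimating, the indicator's measurement parameters: the loading is fixed at 1.0 and the error variance at (1 − reliability) × Var(indicator). A minimal numerical sketch of that arithmetic; the reliability value below is a hypothetical illustration, not a quantity from the literature.

```python
def fixed_error_variance(observed_variance, reliability):
    """Error variance to fix (not estimate) for a single indicator:
    (1 - reliability) * Var(indicator)."""
    if not 0.0 < reliability <= 1.0:
        raise ValueError("reliability must be in (0, 1]")
    return (1.0 - reliability) * observed_variance

# Hypothetical indicator: variance 4.0, defended reliability 0.80,
# so 20% of its variance is fixed as measurement-error variance.
theta = fixed_error_variance(4.0, 0.80)
```

Fixing these values forces the researcher to defend a specific reliability claim instead of letting a single indicator's error variance go unidentified.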

Though declining, traces of these controversies are scattered throughout the SEM literature, and disagreement is easily incited by asking: What should be done with models that are significantly inconsistent with the data? Or: Does model simplicity override respect for evidence of data inconsistency? Or: What weight should be given to indices that show close, or not-so-close, data fit for some models? Or: Should we be especially lenient toward, and "reward", parsimonious models that are inconsistent with the data? Or: Given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn't that mean that people testing models with null hypotheses of non-zero RMSEA are doing deficient model testing? Considerable statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
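The point about the RMSEA condoning some ill fit per degree of freedom is visible in its usual point estimate, √(max((χ² − df) / (df(N − 1)), 0)): the chi-square excess over df is divided by df, so the same absolute misfit produces a smaller RMSEA when spread over more degrees of freedom. A sketch under the common N − 1 convention; some programs divide by N instead.

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max((chi2 - df) / (df * (n - 1)), 0))."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# The same chi-square excess of 50 over df yields a smaller RMSEA
# when it is spread across more degrees of freedom.
loose = rmsea(150.0, 100, 500)  # excess 50 over df = 100
tight = rmsea(60.0, 10, 500)    # excess 50 over df = 10
```

Note that whenever χ² ≤ df the point estimate is exactly zero, which is why RMSEA-based "close fit" tests can retain models a chi-square test would flag.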

Extensions, modeling alternatives, and statistical kin

Software

Structural equation modeling programs differ widely in their capabilities and user requirements.
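Beneath their differing capabilities, most covariance-based SEM programs share a common core: iteratively minimizing a discrepancy between the sample covariance matrix S and the model-implied covariance matrix Σ(θ). A minimal sketch of the standard maximum-likelihood discrepancy function, shown here as a generic formula rather than any particular package's implementation:

```python
import numpy as np

def f_ml(S, Sigma):
    """ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p.
    Equals zero when the model-implied matrix reproduces S exactly."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return float(logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p)

S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
perfect = f_ml(S, S)           # ~0: a saturated model reproduces S
misfit = f_ml(S, np.eye(2))    # > 0: ignoring the covariance leaves misfit
```

Multiplying the minimized discrepancy by (N − 1) gives the likelihood-ratio chi-square statistic that most programs report for model testing.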

See also

References

  1. Salkind, Neil J. (2007). "Intelligence Tests". Encyclopedia of Measurement and Statistics. doi:10.4135/9781412952644.n220. ISBN 978-1-4129-1611-0.
  2. "Structural Equation Modeling". Encyclopedia of Epidemiology. 2008. doi:10.4135/9781412953948.n443. ISBN 978-1-4129-2816-8.
  3. "Structural Equation Modeling". Encyclopedia of Educational Leadership and Administration. 2006. doi:10.4135/9781412939584.n544. ISBN 978-0-7619-3087-7.
  4. "Structural Equation Modeling - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 2024-11-15.
  5. Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). New York: Cambridge University Press.
  6. Kline, Rex B. (2016). Principles and Practice of Structural Equation Modeling (4th ed.). New York: Guilford Press. ISBN 978-1-4625-2334-4. OCLC 934184322.
  7. Hayduk, L. (1987). Structural Equation Modeling with LISREL: Essentials and Advances. Baltimore: Johns Hopkins University Press. ISBN 0-8018-3478-3.
  8. Bollen, Kenneth A. (1989). Structural equations with latent variables. New York: Wiley. ISBN 0-471-01171-1. OCLC 18834634.
  9. Kaplan, David (2009). Structural equation modeling: foundations and extensions (2nd ed.). Los Angeles: SAGE. ISBN 978-1-4129-1624-0. OCLC 225852466.
  10. Curran, Patrick J. (2003-10-01). "Have Multilevel Models Been Structural Equation Models All Along?". Multivariate Behavioral Research. 38 (4): 529–569. doi:10.1207/s15327906mbr3804_5. ISSN 0027-3171. PMID 26777445. S2CID 7384127.
  11. Tarka, Piotr (2017). "An overview of structural equation modeling: Its beginnings, historical development, usefulness and controversies in the social sciences". Quality & Quantity. 52 (1): 313–54. doi:10.1007/s11135-017-0469-8. PMC 5794813. PMID 29416184.
  12. MacCallum & Austin 2000, p. 209.
  13. Wright, Sewall. (1921) "Correlation and causation". Journal of Agricultural Research. 20: 557-585.
  14. Wright, Sewall (1934). "The Method of Path Coefficients". The Annals of Mathematical Statistics. 5 (3): 161–215. doi:10.1214/aoms/1177732676.
  15. Wolfle, L.M. (1999) "Sewall Wright on the method of path coefficients: An annotated bibliography" Structural Equation Modeling: 6(3):280-291.
  16. Duncan, Otis Dudley (1975). Introduction to Structural Equation Models. New York: Academic Press. ISBN 0-12-224150-9.
  17. Bollen, K. (1989). Structural Equations with Latent Variables. New York: Wiley. ISBN 0-471-01171-1.
  18. Jöreskog, Karl; Gruvaeus, Gunnar T.; van Thillo, Marielle. (1970) ACOVS: A General Computer Program for Analysis of Covariance Structures. Princeton, N.J.; Educational Testing Services.
  19. Jöreskog, Karl Gustav; van Thillo, Mariella (1972). "LISREL: A General Computer Program for Estimating a Linear Structural Equation System Involving Multiple Indicators of Unmeasured Variables" (PDF). Research Bulletin: Office of Education. ETS-RB-72-56 – via US Government.
  20. Jöreskog, Karl; Sörbom, Dag (1976). LISREL III: Estimation of Linear Structural Equation Systems by Maximum Likelihood Methods. Chicago: National Educational Resources, Inc.
  21. Hayduk, L.; Glaser, D.N. (2000). "Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun". Structural Equation Modeling. 7 (1): 1–35.
  22. Hayduk, L.; Glaser, D.N. (2000). "Doing the Four-Step, Right-2-3, Wrong-2-3: A Brief Reply to Mulaik and Millsap; Bollen; Bentler; and Herting and Costner". Structural Equation Modeling. 7 (1): 111–123.
  23. Westland, J.C. (2015). Structural Equation Modeling: From Paths to Networks. New York, Springer.
  24. Christ, Carl F. (1994). "The Cowles Commission's Contributions to Econometrics at Chicago, 1939-1955". Journal of Economic Literature. 32 (1): 30–59. ISSN 0022-0515. JSTOR 2728422.
  25. Imbens, G.W. (2020). "Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics". Journal of Economic Literature. 58 (4): 1129–1179.
  26. Bollen, Kenneth A.; Pearl, Judea (2013). "Eight Myths About Causality and Structural Equation Models". Handbook of Causal Analysis for Social Research. Handbooks of Sociology and Social Research. pp. 301–328. doi:10.1007/978-94-007-6094-3_15. ISBN 978-94-007-6093-6.
  27. Bollen, Kenneth A.; Pearl, Judea (2013), "Eight Myths About Causality and Structural Equation Models", Handbooks of Sociology and Social Research, Dordrecht: Springer Netherlands, pp. 301–328, ISBN 978-94-007-6093-6, retrieved 2024-12-11
  28. Ng, Ted Kheng Siang; Gan, Daniel R.Y.; Mahendran, Rathi; Kua, Ee Heok; Ho, Roger C-M (September 2021). "Social connectedness as a mediator for horticultural therapy's biological effect on community-dwelling older adults: Secondary analyses of a randomized controlled trial". Social Science & Medicine. 284: 114191. doi:10.1016/j.socscimed.2021.114191. ISSN 0277-9536.
  29. Borsboom, Denny; Mellenbergh, Gideon J.; Van Heerden, Jaap (2003). "The theoretical status of latent variables". Psychological Review. 110 (2): 203–219. doi:10.1037/0033-295X.110.2.203. PMID 12747522.
  30. Kline, Rex B. (2016). Principles and Practice of Structural Equation Modeling (4th ed.). New York: Guilford Press. ISBN 978-1-4625-2334-4.
  31. Rigdon, E. (1995). "A necessary and sufficient identification rule for structural models estimated in practice". Multivariate Behavioral Research. 30 (3): 359–383.
  32. Hayduk, L. (1996). LISREL Issues, Debates, and Strategies. Baltimore: Johns Hopkins University Press. ISBN 0-8018-5336-2.
  33. Drabble, Sarah J.; O'Cathain, Alicia; Thomas, Kate J.; Rudolph, Anne; Hewison, Jenny (2014). "Describing qualitative research undertaken with randomised controlled trials in grant proposals: A documentary analysis". BMC Medical Research Methodology. 14: 24. doi:10.1186/1471-2288-14-24. PMC 3937073. PMID 24533771.
  34. MacCallum, Robert (1986). "Specification searches in covariance structure modeling". Psychological Bulletin. 100: 107–120. doi:10.1037/0033-2909.100.1.107.
  35. Hayduk, Leslie A.; Littvay, Levente (2012). "Should researchers use single indicators, best indicators, or multiple indicators in structural equation models?". BMC Medical Research Methodology. 12: 159. doi:10.1186/1471-2288-12-159. PMC 3506474. PMID 23088287.
  36. Browne, M.W.; MacCallum, R.C.; Kim, C.T.; Andersen, B.L.; Glaser, R. (2002) "When fit indices and residuals are incompatible." Psychological Methods. 7: 403-421.
  37. Hayduk, Leslie A.; Pazderka-Robinson, Hannah; Cummings, Greta G.; Levers, Merry-Jo D.; Beres, Melanie A. (2005). "Structural equation model testing and the quality of natural killer cell activity measurements". BMC Medical Research Methodology. 5: 1. doi:10.1186/1471-2288-5-1. PMC 546216. PMID 15636638. Note the correction of .922 to .992, and the correction of .944 to .994 in the Hayduk, et al. Table 1.
  38. Hayduk, Leslie (2014). "Seeing Perfectly Fitting Factor Models That Are Causally Misspecified". Educational and Psychological Measurement. 74 (6): 905–926. doi:10.1177/0013164414527449.
  39. Barrett, P. (2007). "Structural equation modeling: Adjudging model fit". Personality and Individual Differences. 42 (5): 815–824.
  40. Satorra, A.; and Bentler, P. M. (1994) “Corrections to test statistics and standard errors in covariance structure analysis”. In A. von Eye and C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage.
  41. Sorbom, D. "xxxxx" in Cudeck, R; du Toit R.; Sorbom, D. (editors) (2001) Structural Equation Modeling: Present and Future: Festschrift in Honor of Karl Joreskog. Scientific Software International: Lincolnwood, IL.
  42. Hu, L.; Bentler, P.M. (1999). "Cutoff criteria for fit indices in covariance structure analysis: Conventional criteria versus new alternatives". Structural Equation Modeling. 6: 1–55.
  43. Kline 2011, p. 205.
  44. Kline 2011, p. 206.
  45. Hu & Bentler 1999, p. 27.
  46. Steiger, J. H.; and Lind, J. (1980) "Statistically Based Tests for the Number of Common Factors." Paper presented at the annual meeting of the Psychometric Society, Iowa City.
  47. Steiger, J. H. (1990) "Structural Model Evaluation and Modification: An Interval Estimation Approach". Multivariate Behavioral Research 25:173-180.
  48. Browne, M.W.; Cudeck, R. (1992) "Alternate ways of assessing model fit." Sociological Methods and Research. 21(2): 230-258.
  49. Herting, R.H.; Costner, H.L. (2000) “Another perspective on “The proper number of factors” and the appropriate number of steps.” Structural Equation Modeling. 7 (1): 92-110.
  50. Hayduk, L. (1987). Structural Equation Modeling with LISREL: Essentials and Advances. Baltimore: Johns Hopkins University Press. ISBN 0-8018-3478-3. Page 20.
  51. Hayduk, L. A.; Cummings, G.; Stratkotter, R.; Nimmo, M.; Grugoryev, K.; Dosman, D.; Gillespie, M.; Pazderka-Robinson, H. (2003) “Pearl’s D-separation: One more step into causal thinking.” Structural Equation Modeling. 10 (2): 289-311.
  52. Hayduk, L.A. (2006) “Blocked-Error-R2: A conceptually improved definition of the proportion of explained variance in models containing loops or correlated residuals.” Quality and Quantity. 40: 629-649.
  53. Millsap, R.E. (2007). "Structural equation modeling made difficult". Personality and Individual Differences. 42: 875–881.
  54. Entwisle, D.R.; Hayduk, L.A.; Reilly, T.W. (1982). Early Schooling: Cognitive and Affective Outcomes. Baltimore: Johns Hopkins University Press.
  55. Hayduk, L.A. (1994). “Personal space: Understanding the simplex model.” Journal of Nonverbal Behavior., 18 (3): 245-260.
  56. Hayduk, L.A.; Stratkotter, R.; Rovers, M.W. (1997) “Sexual Orientation and the Willingness of Catholic Seminary Students to Conform to Church Teachings.” Journal for the Scientific Study of Religion. 36 (3): 455-467.
  57. Rigdon, Edward E.; Sarstedt, Marko; Ringle, Christian M. (2017). "On Comparing Results from CB-SEM and PLS-SEM: Five Perspectives and Five Recommendations". Marketing ZFP. 39 (3): 4–16. doi:10.15358/0344-1369-2017-3-4.
  58. Hayduk, L.A.; Cummings, G.; Boadu, K.; Pazderka-Robinson, H.; Boulianne, S. (2007) “Testing! testing! one, two, three – Testing the theory in structural equation models!” Personality and Individual Differences. 42 (5): 841-850
  59. Mulaik, S.A. (2009) Foundations of Factor Analysis (second edition). Chapman and Hall/CRC. Boca Raton, pages 130-131.
  60. Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66.
  61. Hayduk, Leslie (2018). "Review essay on Rex B. Kline's Principles and Practice of Structural Equation Modeling: Encouraging a fifth edition". Canadian Studies in Population. 45 (3–4): 154. doi:10.25336/csp29397.
  62. Ing, Alex James; Andrades, Alvaro; Cosenza, Marco Raffaele; Korbel, Jan Oliver (2024-06-13). "Integrating Multi-Modal Cancer Data Using Deep Latent Variable Path Modelling". bioRxiv. doi:10.1101/2024.06.13.598616.
  63. Marsh, Herbert W.; Morin, Alexandre J.S.; Parker, Philip D.; Kaur, Gurvinder (2014-03-28). "Exploratory Structural Equation Modeling: An Integration of the Best Features of Exploratory and Confirmatory Factor Analysis". Annual Review of Clinical Psychology. 10 (1): 85–110. doi:10.1146/annurev-clinpsy-032813-153700. ISSN 1548-5943. PMID 24313568.
  64. doi:10.3389/fpsyg.2019.01139
  65. Zyphur, Michael J.; Allison, Paul D.; Tay, Louis; Voelkle, Manuel C.; Preacher, Kristopher J.; Zhang, Zhen; Hamaker, Ellen L.; Shamsollahi, Ali; Pierides, Dean C.; Koval, Peter; Diener, Ed (October 2020). "From Data to Causes I: Building A General Cross-Lagged Panel Model (GCLM)". Organizational Research Methods. 23 (4): 651–687. doi:10.1177/1094428119847278. hdl:11343/247887. ISSN 1094-4281. S2CID 181878548.
  66. Leitgöb, Heinz; Seddig, Daniel; Asparouhov, Tihomir; Behr, Dorothée; Davidov, Eldad; De Roover, Kim; Jak, Suzanne; Meitinger, Katharina; Menold, Natalja; Muthén, Bengt; Rudnev, Maksim; Schmidt, Peter; van de Schoot, Rens (February 2023). "Measurement invariance in the social sciences: Historical development, methodological challenges, state of the art, and future perspectives". Social Science Research. 110: 102805. doi:10.1016/j.ssresearch.2022.102805. hdl:1874/431763. PMID 36796989. S2CID 253343751.
  67. Sadikaj, Gentiana; Wright, Aidan G.C.; Dunkley, David M.; Zuroff, David C.; Moskowitz, D.S. (2021), "Multilevel structural equation modeling for intensive longitudinal data: A practical guide for personality researchers", The Handbook of Personality Dynamics and Processes, Elsevier, pp. 855–885, doi:10.1016/b978-0-12-813995-0.00033-9, ISBN 978-0-12-813995-0, retrieved 2023-11-03
  68. Vera, José Fernando; Mair, Patrick (2019-09-03). "SEMDS: An R Package for Structural Equation Multidimensional Scaling". Structural Equation Modeling. 26 (5): 803–818. doi:10.1080/10705511.2018.1561292. ISSN 1070-5511.
  69. Narayanan, A. (2012-05-01). "A Review of Eight Software Packages for Structural Equation Modeling". The American Statistician. 66 (2): 129–138. doi:10.1080/00031305.2012.708641. ISSN 0003-1305. S2CID 59460771.

Bibliography

Further reading

External links
