What are modification indices and should I use them when fitting SEMs to my own data?

This is a great question and one that prompts much disagreement among quantitative methodologists. Nearly all confirmatory factor analysis or structural equation models impose some kind of restrictions on the parameters to be estimated. Usually, some parameters are set to zero (and thus not estimated at all), but sometimes restrictions come in the form of equality constraints or other kinds of structured relations among parameters. The model chi-square test reflects the extent to which these imposed restrictions impede the ability of the model to reproduce the means, variances, and covariances observed in the sample. Smaller chi-square values indicate that the estimated model adequately reproduces the observed sample statistics, whereas larger values indicate that some aspect of the hypothesized model is inconsistent with characteristics of the observed sample.
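To make the mechanics concrete, here is a minimal sketch in Python (using NumPy) of the maximum likelihood discrepancy function on which the chi-square test is based. The sample covariance matrix, model-implied matrix, and sample size below are toy placeholders for illustration only, not values from any real analysis.

```python
import numpy as np

def ml_discrepancy(S, Sigma):
    """ML fit function: F = ln|Sigma| - ln|S| + tr(S Sigma^{-1}) - p,
    where p is the number of observed variables."""
    p = S.shape[0]
    return (np.linalg.slogdet(Sigma)[1]
            - np.linalg.slogdet(S)[1]
            + np.trace(S @ np.linalg.inv(Sigma))
            - p)

# Toy numbers: if the model reproduced S exactly, F (and therefore the
# chi-square) would be 0; any discrepancy between S and Sigma inflates it.
S = np.array([[1.00, 0.50],
              [0.50, 1.00]])            # observed sample covariance matrix
Sigma = np.array([[1.00, 0.40],
                  [0.40, 1.00]])        # model-implied covariance at the estimates
N = 300                                 # sample size

chi_square = (N - 1) * ml_discrepancy(S, Sigma)   # the model chi-square statistic
print(round(chi_square, 2))
```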

In other words, a larger chi-square indicates that the model does not “fit well,” undermining confidence in the extent to which the hypothesized model is a valid representation of the population model. Given a large chi-square (and poor fit measures in general), one must consider whether to re-specify the model in some way to try to attain better fit, and it is here that the Modification Index (MI, sometimes called a Lagrange multiplier or score test) comes into play. An MI is an estimate of the amount by which the chi-square would be reduced if a single parameter restriction were removed from the model. There are thus as many MIs as there are imposed restrictions in the model. Most commonly, an MI reflects the improvement in model fit that would result if a previously omitted parameter were added and freely estimated. This might be a factor loading, a regression coefficient, or a correlated residual. Adding a parameter based on a large MI is called a “post hoc model modification” and represents a data-driven modification of the original hypothesized model. It is not uncommon in practice for researchers to consult MIs to suggest model modifications that lead to a “better” fitting model.
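To illustrate what an MI is estimating, the sketch below (again Python, with NumPy and SciPy; the one-factor model, covariance matrix, and sample size are invented for illustration) fits a small model twice: once with a residual covariance between two items fixed to zero, and once with that covariance freely estimated. It then reports the resulting drop in the chi-square. The MI printed by SEM software is a one-step score-test approximation to this drop, obtained without actually refitting the model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-factor model for four indicators (factor variance fixed at 1).
# The restriction of interest is the residual covariance between items 3 and 4,
# fixed at zero in the original model and freed in the modified model.
def implied_cov(theta, free_resid_cov):
    lam = theta[0:4]                          # factor loadings
    resid = np.diag(theta[4:8])               # residual variances
    Sigma = np.outer(lam, lam) + resid
    if free_resid_cov:
        Sigma[2, 3] += theta[8]               # the freed residual covariance
        Sigma[3, 2] += theta[8]
    return Sigma

def f_ml(theta, S, free_resid_cov):
    """ML discrepancy: ln|Sigma| - ln|S| + tr(S Sigma^{-1}) - p."""
    Sigma = implied_cov(theta, free_resid_cov)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e6                            # steer the optimizer away from non-PD matrices
    return logdet - np.linalg.slogdet(S)[1] + np.trace(S @ np.linalg.inv(Sigma)) - S.shape[0]

# Invented sample covariance matrix with an extra association between items 3 and 4.
S = np.array([[1.00, 0.49, 0.48, 0.47],
              [0.49, 1.00, 0.47, 0.46],
              [0.48, 0.47, 1.00, 0.65],
              [0.47, 0.46, 0.65, 1.00]])
N = 300

start = np.concatenate([np.full(4, 0.7), np.full(4, 0.5)])
opts = {"xatol": 1e-8, "fatol": 1e-10, "maxiter": 20000, "maxfev": 20000}
fit_restricted = minimize(f_ml, start, args=(S, False), method="Nelder-Mead", options=opts)
fit_freed = minimize(f_ml, np.append(start, 0.0), args=(S, True), method="Nelder-Mead", options=opts)

chi2_restricted = (N - 1) * fit_restricted.fun   # chi-square with the restriction imposed
chi2_freed = (N - 1) * fit_freed.fun             # chi-square with the parameter freed
print("chi-square drop from freeing the residual covariance:",
      round(chi2_restricted - chi2_freed, 2))
```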

Although MIs can be useful in identifying sources of misfit in a model, using them also carries risks. First, they are completely determined by the data and are thus devoid of theory. The largest MIs might be associated with parameters that are unsupported by theory and instead represent idiosyncratic characteristics of the data. Second, simulation research has suggested that using MIs to guide model specification rarely leads to the true population model. MacCallum, Roznowski, and Necowitz (1992) conducted a comprehensive study of MIs and concluded, “In summary, our results bring us to a position of considerable skepticism with regard to the validity of the model modification process as it is often used in practice.” We completely agree.

However, like many things in statistics, MIs can be beneficial if used in a thoughtful and judicious way. For example, they can be used as another indicator of global and local goodness-of-fit: a small number of large MIs and a large number of small MIs reflect quite different patterns of misfit (we sketch this kind of screening below). They are also commonly used when assessing measurement invariance (or the lack thereof) across groups in confirmatory factor analysis models. Further, although our theories are often well developed, they are often not articulated with sufficient detail to guide decisions such as introducing correlated residuals or removing equality constraints; thus, MIs might offer some guidance about a more complex model structure than what theory hypothesized. Taken together, we believe that MIs are an important source of information about model fit, but that they should be used thoughtfully and cautiously, and models should only be modified if there is a strong and defensible theoretical reason for doing so. Additionally, authors should always report model modifications, whether guided by MIs or other considerations, so that reviewers and consumers of the research can judge these decisions for themselves.
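As a sketch of the kind of MI screening we have in mind (the parameter labels and MI values below are entirely invented and merely stand in for whatever your SEM software reports), one might rank the MIs and note whether misfit is concentrated in a few large values or spread thinly across many small ones.

```python
# Hypothetical MI output (parameter label -> MI value); labels use lavaan-style
# notation purely as a convention, and the values are invented for illustration.
mod_indices = {
    "item7 ~~ item8": 24.3,    # correlated residual
    "factor2 =~ item3": 11.8,  # cross-loading
    "item2 ~~ item5": 2.1,
    "factor1 =~ item9": 1.4,
    "item1 ~~ item4": 0.9,
}

# Each MI is (asymptotically) a 1-df chi-square, so 3.84 is the conventional .05
# critical value.  The point is diagnostic: a couple of large MIs suggests
# localized misfit, while many small ones suggests diffuse, minor misfit.
critical_value = 3.84
for label, mi in sorted(mod_indices.items(), key=lambda kv: kv[1], reverse=True):
    flag = "large" if mi > critical_value else "small"
    print(f"{label:20s} MI = {mi:5.1f}  ({flag})")
```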

MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modifications in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin, 111, 490-504.
