News and Updates
The analysis of longitudinal data has quickly gained importance across a variety of fields because it allows for the examination of questions about change over time. This is why all of our current workshops (Network Analysis, Latent Class/Mixture Modeling, Multilevel Modeling, Structural Equation Modeling, and Longitudinal Structural Equation Modeling) address the analysis of longitudinal data to some degree. A common theme underlying nearly any analysis of repeated measures data is the importance of modeling between-unit differences in within-unit change. We use the term “unit” instead of “person” because these models can be applied to repeated measures drawn from any unit of observation, whether that is an individual person, a region of the brain, or even a country. This latter application is clearly demonstrated in a recent article in the New York Times examining the relation between health expenditures and health outcomes over time and across countries. The article presents several highly impactful graphics showing that, up until about 1980, the United States was characterized by a trajectory of life expectancy similar to that of a number of comparable countries. Since that time the US has systematically fallen behind these comparison countries, even though its health care spending has increased over the same period. Of course, identifying the specific causal influences underlying these changes is complex, but only through the analysis of repeated measures data on both health care spending and life expectancy could these trends be identified at all. Developing an understanding of these trajectories in turn allows for the generation of hypotheses about causal mechanisms and the implementation of policy changes that might improve life expectancy in the future. It is thus important to consider the analysis of individual trajectories, regardless of what “individual” means in your own research application.
In nearly every discipline within the behavioral, health, and educational sciences, longitudinal data have become requisite for establishing temporal precedence and distinguishing inter-individual differences in intra-individual change. Whereas traditional longitudinal designs often obtained repeated assessments at monthly or even yearly intervals, recent advances in mobile technology have allowed for the collection of multiple assessments throughout a single day. These so-called intensive longitudinal designs (ILDs) are becoming increasingly prevalent in empirical studies of human development and behavior. However, as with any advancement in design and assessment, a multitude of complexities arise when fitting statistical models to large numbers of repeated assessments, often taken on smaller numbers of individuals. For example, it is not uncommon for an ILD to obtain six daily assessments over a 14-day period on 75 individuals. Key among the complexities that must be addressed is missing data. Standard methods for handling missing data are not always well suited to large numbers of repeated assessments, and guidance for practitioners is sparse. A recent paper in the journal Structural Equation Modeling by Linying Ji and colleagues addresses this very issue. The paper is motivated by an actual ILD in which a large number of assessments of emotional states and behaviors arising from parent-child conflicts were obtained from parents. In the original application, missing data were pervasive, yet no well-developed methods were available to address the issue. The authors describe several modern methods for handling missing data in ILDs, conduct a computer simulation to evaluate the methods under known conditions, and re-analyze the empirical data to demonstrate the new techniques.
They conclude with recommendations for handling missing data in ILDs and provide R code to help in this endeavor.
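As a back-of-the-envelope illustration of why complete-case approaches break down at this scale, the sketch below simulates the 75-person, 84-occasion design described above. All numbers (the AR(1) process, the 30% missingness rate) are invented for illustration; this is not the authors' simulation design or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_obs = 75, 6 * 14   # 75 people, 6 daily prompts x 14 days

# Simulate a simple AR(1) affect series around each person's own mean
# (all values here are invented purely for illustration).
mu = rng.normal(0.0, 1.0, n_people)
y = np.empty((n_people, n_obs))
y[:, 0] = mu + rng.normal(0.0, 1.0, n_people)
for t in range(1, n_obs):
    y[:, t] = mu + 0.5 * (y[:, t - 1] - mu) + rng.normal(0.0, 1.0, n_people)

# Impose roughly 30% missingness completely at random.
y_obs = np.where(rng.random((n_people, n_obs)) < 0.30, np.nan, y)

# Listwise deletion keeps only people with no missing values at all --
# with 84 occasions per person, essentially no one survives.
complete_rows = ~np.isnan(y_obs).any(axis=1)
print("people retained under listwise deletion:", int(complete_rows.sum()))

# An available-data estimate instead uses every observed value.
print("available-data grand mean:", round(float(np.nanmean(y_obs)), 3))
```

Even modest per-occasion missingness compounds across 84 occasions (0.7 raised to the 84th power is effectively zero), which is why model-based strategies of the kind the paper evaluates are needed rather than deletion-based ones.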
Curran-Bauer Analytics is pleased to announce that Dan has been honored with another teaching award in recognition of his exceptional teaching and mentoring skills. Dan was selected to receive the 2018 Jacob Cohen Award for Distinguished Teaching and Mentoring from the American Psychological Association, Division 5 (Quantitative and Qualitative Methods), and will be recognized at this year’s annual convention in San Francisco. This adds to several prior teaching awards granted to both Dan and Patrick: Dan received the 2016 Distinguished Teaching Award for Post-Baccalaureate Instruction from the University of North Carolina at Chapel Hill, and Patrick received both the 2016 Cohen Teaching Award from the APA and the 2012 Chapman Family Teaching Award from the University of North Carolina in recognition of excellence in undergraduate teaching. Dan and Patrick bring the same passion and commitment to teaching in their summer workshops, disseminating quantitative methods to a broad audience of researchers in the psychological, social, and health sciences.
Our research group recently published a new paper that explores the many advantages of integrative data analysis (or IDA). IDA is an approach to combining and analyzing data across multiple, independent samples (a specific type of “data fusion”). Our recent paper explores several particularly salient advantages of IDA when applied to the study of high-risk child and adolescent behavior. First, pooling independent longitudinal samples with varying ranges of ages allows us to “stretch” or “accelerate” the passage of time by combining overlapping developmental cohorts. Second, studying substance use in children often leads to low cell counts due to the rare nature of the behavior, and pooling multiple samples can increase the number of cases reporting the rare outcomes. Finally, pooling samples enhances between-subject heterogeneity, thus strengthening our ability to generalize findings to a broader population of individuals. We demonstrate these advantages with a detailed worked example in which we generate score estimates for polysubstance use based on unique and shared substance use items drawn from three independent contributing studies. We hope others will find this paper useful as they explore the potential of IDA for their own research.
Curran, P.J., Cole, V.T., Giordano, M., Georgeson, A.R., Hussong, A.M., & Bauer, D.J. (in press). Advancing the study of adolescent substance use through the use of integrative data analysis. Evaluation and the Health Professions.
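The cohort-stretching idea can be sketched in a few lines. The study names and age ranges below are invented for illustration; they are not the three actual contributing studies from the paper.

```python
import pandas as pd

# Three hypothetical contributing studies, each observing an overlapping
# slice of the developmental window (names and ages are invented).
rows = [
    {"study": study, "age": age}
    for study, ages in {
        "study_a": range(10, 14),   # ages 10-13
        "study_b": range(12, 16),   # ages 12-15
        "study_c": range(14, 18),   # ages 14-17
    }.items()
    for age in ages
]
pooled = pd.DataFrame(rows)

# No single study spans the full window, but the pooled sample does.
print(pooled.groupby("study")["age"].agg(["min", "max"]))
print("pooled span:", pooled["age"].min(), "to", pooled["age"].max())
```

Each cohort covers only part of the developmental window, yet the pooled data span the full age range; this overlap is what lets an accelerated design trace a longer developmental trajectory than any one study observed.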
There is often much confusion about what is meant by “statistical adjustment” when estimating and interpreting models that include more than one predictor variable. This issue commonly arises in the traditional multiple regression model, but it is ubiquitous across nearly any class of multivariate models, ranging from regression to factor analysis to complex structural equation models. Statistical adjustment goes by many names, including an effect that is “controlled for”, is “unique to”, or is “above-and-beyond” the effects of one or more other predictors. Regardless of the term, statistical adjustment attempts to equate factors in a model that were not controlled as part of the experimental design. An interesting recent example was described in a New York Times article about the relation between climate change and world-record marathon times. Nearly five million finish times were drawn from 900 marathons spanning a 20-year period, and a nonlinear relation was found between temperature and finish time. The fastest times were recorded when temperatures were in the 40s, and finish times systematically slowed with increasing temperature. This raises an interesting question about whether world records should be adjusted for race-time temperature. For example, the current world-record holder would be replaced by the third-place finisher on the record list when factoring in temperature. Similar arguments have been made in baseball about adjusting home-run counts for elevation, because air resistance is markedly lower in Denver than in Boston. Although an entertaining argument among friends, the notion of statistical adjustment becomes much more serious in many applied research settings, particularly when studying issues such as end-of-grade achievement, substance use, psychopathology, or any of a myriad of outcomes in the behavioral and health sciences.
Failure to account for potential confounding factors can lead to biased interpretations of causal effects, which in turn threaten both the internal and external validity of the study. Dan discusses and demonstrates the idea of statistical control in greater detail in our Office Hours video series on multiple regression.
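A minimal sketch of what adjustment does, using invented numbers rather than the actual marathon data: regress an outcome on temperature with and without a confounder and compare the two coefficients. Every variable and value below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical data: temperature slows finish time, and a confounder
# (think course difficulty) is itself correlated with temperature.
temp = rng.normal(60.0, 10.0, n)                 # race-day temperature
course = 0.05 * temp + rng.normal(0.0, 1.0, n)   # confounder tied to temp
finish = 150.0 + 0.8 * temp + 5.0 * course + rng.normal(0.0, 10.0, n)

def temp_slope(X, y):
    """Ordinary least squares; return the coefficient on temperature."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

ones = np.ones(n)
unadjusted = temp_slope(np.column_stack([ones, temp]), finish)
adjusted = temp_slope(np.column_stack([ones, temp, course]), finish)

print(f"unadjusted temperature effect: {unadjusted:.2f}")
print(f"adjusted temperature effect:   {adjusted:.2f}")
```

The unadjusted coefficient absorbs part of the confounder's effect because the two predictors are correlated; including the confounder in the model recovers (approximately) the 0.8 effect of temperature alone. This is the same logic behind “controlling for” covariates in any of the multivariate models discussed above.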