News and Updates
For the first time, CBA will be hosting a Fall online workshop series this December, including a new 2-day workshop on Applied Research Design Using Mixed Methods from acclaimed instructor Greg Guest. The course schedule is:
- December 2-4: Introduction to Multilevel Modeling
- December 7-8: Applied Research Design using Mixed Methods
- December 9-11: Introduction to Longitudinal Structural Equation Modeling
- December 14-16: Introduction to Network Analysis
- December 14-16: Introduction to Mixture Modeling and Latent Class Analysis
See our Training page for a general description of our teaching philosophy, course reviews, and sample course notes.
Patrick Curran and Greg Hancock have dedicated a recent episode of their quantitative methods podcast, Quantitude, to diversity and equity in academia. As part of their summer Quanti•Qamp episode series, they welcome special guest Dr. A. Nayena Blankson, Professor of Psychology at Spelman College, to discuss how we can capitalize on the current national conversation about race and equity to enhance diversity, both in academia generally and in the quantitative sciences in particular.
The episode is titled Enhancing Diversity in the Quantitative Sciences and can be downloaded from all major podcast services or directly from buzzsprout.com/639103.
From May 6 to 8, CBA conducted a free 3-day Introduction to Structural Equation Modeling webinar, drawing participants from all around the world. Just for fun, we generated the map below to show the countries of origin of participants. To receive notices about future workshop offerings like this, follow us on Twitter or Facebook, or subscribe to our newsletter ("Receive Email Updates" in the page footer).
Missing data are a common problem faced by nearly all data analysts, particularly with the increasing emphasis on the collection of repeated assessments over time. Data values can be missing for a variety of reasons. A common situation is when a subject provides data at one time point but fails to provide data at a later time point; this is sometimes called attrition. However, data can also be missing within a single administration. For example, a subject might find a question objectionable and not want to provide a response; a subject might be fatigued or not invested in the study and skip an entire section; or there might be some mechanical failure where data are not recorded or items are inadvertently not presented. Regardless of source, it is very common for assessments to be missing for a portion of the sample under study. Fortunately, there are several excellent options available that allow us to retain cases that only provide partial data.
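To make the missingness patterns above concrete, here is a minimal numpy sketch (the data are invented for illustration). It contrasts listwise deletion, which discards any subject with partial data, with simple column-mean imputation as one baseline way to retain partial cases; modern practice typically favors multiple imputation or full-information maximum likelihood, which are beyond a short sketch.

```python
import numpy as np

# Hypothetical repeated-measures data: rows = subjects, columns = time points.
# np.nan marks missing assessments.
data = np.array([
    [3.0, 4.0,    5.0],
    [2.0, np.nan, 4.0],    # item skipped within a single administration
    [4.0, 5.0,    np.nan], # attrition: no data at the final time point
    [3.0, 3.0,    4.0],
])

# Listwise deletion keeps only subjects with complete data (here, 2 of 4).
complete = data[~np.isnan(data).any(axis=1)]

# Column-mean imputation (a simple baseline) retains all four subjects.
col_means = np.nanmean(data, axis=0)
imputed = np.where(np.isnan(data), col_means, data)
```

The point of the comparison is simply that deletion-based approaches can discard a large share of the sample, while methods that use partial data retain every case.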
Reliability is a complex and often misunderstood topic. Entire textbooks have been written about reliability, validity, and scale construction, so we only briefly touch on the key issues here (see Bandalos, 2018, for an excellent recent example). To begin, in most areas across the behavioral, educational, and health sciences, theoretical constructs are hypothesized to exist yet cannot be directly measured. Common examples include depression, anxiety, academic motivation, commitment to treatment, and perceived stress. A vast array of psychometric methods have been developed over the past century to use multi-item scales as a basis to infer the existence of these underlying constructs. Indeed, the genesis of factor analysis (most commonly dated to Spearman in 1904) was motivated by the desire to use multi-test assessments to compute person-specific values of cognitive functioning. Psychometric methods are sometimes organized into pragmatic approaches (e.g., classical test theory) and axiomatic approaches (e.g., item response theory and factor analysis). However, a fundamental component of all of these methods is reliability.
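As a small illustration of a classical-test-theory reliability estimate, the sketch below computes Cronbach's alpha for a hypothetical five-item scale (the response matrix is invented for illustration, not taken from any study discussed here).

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = items on a
# five-item scale scored 1-5.
X = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
], dtype=float)

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency estimated from the ratio of
    summed item variances to the variance of the total score."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(X)
```

Alpha is only one of many reliability coefficients, and, as the paragraph above notes, factor-analytic and item response theory approaches offer model-based alternatives.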