# News and Updates

Missing data are a common problem faced by nearly all data analysts, particularly with the increasing emphasis on the collection of repeated assessments over time. Data values can be missing for a variety of reasons. A common situation is when a subject provides data at one time point but fails to provide data at a later time point; this is sometimes called attrition. However, data can also be missing within a single administration. For example, a subject might find a question objectionable and not want to provide a response; a subject might be fatigued or not invested in the study and skip an entire section; or there might be some mechanical failure where data are not recorded or items are inadvertently not presented. Regardless of source, it is very common for assessments to be missing for a portion of the sample under study. Fortunately, there are several excellent options available that allow us to retain cases that only provide partial data.
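As a toy illustration of why retaining partial cases matters (this is not one of the principled methods the post alludes to, such as full-information maximum likelihood or multiple imputation; the data and missingness rate below are made up), even a modest amount of randomly scattered missingness discards a large share of the sample under listwise deletion:

```python
import numpy as np

# Simulate 100 cases on 5 variables, then knock out ~10% of values
# completely at random (a hypothetical missingness rate).
rng = np.random.default_rng(1)
data = rng.normal(size=(100, 5))
mask = rng.random(data.shape) < 0.10
data[mask] = np.nan

# Listwise deletion keeps only rows with no missing values at all;
# with 10% missing per cell, roughly 0.9**5 ~ 59% of rows survive.
complete_cases = ~np.isnan(data).any(axis=1)
print("cases kept under listwise deletion:", complete_cases.sum(), "of", len(data))

# Available-case (pairwise) summaries still use every observed value.
print("per-variable available-case means:", np.nanmean(data, axis=0).round(2))
```

The gap between the two outputs is the information listwise deletion throws away, and it is what modern missing-data methods are designed to recover.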

Reliability is a complex and often misunderstood topic. Entire textbooks have been written about reliability, validity, and scale construction, so we only briefly touch on the key issues here (see Bandalos, 2018, for an excellent recent example). To begin, in most areas across the behavioral, educational, and health sciences, theoretical constructs are hypothesized to exist yet cannot be directly measured. Common examples include depression, anxiety, academic motivation, commitment to treatment, and perceived stress. A vast array of psychometric methods have been developed over the past century to use multi-item scales as a basis to infer the existence of these underlying constructs. Indeed, the genesis of factor analysis (most commonly dated to Spearman in 1903) was motivated by the desire to use multi-test assessments to compute person-specific values of cognitive functioning. Psychometric methods are sometimes organized into pragmatic approaches (e.g., Classical Test Theory) and axiomatic approaches (e.g., item response theory and factor analysis). However, a fundamental component of all of these methods is reliability.
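As one concrete instance of a Classical Test Theory reliability estimate, here is a minimal sketch of Cronbach's alpha for a multi-item scale. The four-item dataset is simulated purely for demonstration; the item and error variances are assumptions, not values from any real scale:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: each of 4 items = common true score + independent noise,
# so the population alpha works out to about .80 for these variances.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 4))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha is only one of many reliability coefficients, and it rests on assumptions (e.g., tau-equivalence) that the texts cited above discuss in detail.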

Our very own Patrick Curran has teamed up with Greg Hancock (Professor, College of Education, University of Maryland) to launch a new podcast called Quantitude. It is dedicated to all things quantitative, ranging from the relevant to the highly irrelevant. Picture a cross between the Car Talk guys, the two old men from the Muppets, and a graduate statistics course.

Quantitude explores serious issues, but in a sometimes grousing and irreverent way. Episodes include current topics in quantitative methodology, data analysis, and research methods; interviews with professionals in the field; responses to listener questions; quantitative puzzlers; and much more. Episodes are posted every other week, notably on “Quanti-Tudesday”.

So if you’re interested, please check out Quantitude. The podcast can be found at https://www.buzzsprout.com/639103 (or wherever you listen to your favorite podcasts), and the Quantitude home page is http://quantitudethepodcast.org/. Finally, Quantitude is on Twitter at @quantitudepod.

We’re pleased to announce our summer workshop schedule for 2020. The schedule for our regular 5-Day workshops is:

- **May 11-15:** Network Analysis
- **May 11-15:** Structural Equation Modeling
- **May 18-22:** Longitudinal Structural Equation Modeling
- **June 1-5:** Latent Class/Cluster Analysis and Mixture Modeling
- **June 8-12:** Multilevel Modeling *(now with an R software option in addition to SAS, SPSS, and Stata)*

In addition, we are pleased to again offer a steeply reduced-cost ($100) 3-day introductory workshop designed specifically for graduate students seeking advanced methodological training:

Given high demand, students are asked to pre-register for this event by March 20, 2020, and will be notified if they have been selected to attend by April 1, 2020.

See our Training page for a general description of our teaching philosophy, links to course reviews, and sample course notes.

This is a question that often arises when using structural equation models in practice, sometimes once a study is completed but more often in the planning phase of a future study. To think about power, we must first consider the ways in which we can make errors in hypothesis testing (Cohen, 1992). Briefly, the Type I error rate is the probability of incorrectly rejecting a true null hypothesis; this is the probability that an effect will be found in a sample when there is truly no effect in the population. In contrast, the Type II error rate is the probability of failing to reject a false null hypothesis; this is the probability that an effect will not be found in a sample when there truly is an effect in the population. Statistical power is one minus the Type II error rate and represents the probability of correctly rejecting a false null hypothesis; this is the probability that an effect will be found in the sample if an effect truly exists in the population. It is important to determine whether a proposed study will have sufficient power to detect an effect if an effect really exists. Although power is quite easy to compute for simple tests such as a t-test or a single regression parameter, it becomes increasingly complicated to compute power for complex SEMs.
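To make the "easy" case concrete, here is a minimal sketch of power for a two-sided, two-group t-test using the noncentral t distribution; the effect size (d = 0.5) and group size (n = 64) are hypothetical values chosen for illustration. Nothing this simple is available in closed form for most SEMs, which is why simulation-based approaches are typically needed there:

```python
import math
from scipy import stats

def ttest_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided independent-samples t-test with equal group sizes."""
    df = 2 * n_per_group - 2
    ncp = d * math.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-sided critical value
    # Power = P(|T| > t_crit) where T follows a noncentral t under the alternative
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Cohen's classic benchmark: a medium effect (d = 0.5) with 64 per group
# yields power close to the conventional .80 target.
print(f"power = {ttest_power(0.5, 64):.3f}")
```

Doubling the per-group sample size pushes power well above .80, which is exactly the sensitivity analysis a planning-phase power computation is meant to support.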
