Using Listservs to Get Advice
Although a bit old school, listservs such as SEMNET (for latent variable models) and the Multilevel Discussion List remain useful resources when grappling with quantitative modeling issues. You can search the archives to see whether your question has come up before, or post a new question to obtain fresh feedback and opinions from list members. Quick tip: use the “receive digest” setting to avoid having your inbox flooded with listserv posts.
Here’s an example from a recent exchange that I participated in on the multilevel list (the initial query resulted in 19 messages; search “Partial nesting,” beginning 11/6/2015, for the complete exchange):
I will soon be analyzing data with a partially nested structure (my first time dealing with this type of data structure), and I’m hoping someone out there may be able to answer some basic questions.
Quick summary of the study: it is a parenting intervention for single moms and their infants. Moms were randomly assigned to either the treatment condition (n = 75) or the control condition (n = 75). The intervention was administered in a group format, with five moms per group (15 groups in total). The control condition was wait-listed over the course of the study and then given the opportunity to receive the intervention afterwards. Our focus is on the effects of the intervention on several (continuous) outcomes, all of which were measured at the level of the individual parent. My hope has been to use a multilevel modeling approach to handle the lack of independently distributed observations in the intervention group.
Q1: One concern I have is the small number of clusters. If cluster membership is related to the outcome variables for those who received the intervention, would it still be reasonable to use multilevel modeling to test for mean differences in outcomes between the treatment and control conditions? Or would I be better off including cluster as a fixed effect in a single-level model?
Q2: I know that ignoring nesting can lead to improper estimation of standard errors and related problems with the Type I error rate. However, if we find that the intervention-group ICCs are very low (< .005), could a reasonable argument be made for using a single-level model?
Thanks in advance for your thoughts!
Some of the simulation results in Baldwin et al. (2011; full reference below) may also help you to make an argument about how small an ICC must be to be ignorable. The lowest ICC considered in that paper was .05, and under some conditions inflated Type I error rates were already observed at that level.
I tend to agree with Sam that, to be conservative, one should include the variance component for group even if the ICC is small. I might also recommend using the Satterthwaite method of computing degrees of freedom and the Kenward-Roger method of computing standard errors for the fixed effects, as this will reflect the (very small) ICC and adjust for any small-sample bias that may arise in the variance components with only 15 clusters (see Baldwin et al., 2011).
To the extent that the ICC approaches zero, the group variance component becomes superfluous and you approach equivalence with a single-level model, with the caveat that there is a small loss of power associated with estimating one extra parameter. That seems like a reasonable price to pay for trustworthy standard errors for the treatment effects.
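To make the ICC reasoning above concrete, here is a minimal sketch in Python using simulated data and statsmodels' MixedLM. (The list exchange was framed around SAS MIXED; the variable names, simulated numbers, and the 15-groups-of-5 structure mirroring the query are my own illustration, not output from the original study.)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 15 intervention groups of 5 moms each, as in the query,
# with a deliberately small between-group effect (SD = 0.1 vs. residual SD = 1).
n_groups, group_size = 15, 5
group = np.repeat(np.arange(n_groups), group_size)
group_effect = rng.normal(0, 0.1, n_groups)[group]
outcome = 10 + group_effect + rng.normal(0, 1, n_groups * group_size)

df = pd.DataFrame({"group": group, "outcome": outcome})

# Random-intercept model: outcome_ij = mu + u_j + e_ij
fit = smf.mixedlm("outcome ~ 1", df, groups=df["group"]).fit()

tau2 = float(fit.cov_re.iloc[0, 0])  # between-group variance
sigma2 = float(fit.scale)            # residual (within-group) variance
icc = tau2 / (tau2 + sigma2)
print(f"estimated ICC = {icc:.3f}")
```

Even when the estimated ICC lands near zero, the model above retains the group variance component, which is exactly the conservative choice recommended in the exchange: you pay one extra parameter in exchange for standard errors that do not assume independence.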
I would also encourage you to adopt an ANCOVA-like approach in which the baseline/pre-test score is a predictor of the post-test(s), so that you are analyzing post-test scores nested (or not) within groups. This tends to provide greater power than using the pre-test as the first of two repeated observations on the outcome nested within person (see Rausch et al., 2003, ref below), and interpretation is relatively straightforward under the assumption of baseline equivalence of the treatment groups. With the ANCOVA-like specification, the ICC pertains only to the post-test (net of pre-test), addressing Sam’s point (that the ICC > 0 at post-test but ICC = 0 at pre-test) without requiring you to dig into the PARMS statement in SAS MIXED or comparable model-constraint options in other software.
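As an illustration of the ANCOVA-like specification, here is a hedged Python sketch with simulated data. Treating each control mom as her own singleton pseudo-group is a simplifying assumption I am making for the sketch (a common workaround for partially nested data); it is not the full heterogeneous-variance partially clustered model evaluated by Baldwin et al. (2011), and the simulated effect sizes are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Treated moms sit in 15 real groups of 5; each of the 75 control moms
# is assigned her own singleton pseudo-group (IDs 15-89).
group = np.concatenate([np.repeat(np.arange(15), 5), np.arange(15, 90)])
treat = np.concatenate([np.ones(75), np.zeros(75)])

# Pre-test scores, group effects (treatment arm only), and post-test scores.
pre = rng.normal(0, 1, 150)
u = rng.normal(0, 0.3, 90)[group] * treat
post = 0.5 * pre + 0.4 * treat + u + rng.normal(0, 1, 150)

df = pd.DataFrame({"post": post, "pre": pre, "treat": treat, "group": group})

# ANCOVA-like mixed model: post-test regressed on pre-test and treatment,
# with a random intercept for group.
fit = smf.mixedlm("post ~ pre + treat", df, groups=df["group"]).fit()
print(fit.params[["pre", "treat"]])
```

The `treat` coefficient is the adjusted treatment effect on post-test scores, and the group variance component now pertains only to post-test variation net of the pre-test, which is the interpretive advantage of the ANCOVA-like specification noted above.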
Baldwin, S. A., Bauer, D. J., Stice, E., & Rohde, P. (2011). Evaluating models for partially clustered designs. Psychological Methods, 16, 149-165. doi:10.1037/a0023464
Rausch, J. R., Maxwell, S. E., & Kelley, K. (2003). Analytic methods for questions pertaining to a pretest posttest follow-up design. Journal of Clinical Child and Adolescent Psychology, 32, 467-486.