Methods for disentangling complexities in reading interventions
First Author: Lee Branum-Martin -- Georgia State University
Keywords: Intervention, Modeling of reading, Growth modeling, Causality, Quantile regression
Abstract / Summary: 

Treatment effects from reading interventions can be measured with multiple outcomes and multiple time points, inviting a rich set of multivariate and longitudinal models. Although such models may fit well, they raise interesting validity questions about the meaning of the intervention. Multivariate models bring to light the question of whether the intervention affects the latent construct or the components of each test. Longitudinal univariate and multivariate models, while powerful, also raise questions about complex treatment effects, such as how well a treatment effect is maintained over time. Further complications arise when the intervention effect differs across levels of student outcome performance, or when the structure among outcomes appears to change after treatment. This symposium presents five papers, each exploring different facets of these design and measurement complexities in applied reading interventions.

Symposium Papers: 

Modeling fleeting and persisting treatment effects from randomized intervention studies

First Author/Chair: Jamie Quinn -- Florida Center for Reading Research
Additional authors/chairs: 
Jessica Logan

Purpose – We illustrate how to use time-varying covariates to test for fleeting and persisting treatment effects in interventions with post-intervention follow-ups. We applied this method to outcome measures from a randomized controlled trial that examined the efficacy of an intervention to improve the pre-reading skills of Pre-K special education children with language impairments.
Method – Children were randomized to three conditions: read-aloud only (n = 107), read-aloud and print-focused (n = 101), and print-focused only (n = 104). Letter knowledge, name writing, and print concept knowledge were measured at three time points (pre-test, post-test, and follow-up). To test for fleeting and persisting effects, we coded a time-varying covariate at each time point, with codes depending on intervention group: the fleeting-effect covariate was coded 0, 0, and 0 across the three time points for the read-aloud only group, but 0, 1, and 0 in the print-focused groups. The persisting-effect covariate was coded 0, 0, 0 for read-aloud only, but 0, 1, 1 in the print-focused groups. A final model with a partially persisting effect was coded 0, 0, 0 for read-aloud only and 0, 1, 0.25 in the print-focused groups.
Results and Conclusions – The three models were fit to the data and compared using model fit criteria. Results indicated that the partially persisting model (in which effects partially persisted at follow-up) fit best. We will present these analyses, including other partially persisting models, as an illustration of how to use these models when analyzing longitudinal intervention data.
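
To make the coding concrete, the following sketch simulates data and fits the three time-varying-covariate models in a linear mixed model, comparing them with information criteria. It is a minimal illustration under assumed names and simulated values (tx, y, etc.), not the authors' actual analysis, which may use a different estimator or outcome structure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: 3 occasions (0 = pre, 1 = post, 2 = follow-up) per child.
rng = np.random.default_rng(1)
n_children = 300
child = np.repeat(np.arange(n_children), 3)
time = np.tile([0, 1, 2], n_children)
tx = np.repeat(rng.integers(0, 2, n_children), 3)   # 1 = print-focused, 0 = read-aloud only
y = 10 + 2 * time + 3 * tx * (time == 1) + 1 * tx * (time == 2) + rng.normal(0, 2, n_children * 3)
d = pd.DataFrame({"child": child, "time": time, "tx": tx, "y": y})

# Time-varying covariates: the comparison group is 0, 0, 0 under every model;
# treatment groups are coded 0, 1, 0 (fleeting), 0, 1, 1 (persisting), or 0, 1, 0.25 (partial).
d["fleeting"] = d["tx"] * (d["time"] == 1)
d["persisting"] = d["tx"] * (d["time"] >= 1)
d["partial"] = d["tx"] * d["time"].map({0: 0.0, 1: 1.0, 2: 0.25})

for tvc in ["fleeting", "persisting", "partial"]:
    fit = smf.mixedlm(f"y ~ time + {tvc}", d, groups=d["child"]).fit(reml=False)
    print(f"{tvc:>10}: AIC = {fit.aic:.1f}, BIC = {fit.bic:.1f}")   # lower indicates better fit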

Don’t be mean: treatment may affect more than just the average

First Author/Chair: Yusra Ahmed -- University of Houston
Additional authors/chairs: 
David Francis; Jeremy Miciak; Pat Taylor

Purpose – Traditional analyses of change compare means on pre- and post-intervention responses on observed or latent variables (e.g., status or slope). However, interventions do not influence only the mean. They can also influence (a) the variances (e.g., the intervened group becomes more homogeneous) and (b) the covariances among variables within and across time points. We examine not only whether children change in their reading comprehension but also how variables within the reading system interact. This approach shifts the focus from the intervention as a change in reading comprehension status to the intervention as influencing a complex set of processes.
Method – Students were assigned to a one-year treatment (n = 161), a two-year treatment (n = 162), or a business-as-usual (BAU) condition (n = 161). The one-year treatment spanned fall to spring of 4th grade, and the two-year treatment spanned fall of 4th grade to spring of 5th grade. We fit explanatory latent change score models to evaluate differences in covariance structures across groups.
Results – Students in both the one- and two-year groups demonstrated significantly larger gains in decoding and fluency than the BAU group. There were no significant differences between groups on reading comprehension. However, differences were noted in the variances, auto-proportion coefficients, cross-construct coefficients, and slope-intercept correlations.
Conclusions – Under a traditional means-based analysis, the lack of significant effects for reading comprehension in the present study would lead to the conclusion that the intervention was not effective. However, we illustrate how interventions can disrupt the relationships among variables even when differences are not visible in the means.
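
As a schematic of the class of models involved, a bivariate (dual) latent change score specification for reading comprehension y and a component skill x (e.g., decoding) can be written as below; the notation is ours and simplifies the explanatory latent change score models described above rather than reproducing their exact parameterization:

\Delta y_{t} = \alpha_{y}\, s_{y} + \beta_{y}\, y_{t-1} + \gamma_{yx}\, x_{t-1}
\Delta x_{t} = \alpha_{x}\, s_{x} + \beta_{x}\, x_{t-1} + \gamma_{xy}\, y_{t-1}

Here the \beta terms are auto-proportions, the \gamma terms are cross-construct (coupling) coefficients, and s_{y}, s_{x} are latent slopes. Group differences can then be evaluated not only in the means of the slopes but also in their variances, in \beta and \gamma, and in slope-intercept covariances.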

Measuring treatment impact across different outcomes, settings, and designs

First Author/Chair: Lee Branum-Martin -- Georgia State University
Additional authors/chairs: 
Congying Sun; Beth Calhoon

Purpose: We explored the extent to which treatment effects estimated in a latent variable model across six reading tests would be consistent with univariate growth models. Differences between the two types of model may be informative about the nature of the intervention and the student skills involved.
Method: We pooled intervention data from randomized controlled trials in which five versions of a modular reading intervention program were tested with adolescents with diagnosed reading difficulties: three time points nested within 744 children, nested within 85 classrooms, across eight student cohorts in six studies. A single-factor model of general reading was fit to the six reading measures at the three time points, with Study and treatment Version as predictors of latent reading. Method factors were added for the two speeded fluency tests.
Results: The longitudinally invariant model fit reasonably well, with good pattern coefficients. The average gain was about one third of a standard deviation at each time point. Study effects were large, up to 1.20 latent z-units, suggesting site or study differences.
Conclusions: First, these adolescents, despite their struggles with reading, have an integrated literacy system. Second, the effects of speed are reasonably consistent and can be isolated from the reading outcomes. Third, two of the treatment versions, Additive and Integrated, have an appreciable effect on general reading skill over the course of an academic year. Student performance in general reading can thus be isolated from differences due to speed or test-specific error.
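
A schematic of the kind of measurement and structural model described above, in our notation (indices, dummy coding, and the single speed method factor are illustrative simplifications):

y_{jit} = \tau_{j} + \lambda_{j}\, \eta_{it} + \lambda^{(m)}_{j}\, m_{it} + \varepsilon_{jit}
\eta_{it} = \alpha_{t} + \sum_{k} \beta_{k}\, \mathrm{Study}_{ki} + \sum_{v} \delta_{v}\, \mathrm{Version}_{vi} + \zeta_{it}

where j indexes the six reading tests, i students, and t the three time points; the pattern coefficients \lambda_{j} are held invariant over time, and \lambda^{(m)}_{j} is nonzero only for the two speeded fluency tests, which load on the method factor m_{it}.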

Methods for combining outcomes to generate multivariate estimates of intervention response

First Author/Chair: Jan C. Frijters -- Brock University
Additional authors/chairs: 
Maureen W. Lovett; Lee Branum-Martin; Robin D. Morris

Purpose: Methodologies are available, for both research and clinical purposes, for estimating individual participants' response on single reading intervention outcome measures. Multilevel and SEM-based growth curve models can produce individual estimates, which can then be linked in subsequent analyses to individual difference variables such as brain activation or neuropsychological performance. Less attention has been paid to combining growth/change outcomes into multivariate estimates of response, which is important because growth estimates across different outcome measures are only moderately intercorrelated.
Method: Children (n = 99; ages 8 to 15) in an intensive small-group reading intervention were measured on four occasions on ten single-word identification outcomes, including well-validated measures (e.g., WJ-III, SRI, TOWRE) and two experimenter-constructed outcomes linked to the intervention. All models described below were replicated on a larger sample of 372 middle-school children participating in a similar intervention.
Results: The presentation will detail the successful implementation of two competing, related second-order growth curve SEM models (FOCUS/CUFFS) to produce a single multivariate, individual-child estimate of intervention change, including issues of model building, assessing fit in small samples, and extracting individual curves via empirical Bayes estimates. This approach will be contrasted with an alternative that starts by estimating reliable change scores for individual outcomes and then employs latent-class analyses to form homogeneous groups based on the multivariate outcomes.
Conclusions: Both approaches yielded estimates of individual multivariate intervention response that also demonstrated external validity, referenced against brain activation patterns and neuropsychological assessments. Discussion will focus on implementation issues, agreement/uniqueness across the two methods, and intervention design considerations.
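
The second approach can be sketched as follows under simple assumptions: Jacobson-Truax reliable change indices are computed for each outcome, and children are then grouped by their multivariate change profile. The simulated data, placeholder reliabilities, and the use of a Gaussian mixture as a stand-in for latent-class/profile analysis are illustrative, not the authors' implementation.

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Simulated pre/post scores for 99 children on 10 single-word identification outcomes.
rng = np.random.default_rng(2)
n_children, n_outcomes = 99, 10
pre = rng.normal(90, 12, size=(n_children, n_outcomes))
post = pre + rng.normal(6, 8, size=(n_children, n_outcomes))

# Reliable change index per outcome (placeholder test-retest reliabilities of .90).
reliability = np.full(n_outcomes, 0.90)
se_diff = pre.std(axis=0, ddof=1) * np.sqrt(2 * (1 - reliability))
rci = (post - pre) / se_diff

# Group children by multivariate response profile; choose the number of groups by BIC.
bic = {g: GaussianMixture(n_components=g, random_state=0).fit(rci).bic(rci) for g in range(1, 5)}
best_g = min(bic, key=bic.get)
classes = GaussianMixture(n_components=best_g, random_state=0).fit_predict(rci)
print(pd.Series(classes).value_counts().sort_index())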

Expanding notions of causality with quantile treatment effects

First Author/Chair: Yaacov Petscher -- Florida Center for Reading Research

Purpose: We present a quantile framework that allows the treatment effect to be evaluated at points of the outcome distribution other than the mean. Examples from recent studies will be reported and compared to traditional single-level and multilevel approaches for estimating average treatment effects.
Method: Data from three intervention studies are used to illustrate how quantile treatment effects and average treatment effects are estimated and compared using single-level and multilevel models. Moderation of treatment effects in the quantile framework is also estimated with simple slopes analyses.
Results: When the post-test was normally distributed, the average treatment effect and the quantile treatment effect at the .50 quantile were approximately equal. Quantile treatment effect models in both single-level and multilevel contexts demonstrated that standardized differences between treatment and comparison groups varied across points of the post-test distribution.
Conclusions: Rubin's Causal Model has served as a foundational mathematical framework for establishing causality in randomized controlled studies. At the heart of this framework is the idea that the treatment effect can be estimated as the expected difference in outcome means between the treatment and control groups. Results from these studies demonstrated that average treatment effects in causal modeling can be expanded to encompass a broader set of distribution-conditional treatment effects.
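
As a minimal sketch of the contrast between an average treatment effect and quantile treatment effects (where the QTE at quantile tau compares the tau-th quantiles of the treated and comparison outcome distributions), the following uses simulated single-level data; the variable names and values are illustrative, not the studies' actual measures.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated post-test with a treatment effect that grows across the outcome distribution.
rng = np.random.default_rng(3)
n = 400
tx = rng.integers(0, 2, n)
posttest = 100 + 5 * tx + rng.normal(0, 10, n) * (1 + 0.5 * tx)
d = pd.DataFrame({"posttest": posttest, "tx": tx})

ate = smf.ols("posttest ~ tx", d).fit().params["tx"]               # average treatment effect
print(f"ATE: {ate:.2f}")
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    qte = smf.quantreg("posttest ~ tx", d).fit(q=q).params["tx"]   # quantile treatment effect
    print(f"QTE at tau = {q:.2f}: {qte:.2f}")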