Delving deeper into comprehension considering child, text, and assessment factors

First Author: Young-Suk Kim -- University of California, Irvine
Additional authors/chairs: 
David Francis
Keywords: Comprehension, Reader-text interactions, Assessment
Abstract / Summary: 

The goal of this symposium is to expand our knowledge of the multiple factors that contribute to comprehension. Previous studies have provided rich information about individual characteristics that contribute to comprehension (e.g., working memory, vocabulary). However, growing evidence and recent theoretical models (e.g., DIER, Kim, 2019; the complete view of reading, Francis et al., 2018) indicate that text and assessment characteristics also play a role in comprehension. In this symposium, a collection of five papers systematically addresses the interplay of these factors using diverse data and methodologies. Together, the papers reveal that text and assessment factors, as well as child factors, explain a large amount of variance in children’s comprehension; that results vary by children’s language skill; that the precision of estimates is enhanced when accounting for nesting (items within passages); that the relation of vocabulary to reading comprehension varies by the characteristics of the vocabulary words; and that vocabulary demands on comprehension are significantly greater in science texts than in social studies or mathematics texts.

Symposium Papers: 

Comprehension Unpacked: Relations of Child, Text, and Assessment Factors to Comprehension

First Author/Chair: Young-Suk Kim -- University of California, Irvine
Additional authors/chairs: 
Yaacov Petscher

Purpose: In this study, we investigate the relations of multiple factors to children’s comprehension (listening comprehension) performance, including child factors (e.g., working memory, vocabulary), text factors (e.g., narrative vs. expository texts), and assessment factors (assessment format, such as retell vs. short open-ended responses, and assessment information, such as literal vs. inferential comprehension questions).

Method: Participants included 523 second-graders in the US. Children were assessed on comprehension (listening comprehension) using narrative and expository texts, and their comprehension was measured by oral retell and short open-ended questions, which included literal and inferential questions. Children were also assessed on working memory, attention, vocabulary, grammatical knowledge, knowledge-based inference, perspective taking, and comprehension monitoring. Data were analyzed using explanatory item response modeling (De Boeck & Wilson, 2004).
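
For reference, a generic cross-classified explanatory item response specification of this kind (a sketch of one common formulation; the study’s exact predictors and random-effects structure may differ) models the log-odds that person p answers item i correctly from child covariates Z_pj and text/assessment features X_ik:

    \operatorname{logit}\Pr(Y_{pi} = 1) = \sum_{j} \gamma_j Z_{pj} + \sum_{k} \beta_k X_{ik} + \theta_p + u_i,
    \qquad \theta_p \sim N(0, \sigma^2_\theta), \quad u_i \sim N(0, \sigma^2_u)

Here \theta_p and u_i are crossed random person and item effects; adding passage-level random effects and item-feature predictors allows the item-side variance to be decomposed as in the results reported next.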

Results: Preliminary results showed that a large amount of variance was attributable to differences among question types (i.e., literal vs. inferential comprehension questions; 33%) and to differences among passages (18%). Expository versus narrative passage type explained almost all of the variance attributable to passages (i.e., 94% of the 18%, or roughly 17% of total variance), whereas literal versus inferential question type explained 10% of the variance among questions (i.e., 10% of the 33%, or roughly 3% of total variance). Further explorations are underway to investigate interactions of student factors with text and assessment features.

Conclusions: Findings highlight the importance of carefully considering multiple factors (individual, text, and assessment factors) and their interactions when measuring and modeling comprehension.

 

Reader Skill and Measurement Error in Reading Comprehension Assessment

First Author/Chair: Alyson Collins -- Texas State University
Additional authors/chairs: 
Esther R. Lindström; Micheal Sandbank

Purpose: Measuring reading comprehension (RC) is complicated because research suggests that estimates of RC ability differ across tests (Colenbrander et al., 2017; Keenan & Meenan, 2014). To extend recent research on this topic, this study investigated how measurement facets interact with reader skills to contribute error to scores from an RC test.

Method: Participants included 79 fourth-graders in an urban elementary school. A randomized, counterbalanced 3×2 study investigated three response formats (open-ended questions, multiple choice, and retell) and two text genres (narrative and expository) from the Qualitative Reading Inventory (QRI-5). A language knowledge composite derived from three measures (Woodcock-Johnson III Picture Vocabulary, Oral Comprehension, and Academic Knowledge) defined three skill groups: (a) below 90 as low, (b) 90-99 as low-average, and (c) above 99 as average or above. Generalizability studies partitioned the variance in scores among readers, text genres, and response formats for each group. Decision studies documented the dependability of scores from each response format within each group.
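
For reference, in a fully crossed person × genre × format (p × g × f) design, the relative generalizability (D-study) coefficient takes the standard form below, where n_g and n_f are the numbers of genres and response formats averaged over (a textbook sketch; the study’s estimated variance components may differ):

    E\rho^2 = \frac{\sigma^2_{p}}{\sigma^2_{p} + \sigma^2_{pg}/n_g + \sigma^2_{pf}/n_f + \sigma^2_{pgf,e}/(n_g n_f)}

A large reader-by-format component \sigma^2_{pf} relative to \sigma^2_{p} lowers the dependability of scores from any single format, which is the pattern examined here.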

Results: Response format accounted for 39% to 55% of the variance in RC scores across groups, while text genre accounted for negligible variance (0.9%-1.5%). Decision studies suggested that the dependability of response formats varied widely by skill: multiple-choice scores were less dependable for students in the low-average and average-or-above language knowledge groups, and open-ended scores were less dependable for students with low-average skill.

Conclusions: Findings highlight the error contributed by response format in RC assessment and underscore the limitations of using a single score to classify readers with and without foundational RC skills.

Understanding clustering effects when examining reading comprehension as a function of reader-text interactions: Explanatory item response study

First Author/Chair: Paulina Kulesz -- University of Houston
Additional authors/chairs: 
David Francis

Purpose: Despite an increasing number of studies investigating the effects of reader-text interactions on comprehension using explanatory item response models, many of these studies estimate models with a two-level cross-classified structure (item responses nested within persons and items). Such models do not take into account random effects of schools or random effects of passages (i.e., the nesting of items within passages). The current study compares two-level and three-level cross-classified models to examine the effect of ignoring these clustering effects on the estimates of fixed effects and their standard errors for reader-text interactions.

Method: The sample included students in grades 7 through 12 (N = 1,082). Explanatory item response models with two-level and three-level cross-classified structures were used to construct an empirical model of the joint interactions among reader and text, the end product of which is reading comprehension.
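
As a minimal illustration of the comparison (a sketch, not the authors’ code; the data file, variable names, and variational-Bayes estimation are assumptions), both cross-classified logistic specifications can be fit in statsmodels, with the three-level model adding passage and school variance components:

    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # Hypothetical long-format data: one row per person-item response, with
    # reader covariates (e.g., decoding, vocab) and text features (e.g., cohesion).
    df = pd.read_csv("responses_long.csv")

    fixed = "correct ~ decoding * cohesion + vocab * cohesion"  # reader-text interactions

    # Two-level cross-classified model: persons crossed with items only.
    m2 = BinomialBayesMixedGLM.from_formula(
        fixed,
        vc_formulas={"person": "0 + C(person)", "item": "0 + C(item)"},
        data=df,
    ).fit_vb()

    # Three-level cross-classified model: adds passages (items within passages)
    # and schools as additional variance components.
    m3 = BinomialBayesMixedGLM.from_formula(
        fixed,
        vc_formulas={
            "person": "0 + C(person)",
            "item": "0 + C(item)",
            "passage": "0 + C(passage)",
            "school": "0 + C(school)",
        },
        data=df,
    ).fit_vb()

    # Compare fixed-effect estimates and their posterior standard deviations.
    print(m2.summary())
    print(m3.summary())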

Results and Conclusions: The results suggested that ignoring clustering effects exerted a greater influence on the estimates of fixed effects than on their standard errors. The absolute magnitudes of the regression coefficients were on average larger, and their standard errors smaller, in the model with the three-level cross-classified structure, although the clustering effects were less pronounced for the standard errors. Consequently, the model with the three-level cross-classified structure yielded more statistically significant reader-text interactions. The current study illustrates the importance of specifying all necessary random effects in explanatory item response models.

Word meanings and reading comprehension: Exploring the relationship across target word characteristics

First Author/Chair: Joshua Lawrence -- University of Oslo
Additional authors/chairs: 
Rebecca Knoph; Jin Kyoung Hwang; Saemi Park; Paul De Boeck

Purpose: Our research question is: Which word features characterize the vocabulary most strongly associated with general reading comprehension ability?

Method: Data are from the pretest of a large randomized trial of the Word Generation program. The analytic sample includes monolingual English-speaking students attending sixth (n = 429), seventh (n = 412), and eighth (n = 874) grade. All students completed a standardized reading comprehension assessment and a vocabulary assessment of academic words. We coded the target words using well-established measures of frequency and dispersion, number of senses and meanings, and semantic diversity. We used nested logistic models to predict the probability of a correct response, controlling for item- and person-level covariates. We tested two-way interactions between reading comprehension and item-level characteristics to determine whether certain dimensions of vocabulary were strongly associated with reading ability after controlling for the main effects of ability and the item features.
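
A minimal sketch of this nested-model logic (the file and variable names are hypothetical, and the study’s models also included additional person- and item-level covariates):

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import chi2

    # Hypothetical long-format data: one row per student-word response, with
    # correct (0/1), rc (standardized reading comprehension), and word features.
    df = pd.read_csv("wg_pretest_long.csv")

    # Base model: main effects of reading ability and word features.
    m0 = smf.logit("correct ~ rc + log_freq + n_senses + sem_div", data=df).fit()

    # Interaction model: does the association between rc and knowing a word
    # vary with the word's frequency, polysemy, and semantic diversity?
    m1 = smf.logit("correct ~ rc * (log_freq + n_senses + sem_div)", data=df).fit()

    # Likelihood-ratio test comparing the nested models (3 added interaction terms).
    lr = 2 * (m1.llf - m0.llf)
    print("LR =", lr, "p =", chi2.sf(lr, 3))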

Results: There were significant interactions between reading comprehension and frequency, and between reading comprehension and number of senses and meanings. We tested the robustness of these findings with alternative models and across cohorts.

Conclusions: Middle school students are more likely to know frequent academic words and words with more senses and meanings, and better readers know more words than other readers do. Even after controlling for these effects, there is still a relationship between a target word’s frequency and number of meanings and skilled reading. We interpret these findings with reference to the literature on the multiple pathways by which vocabulary knowledge supports reading comprehension.

The Comparative Volume of Academic Vocabulary in Elementary-Grade U.S. Science, Mathematics, and Social Studies Disciplinary Textbooks

First Author/Chair: Jeff Elmore -- MetaMetrics
Additional authors/chairs: 
Jackie Eunjung Relyea; Jill Fitzgerald

Purpose: Although researchers have advocated plentiful textual exposure to academic vocabulary, even for young children, empirical accounts are lacking concerning the presence of academic vocabulary in disciplinary textbooks at any grade level. Textual academic vocabulary demands may vary across domains, potentially placing a differential burden on children’s text comprehension and domain learning. We asked: Does the change in volume of academic vocabulary across grades one through five differ across three domains (science, mathematics, and social studies)? We also assessed the same issue for a subset of exceptionally specialized academic vocabulary called “high jargon” words.

Method: The data source was two digitized first- through fifth-grade textbook programs in each of three domains: science, mathematics, and social studies. Domain-specific academic words and a subset of “high jargon” words were identified computationally. Multilevel Poisson growth models were fit for both research questions, with five grades nested within two textbook programs nested within three domains.
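
As a minimal illustration of the modeling approach (a sketch, not the authors’ code; the file and variable names are hypothetical, and the original analysis may have used different software and parameterization), a multilevel Poisson growth model with a quadratic grade term and program-level random intercepts could be specified as:

    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

    # Hypothetical data: one row per grade x program x domain cell, with the
    # count of domain-specific academic words (n_acad) observed in that cell.
    df = pd.read_csv("textbook_counts.csv")

    model = PoissonBayesMixedGLM.from_formula(
        # The quadratic grade term captures the curvilinear trajectory; the
        # grade-by-domain interaction tests whether growth differs by domain.
        "n_acad ~ grade + I(grade**2) + C(domain) + grade:C(domain)",
        vc_formulas={"program": "0 + C(program)"},  # programs as random intercepts
        data=df,
    )
    result = model.fit_vb()  # variational Bayes estimation
    print(result.summary())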

Results: The trajectory across grades for the volume of academic vocabulary was curvilinear in all three domains, with sharp acceleration in the number of academic words beginning in fourth grade. The growth of “high jargon” words accelerated dramatically in science compared with the other two domains. Social studies showed the least change in the volume of “high jargon” words over time.

Conclusion: As the elementary grades increased, students’ disciplinary textbook exposure placed a particularly high demand on academic science vocabulary knowledge compared with other domains, potentially reflecting a heavier burden for comprehending and learning from science textbooks.