Somatosensory feedback modulates silent word reading performance in children and adults

First Author: Angela Cullum -- University of Alberta
Additional authors/chairs: 
Daniel Aalto; Cassidy Fleming; Alesha Reed; Aadya Thapliyal; Amberley Ostevik; William Hodgetts; Jacqueline Cummine
Keywords: speech, visual word recognition, reading, reading speed, phonological processing
Abstract / Summary: 

Purpose. The print-to-speech model describes how skilled reading relies on feedforward (i.e., motor) and feedback (i.e., auditory and somatosensory) representations. However, the effect of somatosensory perturbations on reading performance in children and adults is not known.
Method. Children (N = 26; 9y 5m) and adult participants (N = 61; 24y 5m) completed three tasks twice: once with a lollipop in the mouth and once without. The tasks varied in their reliance on somatosensory feedback: 1) picture categorization (PC; no feedback needed; e.g., press ‘1’ if the picture is an animal), 2) orthographic lexical decision (OLDT; minimal feedback needed; e.g., press ‘1’ if the letters spell a real word), and 3) phonological lexical decision (PLDT; feedback necessary; e.g., press ‘1’ if the word sounds like a real word).
Results. For children, response time (RT) did not differ in the PC task. The lollipop affected RT in the OLDT (225 ms; effect size .43) and the PLDT (160 ms; effect size .46): children were faster when the lollipop was present than when it was absent. For adults, responses to pseudohomophones (e.g., hoap) in the OLDT were 43 ms faster (effect size .34) when the lollipop was present than when it was absent. No significant RT differences were found for the PLDT or PC tasks in adults.
Conclusions. Together, these findings provide direct evidence for the role of somatosensory information in the reading process, particularly in a developmental population, and advance our understanding of the print-to-speech model.