Front Neurosci. 2019 Oct 15;13:1093.
doi: 10.3389/fnins.2019.01093. eCollection 2019.

Auditory, Cognitive, and Linguistic Factors Predict Speech Recognition in Adverse Listening Conditions for Children With Hearing Loss


Ryan W McCreery et al. Front Neurosci.

Abstract

Objectives: Children with hearing loss listen and learn in environments with noise and reverberation, but they perform more poorly in noise and reverberation than children with normal hearing. Even with amplification, individual differences in speech recognition are observed among children with hearing loss. Few studies have examined the factors that support speech understanding in noise and reverberation for this population. This study applied the theoretical framework of the Ease of Language Understanding (ELU) model to examine the influence of auditory, cognitive, and linguistic factors on speech recognition in noise and reverberation for children with hearing loss.

Design: Fifty-six children with hearing loss and 50 age-matched children with normal hearing, all 7-10 years old, participated in this study. Aided sentence recognition was measured using an adaptive procedure to determine the signal-to-noise ratio for 50% correct recognition (SNR50) in steady-state speech-shaped noise. SNR50 was also measured in noise plus a simulation of a 600 ms reverberation time. Receptive vocabulary, auditory attention, and visuospatial working memory were measured. Aided speech audibility, indexed by the Speech Intelligibility Index, was measured through the hearing aids of the children with hearing loss.

Results: Children with hearing loss had poorer aided speech recognition in noise and reverberation than children with normal hearing. Children with higher receptive vocabulary and working memory skills had better speech recognition in noise and in noise plus reverberation than peers with poorer skills in these domains. Children with hearing loss who had higher aided audibility had better speech recognition in noise and reverberation than peers with poorer audibility. Better audibility was also associated with stronger language skills.

Conclusions: Children with hearing loss are at considerable risk for poor speech understanding in noise and in conditions combining noise and reverberation. Consistent with the predictions of the ELU model, children with stronger vocabulary and working memory abilities performed better than peers with poorer skills in these domains. Better aided speech audibility was associated with better recognition in both the noise and the noise plus reverberation conditions for children with hearing loss. Speech audibility had direct effects on speech recognition in noise and reverberation, and cumulative effects on speech recognition in noise through a positive association with language development over time.
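The adaptive SNR50 procedure described in the Design section can be sketched as a simple one-down/one-up staircase, which converges on the 50% correct point of the psychometric function. This is a hypothetical illustration only: the study's actual tracking rule, step size, sentence materials, and stopping criterion are not specified in the abstract, and `recognize` is an assumed caller-supplied scoring function.

```python
import math
import random

def snr50_staircase(recognize, start_snr=10.0, step=2.0, reversals_needed=8):
    """Estimate SNR50 with a one-down/one-up adaptive staircase.

    `recognize(snr)` must return True if the listener repeated the
    sentence correctly at that SNR (hypothetical interface). The SNR
    decreases after a correct response (harder) and increases after an
    incorrect one (easier); the mean SNR at the reversal points
    estimates the 50% correct threshold.
    """
    snr = start_snr
    last_correct = None
    reversal_snrs = []
    while len(reversal_snrs) < reversals_needed:
        correct = recognize(snr)
        # A reversal occurs when the response differs from the previous one.
        if last_correct is not None and correct != last_correct:
            reversal_snrs.append(snr)
        last_correct = correct
        snr += -step if correct else step
    return sum(reversal_snrs) / len(reversal_snrs)

def simulated_listener(snr, true_snr50=0.0, slope=0.5):
    """Logistic psychometric function for a simulated listener
    whose true SNR50 is 0 dB (demo values, not study data)."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_snr50)))
    return random.random() < p_correct

# Demo: the estimate should land near the simulated listener's true SNR50.
random.seed(0)
estimate = snr50_staircase(simulated_listener)
```

One-down/one-up tracking targets 50% correct by construction; procedures targeting other points (e.g., 71% with two-down/one-up) would need a different rule.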

Keywords: children; hearing aids; hearing loss; noise; reverberation; speech recognition.

Figures

Figure 1
Peabody Picture Vocabulary Test standard scores for children with hearing loss (HL; green) and children with normal hearing (NH; blue). Box plots represent the median (middle line) and interquartile range of the data. The colored regions around each box plot are symmetrical representations of the distribution of data points in each condition.
Figure 2
NEPSY-II Auditory Attention combined scaled scores for children with hearing loss (HL; green) and children with normal hearing (NH; blue). Box plots represent the median (middle line) and interquartile range of the data. The colored regions around each box plot are symmetrical representations of the distribution of data points in each condition.
Figure 3
Automated Working Memory Assessment Odd-One-Out subtest standard scores for children with hearing loss (HL; green) and children with normal hearing (NH; blue). Box plots represent the median (middle line) and interquartile range of the data. The colored regions around each box plot are symmetrical representations of the distribution of data points in each condition.
Figure 4
The signal-to-noise ratio (SNR) for 50% correct sentence recognition for children with hearing loss (HL; green) and children with normal hearing (NH; blue). The top panel shows data for noise, and the bottom panel shows data for noise + reverberation. Box plots represent the median (middle line) and interquartile range of the data. The colored regions around each box plot are symmetrical representations of the distribution of data points in each condition.


