Cortical encoding of phonetic onsets of both attended and ignored speech in hearing impaired individuals

PLoS One. 2024 Nov 22;19(11):e0308554. doi: 10.1371/journal.pone.0308554. eCollection 2024.

Abstract

Hearing impairment alters the sound input received by the human auditory system, reducing speech comprehension in noisy multi-talker auditory scenes. Despite such difficulties, neural signals have been shown to encode the attended speech envelope more reliably than the envelope of ignored sounds, reflecting the attentional intent of listeners with hearing impairment (HI). This result raises an important question: what speech-processing stage could reflect the difficulty in attentional selection, if not envelope tracking? Here, we use scalp electroencephalography (EEG) to test the hypothesis that the neural encoding of phonological information (i.e., phonetic boundaries and phonological categories) is affected by HI. In a cocktail-party scenario, such phonological difficulty might be reflected in an overrepresentation of phonological information for both attended and ignored speech sounds, with detrimental effects on the ability to focus effectively on the speaker of interest. To investigate this question, we re-analysed an existing dataset in which EEG signals were recorded while participants with HI, fitted with hearing aids, attended to one speaker (target) and ignored a competing speaker (masker) as well as spatialised multi-talker background noise. Multivariate temporal response function (TRF) analyses indicated stronger encoding of phonological information for the target than for the masker speech stream. Follow-up analyses aimed at disentangling the encoding of phonological categories from that of phonetic boundaries (phoneme onsets) revealed that neural signals encoded phoneme onsets for both target and masker streams. This result contrasts with previously published findings in normal-hearing (NH) participants and supports our hypothesis that speech comprehension difficulties emerge from a robust phonological encoding of both target and masker.
Finally, the neural encoding of phoneme onsets was stronger for the masker speech, pointing to a possible neural basis for the higher distractibility experienced by individuals with HI.
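The forward TRF analysis named above is, at its core, a regularised lagged linear regression from stimulus features (e.g., envelope or phoneme-onset vectors) to EEG. The following is a minimal illustrative sketch in Python/NumPy, not the authors' actual pipeline (TRF studies typically use dedicated toolboxes such as the mTRF-Toolbox); the lag range, ridge parameter, and simulated data are assumptions for demonstration only.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a design matrix of time-lagged copies of the stimulus.
    stim: (T, F) array of stimulus features; returns (T, F * len(lags))."""
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0   # zero-pad samples with no history
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * F:(i + 1) * F] = shifted
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regularised forward TRF mapping stimulus features to EEG.
    Solves (X'X + alpha*I) w = X'y for the lag-by-feature weights."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

def predict(stim, w, lags):
    """Predict EEG from stimulus features using fitted TRF weights."""
    return lagged_design(stim, lags) @ w

# Illustrative use on simulated data: EEG is the stimulus convolved
# with a known kernel plus noise; the fitted TRF recovers the kernel.
rng = np.random.default_rng(0)
T = 2000
stim = rng.standard_normal((T, 1))
kernel = np.array([0.0, 0.5, 1.0, 0.7, 0.3])          # "true" response
eeg = (np.convolve(stim[:, 0], kernel)[:T]
       + 0.5 * rng.standard_normal(T))[:, None]
lags = list(range(10))
w = fit_trf(stim[:1500], eeg[:1500], lags, alpha=1.0)  # train on first 75%
pred = predict(stim[1500:], w, lags)                   # predict held-out EEG
r = np.corrcoef(pred[:, 0], eeg[1500:, 0])[0, 1]
```

In an attention-decoding setting, the same model is fitted once with the target's features and once with the masker's; comparing held-out prediction accuracy (the correlation `r` above) between the two fits quantifies how strongly each stream is encoded.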

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Aged
  • Attention / physiology
  • Electroencephalography*
  • Female
  • Hearing Loss / physiopathology
  • Humans
  • Male
  • Middle Aged
  • Phonetics*
  • Speech / physiology
  • Speech Perception* / physiology

Grants and funding

This work was conducted with the financial support of the William Demant Fonden (https://www.williamdemantfonden.dk/), grants 21-0628 and 22-0552, and of the Science Foundation Ireland Centre for Research Training in Artificial Intelligence (https://www.crt-ai.ie/), under Grant No. 18/CRT/6223. This research was supported by the Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre (https://www.sfi.ie/sfi-research-centres/adapt/) at Trinity College Dublin. ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology, is funded by Science Foundation Ireland through the SFI Research Centres Programme. There was no additional external funding received for this study.