Ear Hear. 2019 Jul/Aug;40(4):961-980.
doi: 10.1097/AUD.0000000000000681.

How Do You Deal With Uncertainty? Cochlear Implant Users Differ in the Dynamics of Lexical Processing of Noncanonical Inputs

Bob McMurray et al. Ear Hear. 2019 Jul/Aug.
Free PMC article

Abstract

Objectives: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty.

Design: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls.

Results: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes.

Conclusions: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than ATH listeners do. This may allow them to cope more flexibly with mispronunciations.


Figures

Figure 1:
Average accuracy as a function of listener group and degree of mismatch for onset mispronunciations (panel A) and offset mispronunciations (panel B).
Figure 2:
A) Average fixations to the target and the three unrelated objects as a function of time for ATH listeners on correct trials; B) the same for both groups of CI users; C) target fixations for ATH and CI listeners.
Figure 3:
Effect of mispronunciation type (single feature only) on fixations to the target for A) ATH listeners; B) CIE users and C) CIAE users.
Figure 4:
Logistic function used to model target fixations (equation given in the figure inset). This function has four parameters: min (the lower asymptote), max (the upper asymptote), crossover (the point in time where the function is halfway between min and max), and slope (the derivative at the crossover).
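The four-parameter curve described above can be sketched in code. The exact equation appears only in the figure inset, so the parameterization below is an assumption: it uses the common bdots-style form, chosen so that the slope parameter equals the derivative of the curve at the crossover; all parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def logistic4(t, lo, hi, crossover, slope):
    """Four-parameter logistic curve for target fixations over time.

    lo/hi are the lower/upper asymptotes, crossover is the time at which
    the curve reaches (lo + hi) / 2, and slope is the derivative of the
    curve at the crossover. Assumed bdots-style parameterization.
    """
    return lo + (hi - lo) / (1.0 + np.exp(4.0 * slope * (crossover - t) / (hi - lo)))

# Illustrative parameter values (not taken from the paper):
t = np.linspace(0, 2000, 501)                  # time in msec
fix = logistic4(t, 0.05, 0.85, 700.0, 0.0015)  # fixation proportion
```

Writing the exponent as 4·slope·(crossover − t)/(max − min) is what makes the slope parameter directly interpretable as the rate of change (fixation proportion per msec) at the curve's midpoint.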
Figure 5:
Timing (A) and Max (B) parameters as a function of location of mispronunciation and CI use (x axis), and type of mispronunciation (grouped bars).
Figure 6:
BDOTS analysis of ATH listeners hearing correct or single-feature mismatch stimuli. Black bars indicate significant regions. See Table 5 for statistical results. A) BDOTS directly comparing looks to the target between stimulus conditions (Table 5, Row 1); B) BDOTS analysis of the difference in fixations (MP disruption measure) comparing onset and offset mispronunciations.
Figure 7:
Comparison of the MP disruption effect (difference in target fixations between correct and incorrect forms) between CI users and ATH listeners. Significant regions are indicated with black bars (see Table 6 for corresponding statistics). A high MP disruption indicates more difficulty created by the non-canonical form. A) Single-feature onset mispronunciations; B) Multi-feature onset mispronunciations; C) Single-feature offset mispronunciations; D) Multi-feature offset mispronunciations.
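The MP disruption measure described in the figure caption and Results can be sketched as a per-time-bin difference between fixation curves, summarized over the early window mentioned in the Results. The curves, the 4-msec sampling step, and the exact windowing below are illustrative assumptions, not the paper's data or analysis code.

```python
import numpy as np

# Illustrative fixation-proportion curves (synthetic, not the paper's data).
t = np.arange(0, 2000, 4)                                # time bins, msec
correct = 0.05 + 0.80 / (1 + np.exp(-(t - 700) / 120))   # correctly pronounced
mispron = 0.05 + 0.65 / (1 + np.exp(-(t - 850) / 140))   # mispronounced form

# MP disruption: drop in target fixations caused by the non-canonical form;
# higher values indicate more difficulty.
disruption = correct - mispron
early_disruption = disruption[t < 900].mean()            # early-window summary
```

Averaging over the first 900 msec mirrors the early-processing window in which the Results report that both CI groups showed less disruption than ATH listeners.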
