Under real-life adverse listening conditions, the interdependence of the brain's analysis of language structure (syntax) and its analysis of the acoustic signal is unclear. In two fMRI experiments, we first tested the functional neural organization of listening to increasingly complex syntax. We then tested parametric combinations of syntactic complexity (three degrees of argument scrambling) with speech signal degradation (noise-band vocoding with three different numbers of bands) to shed light on the mutual dependency of sound and syntax analysis along the neural processing pathways. The left anterior and posterior superior temporal sulcus (STS), as well as the left inferior frontal gyrus (IFG), showed linearly increasing activation with increasing syntactic complexity (Experiment 1). In Experiment 2, this pattern was replicated when syntactic complexity was combined with improving signal quality. However, when syntactic complexity was combined with degrading signal quality, the syntactic complexity effect in the IFG shifted dorsally and medially, and the activation effect in the left posterior STS shifted from posterior toward more middle sections of the sulcus. A distribution analysis of supra- as well as subthreshold data corroborated this pattern of shifts in the anterior and posterior STS and within the IFG. Results suggest a signal quality gradient within the fronto-temporal language network: more signal-bound processing areas, lower in the processing hierarchy, become relatively more recruited for the analysis of complex language input under more challenging acoustic conditions ("upstream delegation"). This finding provides evidence for dynamic resource assignment along the neural pathways in auditory language comprehension.
Copyright © 2011 Elsevier Inc. All rights reserved.