Tracing the temporal structure of acoustic events is crucial for efficiently adapting to dynamic changes in the environment. In turn, regularity in temporal structure may facilitate tracing of the acoustic signal and its likely spatial source. However, temporal processing in audition extends beyond a domain-general facilitatory function. Temporal regularity and temporal order of auditory events correspond to contextually extracted, statistically sampled relations among sounds. These relations form the backbone of prediction in audition, determining both when an event is likely to occur (temporal structure) and what type of event can be expected at a specific point in time (formal structure, e.g. spectral information). Here, we develop a model of temporal processing in audition and speech that involves a division of labor between the cerebellum and the basal ganglia in tracing acoustic events in time. The cerebellum and its associated thalamo-cortical connections subserve the automatic encoding of event-based temporal structure with high temporal precision, whereas the basal ganglia-thalamo-cortical system engages in the attention-dependent evaluation of longer-range intervals. Recent electrophysiological and neurofunctional evidence suggests that neocortical processing of spectral structure relies on the concurrent extraction of event-based temporal information. We propose that spectrotemporal predictive processes may be facilitated by subcortical coding of relevant changes in sound energy as temporal event markers.
Copyright © 2011 Elsevier B.V. All rights reserved.