The automatic detection of deviations within a constant sine-wave tone is confined to approximately the initial 350 ms. When a deviation occurs beyond this critical limit, the mismatch negativity (MMN), a deviance-related component of the event-related potential, is largely attenuated or even absent. However, for time-variant acoustic stimuli such as speech sounds or tonal patterns, MMN is also elicited by deviations beyond the initial 350 ms. We consider two hypotheses that could explain the MMN to time-variant sounds. One is that the terminal part of such sounds is represented because the spectral information varies over time (spectral-variation hypothesis). The other is that transients occurring in time-variant signals help to segment long sounds into smaller units, each no longer than the critical 350 ms (segmentation hypothesis). We measured MMN to duration shortenings (deviants) embedded in a sequence of 1000-ms-long standard tones of increasing frequency (sweeps). The sweeps either did or did not contain a noise burst. The absence of MMN to the duration deviant in the sweep without a noise burst rules out the spectral-variation hypothesis, whereas the presence of MMN to the duration deviant in the sweep with a noise burst supports the segmentation hypothesis. Thus, the results suggest a temporal constraint inherent to the processing of unstructured/unsegmented long tones. We argue that transients within a sound act as segmentation cues, providing an automatic sound representation against which deviations can be detected.
Copyright © 2010 Elsevier B.V. All rights reserved.