A common neural code for frequency- and amplitude-modulated sounds

Nature. 1995 Apr 6;374(6522):537-9. doi: 10.1038/374537a0.


Most naturally occurring sounds are modulated in amplitude or frequency; important examples include animal vocalizations and species-specific communication signals in mammals, insects, reptiles, birds and amphibians. Deciphering the information from amplitude-modulated (AM) sounds is a well-understood process, requiring phase locking of primary auditory afferents to the modulation envelopes. The mechanism for decoding frequency modulation (FM) is not as clear because the FM envelope is flat (Fig. 1). One biological solution is to monitor amplitude fluctuations in frequency-tuned cochlear filters as the instantaneous frequency of the FM sweeps through the passband of these filters. This view postulates an FM-to-AM transduction whereby a change in frequency is transmitted as a change in amplitude. This is an appealing idea because, if such transduction occurs early in the auditory pathway, it provides a neurally economical solution to how the auditory system encodes these important sounds. Here we show that FM and AM sounds must be transformed into a common neural code in the brain stem. Observers can accurately determine whether the phase of an FM presented to one ear is leading or lagging, by only a fraction of a millisecond, the phase of an AM presented to the other ear. A single intracranial image is perceived, the spatial position of which is a function of this phase difference.
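The FM-to-AM transduction described above can be sketched numerically: a flat-envelope FM tone passed through a single frequency-tuned bandpass filter emerges with a strongly modulated envelope, because the filter's gain varies as the instantaneous frequency sweeps through its passband. The sketch below is illustrative only; the sample rate, sweep parameters, and resonator tuning are assumptions for demonstration, not values from the study.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
fs = 16000                       # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)

# FM tone with a flat envelope: instantaneous frequency sweeps
# 800-1200 Hz at a 10 Hz modulation rate.
fc, fd, fmod = 1000.0, 200.0, 10.0
phase = 2 * np.pi * fc * t - (fd / fmod) * np.cos(2 * np.pi * fmod * t)
fm_tone = np.sin(phase)

# A second-order IIR resonator stands in for one cochlear filter,
# tuned below the carrier so the sweep crosses its passband.
f0, Q = 900.0, 10.0
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([alpha, 0.0, -alpha]) / (1 + alpha)
a = np.array([1.0, -2 * np.cos(w0) / (1 + alpha), (1 - alpha) / (1 + alpha)])

def filt(x):
    """Apply the resonator via its direct-form difference equation."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

def mod_depth(x):
    """Envelope modulation depth: rectify, smooth over ~5 ms,
    then (max - min) / (max + min) on the steady-state portion."""
    w = int(0.005 * fs)
    env = np.convolve(np.abs(x), np.ones(w) / w, mode="same")
    env = env[int(0.05 * fs):-int(0.05 * fs)]  # trim transients
    return (env.max() - env.min()) / (env.max() + env.min())

depth_in = mod_depth(fm_tone)          # flat input envelope: depth near zero
depth_out = mod_depth(filt(fm_tone))   # filter output: substantial AM
```

Because the resonator's ringing time (roughly 2Q/2πf0, a few milliseconds here) is much shorter than the 100 ms modulation period, the output amplitude tracks the filter's gain quasi-statically, converting the frequency sweep into an envelope fluctuation at the modulation rate.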

Publication types

  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Brain Stem / physiology*
  • Hearing / physiology*
  • Humans
  • Sound*