Sound localization requires a comparison between the inputs to the left and right ears. One important aspect of this comparison is the difference in arrival time at the two ears, also called the interaural time difference (ITD). A prevalent model of ITD detection, consisting of delay lines and coincidence-detector neurons, was proposed by Jeffress (J Comp Physiol Psychol 41:35-39, 1948). As an extension of the Jeffress model, the process of detecting and encoding ITD has been compared to an effective cross-correlation between the input signals to the two ears. Because the cochlea performs a spectrotemporal decomposition of the input signal, this cross-correlation takes place over narrow frequency bands. Since the cochlear tonotopy is arranged in series, sounds of different frequencies trigger neural activity with different temporal delays. Thus, matching the frequency tuning of the left and right inputs to the cross-correlator units becomes a 'timing' issue. These properties of auditory transduction gave theoretical support to an alternative model of ITD detection based on a bilateral mismatch in frequency tuning, called the 'stereausis' model. Here we first review the current literature on the owl's nucleus laminaris, the equivalent of the medial superior olive in mammals, which is the site where ITD is detected. Subsequently, we use reverse correlation analysis and stimulation with uncorrelated sounds to extract the effective monaural inputs to the cross-correlator neurons. We show that when the left and right inputs to the cross-correlators are defined in this manner, the computation performed by coincidence-detector neurons satisfies the conditions of cross-correlation theory. We also show that the spectra of the left and right inputs are matched, which is consistent with the predictions of the classic model put forth by Jeffress.
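To make the notion of band-limited interaural cross-correlation concrete, the following Python sketch delays one ear's copy of a broadband noise by a fixed ITD, band-limits both copies to a single frequency channel (standing in for one cochlear band), and recovers the ITD as the lag that maximizes the cross-correlation. This is a minimal illustration of the cross-correlation idea, not the analysis used in the study; the sampling rate, band edges, ITD value, and filter choices are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative parameters (assumptions, not values from the study).
fs = 50_000              # sampling rate (Hz)
true_itd = 200e-6        # right ear lags the left by 200 microseconds
band = (4_000, 6_000)    # one narrow frequency channel, standing in for a cochlear band
max_lag_s = 260e-6       # search only physiologically plausible ITDs

rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)          # 100 ms of broadband noise

# The right-ear signal is a delayed copy of the left-ear signal (a pure ITD).
delay = int(round(true_itd * fs))
left, right = source, np.roll(source, delay)

# Band-limit both ears, mimicking the cochlea's spectrotemporal decomposition.
b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
left_bp, right_bp = filtfilt(b, a, left), filtfilt(b, a, right)

# Cross-correlate the band-limited signals; the best-matching lag estimates the ITD.
# Restricting the lag range avoids the phase ambiguity inherent to narrowband signals.
max_lag = int(round(max_lag_s * fs))
lags = np.arange(-max_lag, max_lag + 1)
cc = np.array([np.dot(left_bp, np.roll(right_bp, -lag)) for lag in lags])
estimated_itd = lags[np.argmax(cc)] / fs

print(f"true ITD: {true_itd * 1e6:.0f} us   estimated ITD: {estimated_itd * 1e6:.0f} us")
```

Note that the sketch assumes the left and right channels share the same passband, as in a Jeffress-type scheme with matched frequency tuning; a stereausis-type scheme would instead derive the internal delay from a deliberate mismatch between the left and right best frequencies.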