The superior colliculus (SC) has been implicated in mediating residual visual function in hemianopic patients, and has been shown to be capable of using multiple sensory cues to facilitate its localization functions. The aim of the present study was to examine the possibility that the SC could mediate covert visual processes, via multisensory integration of auditory and visual stimuli, in patients with visual field loss. To this end, hard-to-localize auditory targets were presented alone (unimodal condition) or with a visual stimulus (cross-modal condition) in either hemifield and at various spatial (0°, 16°, or 32°) and temporal (0 or 500 ms) disparities. The results showed substantial field-specific differences. As expected, a visual stimulus in the intact hemifield induced a strong visual bias in auditory localization regardless of spatial disparity, and did so even when the two stimuli were temporally offset. In these spatially disparate conditions, localization accuracy was markedly reduced. In the blind hemifield, however, the visual stimulus affected auditory localization only when it was coincident with the auditory target in both space and time. In this circumstance, auditory localization performance was markedly enhanced. This result strongly suggests that covert visual processes remain active in hemianopia, though they differ from those in the normal hemifield. A likely explanation of these differences is that enhancement and visual bias depend on different neural pathways: the former on circuits involving the superior colliculus, a structure that integrates cues from multiple senses to facilitate orientation and localization; and the latter on geniculo-striate circuits that support more detailed analyses of the visual scene.
Overall, the present results not only deepen our understanding of covert visual processes in hemianopic patients, but also advance our knowledge of how different brain regions contribute to the processing of cross-modal information.