Patient studies suggest that speech and environmental sounds are differentially processed by the left and right hemispheres. Here, using functional imaging in normal subjects, we compared semantic processing of spoken words with equivalent processing of environmental sounds, after controlling for low-level perceptual differences. Words enhanced activation in left anterior and posterior superior temporal regions, while environmental sounds enhanced activation in a right posterior superior temporal region. This left/right dissociation was unchanged by different attentional/working memory contexts, but it was specific to tasks requiring semantic analysis. While semantic processing involves widely distributed networks in both hemispheres, our results support the hypothesis of dual access routes to the semantic system: one for verbal and one for nonverbal material.