Phonological processes map sound information onto higher levels of language processing and provide the mechanisms by which verbal information can be temporarily stored in working memory. Despite strong converging evidence for both left lateralization and distributed encoding in the anterior and posterior perisylvian language areas, the nature and brain encoding of phonological subprocesses remain ambiguous. The present study used functional magnetic resonance imaging (fMRI) to investigate the conditions under which anterior (lateral frontal) areas are activated during speech-discrimination tasks that differ in segmental processing demands. In two experiments, subjects performed "same/different" judgments on the first sound of pairs of words. In Experiment 1, the speech stimuli did not require overt segmentation of the initial consonant from the rest of the word, since the "different" pairs varied only in the phonetic voicing of the initial consonant (e.g., dip-tip). In Experiment 2, the speech stimuli required segmentation, since the "different" pairs varied in initial consonant voicing and also contained different vowels and final consonants (e.g., dip-ten). Both speech conditions were compared to a tone-discrimination control condition. Behavioral data showed that subjects were highly accurate in both experiments but revealed different patterns of reaction-time latencies across the two experiments. The imaging data indicated that whereas both speech conditions showed superior temporal activation relative to tone discrimination, only Experiment 2 showed consistent evidence of frontal activity. Taken together, the results of Experiments 1 and 2 suggest that phonological processing per se does not necessarily recruit frontal areas. We postulate that frontal activation is a product of segmentation processes in speech perception or, alternatively, of the working memory demands required for such processing.