Classification images provide an important new method for determining which parts of the stimulus are used to make perceptual decisions, and a new tool for measuring the template an observer uses to accomplish a task. Here we introduce a new method that uses one-dimensional sums of sinusoids as both test stimuli (discrete frequency patterns [DFP]) and noise. We use this method to study and compare the templates used to detect a target and to discriminate the target's position in central and parafoveal vision. Our results show that, unsurprisingly, the classification images for detection in both foveal and parafoveal vision resemble the DFP test stimulus, but are considerably broader in spatial frequency tuning than that of the ideal observer. In contrast, the classification images for foveal position discrimination are not ideal and depend on the size of the position offset. Over a range of offsets from near threshold to about 90 arc sec, our observers appear to use a peak strategy (responding to the location of the peak of the luminance profile of the target plus noise). Position acuity is much poorer in the parafovea, and this is reflected in the reduced root efficiency (i.e., the square root of efficiency) and in the coarse classification images for peripheral position discrimination. The peripheral position template is tuned to low spatial frequencies.
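As a rough illustration of the approach summarized above, the following Python sketch builds a one-dimensional discrete frequency pattern as a sum of sinusoids, adds sum-of-sinusoid noise, simulates a yes/no detection experiment with a simple template-matching observer, and forms a classification image using the standard combination of trial-wise noise averages (hits plus false alarms minus misses minus correct rejections). All parameter values, the frequency set, the simulated observer, and names such as dfp and mean_noise are illustrative assumptions, not the stimuli or analysis used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stimulus construction (illustrative values only) ---
n_samples = 256                                          # spatial samples across the 1-D stimulus
x = np.linspace(0.0, 1.0, n_samples, endpoint=False)     # spatial position (arbitrary units)
frequencies = np.arange(1, 9)                            # assumed set of discrete frequencies

def dfp(amplitudes, phases):
    """One-dimensional discrete frequency pattern: a sum of sinusoids."""
    return sum(a * np.cos(2 * np.pi * f * x + p)
               for a, f, p in zip(amplitudes, frequencies, phases))

# Hypothetical target: equal-amplitude, zero-phase components
target = dfp(np.ones(len(frequencies)), np.zeros(len(frequencies)))

# --- Simulated yes/no detection experiment with sum-of-sinusoid noise ---
n_trials = 5000
signal_present = rng.integers(0, 2, n_trials).astype(bool)
noise_amps = rng.normal(0.0, 0.3, size=(n_trials, len(frequencies)))       # assumed noise contrast
noise_phases = rng.uniform(0, 2 * np.pi, size=(n_trials, len(frequencies)))
noises = np.array([dfp(a, p) for a, p in zip(noise_amps, noise_phases)])

# Toy observer: cross-correlates the stimulus with a broad internal template plus internal noise
template = dfp(np.exp(-0.2 * frequencies), np.zeros(len(frequencies)))      # assumed template
stimuli = noises + np.where(signal_present[:, None], target, 0.0)
decision_var = stimuli @ template + rng.normal(0.0, 0.1 * (template @ template), n_trials)
responded_present = decision_var > np.median(decision_var)

# --- Classification image: mean noise sorted by stimulus x response ---
def mean_noise(mask):
    return noises[mask].mean(axis=0)

classification_image = (mean_noise(signal_present & responded_present)      # hits
                        + mean_noise(~signal_present & responded_present)   # false alarms
                        - mean_noise(signal_present & ~responded_present)   # misses
                        - mean_noise(~signal_present & ~responded_present)) # correct rejections
```

In this sketch the classification image recovers an estimate of the simulated observer's template; projecting it onto the discrete frequency components would give its spatial frequency tuning for comparison with the test stimulus.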