I propose that pre-attentive computational mechanisms in primary visual cortex (V1) create a saliency map. This map assigns higher responses to more salient image locations; the responses are those of conventional V1 cells tuned to input features such as orientation and color. Hence no separate feature maps, nor any subsequent combination of them, is needed to create a saliency map. I use a model to show that this saliency map accounts for the way the relative difficulties of visual search tasks depend on the features and spatial configurations of targets and distractors. This proposal links psychophysical behavior to V1 physiology and anatomy, and thereby makes testable predictions.
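The core idea can be illustrated with a toy numerical sketch. Everything below is a hypothetical simplification, not the actual model: item "responses" are idealized as binary activations of two orientation channels, and V1's contextual influences are reduced to a crude iso-orientation surround suppression with an assumed weight `w`. The point it demonstrates is only the abstract's central claim: saliency at each location is read out as the highest (contextually modulated) V1 response there, with no summation of separate feature maps.

```python
import numpy as np

# Toy display: a 5x5 grid of bars, vertical (0) everywhere except one
# horizontal (1) target -- a classic orientation pop-out stimulus.
orient = np.zeros((5, 5), dtype=int)
orient[2, 3] = 1  # the odd-one-out target

# Idealized V1 responses: one map per orientation channel; a cell fires 1.0
# when an item of its preferred orientation occupies its receptive field.
resp = np.stack([(orient == k).astype(float) for k in (0, 1)])  # (2, 5, 5)

def iso_suppression(r, w=0.15):
    """Crude stand-in for iso-orientation surround suppression: each cell's
    response is reduced by w times the summed SAME-channel activity in its
    3x3 surround, floored at zero. The weight w is an arbitrary assumption."""
    out = np.empty_like(r)
    for ch in range(r.shape[0]):
        padded = np.pad(r[ch], 1)
        surround = sum(padded[1 + di:6 + di, 1 + dj:6 + dj]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
        out[ch] = np.clip(r[ch] - w * surround, 0.0, None)
    return out

# Saliency at each location = the maximum modulated response across channels.
# No channel-wise feature maps are summed into a master map.
saliency = iso_suppression(resp).max(axis=0)
target = np.unravel_index(saliency.argmax(), saliency.shape)
```

In this sketch the uniformly oriented distractors suppress one another (many same-orientation neighbors), while the orientation-singleton target escapes suppression and so carries the highest response, making `target` land on the odd bar at `(2, 3)`. This mirrors, in a minimal way, how intracortical interactions alone could make a feature contrast pop out.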