Many models of the processing of printed or spoken words, objects, or faces propose the existence of lexicons: systems of local representations of the forms of such stimuli. Partisans of the distributed-representation connectionist approach to cognitive modelling deny this. An experimental paradigm of key theoretical importance here is lexical decision, together with its analogue in the domain of objects, object decision. How does each theoretical camp account for our ability to perform these two tasks? Localists say that the tasks are done by matching, or failing to match, a stimulus to a local representation in a lexicon. Advocates of distributed representations often do not seek to explain these two tasks; when they do, they propose that the patterns of activation a stimulus evokes in a semantic system can be used to discriminate words from nonwords, or real objects from false objects. The distributed-representation account of lexical and object decision therefore predicts that performance on these tasks can never be normal in patients whose semantic system is impaired, nor in patients who cannot access semantics normally from the stimulus domain being tested. Yet numerous patients with impaired semantics but normal lexical or object decision have been reported in the literature, indicating that semantic access is not needed for normal performance on these tasks. Such results support the localist form of modelling rather than the distributed-representation approach.
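The contrast between the two accounts, and why only the distributed-representation account predicts that semantic impairment must disrupt lexical decision, can be made concrete with a toy sketch. Everything here is an illustrative assumption, not a model from the literature: the miniature lexicon, the activation values, the damage parameter, and the 0.5 threshold are all invented for exposition.

```python
# Toy contrast between the two accounts of lexical decision.
# All names, values, and thresholds are illustrative assumptions.

LEXICON = {"cat", "dog", "house"}  # hypothetical orthographic lexicon

def localist_decision(stimulus):
    # Localist account: accept iff the stimulus matches a stored local
    # (whole-word) representation; the semantic system is not consulted.
    return stimulus in LEXICON

# Hypothetical strength of the semantic pattern each stimulus evokes;
# unfamiliar letter strings evoke only weak, incoherent activation.
SEMANTIC_ACTIVATION = {"cat": 0.9, "dog": 0.85, "house": 0.8}

def distributed_decision(stimulus, semantic_damage=0.0):
    # Distributed account: accept iff the semantic pattern evoked by the
    # stimulus is strong enough. Damage to the semantic system scales
    # activation down, so real words start to be misclassified.
    activation = SEMANTIC_ACTIVATION.get(stimulus, 0.1) * (1 - semantic_damage)
    return activation > 0.5

# With intact semantics, both accounts classify stimuli the same way.
assert localist_decision("cat") and distributed_decision("cat")
assert not localist_decision("cag") and not distributed_decision("cag")

# With severe semantic impairment, only the distributed account is forced
# to predict abnormal lexical decision; the localist route is untouched.
assert localist_decision("cat")
assert not distributed_decision("cat", semantic_damage=0.6)
```

The patient data described above correspond to the last two lines: lexical decision remains normal despite semantic impairment, which the localist sketch accommodates and the distributed sketch, under these assumptions, cannot.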