Understanding the neural mechanisms underlying object recognition is one of the fundamental challenges of visual neuroscience. While neurophysiology experiments have provided evidence for a "simple-to-complex" processing model based on a hierarchy of increasingly complex image features, behavioral and fMRI studies of face processing have been interpreted as incompatible with this account. We present a neurophysiologically plausible, feature-based model that quantitatively accounts for face discrimination characteristics, including face inversion and "configural" effects. The model predicts that face discrimination is based on a sparse representation of units selective for face shapes, without the need to postulate additional, "face-specific" mechanisms. We derive and test predictions that quantitatively link the tuning of model FFA face neurons, neural adaptation measured in an fMRI rapid-adaptation paradigm, and face discrimination performance. The experimental data are in excellent agreement with the model prediction that discrimination performance should asymptote as faces become dissimilar enough to activate different neuronal populations.
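To make the asymptote prediction concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a population of units with Gaussian tuning along a single hypothetical "face shape" axis and uses the Euclidean distance between population response patterns as a stand-in for discriminability. The tuning width, number of units, and stimulus values are arbitrary choices for illustration. As two faces are moved farther apart along the shape axis, the response-pattern distance first grows and then saturates once the faces activate essentially non-overlapping sets of units.

```python
import numpy as np

# Assumed population: units with Gaussian tuning along one "face shape" axis.
n_units = 200
centers = np.linspace(-20.0, 20.0, n_units)  # preferred shapes (illustrative)
sigma = 1.0                                  # tuning width (illustrative)

def population_response(shape):
    """Response of every unit to a stimulus at position `shape` on the axis."""
    return np.exp(-(shape - centers) ** 2 / (2 * sigma ** 2))

reference = 0.0  # reference face position on the shape axis
for delta in [0.5, 1.0, 2.0, 4.0, 8.0]:
    # Euclidean distance between the two population response patterns,
    # used here as a simple proxy for discriminability.
    d = np.linalg.norm(population_response(reference) -
                       population_response(reference + delta))
    print(f"shape difference {delta:>4.1f} -> response-pattern distance {d:.3f}")
```

Running this prints distances that increase with the shape difference and then level off: once the two faces are more than a few tuning widths apart, making them even more dissimilar no longer changes the response-pattern distance, mirroring the asymptotic discrimination performance described above.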