Vision provides us with an ever-changing neural representation of the world from which we must extract stable object categorizations. We argue that visual analysis involves a fundamental interaction between the observer's top-down categorization goals and the incoming stimulation. Specifically, we discuss the information available for categorization from an analysis of different spatial scales by a bank of flexible, interacting spatial-frequency (SF) channels. We contend that the activity of these channels is not determined simply bottom-up by the stimulus. Instead, we argue that, following perceptual learning, a specification of the diagnostic, object-based SF information dynamically influences the top-down processing of retina-based SF information by these channels. Our analysis of SF processing provides a case study that emphasizes the continuity between higher-level cognition and lower-level perception.
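As an illustration of the bottom-up component only, the decomposition of an image into spatial scales by a bank of SF channels can be sketched as a set of annular bandpass filters in the Fourier domain. This is a minimal sketch, not the authors' model: the function name `sf_filter_bank` and the particular band edges are illustrative assumptions, and the top-down, diagnosticity-driven weighting of channels discussed above is not modeled here.

```python
import numpy as np

def sf_filter_bank(image, bands):
    """Split a grayscale image into spatial-frequency bands.

    bands: list of (lo, hi) cutoffs in cycles/pixel (0 to ~0.707).
    Returns a dict mapping each (lo, hi) band to its filtered image.
    Names and band boundaries are illustrative, not from the source.
    """
    f = np.fft.fftshift(np.fft.fft2(image))          # centered spectrum
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))          # vertical frequencies
    fx = np.fft.fftshift(np.fft.fftfreq(w))          # horizontal frequencies
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2) # radial SF at each bin
    out = {}
    for lo, hi in bands:
        mask = (r >= lo) & (r < hi)                  # annular bandpass "channel"
        band = np.fft.ifft2(np.fft.ifftshift(f * mask))
        out[(lo, hi)] = np.real(band)
    return out

# Example: a coarse (low-SF) and a fine (high-SF) channel.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
channels = sf_filter_bank(img, [(0.0, 0.1), (0.1, 0.8)])
```

Because the bands here tile the full frequency range without overlap, summing the channel outputs reconstructs the input image; in the account above, by contrast, channel outputs would be flexibly weighted by the diagnosticity of each scale for the current categorization task rather than combined uniformly.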