How does the brain tell the eye where to go? Classical models of rapid eye movements are lumped control systems that compute analogs of physical signals such as desired eye displacement, instantaneous error, and motor drive. Components of these lumped models do not correspond well with anatomical and physiological data. We have developed a more brain-like, distributed model (called a neuromimetic model), in which the superior colliculus (SC) and cerebellum (CB) play novel roles, using information about the desired target and the movement context to generate saccades. It suggests that the SC is neither sensory nor motor; rather, it encodes the desired sensory consequence of the saccade in retinotopic coordinates. It also suggests a non-computational scheme for motor control by the cerebellum, based on context learning and a novel spatial mechanism, the pilot map. The CB learns to use contextual information to initialize the pilot signal that will guide the saccade to its goal. The CB monitors feedback information to steer and stop the saccade, and thus replaces the classical notion of a displacement integrator. One consequence of this model is that no desired eye-movement signal is encoded explicitly in the brain; rather, it is distributed across activity in both the SC and CB. Another is that the transformation from spatially coded sensory information to temporally coded motor information is implicit in the velocity feedback loop around the CB; no explicit spatial-to-temporal transformation with a normalization step is needed.
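The feedback scheme described above (a pilot signal initialized from context, with the CB monitoring efference-copy velocity feedback to steer and stop the movement, in place of a resettable displacement integrator) can be illustrated with a toy simulation. Everything below is an assumption made for illustration: the first-order plant, the fixed gain, the stopping band, and all variable names are not the model's actual equations or parameters.

```python
def simulate_saccade(goal_deg, dt=0.001, gain=60.0, stop_band=0.1):
    """Toy sketch: drive the eye toward `goal_deg` under velocity feedback.

    A 'pilot' estimate of the remaining displacement is initialized from
    the goal (the learned context), then updated by integrating the
    efference copy of the velocity command. Motor drive is cut when the
    monitored displacement lands within `stop_band` of the goal, so no
    separate, explicitly reset displacement integrator is required.
    All dynamics and parameter values here are illustrative assumptions.
    """
    eye = 0.0          # eye position (deg)
    monitored = 0.0    # internal running estimate of displacement (deg)
    t = 0.0
    while abs(goal_deg - monitored) > stop_band and t < 0.2:
        error = goal_deg - monitored   # remaining-displacement estimate
        velocity = gain * error        # motor drive, taken as eye velocity (deg/s)
        eye += velocity * dt           # plant: a simple integrator
        monitored += velocity * dt     # efference-copy velocity feedback
        t += dt
    return eye, t

final_pos, duration = simulate_saccade(10.0)
print(f"landed at {final_pos:.2f} deg after {duration * 1000:.0f} ms")
```

Note that the spatial goal enters only as the initial value of the feedback loop's error term; the temporal velocity command emerges from running the loop, which is the sense in which the spatial-to-temporal transformation is implicit rather than computed in a separate normalization step.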