Motion information is required for the solution of many complex tasks of the visual system, such as depth perception by motion parallax and figure/ground discrimination by relative motion. However, motion information is not explicitly encoded at the level of the retinal input. Instead, it has to be computed from the time-dependent brightness patterns of the retinal image as sensed by the two-dimensional array of photoreceptors. Different models have been proposed that describe the neural computations underlying motion detection in various ways. To what extent do biological motion detectors approximate any of these models? As will be argued here, there is increasing evidence from the different disciplines studying biological motion vision that, throughout the animal kingdom, from invertebrates to vertebrates including man, the mechanisms underlying motion detection can be attributed to only a few, essentially equivalent computational principles. Motion detection may, therefore, be one of the first examples in computational neuroscience where common principles can be found not only at the cellular level (e.g., dendritic integration, spike propagation, synaptic transmission) but also at the level of computations performed by small neural networks.
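To make concrete how motion can be computed from time-dependent brightness patterns rather than read out directly, the following minimal sketch implements a correlation-type (Hassenstein-Reichardt) detector, one of the classic models alluded to above. All parameter values and the drifting-grating stimulus are illustrative assumptions, not taken from the text: two neighbouring "photoreceptors" are sampled over time, each subunit multiplies the delayed signal of one input with the undelayed signal of its neighbour, and subtracting the mirror-symmetric subunit yields a direction-selective output.

```python
import math


def correlation_detector(signal, delay, spacing=1):
    """Correlation-type motion detector on a brightness signal.

    signal: list of time samples, each a list of brightness values at
            discrete spatial positions (signal[t][x]).
    delay:  temporal delay (in samples) of each subunit's delay line.
    Returns the detector output at each usable time step: the
    difference of two mirror-symmetric delay-and-multiply subunits.
    """
    out = []
    for t in range(delay, len(signal)):
        a_now, b_now = signal[t][0], signal[t][spacing]
        a_del, b_del = signal[t - delay][0], signal[t - delay][spacing]
        # Each subunit correlates the delayed signal of one input with
        # the undelayed signal of the neighbour; the subtraction makes
        # the mean response change sign with the direction of motion.
        out.append(a_del * b_now - b_del * a_now)
    return out


def drifting_grating(n_t, speed, n_x=2, k=0.5):
    """Sinusoidal brightness pattern drifting at the given speed
    (positive speed = motion from position 0 toward position 1)."""
    return [[math.sin(k * (x - speed * t)) for x in range(n_x)]
            for t in range(n_t)]


def mean(values):
    return sum(values) / len(values)


# Opposite drift directions give opposite-signed mean responses.
rightward = mean(correlation_detector(drifting_grating(200, 0.3), delay=1))
leftward = mean(correlation_detector(drifting_grating(200, -0.3), delay=1))
```

Averaged over time, the output of this detector is positive for motion in one direction and negative for the other, which is the basic direction-selective signature the models discussed here share.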