We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference. We simulated a visuomotor system with realistic saccades, retinal acuity, motion detectors and eye-position sense, and programmed it to make optimal use of these imperfect data when interpreting scenes. This optimized model showed human-like SSD and distortions of spatial perception. It made new predictions, including tight correlations between perception and motor action (for example, more SSD in people with less-precise eye control) and a graded contraction of perceived jumps; we verified these predictions experimentally. Our results suggest that the brain constructs its evolving picture of the world by optimally integrating each new piece of sensory or motor information.
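The core computation behind this kind of optimal inference can be sketched as Bayesian shrinkage: a noisy measurement of an object's displacement across a saccade is combined with a prior that objects rarely jump during eye movements, so the estimate contracts toward zero. The sketch below is illustrative only, assuming simple Gaussian noise and a zero-mean Gaussian prior; the function name and all numeric values are hypothetical, not the paper's fitted parameters. It shows how noisier displacement information (as with less-precise eye control) produces stronger contraction, i.e. more SSD.

```python
def perceived_jump(measured, sigma_meas, sigma_prior):
    """MAP estimate of trans-saccadic displacement.

    Combines a Gaussian-noisy measurement (std sigma_meas) with a
    zero-mean Gaussian prior on true displacement (std sigma_prior,
    encoding the assumption that objects rarely move mid-saccade).
    The posterior mean shrinks the measurement toward zero.
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * measured

true_jump = 2.0  # degrees of actual displacement during the saccade

# Precise displacement signal: percept stays close to the true jump.
precise = perceived_jump(true_jump, sigma_meas=0.5, sigma_prior=1.0)

# Imprecise signal (e.g. less-precise eye-position sense): percept is
# strongly contracted toward zero -- the jump often goes unnoticed (SSD).
imprecise = perceived_jump(true_jump, sigma_meas=2.0, sigma_prior=1.0)

print(precise)    # 1.6 -> mild contraction
print(imprecise)  # 0.4 -> strong contraction, human-like SSD
```

The graded nature of the shrinkage weight `w` also mirrors the predicted graded contraction of perceived jumps: contraction grows smoothly, not abruptly, as measurement noise rises relative to prior certainty.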