Vision and haptics have different limitations and advantages because they acquire information in different ways. If the brain combined information from the two senses optimally, it would rely more heavily on the sense providing the more precise information for the task at hand. In this study, human observers judged the distance between two parallel surfaces in two within-modality experiments (vision alone and haptics alone) and in an intermodality experiment (vision and haptics together). In the within-modality experiments, the precision of visual estimates varied with surface orientation, as expected from geometric considerations; the precision of haptic estimates did not. An ideal observer combining visual and haptic information would therefore weight the two senses differently as a function of orientation. In the intermodality experiment, observers adjusted the visual and haptic weights in a fashion quite similar to that of the ideal observer. As a result, combined distance estimates were finer than is possible with either vision or haptics alone; indeed, they approached statistical optimality.
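The optimal-weighting rule alluded to here is the standard maximum-likelihood cue-combination scheme: each cue is weighted in proportion to its inverse variance, and the variance of the combined estimate is then lower than that of either cue alone. The sketch below illustrates this rule with hypothetical noise values; it is not the study's data or analysis code.

```python
def combine(est_v, var_v, est_h, var_h):
    """Optimally combine a visual and a haptic estimate.

    Each cue's weight is proportional to its inverse variance
    (equivalently, to the other cue's variance, normalized), which
    minimizes the variance of the combined estimate.
    """
    w_v = var_h / (var_v + var_h)  # visual weight rises as haptic noise rises
    w_h = var_v / (var_v + var_h)
    combined = w_v * est_v + w_h * est_h
    # Combined variance is always <= min(var_v, var_h)
    combined_var = (var_v * var_h) / (var_v + var_h)
    return combined, combined_var

# Hypothetical case: vision more precise at this surface orientation,
# so the combined estimate sits closer to the visual one.
est, var = combine(est_v=55.0, var_v=1.0, est_h=53.0, var_h=4.0)
print(est, var)
```

Note that the combined variance falls below the better single cue's variance, which is the sense in which bimodal estimates can be "finer than is possible with either vision or haptics alone."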