Binocular disparity is an important cue to depth, allowing us to make very fine discriminations of the relative depth of objects. In complex scenes, this sensitivity depends on the particular shape and layout of the objects viewed. For example, judgments of the relative depths of points on a smoothly curved surface are less accurate than those for points in empty space. It has been argued that this occurs because depth relationships are represented accurately only within a local spatial area. A consequence of this is that, when judging the relative depths of points separated by depth maxima and minima, information must be integrated across separate local representations. This integration, by adding more stages of processing, might be expected to reduce the accuracy of depth judgments. We tested this idea directly by measuring how accurately human participants could report the relative depths of two dots presented with different binocular disparities. In the first, Two Dot, condition, the two dots were presented in front of a square grid. In the second, Three Dot, condition, an additional dot was presented midway between the target dots, at a range of depths both nearer and further than the target dots. In the final, Surface, condition, the target dots were placed on a smooth surface defined by binocular disparity cues; in some trials, this surface contained a depth maximum or minimum between the target dots. In the Three Dot condition, performance was impaired when the central dot was presented with a large disparity, in line with predictions. In the Surface condition, performance was worst when the midpoint of the surface was at a depth similar to that of the targets, and was relatively unaffected when a large depth maximum or minimum was present. These results are not consistent with the idea that depth order is represented only within a local spatial area.