The superposition of two dot clouds moving in different directions results in the perception of two transparent layers. Although the depth order of the layers is ambiguous, there are consistent biases to perceive the layer moving rightward or downward as being in front of the other layer. Here we investigated the origin of these depth order biases. To this end, we measured their interaction with stereoscopic disparity and the influence of global and local motion properties. Motion direction and stereoscopic disparity were equally effective in determining depth order at a disparity of one arcmin. Global motion properties, such as the aperture's location in the visual field or its motion direction, did not affect the directional biases. Local motion properties, however, were effective. When the moving elements were oriented lines rather than dots, the directional biases shifted towards the direction orthogonal to the lines rather than the lines' actual motion direction. Depth order was therefore determined before the aperture problem was fully resolved. By varying the duration of the stimuli, we found that the time constant of resolving the aperture problem was much shorter for depth order than for perceived motion direction. Altogether, our results indicate that depth order is determined in one shot on the basis of an early motion signal, whereas perceived motion direction is continuously updated. Thus, depth ordering in transparent motion appears to be a surprisingly fast process that relies on early, local motion signals and precedes high-level motion analysis.