In addition to the monocular depth cues described above, most individuals with healthy sight in both eyes can sense depth from the differences between the images seen by each eye, a binocular cue known as stereopsis. The two images are processed in the visual cortex of the brain, fused into a single percept, and augmented by the monocular depth cues to give a good sense of the depth and distance of each object and surface.
In the real world, each eye sees a different image due to the different position of each eye with respect to nearby objects. 3D video systems are designed to duplicate this real-world experience by providing each eye a unique version of the video.
Consider an observer viewing a die: each eye sees a slightly different view of it. To capture a 3D picture, a 3D camera records an image of the die from the perspective of each eye.
To display the die in 3D, the separate images are displayed for each eye. The image for each eye represents a slightly different view of the die. Without 3D glasses, the observer will see both images on the display.
3D glasses must be used to ensure that each eye sees only the image meant for it. When each eye sees the image shot from its own perspective, the die appears as a 3D object in front of the display.
By displaying a separate image for each eye, a 3D image is created. Objects in a 3D video may appear to be in front of or behind the screen. When the horizontal offset between the left- and right-eye images is zero (that is, when the two images converge on the screen), the object appears to lie in the plane of the screen, although its apparent distance may still differ from the actual screen distance because of the focal length of the camera lens and the size of the screen.
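The relationship between the horizontal offset (screen parallax) and the apparent depth can be sketched with similar triangles. The sketch below is illustrative only: the function name, the assumed eye separation of 65 mm, and the 2 m viewing distance are my own assumptions, not values from the text. Positive (uncrossed) parallax places the object behind the screen, negative (crossed) parallax places it in front, and zero parallax places it exactly in the screen plane, as described above.

```python
def perceived_depth(eye_separation_m, viewing_distance_m, parallax_m):
    """Apparent distance (in meters) of a point whose left- and right-eye
    images are horizontally offset on the screen by `parallax_m`.
    Positive parallax = right-eye image to the right of the left-eye image
    (uncrossed); negative = crossed. Derived from similar triangles between
    the eyes' baseline and the on-screen offset."""
    if parallax_m >= eye_separation_m:
        # Sight lines are parallel or diverging: the point reads as "at infinity".
        return float("inf")
    return eye_separation_m * viewing_distance_m / (eye_separation_m - parallax_m)

# Assumed values: 65 mm eye separation, viewer 2 m from the screen.
print(perceived_depth(0.065, 2.0, 0.0))    # 2.0 -> zero offset: object on the screen
print(perceived_depth(0.065, 2.0, 0.02))   # > 2.0: behind the screen
print(perceived_depth(0.065, 2.0, -0.02))  # < 2.0: in front of the screen
```

Note that this simple model covers only where the images converge; as the text points out, the camera's focal length and the screen size also scale the apparent depth.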