Palo Alto (CA) - A research group from Stanford University is working on a new idea that one day could enable the production of a relatively simple yet very capable 3D digital camera: Instead of using just one lens, the researchers are mounting thousands of lenses on groups of sensors, which will generate thousands of images for every single shot - providing a slightly different view of every pixel in an image.
The quest to build an effective 3D camera has been going on for many years, and several solutions are available today. A key problem in creating a 3D image is determining the distance of objects from the camera lens in order to calculate the geometry of objects within a picture. Current projects tend to rely either on pictures taken by multiple lenses at the same time, on prisms or lasers, or on software methods that analyze shadows in 2D pictures.
Abbas El Gamal's research group at Stanford University has now come up with the idea of dramatically increasing the number of lenses to enable 3D photography. The prototype chip being worked on is a 3-megapixel unit with a total of 3,229,696 sensors, each 0.7 microns in size. The scientists want to integrate a tiny lens over each array of 16x16 pixels (256 pixels), resulting in a total of 12,616 lenses.
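The lens count follows directly from the numbers above; a quick sketch of the arithmetic (the constants are taken from the article, the variable names are our own):

```python
# Sensor layout arithmetic for the prototype chip described above.
SENSOR_COUNT = 3_229_696        # 0.7-micron sensors on the 3-megapixel chip
ARRAY_SIDE = 16                 # each micro-lens covers a 16x16 pixel array

pixels_per_lens = ARRAY_SIDE * ARRAY_SIDE      # 256 pixels under each lens
lens_count = SENSOR_COUNT // pixels_per_lens   # 12,616 micro-lenses total

print(pixels_per_lens, lens_count)  # 256 12616
```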
In theory, a digital camera using such a chip would not feel different from any other camera, according to the research group. However, they believe the camera can output photos in which almost everything, near or far, will be in focus. But something much more fascinating is happening without the user's knowledge.
In ordinary digital cameras, the lens focuses its image directly on the camera's image sensor. In this project the image is focused about 40 microns above the image sensor arrays. As a result, thousands of overlapping views are created, providing a different angle on every pixel within an image: Any point in the photo is captured by at least four of the chip's mini-cameras. The researchers believe that this approach will not only create depth perception, but a detailed depth map. A camera would even be able to calculate the distance of each pixel to the lens, enabling the creation of a detailed 3D image.
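The article does not spell out the group's reconstruction algorithm, but the underlying principle is classic triangulation: a point seen by two neighboring mini-cameras shifts between their views, and the size of that shift (the disparity) encodes distance. A minimal sketch, with the function name and all numbers being our own illustrative assumptions:

```python
# Illustrative stereo triangulation, not Stanford's actual reconstruction method.
def depth_from_disparity(focal_length_px, baseline, disparity_px):
    """Estimate distance to a scene point seen by two neighboring mini-cameras.

    focal_length_px: effective focal length, in pixel units
    baseline:        spacing between the two micro-lenses (same unit as the result)
    disparity_px:    shift of the point between the two views, in pixels
    """
    if disparity_px == 0:
        return float("inf")  # no parallax: the point is effectively at infinity
    return focal_length_px * baseline / disparity_px

# Hypothetical numbers: focal length 100 px, baseline 10 units, 2-px disparity.
print(depth_from_disparity(100, 10.0, 2.0))  # 500.0
```

Because every point is covered by at least four mini-cameras, such pairwise estimates can be cross-checked, which is what would turn raw disparities into the detailed depth map the researchers describe.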
Application areas for this technology, if it can be successfully developed, include facial recognition, biological imaging, 3D printing and 3D modeling of buildings. Abbas El Gamal's research group thinks that this technology would in fact be superior to human vision, giving robots much better spatial vision capability - "to perform delicate tasks now beyond their abilities."
There are less exciting but very practical advantages of this technology over today's digital cameras as well. At least in theory, the importance of a single lens in a camera will be reduced. "With a billion pixels on one chip, some of them are sure to go bad, leaving dead spots," El Gamal said. The overlapping views provided by the multi-aperture sensor could provide backups when pixels fail.
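Since every scene point is captured by at least four mini-cameras, a dead pixel in one view could plausibly be reconstructed from the surviving ones. A minimal sketch of that redundancy idea, assuming per-camera readings for a single scene point (the function and data are hypothetical, not the group's implementation):

```python
import statistics

def recover_value(readings):
    """Reconstruct one scene point from overlapping mini-camera readings.

    readings: measurements of the same point from several mini-cameras;
              None marks a dead pixel in that view.
    """
    good = [r for r in readings if r is not None]
    if not good:
        raise ValueError("all overlapping views failed for this point")
    return statistics.median(good)  # median is robust to one noisy survivor

# One dead pixel out of four overlapping views: the point is still recoverable.
print(recover_value([118, None, 121, 120]))  # 120
```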
The researchers said they are now working out the details of fabricating the micro-optics onto a camera chip.
The big question, of course, is: How much will such a chip cost, especially if it has thousands of lenses? Surprisingly, the researchers believe it could be very affordable. In fact, the chip may cost less "than existing digital cameras". Keith Fife, a member of the group, said that "the complexity of the main lens [can be reduced] by shifting the complexity to the semiconductor." As a result, the quality of a camera's main lens will "no longer be of paramount importance" and its cost may not be as substantial as today.