Stanford researchers have developed a new camera chip that can see in 3D
Most folks think of a photo as a two-dimensional representation of a scene. Stanford University researchers, however, have created an image sensor that can also judge the distance of subjects within a snapshot.
To accomplish the feat, Keith Fife and his colleagues have developed technology called a multi-aperture image sensor that sees things differently from the light detectors used in ordinary digital cameras.
Instead of devoting the entire sensor to one big representation of the image, Fife's 3-megapixel sensor prototype breaks the scene up into many small, slightly overlapping 16x16-pixel patches called subarrays. Each subarray has its own lens to view the world--thus the term multi-aperture.
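The subarray layout described above can be sketched in a few lines. This is a minimal illustration, not the actual sensor readout: the 16-pixel patch size comes from the article, but the 2-pixel overlap and the toy 64x64 "sensor" are assumptions chosen for demonstration.

```python
import numpy as np

def extract_subarrays(image, patch=16, overlap=2):
    """Split a 2-D sensor image into slightly overlapping
    patch x patch subarrays (stride = patch - overlap)."""
    stride = patch - overlap
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        row = []
        for x in range(0, w - patch + 1, stride):
            # Each patch corresponds to the view behind one micro-lens.
            row.append(image[y:y + patch, x:x + patch])
        patches.append(row)
    return patches

# A toy 64x64 "sensor" split into a 4x4 grid of overlapping subarrays.
sensor = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
grid = extract_subarrays(sensor)
```

Because adjacent patches share a strip of pixels, the same scene feature appears in more than one subarray, which is what makes the depth comparison described below possible.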
After a photo is taken, image-processing software analyzes the slight location differences for the same element appearing in different patches--for example, where a spot on a subject's shirt is relative to the wallpaper behind it. These differences from one subarray to the next can be used to deduce the distance of the shirt and the wall.
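The underlying geometry is standard triangulation: a feature's apparent shift (disparity) between two apertures is inversely proportional to its distance. The sketch below shows that relation only; the focal length and aperture spacing are hypothetical numbers, not figures from the Stanford prototype, and the researchers' actual software is certainly more sophisticated.

```python
def depth_from_disparity(disparity_px, focal_len_px, baseline_mm):
    """Triangulation: a feature shifted by disparity_px between two
    apertures separated by baseline_mm lies at depth Z = f * B / d
    (result in the same units as the baseline)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_len_px * baseline_mm / disparity_px

# Hypothetical values: 500 px focal length, 2 mm spacing between lenses.
near = depth_from_disparity(4.0, 500.0, 2.0)   # large shift -> close object
far = depth_from_disparity(1.0, 500.0, 2.0)    # small shift -> distant object
```

The shirt spot in the example would shift more between neighboring subarrays than the wallpaper behind it, so it resolves to a smaller depth.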
"In addition to the two-dimensional image, we can simultaneously capture depth info from the scene," Fife said when describing the technology in a talk at the International Solid State Circuits Conference earlier this month in San Francisco.
The result is a photo accompanied by a "depth map" that records not only each pixel's red, green, and blue light components but also how far away that pixel is. Right now, the Stanford researchers have no specific file format for the data, but the depth information can be attached to a JPEG as accompanying metadata, Fife said.