Since its invention, photography has given us only two-dimensional images. Even today, the most expensive and powerful cameras produce clear, beautiful, but flat pictures: they convey no sense of depth, no way to tell how far the objects are from the camera. People have tried to bring a stereo effect to photography in many ways, from stereo slides to 3D glasses, but it has always required the image to be taken with two lenses. Now something once thought impossible is one step closer to reality: Stanford University engineers have built a lens system capable of capturing the distance of subjects within a single snapshot!
The system works differently from a conventional sensor. Instead of placing one lens in front of the whole sensor, it captures the image in parts: each part is a 16 x 16 pixel sub-array with its own tiny lens to view the world, which is why the design is called multi-aperture. Each lens presents its sub-array with a slightly different picture than its neighbors receive, so the final image is composed of many small images that overlap slightly with those of the neighboring sub-arrays. From this data, special image-processing software calculates a "depth map" by analyzing the slight differences in the positions of the same scene elements across neighboring sub-arrays.
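The principle behind the depth map can be sketched in a few lines. This is a hypothetical toy sketch, not the Stanford implementation: `best_shift`, `depth_from_disparity`, and all the numeric parameters are illustrative. It finds the pixel shift (disparity) that best aligns two overlapping sub-array images using normalized cross-correlation, then triangulates a distance from that shift.

```python
import numpy as np

def best_shift(left, right, max_shift=4):
    """Find the horizontal pixel shift that best aligns two
    overlapping sub-array images (hypothetical helper)."""
    best, best_score = 0, -np.inf
    for s in range(1, max_shift + 1):
        a = left[:, s:].astype(float)
        b = right[:, :len(right[0]) - s].astype(float)
        # Normalized cross-correlation: high when the shifted
        # regions contain matching texture.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        if denom == 0:  # smooth, textureless patch: no signal to match
            continue
        score = (a * b).sum() / denom
        if score > best_score:
            best_score, best = score, s
    return best

def depth_from_disparity(disparity, baseline=0.5, focal=100.0):
    """Triangulate distance from the shift between neighboring
    sub-arrays (illustrative units only)."""
    return baseline * focal / disparity if disparity else float("inf")

# Synthetic textured scene: the right sub-array sees the same
# pattern shifted by 2 pixels because the object is at finite depth.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(16, 24))
left = scene[:, :16]
right = scene[:, 2:18]   # same content, shifted 2 px

d = best_shift(left, right)
print(d, depth_from_disparity(d))   # disparity 2 -> depth 25.0 in toy units
```

In a real multi-aperture pipeline this matching would run for every sub-array pair and in both axes, but the core idea is the same: nearer objects shift more between neighboring views, so disparity maps directly to distance.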
These improvements do more than attach a distance to each pixel; the technology could also be used to produce stereo images in the future. It also reduces digital noise in photos, makes the sensor more adaptable to shooting conditions, and raises the overall quality of the final image. The researchers even foresee this type of sensor in portable gadgets such as cell phones, PDAs, and other camera-enabled devices, because with this technology the small sensors used in portable devices would suffer less quality loss than they do now.
Right now, the researchers have no specific file format for this data, but they say the depth information could be stored in JPEG metadata. Unfortunately, this revolutionary idea has its flaws. First, the sensor consumes ten times more power than a conventional one, a serious problem for portable devices. Second, the need for a small lens system over each sub-array makes these sensors more expensive to produce. And third, depth calculation works only on subjects that have some texture or other fine detail: if you photograph a smooth white wall, there is nothing for the software to match, so no depth can be computed. However, this technology is developing fast, and if these flaws are corrected, it could open a new chapter in digital photography.
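The third flaw is easy to see in code. A toy illustration of why depth matching needs texture (`has_texture` is a hypothetical helper, not part of the Stanford system; it only shows the underlying principle that a matcher needs intensity variation to correlate on):

```python
def has_texture(patch, eps=1e-6):
    """Return True if the patch has enough intensity variation
    for a matcher to correlate on; a constant patch gives the
    correlation step no signal at all."""
    flat = [p for row in patch for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return var > eps

wall = [[255.0] * 16 for _ in range(16)]                        # smooth white wall
brick = [[(r + c) % 7 for c in range(16)] for r in range(16)]   # textured surface

print(has_texture(wall))    # False: nothing to match between sub-arrays
print(has_texture(brick))   # True: features to lock onto
```

On the wall, every sub-array sees an identical uniform patch, so every candidate shift correlates equally well and the disparity, and hence the depth, is undefined.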
[Source: http://www.gadgets-reviews.com/camera-that-shoots-in-3d.html]
Saturday, March 1, 2008
Camera that shoots in 3D
Labels: Camera, Cool gadget