Microsoft has officially gone public with its Kinect Fusion add-on for the Kinect for Windows platform: a system that allows users to quickly and easily create 3D models of real-world objects or environments.
Originally developed as part of an internal research project at the Microsoft Research Lab in Cambridge, Kinect Fusion was never intended for public dissemination. When details of the software leaked out, however, the community of Kinect for Windows developers demanded access - and Microsoft promised that access at its BUILD 2012 conference last week.
Now, Chris White, senior programme manager for Kinect for Windows, has detailed exactly what Kinect Fusion can and can't do ahead of its inclusion in the Kinect software development kit (SDK) package.
Put simply, Kinect Fusion is a streaming system for the depth data received by the Kinect's 3D camera system. As the data streams in from the cameras, it is combined into a 3D representation of an object or environment - and the longer an object or scene remains in front of the camera, the more accurate the model becomes.
'Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments,' White explains in a blog post on the subject. 'The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading.
'This allows Kinect Fusion to gather and incorporate data not viewable from any single view point. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.'
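That averaging step is straightforward to illustrate in isolation. The sketch below is not the Kinect for Windows SDK API - the 640x480 frame size, the simulated sensor noise and all of the names in it are assumptions for illustration - but it shows why averaging hundreds of noisy depth readings per pixel produces a far cleaner measurement than any single frame.

```cpp
// Minimal standalone sketch of per-pixel depth averaging across frames.
// This is NOT the Kinect for Windows SDK: resolution, noise model and
// all names here are illustrative assumptions.
#include <cstdio>
#include <random>
#include <vector>

constexpr int kWidth  = 640;   // assumed depth-frame resolution
constexpr int kHeight = 480;

// Keeps a running average of depth readings for every pixel, so noise in
// any one frame is smoothed out as more frames arrive.
class DepthAverager {
public:
    DepthAverager() : sum_(kWidth * kHeight, 0.0f), count_(kWidth * kHeight, 0) {}

    void Integrate(const std::vector<float>& depthFrame) {
        for (size_t i = 0; i < depthFrame.size(); ++i) {
            if (depthFrame[i] <= 0.0f) continue;  // skip invalid readings
            sum_[i] += depthFrame[i];
            ++count_[i];
        }
    }

    float AverageAt(int x, int y) const {
        const size_t i = static_cast<size_t>(y) * kWidth + x;
        return count_[i] ? sum_[i] / count_[i] : 0.0f;
    }

private:
    std::vector<float> sum_;
    std::vector<int>   count_;
};

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.0f, 0.02f);  // ~2cm sensor noise (assumed)

    DepthAverager averager;
    const float trueDepth = 1.5f;  // pretend every pixel sees a surface 1.5m away

    // Feed several hundred noisy frames, as a sensor would stream them.
    for (int frame = 0; frame < 300; ++frame) {
        std::vector<float> depth(kWidth * kHeight);
        for (float& d : depth) d = trueDepth + noise(rng);
        averager.Integrate(depth);
    }

    // The averaged reading converges on the true depth far more tightly
    // than any single noisy frame would.
    std::printf("averaged centre pixel: %.4f m\n",
                averager.AverageAt(kWidth / 2, kHeight / 2));
    return 0;
}
```

The shipping tool clearly goes well beyond a per-pixel average: as White notes, it merges data captured from different viewpoints into a single model, which implies registering each incoming frame against the reconstruction built so far rather than simply smoothing a fixed view.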
While White has not yet given a formal date for Kinect Fusion's inclusion in the Kinect for Windows SDK, he has confirmed that it will arrive in a future release.