I lifted the introduction to this article from Neowin.net:
Microsoft has filed a patent for the Xbox 360 Kinect that allows pre-defined hand gestures (sign language) to be captured by the camera and used in place of a keyboard.
The patent, called “Gesture Keyboarding,” will be able to read American Sign Language (ASL) and translate it into keystrokes without the need for a keyboard. This confirms that Kinect will be able to detect small movements of your fingers, turning sign language into individual letters, words or even phrases.
…
[read the rest at http://www.neowin.net/news/kinect-will-interpret-sign-language]
NOT JUST A TOY
I think it’s important that people understand the power of Kinect as a technology. This is NOT a toy, despite the configuration in which it’s currently used. Kinect’s recognition technology FAR AND AWAY exceeds the state of the art in visual recognition, beyond even what is available in professional circles, much less to consumers.
If you don’t believe me, read the comments of Johnny Lee, the Carnegie Mellon PhD researcher famous for producing interactive 3D head-tracking technology using the Nintendo Wii (note: Johnny is now a scientist in the Applied Sciences division of Microsoft Research):
Speaking as someone who has been working in interface and sensing technology for nearly 10 years, this is an astonishing combination of hardware and software. The few times I’ve been able to show researchers the underlying components, their jaws drop with amazement… and with good reason.
The 3D sensor itself is a pretty incredible piece of equipment providing detailed 3D information about the environment similar to very expensive laser range finding systems but at a tiny fraction of the cost. Depth cameras provide you with a point cloud of the surface of objects that is fairly insensitive to various lighting conditions allowing you to do things that are simply impossible with a normal camera.
But once you have the 3D information, you then have to interpret that cloud of points as "people". This is where the researcher jaws stay dropped. The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.
Read more at: http://procrastineering.blogspot.com/2009/06/project-natal.html
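To make the point cloud idea concrete: a depth camera like Kinect's reports, for each pixel, the distance to the nearest surface. Back-projecting those pixels through a pinhole camera model yields the 3D points Johnny describes. The sketch below is my own illustration, not code from Kinect or the patent; the function name and the focal-length values are hypothetical stand-ins.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an (N, 3) point cloud
    using a simple pinhole camera model. fx/fy are focal lengths in
    pixels; cx/cy is the principal point. Pixels with zero depth
    (no sensor reading) are dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth_m
    x = (u - cx) * z / fx  # horizontal offset grows with depth
    y = (v - cy) * z / fy  # vertical offset grows with depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy 4x4 depth image: a flat surface 2 m away, with one missing reading.
depth = np.full((4, 4), 2.0)
depth[0, 0] = 0.0  # sensor returned no depth for this pixel
cloud = depth_to_point_cloud(depth, fx=580.0, fy=580.0, cx=2.0, cy=2.0)
print(cloud.shape)  # 15 valid points, 3 coordinates each
```

The hard part Johnny highlights is not this geometry, which is routine, but the next stage: segmenting that cloud of points into people and tracking their body parts in real time.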

