For millions of people worldwide, sign language is their primary method of communicating with other people. And now, thanks to Microsoft Kinect, they'll also be able to 'talk' to computers.
Researchers from Microsoft Research Asia are collaborating with colleagues from the Institute of Computing Technology at the Chinese Academy of Sciences to explore how Kinect's body-tracking abilities can be used to recognize sign language. According to the Microsoft Research blog, results have been encouraging: people who use sign language as their primary language can interact with their computers more naturally, much as speech recognition allows hearing users to do. (To see the research in action, check out the video.)
Kinect's 3-D trajectory matching capabilities have made it possible to translate sign language into text or speech, and to mediate communication between a hearing person and a deaf or hard-of-hearing person through an avatar.
Guided by text input from a keyboard, the avatar can display the corresponding sign-language sentence. The deaf or hard-of-hearing person responds using sign language, and the system converts that answer into text.
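The recognition side of this pipeline rests on matching observed 3-D hand trajectories against stored sign templates. The article does not describe the team's actual algorithm, but a common approach to trajectory matching is dynamic time warping (DTW), which tolerates differences in signing speed. The sketch below is purely illustrative; the function names, the template dictionary, and the nearest-template classifier are all assumptions, not Microsoft's implementation.

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic-time-warping distance between two 3-D trajectories.

    Each trajectory is a list of (x, y, z) hand positions, as a
    skeleton tracker like Kinect's might produce. DTW aligns the two
    sequences so that fast and slow performances of the same sign
    still compare as similar.
    """
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # dp[i][j] = cheapest alignment of traj_a[:i] with traj_b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])
            dp[i][j] = cost + min(
                dp[i - 1][j],      # stretch: repeat a frame of traj_b
                dp[i][j - 1],      # stretch: repeat a frame of traj_a
                dp[i - 1][j - 1],  # match the two frames directly
            )
    return dp[n][m]

def classify_sign(trajectory, templates):
    """Return the label of the stored template closest to `trajectory`.

    `templates` maps a sign label (hypothetical) to a reference trajectory.
    """
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))
```

In a full system, the best-matching sign labels would then be assembled into a sentence and rendered as text or speech, closing the loop described above.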
“We believe that IT should be used to improve daily life for all persons,” says Guobin Wu, a research program manager from Microsoft Research Asia. “While it is still a research project, we ultimately hope this work can provide a daily interaction tool to bridge the gap between the hearing and the deaf and hard of hearing in the near future.”
So far, the technology supports only American Sign Language, but the researchers plan to expand it to other sign languages as the work continues.