Researchers at Cornell University have created EchoSpeech, a silent-speech recognition interface that uses acoustic sensing and artificial intelligence to recognize up to 31 unvocalized commands from lip and mouth movements. The low-power wearable runs on a smartphone and requires only a few minutes of user training data before it can recognize commands. EchoSpeech could be used to communicate with others via smartphone in places where speaking aloud is inconvenient or inappropriate, such as a noisy restaurant or a quiet library. The interface can also be paired with a stylus and used with computer-aided design (CAD) software, all but eliminating the need for a keyboard and mouse.
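As a rough illustration of the few-shot training idea described above, and not the authors' actual method (EchoSpeech's pipeline is a deep-learning model over acoustic echo frames), a toy sketch might reduce recognition to classifying per-user "echo profiles" (feature vectors derived from reflected sound) against per-command templates learned from a few minutes of data. Everything below, including the command names and the synthetic data, is hypothetical:

```python
import random
import math

# Hypothetical sketch: treat silent-speech recognition as classifying
# "echo profile" feature vectors into command labels. The data here is
# synthetic; the real EchoSpeech system uses deep learning on acoustic
# echo frames, not this toy nearest-centroid classifier.

random.seed(0)

COMMANDS = ["play", "pause", "next"]  # illustrative subset of commands

def synthetic_profile(command, dim=16, noise=0.1):
    """Fake echo profile: a command-specific base pattern plus noise."""
    base = [math.sin(COMMANDS.index(command) + i) for i in range(dim)]
    return [b + random.gauss(0, noise) for b in base]

def train(samples):
    """A few minutes of user data -> one mean profile (centroid) per command."""
    centroids = {}
    for cmd in COMMANDS:
        vecs = [v for c, v in samples if c == cmd]
        centroids[cmd] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return centroids

def recognize(centroids, profile):
    """Classify a new profile as the command with the nearest centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: sq_dist(centroids[c], profile))

# "Training": 20 labeled examples per command from one user.
training = [(c, synthetic_profile(c)) for c in COMMANDS for _ in range(20)]
model = train(training)
print(recognize(model, synthetic_profile("pause")))  # prints "pause"
```

The key property this sketch mirrors is per-user calibration: a small amount of labeled data from one wearer is enough to build the command templates, which is what makes the few-minutes-of-training claim plausible for a fixed command set.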