News on April 10th: researchers at Cornell University in the United States have developed a new technology that enables silent communication through sonar glasses. The glasses use tiny speakers and microphones to read words the wearer mouths silently, allowing the wearer to perform a variety of tasks without any physical input.
The technology was developed by a team led by Zhang Ruidong (transliteration), a doctoral student at Cornell University, and improves on a similar earlier project that used a wireless headset; models before that relied on cameras.
According to IT House, the sonar glasses use a silent-speech recognition interface called EchoSpeech, which uses sonar to sense mouth movements and a deep-learning algorithm to analyze the echo characteristics in real time. This allows the system to recognize words the wearer mouths silently with about 95% accuracy.
One of the most exciting prospects for this technology is that people with speech disabilities could use it to silently feed a conversation into a speech synthesizer, which then speaks the words aloud. The glasses could also be used to control music playback in a quiet library or to dictate information at a loud concert.
The technology is both small and low-power, and it raises no privacy concerns because no data ever leaves the user's phone. The glasses are comfortable to wear and are more practical and feasible than other available silent-speech recognition technologies.
The researchers say the system needs only a few minutes of training data to learn a user's speech patterns. Once trained, it sends and receives sound waves toward the user's face, senses mouth movements, and uses deep-learning algorithms to analyze the echo characteristics in real time. The system can currently recognize 31 isolated commands and strings of consecutive digits with an error rate below 10%.
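The actual EchoSpeech model is a deep neural network whose details are not given in the article. Purely as an illustration of the echo-sensing idea, the sketch below computes an "echo profile" by cross-correlating the transmitted chirp with the received signal and then classifies it against per-command templates; all function names are hypothetical, and the nearest-template matcher merely stands in for the real deep-learning model:

```python
import numpy as np

def echo_profile(tx, rx):
    """Cross-correlate the transmitted chirp with the received signal.

    The peaks of the (normalized) correlation encode the echo delays,
    which shift as the wearer's mouth changes shape.
    """
    corr = np.correlate(rx, tx, mode="full")
    return np.abs(corr) / (np.linalg.norm(tx) * np.linalg.norm(rx) + 1e-12)

def train_templates(samples):
    """Average the echo profiles recorded for each command label.

    `samples` maps a command label to a list of echo-profile arrays
    gathered during the few minutes of per-user training.
    """
    return {label: np.mean(profiles, axis=0) for label, profiles in samples.items()}

def classify(profile, templates):
    """Return the command whose template is nearest (Euclidean distance)."""
    return min(templates, key=lambda lbl: np.linalg.norm(templates[lbl] - profile))
```

A usage example: two mouth shapes can be simulated as the same chirp echoed back at different delays, and each is then matched to the template built from its own training sample.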
The current version of the system offers about 10 hours of battery life and communicates wirelessly with the user's smartphone via Bluetooth. The smartphone handles all the data processing and prediction, then translates the results into a set of "action keys" that can play music, interact with smart devices, or activate a voice assistant.
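The article does not say how these "action keys" are implemented on the phone. As a loose sketch of the pattern it describes, a recognized command could be routed through a dispatch table like the one below; the command names and actions here are invented for illustration:

```python
# Hypothetical mapping from recognized silent-speech commands to phone actions.
# The real system's command set and phone APIs are not public.
ACTIONS = {
    "play": lambda: "music playing",
    "pause": lambda: "music paused",
    "assistant": lambda: "voice assistant activated",
}

def dispatch(command: str) -> str:
    """Look up a recognized command and run its action; report unknown commands."""
    action = ACTIONS.get(command)
    return action() if action else f"unknown command: {command}"
```

In practice each entry would call into the phone's media or smart-home APIs rather than return a string, but the lookup-and-invoke structure would be the same.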
Cornell University’s Smart Computer Interfaces for Future Interactions (SciFi) Lab is leveraging a Cornell funding program to explore commercializing the technology.