The latest AI technology non-invasively decodes 'brain language' with an accuracy of 73%
According to reports, artificial intelligence has taken non-invasive brain decoding a step further. Although the technology cannot yet let people who are unable to communicate talk as others do, it allows scientists to decode the speech they hear with considerable accuracy.
The AI can decode words and sentences from participants' brain activity with striking, though far from perfect, accuracy. Using only a few seconds of brain activity data, it infers what a person has heard; in a preliminary study, the system picked the right answer from a set of choices an average of 73 percent of the time.
Giovanni Di Liberto, a computer scientist at Trinity College in Dublin, Ireland, who was not involved in the research, said that artificial intelligence has exceeded the level of performance that many people thought was possible.
On August 25, media reported that Facebook's parent company Meta has developed a new artificial intelligence technology that could eventually help the tens of thousands of people worldwide who cannot communicate by speech, typing, or gesture, including those in a minimally conscious state, with locked-in syndrome, or in a "vegetative state," now more commonly called unresponsive wakefulness syndrome.
Most current technologies for helping people with such communication disorders are invasive to some degree, requiring high-risk brain surgery to implant electrodes. Jean-Remi King, an AI researcher and neuroscientist at Meta, said the newly developed AI technology could provide a feasible, non-invasive way to help patients with communication disorders.
King and his colleagues developed a computational tool to detect words and sentences in 56,000 hours of speech recordings spanning 53 languages. The tool, known as a language model, learns to recognize specific features of language at a fine-grained level (e.g., letters or syllables) and at a broader level (e.g., words or sentences).
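As a rough, hypothetical illustration of what modeling language at several granularities can mean, the sketch below represents one sentence both as a sequence of characters and as a sequence of words, each mapped to learned embedding vectors. This is not Meta's model; the vocabulary and embedding size here are illustrative assumptions.

```python
# Hypothetical sketch (not Meta's model): one sentence represented at two
# granularities: characters (fine-grained) and words (broader), each
# mapped to learned embedding vectors.
import torch
import torch.nn as nn

sentence = "the old man and the sea"

# Fine-grained view: one token per character (spaces included).
char_vocab = {c: i for i, c in enumerate(sorted(set(sentence)))}
char_ids = torch.tensor([char_vocab[c] for c in sentence])

# Broader view: one token per word.
word_vocab = {w: i for i, w in enumerate(sorted(set(sentence.split())))}
word_ids = torch.tensor([word_vocab[w] for w in sentence.split()])

embed_dim = 16  # illustrative embedding size
char_embed = nn.Embedding(len(char_vocab), embed_dim)
word_embed = nn.Embedding(len(word_vocab), embed_dim)

char_features = char_embed(char_ids)  # shape: (num_characters, embed_dim)
word_features = word_embed(word_ids)  # shape: (num_words, embed_dim)
print(char_features.shape, word_features.shape)
```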
The research team applied an AI system built on this language model to databases from four institutions containing the brain activity of 169 volunteers. In these databases, participants listened to stories and sentences, such as Ernest Hemingway's "The Old Man and the Sea" and Lewis Carroll's "Alice's Adventures in Wonderland," while staff recorded their brains with a magnetoencephalograph (MEG) or an electroencephalograph (EEG), devices that measure the magnetic or electrical components of brain signals.
The team then tried to decode what each participant heard from three seconds of brain activity data, aided by a computational method that accounts for physical differences between individual brains. The AI system is instructed to match speech from the story recordings against the brain activity patterns it computes for what people heard, and then to predict, from more than 1,000 possibilities, what the participant was most likely hearing during that short window.
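The matching step described above can be pictured as a similarity search: a three-second segment of brain activity is mapped into the same vector space as the candidate speech segments, and the candidates are ranked by how closely they match. The sketch below is a simplified, assumed version of that idea using cosine similarity over random placeholder embeddings; it is not Meta's code, and names such as `brain_embedding` and `candidate_embeddings` are illustrative.

```python
# Hypothetical sketch of the ranking idea: score many candidate speech
# segments against one brain-activity embedding and keep the closest matches.
# Embeddings here are random placeholders, not learned representations.
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 128
num_candidates = 1000  # "more than 1,000 possibilities" in the study

# In the actual system these would come from learned encoders for
# MEG/EEG signals and for speech.
brain_embedding = rng.normal(size=embed_dim)
candidate_embeddings = rng.normal(size=(num_candidates, embed_dim))

def cosine_similarity(vector, matrix):
    """Cosine similarity between one vector and each row of a matrix."""
    return (matrix @ vector) / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(vector)
    )

scores = cosine_similarity(brain_embedding, candidate_embeddings)
top10 = np.argsort(scores)[::-1][:10]  # indices of the 10 best-matching segments
print("Top-10 candidate indices:", top10)
```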
The researchers found that with magnetoencephalography the correct answer appeared among the system's top 10 candidates 73% of the time, whereas with electroencephalography accuracy dropped below 30%, so the magnetoencephalograph performed far better. Di Liberto said: "But we are not optimistic about the practical application of this system. What can it be used for? Magnetoencephalography is a bulky and expensive machine. Applying this technology in clinics will require technological innovation and improvement to make the device less expensive and easier to use."
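For clarity, the 73% figure is a top-10 accuracy: a trial counts as correct when the true speech segment appears among the system's ten highest-ranked candidates. The sketch below shows one minimal way such a metric can be computed, assuming a per-trial score matrix; the data are random placeholders, not results from the study.

```python
# Minimal sketch of top-10 accuracy: a trial is correct if the index of the
# true candidate appears among the 10 highest-scoring candidates.
import numpy as np

def top_k_accuracy(scores, true_indices, k=10):
    """scores: (n_trials, n_candidates) similarity matrix (assumed layout).
    true_indices: (n_trials,) index of the correct candidate for each trial."""
    topk = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    hits = [true_indices[i] in topk[i] for i in range(len(true_indices))]
    return float(np.mean(hits))

# Toy example with random scores (placeholder data only).
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 1000))
true_indices = rng.integers(0, 1000, size=200)
print(f"top-10 accuracy: {top_k_accuracy(scores, true_indices):.2%}")
```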
Linguist Jonathan Brennan of the University of Michigan, Ann Arbor, said: "In this latest study, it is important to understand what 'decoding' really means. The word is often used to describe deciphering information directly from its source; in this case, it refers to deciphering language from brain activity. The AI can achieve this only because the system is given a limited range of possible answers, which greatly improves its accuracy. If we want to extend the AI system to practical applications, that will be hard to achieve, because real language use is unlimited."
More importantly, the AI decodes information from participants who are passively listening to audio, which is not directly relevant to non-verbal patients. To become a meaningful communication tool, it would need to decode from brain activity the information a patient wants to express, such as hunger, discomfort, or a simple "yes" or "no."
In fact, this AI technology decodes speech perception, not speech production. Although speech production is scientists' ultimate goal, for now the underlying science and technology still need considerable improvement.