An AI from Meta can analyze your brainwaves and ‘read’ what you hear

Meta researchers have developed a new artificial intelligence capable of analyzing a person’s brain waves to deduce the words they hear. This type of program could one day be used to help people who are unable to speak communicate.

As the researchers point out in their preprint, decoding language from brain activity is a long-awaited goal in both healthcare and neuroscience. Intracranial devices already exist which, trained on brain responses to basic linguistic tasks, manage to decode interpretable features (e.g., letters, words, spectrograms) efficiently. These devices are, however, invasive, and are generally not suited to natural speech.

Jean-Rémi King and his colleagues at Meta have therefore developed an AI capable of translating magneto- and electroencephalography recordings (both non-invasive techniques) into words. The technology is still in its infancy, but early results are encouraging: for each recording, the AI predicted a list of 10 words, and 73% of the time that list included the correct word; in 44% of cases, the first predicted word was the correct one. The next step could be to try to interpret a person’s thoughts.

Translating brain activity into words

To train their AI, King and his collaborators used public brainwave datasets from 169 volunteers, collected as they listened to recordings of people speaking naturally. These wave data, recorded by magneto- or electroencephalography (M/EEG), were segmented into three-second blocks and fed to the AI along with the corresponding sound files, the objective being for the software to compare the two and identify patterns.
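The preprint frames this comparison as a contrastive objective: brain segments and speech segments are each mapped into a shared embedding space, and the model is trained to place matching pairs closer together than mismatched ones. Below is a minimal sketch of such a loss in PyTorch; the encoder outputs, names, and dimensions are illustrative assumptions, not the authors’ actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(brain_emb, speech_emb, temperature=0.1):
    """CLIP-style (InfoNCE) loss: each brain segment should match its own
    3-second speech segment rather than the others in the batch."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    # Pairwise similarity between every brain segment and every speech segment.
    logits = brain_emb @ speech_emb.T / temperature
    # The correct match for brain segment i is speech segment i.
    targets = torch.arange(len(brain_emb))
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of 8 segment pairs in a 256-dim shared space.
brain_emb = torch.randn(8, 256)   # stand-in for a hypothetical M/EEG encoder's output
speech_emb = torch.randn(8, 256)  # stand-in for a hypothetical speech encoder's output
loss = contrastive_loss(brain_emb, speech_emb)
```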

Of the available data, 10% was set aside for the test phase; in other words, these brain waves had never been seen by the AI before. The program passed the test brilliantly: from the brain waves alone, it was able to infer which words, from a list of 793, each person was listening to at the time.

“The results show that our model can identify, from 3s of MEG signals, the corresponding speech segment with an accuracy of up to 72.5% in the top-10 out of 1594 distinct segments (and 44% in the top-1),” the researchers specify. For EEG recordings, the AI showed lower accuracy: it was able to predict a list of ten words containing the correct word in only 19.1% of cases, out of 2604 distinct segments.
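These top-10 and top-1 figures come from ranking every candidate speech segment by its similarity to a given brain recording and checking where the true segment lands. Here is a small sketch of how such a metric can be computed, under the same illustrative embedding setup as above (not the authors’ evaluation code):

```python
import torch
import torch.nn.functional as F

def top_k_accuracy(brain_emb, speech_emb, k=10):
    """Fraction of brain segments whose true speech segment ranks among
    the k most similar candidates."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)
    sims = brain_emb @ speech_emb.T        # (n_segments, n_candidates)
    topk = sims.topk(k, dim=-1).indices    # k best candidates per segment
    targets = torch.arange(len(brain_emb)).unsqueeze(1)
    return (topk == targets).any(dim=-1).float().mean().item()

# Toy usage with random embeddings: 1594 candidates, as in the MEG test set.
brain_emb = torch.randn(1594, 256)
speech_emb = torch.randn(1594, 256)
print(top_k_accuracy(brain_emb, speech_emb, k=10))  # chance level here, about 10/1594
```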

Meta has no specific business goals for this work to date, but for the team, these findings point to a promising avenue for decoding natural language processing in real time from non-invasive recordings of brain activity.

Prediction capabilities still far from those of the human brain

Some experts remain skeptical of these results, believing the technology is currently far from precise enough for real-world applications. According to them, magnetoencephalography and electroencephalography recordings may never be detailed enough for prediction accuracy to improve much further: the brain is the seat of many processes, any of which can interfere at any moment with the brain waves associated with listening.

King nevertheless remains confident, even if he acknowledges that, as it stands, this AI is of limited interest: determining which words a person hears at a given moment is of little practical use. On the other hand, the technology could lead to a system capable of interpreting a person’s thoughts, and thus potentially allow people who are unable to speak to communicate again, a particularly ambitious objective given the complexity of the task.

Meta recently announced a long-term research partnership with CEA’s NeuroSpin brain neuroimaging center and INRIA to study the human brain, and in particular how it processes language. The goal is to collect the data needed to develop an AI that can process speech and text as efficiently as humans.

Several studies have already shown that the brain is systematically organized according to a hierarchy surprisingly similar to that of AI language models. Certain regions of the brain, however, anticipate words, and even ideas, relatively far in advance, whereas most current language models are trained to predict only the word that follows. “Unlocking this long-term forecasting capability could help improve the language models of modern AI,” the company notes on its blog.

Source: A. Défossez et al., arXiv
