Reading a person’s thoughts no longer seems as fantastic as it did 10–20 years ago. Bionic prostheses controlled by the power of thought have existed for some time, as have brain-computer interfaces. Several years ago, these served as the basis for implanted brain decoders that allow paralyzed people to mentally compose words or even whole sentences. However, these devices have a number of drawbacks and have therefore not become widespread. Now American scientists have proposed a different approach to “brain scanning” that requires no implant in the brain at all: instead, artificial intelligence recognizes thoughts from recorded brain activity.
Why brain-computer interfaces are flawed
A year ago, Dutch scientists developed a brain-computer interface (BCI) that can translate a patient’s brain activity into words; they described it in detail in the journal Nature Communications. Even earlier, a similar brain-computer interface was created by Chinese developers.
These devices recognize words effectively, but they rely on electrodes implanted in the human brain. In addition, the interfaces focus on the part of the brain responsible for the mouth movements a person makes to pronounce a word. That is, for the device to recognize a thought, the person has to try to mouth it.
A brain-computer interface designed to voice thoughts has a number of shortcomings that complicate its use
Other ways of reading thoughts, such as those based on the electroencephalogram (measuring brain activity with electrodes attached to the scalp), proved less effective: they could decipher only individual words and were unable to construct coherent text.
How the mind-reading device works
Researchers at the University of Texas at Austin have developed an interface based on functional magnetic resonance imaging (fMRI). This technique tracks blood flow in the brain and thereby monitors the activity of specific brain areas at high spatial resolution. Earlier we told you that, in one study, scientists used fMRI to track people’s stress levels.
In this case, fMRI monitors the part of the brain responsible for imagined speech. The brain responds to each word in a specific way, so the scientists’ task was to link every word to a particular pattern of brain activity. To do this, the team scanned the brains of three people for 16 hours while they listened to podcasts.
In this way, the team built a set of maps of the brain activity triggered by different words, phrases, and phrase meanings. The authors then trained an artificial intelligence to determine what a person was thinking based on the fMRI data. Unlike earlier systems, this technology does not recognize individual words; it determines the overall meaning of each phrase or sentence.
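To get an intuition for this kind of “mapping,” here is a deliberately tiny Python sketch of the general idea: fit a linear model that predicts a brain-activity pattern from phrase features, then decode by asking which candidate phrase best explains an observed pattern. Everything here (the bag-of-words features, the synthetic “voxel” data, the toy phrases) is an illustrative assumption, not the study’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training phrases" heard during scanning (stand-ins, not real stimuli)
phrases = ["the dragon attacks", "he falls down", "a quiet morning", "she reads aloud"]
vocab = sorted({w for p in phrases for w in p.split()})
n_features, n_voxels = len(vocab), 50  # toy sizes; real models are far larger

def phrase_features(phrase):
    """Stand-in for a semantic embedding: bag-of-words over a tiny vocabulary."""
    v = np.zeros(n_features)
    for w in phrase.split():
        v[vocab.index(w)] += 1.0
    return v

# Ground-truth mapping from features to activity (unknown in reality)
true_W = rng.normal(size=(n_features, n_voxels))
X = np.array([phrase_features(p) for p in phrases])
Y = X @ true_W + 0.01 * rng.normal(size=(len(phrases), n_voxels))  # "scan data"

# Fit the encoding model by ridge regression: features -> predicted activity
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Decode: pick the candidate whose *predicted* pattern best matches what we
# observed -- this is why whole-phrase meaning, not single words, is recovered.
observed = phrase_features("he falls down") @ true_W
best = min(phrases, key=lambda p: np.linalg.norm(phrase_features(p) @ W - observed))
print(best)  # -> "he falls down"
```

The key design point the sketch captures is the direction of the model: it predicts brain activity from language, and decoding then works by comparing candidate phrases against the recorded data.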
To construct text from the fMRI data processed by the AI, the researchers connected the GPT-1 language model, a predecessor of ChatGPT. Gradually they trained the system to recognize words, phrases, and sentences. As the authors report in the journal Nature Neuroscience, even when the AI gets some words wrong, it conveys the gist of the speech a person hears or mentally pronounces well.
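The role of the language model can be sketched as a search loop: the model proposes likely next words, and each candidate continuation is kept or discarded according to how well it fits the recorded brain data. The toy bigram “language model” and the word-overlap scoring function below are stand-ins for GPT-1 and the real encoding-model score, purely to show the shape of the loop.

```python
# Tiny hand-written bigram model: word -> plausible next words (an assumption,
# standing in for GPT-1's next-word predictions)
bigrams = {
    "<s>": ["he", "the"],
    "he": ["falls", "runs"],
    "the": ["dragon"],
    "dragon": ["attacks"],
    "falls": ["down"],
    "runs": ["away"],
}

TARGET = "he falls down"  # stands in for the sentence encoded in the fMRI signal

def brain_match(candidate_words):
    """Stand-in for the encoding-model score: here, simple word overlap with
    the (normally unknown) target."""
    return len(set(candidate_words) & set(TARGET.split()))

def decode(steps=3, beam_width=2):
    """Beam search: the LM proposes continuations, the brain score ranks them."""
    beams = [["<s>"]]
    for _ in range(steps):
        proposals = [b + [w] for b in beams for w in bigrams.get(b[-1], [])]
        if not proposals:
            break
        # Keep the continuations whose predicted brain activity fits best
        proposals.sort(key=lambda c: brain_match(c[1:]), reverse=True)
        beams = proposals[:beam_width]
    return " ".join(beams[0][1:])

print(decode())  # -> "he falls down"
```

This division of labor is what lets the system produce fluent sentences: the language model supplies grammatical candidates, while the fMRI data only has to arbitrate between them.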
In addition, the system has even learned to describe what a person sees. For example, in one experiment, participants watched a silent video in which a dragon knocks a man down; based on brain activity alone, the system described the scene as: “He’s knocking me down.” Notably, the participants were not asked to mentally narrate what they saw. The short video below shows how accurately the system recognizes thoughts.