Researchers at tech giants are working on new ways to better understand people’s minds. But how far can this go?
Have you ever wondered what it would be like if someone could read your thoughts? Several researchers at various institutions are doing just that with AI, each in their own way.
As technology advances day by day, scientists may soon be able to read a person’s mind with precision, decoding what they are thinking and seeing, diagnosing conditions such as schizophrenia and depression, and perhaps even revealing why some people are more susceptible to mental illness.
In fact, with a better understanding of how the brain functions, similar techniques are already being used to diagnose and treat various disorders.
For example, such techniques now help surgeons plan operations on brain tumors while preserving as much healthy tissue as possible.
In addition, psychologists and neurologists have been able to map the relationships between different brain regions and cognitive processes such as memory, language, and vision.
But now, experts are opening more doors for AI to push these techniques further.
Reading brain waves
The latest attempt, from Facebook parent company Meta, revolves largely around hearing.
On August 31, the company’s experts revealed that a newly developed artificial intelligence (AI) can “hear” what people are hearing simply by analyzing brain wave activity.
This research is still in its very early stages. It is intended as a building block for technology that could help people with traumatic brain injuries who are unable to communicate verbally or via keyboard; crucially, the scientists capture brain activity without surgically implanting electrodes in the brain.
“There are conditions, from traumatic brain injury to anoxia [an oxygen deficiency], that basically render people unable to communicate. And one of the pathways that has been identified for these patients over the past few decades is the brain-computer interface,” Jean-Rémi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) lab, told TIME.
King explained that one of the methods scientists have used to enable communication is to place electrodes in motor areas of the patient’s brain. However, such techniques can be highly invasive, so his team is working to adopt other “safer” methods.
“So we wanted to try using non-invasive recordings of brain activity. The aim was to create an AI system that could decode the brain’s responses to stories being told,” King said.
As part of the experiment, the researchers had 169 healthy adults listen to stories and words read aloud while non-invasive devices (sensors placed on the head that record electrical and magnetic activity) monitored their brain activity.
To uncover patterns, the researchers loaded the data into an AI model, asking the algorithm to “hear” what the participants were hearing based on the electrical and magnetic activity in their brains.
While the experiment was a successful starting point, King’s team said it ran into two major challenges: the noisiness of the signals and the difficulty of interpreting them.
“The signals we pick up from brain activity are extremely noisy. The sensors are far away from the brain. There’s a skull, there’s skin, and they can corrupt the signal we’re able to pick up,” King said.
The other challenge, King explained, is understanding how the brain represents language: “Even if you have a very clear signal, without machine learning it is very difficult to say, ‘OK, this brain activity means this word, this phoneme, or this intended action.’”
The goal for the next step, King added, is to learn how to match representations of speech with representations of brain activity and assign both to the AI system.
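Meta has not published implementation details alongside these remarks, but the matching King describes — projecting speech and brain recordings into a shared space and ranking candidate speech segments by similarity — can be sketched roughly as follows. All dimensions and the untrained linear encoders here are illustrative assumptions, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40 speech-feature dims, 60 brain-sensor dims,
# both projected into a 20-dim shared space (all sizes are illustrative).
W_speech = rng.normal(size=(40, 20))
W_brain = rng.normal(size=(60, 20))

def embed(x, W):
    """Project features into the shared space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A batch of 5 speech segments and the brain responses they evoked
# (here a toy linear "brain" stands in for real recordings).
speech = rng.normal(size=(5, 40))
brain = speech @ rng.normal(size=(40, 60))

z_speech = embed(speech, W_speech)
z_brain = embed(brain, W_brain)

# Decoding: for each brain recording, rank the candidate speech
# segments by cosine similarity in the shared space.
scores = z_brain @ z_speech.T      # (5 recordings) x (5 candidates)
predicted = scores.argmax(axis=1)  # best-matching segment per recording
```

In a trained system, the two projections would be learned (typically with a contrastive objective) so that each brain recording scores highest against the speech it actually evoked; with the random weights above, the ranking is arbitrary.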
Meta’s research revolves around reading brain waves, but researchers at Radboud University in the Netherlands are aiming for more visual results.
According to Nature, the leading multidisciplinary scientific journal, experts there are working on “mind-reading” technology that could allow them to reconstruct pictures from a person’s brain activity.
To test the AI technique, the researchers showed volunteers pictures of random faces during functional magnetic resonance imaging (fMRI), a non-invasive brain-imaging technique that measures changes in blood flow to identify brain activity.
The fMRI scans monitored neuronal activity in the visual regions of the brain while the volunteers viewed the faces. That data was then fed into an AI program, which used it to reconstruct the images.
The experimental results show that the fMRI/AI system was able to accurately reproduce most of the original images shown to the volunteers.
The study’s principal investigator, AI researcher and cognitive neuroscientist Thirza Dado, told MailOnline that these “impressive” results show the future potential of fMRI/AI systems to read minds accurately.
“We believe that we have trained our algorithms to accurately portray not just the face you see, but any face you vividly imagine, such as your mother’s face,” Dado explained.
“As we develop this technology, it is fascinating to think about decoding and recreating subjective experiences, perhaps even your dreams,” says Dado. “Such technology could also be incorporated into clinical applications, such as communicating with patients locked in a deep coma.”
Experts say the research will also focus on developing technology to help people who have lost their sight through illness or accident to see again.
“We are already developing brain implant cameras that will stimulate people’s brains to make them see again,” Dado added.
Volunteers had previously been exposed to a variety of faces while having their brains scanned to “train” the AI system.
Importantly, the face images the volunteers saw were not photographs of real people but computer-generated, paint-by-number-style images: a special computer program produces each face from a numerical code that specifies every tiny dot of light and dark.
Each volunteer’s neural responses to these “training” images were recorded with fMRI scans. The AI system then recreated the portraits by translating those neural responses back into the images’ computer codes.
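The published pipeline is more sophisticated, but the core idea described above — fit a regression from fMRI voxel responses to the generator’s image codes, then render a face from the predicted code — can be sketched with synthetic data. The linear “generator”, the dimensions, and the simulated brain responses below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (illustrative): 200 fMRI voxels, 16-dim face code.
n_train, n_voxels, n_latent = 300, 200, 16

# Stand-in "generator": a fixed linear map that turns a code into an
# 8x8 grayscale "face". The real study used a learned face generator.
G = rng.normal(size=(n_latent, 64))

def generate_face(z):
    return (z @ G).reshape(8, 8)

# Training phase: volunteers view generated faces while voxel
# responses are recorded. Simulate a linear brain response plus noise.
Z_train = rng.normal(size=(n_train, n_latent))
B = rng.normal(size=(n_latent, n_voxels))  # unknown brain mapping
X_train = Z_train @ B + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit ridge regression from voxel responses back to face codes.
lam = 1.0
A = X_train.T @ X_train + lam * np.eye(n_voxels)
W = np.linalg.solve(A, X_train.T @ Z_train)  # (n_voxels, n_latent)

# Decoding phase: predict the code for a new brain response,
# then render the reconstructed face.
z_true = rng.normal(size=(1, n_latent))
x_new = z_true @ B
z_hat = x_new @ W
face = generate_face(z_hat[0])
```

Because the simulated responses are nearly linear in the code, the regression recovers it well here; real fMRI data is far noisier, which is why the study needed many training images per volunteer.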