Researchers have created a mind-reading algorithm using fMRI scans. They were able to decode what subjects were hearing and thinking without even touching them.
Previous mind-reading techniques relied on implanting electrodes deep into people's brains. The new method, described in a paper posted to the preprint database bioRxiv, instead relies on a non-invasive brain-scanning technique called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygenated blood through the brain, and because active brain cells need more energy and oxygen, this information provides an indirect measure of brain activity.
An algorithm that reads thoughts non-invasively
By its nature, this scanning method cannot capture brain activity in real time, because the electrical signals released by brain cells move much faster than blood moves through the brain. Remarkably, the study authors found they could still use this imperfect measurement to decode the semantic meaning of people's thoughts, even though it could not produce word-for-word translations, Live Science reports.
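A toy simulation can illustrate why fMRI blurs fast neural signals. The sketch below is not the study's code; it simply convolves a few brief neural events with a slow, gamma-shaped hemodynamic response function (a common simplified model) to show that the measured blood-oxygen signal peaks seconds later and merges the events into one slow wave.

```python
import numpy as np

def hrf(t, peak=5.0):
    """Toy hemodynamic response function: a gamma-like bump peaking ~5 s."""
    return (t ** 2) * np.exp(-t / (peak / 2))

dt = 0.1                        # time step in seconds
t = np.arange(0, 20, dt)        # 20-second window
neural = np.zeros_like(t)
neural[[10, 12, 15]] = 1.0      # three brief neural events, ~1-1.5 s in

# The measured BOLD signal is the neural activity smeared by the slow HRF.
bold = np.convolve(neural, hrf(t), mode="full")[: t.size] * dt

# The blood-flow signal peaks several seconds after the neural events,
# and the three distinct events are no longer separable.
peak_time = t[np.argmax(bold)]
```

Because the response unfolds over seconds while neural firing changes in milliseconds, the scanner sees only a smoothed, delayed summary of brain activity, which is exactly the limitation the study had to work around.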
“If you had asked any cognitive neuroscientist in the world 20 years ago whether a mind-reading algorithm was feasible, they would have laughed you out of the room,” says Alexander Huth, a neuroscientist at the University of Texas at Austin in the US and lead author of the study.
The “decoder” of thoughts
For the new study, which has not yet been peer-reviewed, the team scanned the brains of a woman and two men in their 20s and 30s. Each participant listened to 16 hours of different podcasts and radio shows over multiple sessions inside a scanner. The team then fed these scans to a computer algorithm they called a “decoder,” which compared patterns in the audio recordings to patterns in brain activity.
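The idea of comparing stimulus patterns to brain-activity patterns can be sketched with a hypothetical toy model (the study's actual decoder is far more sophisticated): fit a linear map from stimulus features to simulated voxel activity, then identify a new recording by checking which candidate stimulus best predicts it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_train = 50, 10, 200

# Simulated "brain": each voxel responds to stimulus features via weights W.
W = rng.normal(size=(n_voxels, n_features))
X_train = rng.normal(size=(n_train, n_features))              # training stimuli
Y_train = X_train @ W.T + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit an encoding model (stimulus features -> voxel activity) by least squares.
W_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# A new brain recording produced by an unknown stimulus.
true_story = rng.normal(size=n_features)
brain = true_story @ W_hat + 0.1 * rng.normal(size=n_voxels)

# Score candidate stimuli by how well their predicted activity matches
# the recording; the best-matching candidate is the decoded story.
candidates = [true_story] + [rng.normal(size=n_features) for _ in range(9)]
scores = [np.corrcoef(c @ W_hat, brain)[0, 1] for c in candidates]
best = int(np.argmax(scores))   # index 0 holds the true stimulus
```

This captures the core logic described above: the model never reads words directly from the scan; it compares how well different candidate contents explain the observed activity patterns.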
The mind-reading algorithm was then able to take an fMRI recording and generate a story from it, and that story matched the content of the podcast or radio show “pretty well,” Huth said.
What will this algorithm be used for?
In other words, the decoder could infer which story each participant had heard based on their brain activity. The algorithm did make some mistakes, such as confusing characters’ pronouns and mixing up the first and third person. “It knows pretty much exactly what’s going on, but not who’s doing things,” Huth said.
In additional tests, the algorithm could fairly accurately describe the plot of a silent film that participants watched in the scanner. It could even reconstruct a story that participants imagined telling in their heads. In the long term, the research team aims to develop this technology into brain-computer interfaces for people who cannot speak or type.