Scientists Use “The Moth Radio Hour” to Create Brain Atlas
Using functional MRI, scientists created a visual dictionary showing how areas of the brain process language
The most popular model of how the brain works says that speech and language are processed in specialized sections of the left hemisphere, like Broca’s Area, Wernicke’s Area and the Angular Gyrus. And while those spots are critical to producing speech, new research shows that understanding speech takes place all over the brain, and single words are often processed in multiple parts of the brain, writes Benedict Carey for The New York Times.
Using a functional MRI scanner, researchers Jack Gallant and Alexander Huth of the University of California, Berkeley, recorded blood flow in the brains of seven test subjects as they listened to two hours of "The Moth Radio Hour," a podcast of sometimes funny and sometimes emotional autobiographical stories told by regular people.
The study, published this week in Nature, describes changes in blood flow as the subjects processed the podcast. The researchers then compared that data to a transcription of the radio show, which allowed them to pinpoint where in the cerebral cortex the meaning of each word is encoded. Combining this information, the team created a “brain dictionary” showing where each word, and the concept behind it, is processed.
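The analysis behind this kind of mapping is often called a voxel-wise encoding model: each word in the transcript is turned into a set of numerical semantic features, and a regularized regression then learns, for every small patch of cortex, how strongly each feature predicts that patch’s blood-flow response. The sketch below illustrates the idea in Python; the array sizes, the random stand-in data, and the ridge penalty are all hypothetical placeholders, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in sizes (hypothetical, chosen small enough to run quickly):
# n_trs   fMRI time points recorded while subjects listened,
# n_feats semantic feature dimensions describing each word,
# n_vox   cortical voxels whose blood-flow signal we model.
rng = np.random.default_rng(0)
n_trs, n_feats, n_vox = 2000, 300, 500

# X: for each scan, the semantic features of the words being heard
# (in a real pipeline these would come from the show's transcript,
# lagged to account for the slow blood-flow response).
X = rng.standard_normal((n_trs, n_feats))

# Y: the measured response of every voxel at every scan.
Y = rng.standard_normal((n_trs, n_vox))

# Fit one regularized linear model per voxel; scikit-learn's Ridge
# handles all voxels at once when Y is two-dimensional.
model = Ridge(alpha=10.0).fit(X, Y)

# model.coef_ has shape (n_vox, n_feats): row i is voxel i's
# "semantic tuning", that is, how strongly each word feature
# drives it. Grouping voxels with similar rows yields the kind of
# cortex-wide semantic map the study visualizes.
tuning = model.coef_
print(tuning.shape)  # (500, 300)
```

The ridge penalty is a common choice here because it keeps the thousands of per-voxel fits stable when the semantic features are highly correlated with one another.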
It turns out that words aren’t just processed in the language centers—they light up areas all over the cortex.
A word like “love” can stimulate an area of the brain associated with strong emotion. It can also simultaneously activate networks across the cortex associated with sexuality, parents, or pets. “Murder,” it turns out, sparks activity in lots of areas.
“Consider the case of just the word ‘dog,’” Gallant tells Carey. “Hearing that is going to make you think about how a dog looks, how it smells, how the fur feels, the dog you had as a kid, a dog that bit you on your paper route. It’s going to activate the entire network for ‘dog.’”
The researchers' "semantic atlas" of the brain shows exactly where each word triggers activity, and it is available online through a 3-D brain viewer. "To be able to map out semantic representations at this level of detail is a stunning accomplishment," Kenneth Whang, a program director in the National Science Foundation's Information and Intelligent Systems division, says in a press release.
Among the seven individuals studied, the brain processed specific words and emotions in similar areas. This has implications for “mind reading” applications, such as developing ways for people with motor neuron diseases, who are otherwise unable to communicate, to be understood. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” Huth tells Ian Sample at The Guardian.
But we’re not there just yet. Though the map was fairly consistent from person to person, there were still discrepancies, and the number of people studied was small. Gallant notes in the press release: “We will need to conduct further studies across a larger, more diverse sample of people before we will be able to map these individual differences in detail.”