A team of neuroscientists and psychologists at the University of California, Berkeley, has created a detailed ‘semantic atlas’ showing which human brain areas respond to hearing different words. The results were published this week in the journal Nature.
“Our goal in this study was to map how the brain represents the meaning (or semantic content) of language,” explained lead author Alexander Huth, from UC Berkeley’s Helen Wills Neuroscience Institute. “Most earlier studies of language in the brain have used isolated words or sentences.”
“We used natural, narrative story stimuli because we wanted to map the full range of semantic concepts in a single study. This made it possible for us to construct a semantic map for each individual, which shows which brain areas respond to words with similar meaning or semantic content.”
“Another aim of this study was to create a semantic atlas by combining data from multiple subjects, showing which brain areas represented similar information across subjects.”
Huth and six other native English-speakers served as subjects for the experiment.
They listened passively to several stories selected from The Moth Radio Hour while brain activity was monitored using functional magnetic resonance imaging (fMRI). The stories were then transcribed and annotated with the time each word was spoken.
Then the scientists used the fMRI data and story transcripts to build computational models that predict brain activity as a function of which words the subject heard. To validate these models, they used them to predict fMRI responses to a held-out story that had not been used for model fitting.
“We found that the models were able to predict responses relatively well throughout several broad regions of the cerebral cortex,” Huth said.
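The modeling approach described above can be sketched in miniature. The sketch below is a hedged illustration, not the paper's actual pipeline: it uses toy random data, a small hypothetical feature dimension, and plain ridge regression to fit one linear model per voxel, then scores each voxel by the correlation between predicted and measured responses on held-out data, which is the general shape of a voxel-wise encoding model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each word (per fMRI timepoint) is represented
# by a semantic feature vector. The paper used 985 features; a small toy
# dimension is used here purely for illustration.
n_timepoints, n_features, n_voxels = 200, 20, 5

X_train = rng.standard_normal((n_timepoints, n_features))  # word features
true_w = rng.standard_normal((n_features, n_voxels))       # simulated voxel weights
Y_train = X_train @ true_w + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression: one linear model per voxel, fit jointly.
alpha = 1.0
w_hat = np.linalg.solve(
    X_train.T @ X_train + alpha * np.eye(n_features),
    X_train.T @ Y_train,
)

# Validation: predict responses to a held-out "story" and score each
# voxel by the correlation between predicted and actual activity.
X_test = rng.standard_normal((100, n_features))
Y_test = X_test @ true_w + 0.1 * rng.standard_normal((100, n_voxels))
Y_pred = X_test @ w_hat

for v in range(n_voxels):
    r = np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
    print(f"voxel {v}: prediction correlation r = {r:.2f}")
```

Voxels whose held-out correlation is reliably above chance are the ones said to be "predicted well"; in the study such voxels covered broad swaths of cortex.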
“Next, we aimed to discover what types of semantic information were represented at each point in cortex. In order to visualize the very high-dimensional semantic models, we used a dimensionality reduction technique called principal components analysis (PCA).”
PCA finds the most important dimensions in a dataset, which allowed the team to reduce the 985-dimensional models to only three dimensions, while preserving as much information as possible.
“We used these three dimensions to visualize roughly which types of semantic information were represented at every location in the cortex, revealing complex semantic maps that tile the brain,” Huth said.
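The PCA step can be illustrated with a short sketch. This is an assumption-laden toy, not the study's code: it uses random stand-in weights, computes PCA via an SVD of the mean-centered weight matrix, and projects each voxel's 985-dimensional semantic weight vector onto the top three principal components, which is the kind of 3-D coordinate that can then be mapped to a color on the cortical surface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the fitted model weights: one semantic
# weight vector per cortical voxel. The 985 matches the paper's model
# dimensionality, but the values here are random, for illustration only.
n_voxels, n_dims = 1000, 985
weights = rng.standard_normal((n_voxels, n_dims))

# PCA via singular value decomposition of the centered weight matrix.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project every voxel onto the top three principal components.
coords3d = centered @ Vt[:3].T
print(coords3d.shape)  # (1000, 3)

# Fraction of total variance retained by those three components.
explained = (S[:3] ** 2).sum() / (S ** 2).sum()
```

On real model weights the leading components capture far more variance than on this random toy, which is what makes a three-dimensional color map informative.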
“Finally, to discover which aspects of these maps are shared across subjects we developed and applied a new computational approach called PrAGMATiC. This approach finds functional areas that are shared across subjects, while also allowing for individual variability in the anatomical location of each area.”
According to Huth and co-authors, detailed maps showing how the brain organizes words by meaning could eventually help give voice to people who cannot speak because of stroke, brain damage, or motor neuron diseases such as ALS.
“While mind-reading technology remains far off on the horizon, charting how language is organized in the brain brings the decoding of inner dialogue a step closer to reality,” they said.
For example, clinicians could track the brain activity of patients who have difficulty communicating and then match that data to semantic language maps to determine what their patients are trying to express.
Another potential application is a decoder that translates what a person says into another language as they speak.
Alexander G. Huth et al. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532, 453-458; doi: 10.1038/nature17637