Creating a Map of Word Meaning in the Brain

Researchers used neural activity to understand how the human brain represents the words it hears


At a glance:

  • Researchers recorded neural activity in the brain to create a map of word meaning.

  • The team was able to predict the meaning of words a person heard in real time during speech.

  • The findings could help scientists develop brain-machine interfaces to improve communication for people with speech-related disorders.

In a first, neuroscientists have created a single-neuron “brain thesaurus” that reflects how the meaning of words is represented in the brain.

Using a novel technology to record the activity from single neurons in the human brain, the team was also able to predict the meaning of words heard in real-time speech.


The research, published July 3 in Nature and led by scientists at Harvard Medical School and Massachusetts General Hospital, opens the door to understanding how humans comprehend language, and provides insights that could be used to help individuals with conditions that affect speech.

“Humans possess an exceptional ability to extract nuanced meaning through language, yet how the human brain processes language at the basic computational level of individual neurons has remained a challenge to understand,” said senior study author Ziv Williams, HMS associate professor of neurosurgery at Mass General.

Building a “brain thesaurus”

Williams and colleagues set out to construct a detailed map of how neurons in the human brain represent word meaning — for example, how our brains represent the concept of animal when we hear the word cat or dog, or how they distinguish between the concept of a dog and a car.

The researchers also wanted to explore how humans can process diverse meanings of words during natural speech and rapidly comprehend these meanings across a wide array of sentences, stories, and narratives, Williams said.

The scientists used a novel technology that allowed them to simultaneously record the activity of up to a hundred neurons in the brain while people listened to sentences and short stories.

Using this approach, they discovered how neurons in the brain represent words with particular meanings. For example, they found that certain neurons became active when people heard action words such as “ran” or “jumped,” and other neurons became active when people heard emotion words such as “happy” or “sad.”

“When looking at all of the neurons together, we could start building a detailed picture of how word meanings are represented in the brain,” Williams said.

Another important part of language is being able to rapidly distinguish the meaning of words within sentences, even when those words sound the same — for example, discerning “sun” from “son” or “see” from “sea.”

The team found that certain neurons in the brain can reliably distinguish between sound-alike words, and those neurons continuously anticipate the most likely meanings of words based on sentence context.

Perhaps most surprisingly, the researchers found that by recording the activity of a relatively small number of neurons, they could predict the meanings of words as they were heard in real time during speech. In other words, the team could use neural activity to determine the general ideas and concepts a person was experiencing as they comprehended speech.
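The study's actual methods are far more sophisticated, but the core idea of decoding a word's semantic category from the firing rates of a small neural population can be illustrated with a toy sketch. Everything below is simulated and hypothetical — the `tuning` profiles, `simulate_trial`, and the nearest-centroid `decode` function are illustrative assumptions, not the paper's approach:

```python
import random

random.seed(0)

CATEGORIES = ["action", "emotion"]
N_NEURONS = 20

# Hypothetical tuning: each semantic category preferentially drives a
# distinct subset of neurons (mirroring the "ran/jumped" vs. "happy/sad"
# selectivity described above).
tuning = {
    "action":  [1.0 if i < 10 else 0.2 for i in range(N_NEURONS)],
    "emotion": [0.2 if i < 10 else 1.0 for i in range(N_NEURONS)],
}

def simulate_trial(category):
    """Noisy firing-rate vector for one heard word of the given category."""
    return [rate + random.gauss(0, 0.1) for rate in tuning[category]]

# "Training": estimate a mean population vector (centroid) per category.
train = {c: [simulate_trial(c) for _ in range(50)] for c in CATEGORIES}
centroids = {
    c: [sum(t[i] for t in trials) / len(trials) for i in range(N_NEURONS)]
    for c, trials in train.items()
}

def decode(population_vector):
    """Assign the category whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((population_vector[i] - centroids[c][i]) ** 2
                   for i in range(N_NEURONS))
    return min(CATEGORIES, key=dist)

# Evaluate on held-out simulated trials.
correct = sum(decode(simulate_trial(c)) == c
              for c in CATEGORIES for _ in range(25))
accuracy = correct / 50
print(f"decoding accuracy: {accuracy:.2f}")
```

Even this crude classifier separates the two categories well, because the simulated tuning differences are large relative to the noise; the real result is striking precisely because such separability holds in recorded human neurons during natural speech.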

“By being able to decode word meaning from the activities of small numbers of brain cells, it may be possible to predict, with a certain degree of granularity, what someone is listening to or thinking,” Williams said.

This ability, Williams added, could eventually allow scientists to develop brain-machine interfaces that could improve communication for people with conditions such as motor paralysis or stroke.

Adapted from a Mass General news release.

Authorship, funding, disclosures

Additional authors on the paper include Mohsen Jamali, Benjamin Grannan, Jing Cai, Arjun Khanna, William Muñoz, Irene Caprara, Angelique Paulk, Sydney Cash, and Evelina Fedorenko.
The research was supported by the National Institutes of Health (R25NS065743; UG3NS123723; P50MH119467; R44MH125700; U01NS121471; R01DC016950; R01DC019653; U01NS121616), the Canadian Institutes of Health Research, a Brain & Behavior Research Foundation Young Investigator Grant, the Foundations of Human Behavior Initiative, the Neurosurgery Research & Education Foundation, an NIH National Research Service Award, and the Tiny Blue Dot Foundation.