The Isolation of Social Media
Social media should promote conversation and exchange, yet increasingly it doesn't
Though the United States accounts for only about 4 percent of the global population, it leads the world in COVID-19 cases and deaths, and among high-income countries is behind only Russia in vaccine opposition. Studies find that anywhere from 8 to 22 percent of the population remains unwilling to get jabbed, despite the reams of data and anecdotal evidence showing the shots are overwhelmingly safe and effective.
Early in the pandemic, this hesitancy could have been partly accounted for by the fact that “denial and superstition are typical human responses to plague,” according to Nicholas Christakis, MD ’89, director of the Human Nature Lab at Yale University and co-director of the Yale Institute for Network Science. But a larger effect comes from the ways in which behaviors, attitudes, and emotions spread among members of a group, moving as fast as, and often as virulently as, any biological pathogen. A 2021 study by the Center for Countering Digital Hate, an international nonprofit that works to stop online hate and misinformation, showed that two-thirds of content opposing vaccines and vaccination shared on Facebook and Twitter originated from just 12 people, which is in keeping with research Christakis has done on how information advances through time and space.
“A very deep and fundamental principle of human social networks is that they magnify whatever they are seeded with,” Christakis says. “They don’t give rise to things. But once you put something into the network, the network will make more of it. If you put Nazism into the network you get more Nazis; if you put love into the network you get more love. If you put antivaccine sentiment into the network, what’s going to happen?”
All together now
Social scientists first postulated that relationships matter to an individual’s health in the late 1800s, but it wasn’t until 2009 that the all-encompassing effects of relationships entered the public consciousness, as a result of Christakis’s book Connected, co-written with James Fowler, a political scientist at the University of California San Diego. In it, the authors show how people you don’t even know influence nearly every aspect of your life, from behaviors like smoking, drinking, voting, cooperation, and divorce, to conditions like obesity, to attitudes like happiness or vaccine acceptance. “If your friends are obese, your risk of obesity is higher,” Christakis explains in a 2010 TED Talk. “If your friend’s friends are obese, your risk of obesity is 25 percent higher.” It’s only when you get to your friend’s friend’s friend’s friends, he continues, “that there’s no longer a relationship between that person’s body size and your own body size.”
This association could be due to homophily, in which birds of a feather flock together, so to speak, or to confounding, in which you and your friend’s friends might share a common exposure to, say, a new pizza place down the street. A key effect, however, comes from induction, in which weight gain becomes a norm—it’s simply more acceptable within the group. “We found that if your friend becomes obese it increases your risk of obesity by about 57 percent in the same given time period,” Christakis says. Recognizing the possible ramifications of these findings, Christakis cautions that it would be a misuse of them to justify prejudice against people with larger body sizes.
The more Christakis and Fowler studied social networks, the more they came to see them as living things independent of the individuals who comprise them. “This network that’s changing across time has a memory,” Christakis says in the TED Talk. “It moves; things flow within it. It has a kind of consistency; people can die but it doesn’t die. … It has a kind of resilience that allows it to persist across time.” In the 1960s the Harvard social psychologist Stanley Milgram showed that we are each separated by an average of six connections from everyone else in the world. To that proposition, which inspired the 1990 play Six Degrees of Separation and the 1993 film of the same name, Christakis and Fowler added the finding, since replicated many times over, that we each have three degrees of influence—on our friends, their friends, and their friends’ friends.
“Human brains are designed to function in concert with other human brains,” says Ian Corbin, an HMS research fellow in neurology at Brigham and Women’s Hospital, co-director of the hospital’s Human Network Initiative (HNI), an interdisciplinary research center, and senior fellow at Capita, a think tank. “That is our optimal form of cognition, and if you pull us out of social contexts of intersubjective feedback loops, our brains have to work harder.”
“Behaviors require a complicated mix of dense ties that are designed to reinforce shared knowledge and practices, but also loose ties that allow access to dissimilar knowledge.”
Amar Dhand, MD ’08, an HMS associate professor of neurology and Corbin’s co-director at HNI, explains how this came about. For millions of years, when population numbers were low, hominid brains grew steadily larger, until around the Neolithic, the final period of the Stone Age, when complex societies began to emerge. “The Neolithic human brain shrank compared to its evolutionary ancestors, while perhaps becoming more sophisticated internally to maintain cognitive capacity,” Dhand says, adding that the metabolism and energy that might have been required by a larger brain could then be reallocated to endurance and other adaptive functions. “Neolithic humans had to be hyperaware of threats because they were in very small groups or maybe alone in the wild most of the time, which would create an incredible stress response,” he says. “The fact that you could now be interdependent and rely on friends to be watching for threats too would reduce your cortisol level and contribute to an increased life span for the next step on the evolutionary tree.”
Social networking also frees up energy previously devoted to cognition. “If you’re a trusted co-perceiver of mine,” Corbin says, “then my brain has to do less work than if I have to arrive at solutions by myself.” And it allows for behaviors that can be understood only by studying the collective, Christakis points out, as when a hive of bees finds a new nesting site or a school of fish evades a predator. Examples like this, he says, “require a complicated mix of dense ties that are designed to reinforce shared knowledge and practices, but also loose ties that allow access to dissimilar knowledge. The ability to exploit your environment is enhanced by your close ties. But what happens when the well goes dry and no one among us knows where new water is to be found? There has to be a trade-off between intensity and novelty, and networks have optimized that.”
“All reality is social reality,” writes Jay Van Bavel, an associate professor of psychology and neural science at New York University, in his book The Power of Us, coauthored with Dominic J. Packer, a professor of psychology at Lehigh University. But how do individuals come to identify with the thinking of a particular group?
“Children develop their understanding of the world in a feedback loop with their caregivers,” Corbin says. “Staying embedded with people who share your assumptions is the easiest and most comfortable thing. Sometimes this begins to change when you start to see the world and have other experiences. You might look around and be like, ‘I think my parents and their community are missing some stuff. My friends at school seem to get it.’ That’s when you may break off and go into other groups.”
Not only do your circumstances at birth and your foundational friendships affect what you believe, but thanks to magnetic resonance imaging, scientists now know that your brain structure may play a role. While correlation doesn’t necessarily equal causation, we know that environment can help shape brain structure.
A 2013 book, Predisposed, which investigates how people develop their political predispositions, discusses a Current Biology paper by scientists at University College London. Using structural MRI to examine the brains of young adult participants, the researchers found that the volume of the amygdala, associated with the fight-or-flight response and encoding emotional memories, tends to be greater in people with conservative views, while the gray matter volume in the anterior cingulate cortex, associated with empathy-related responses and resolving emotional conflict, is greater in those whose views are more liberal.
Whatever the answer to the chicken-and-egg question may be, social media’s exploitation of these opposing biological responses underlies, at least in part, the political polarization in the United States and other parts of the world. For many people, that polarization has become extreme enough to warrant the label “cultlike,” Corbin maintains, with extreme beliefs existing on both ends of the political spectrum.
“What used to be subcultures are now in constant confrontation with each other because of cable news and the internet,” Corbin says. “If I’m being bombarded by groups that think my worldview is totally wrong, that’s threatening to me. So we circle the wagons. Why on God’s green earth should a vaccine be a political issue? It only became one because each group feels a heightened threat level.”
Social media’s exploitation of opposing biological responses contributes to the political polarization in the U.S. and elsewhere.
That these conflicts have increasingly played out online is “different quantitatively but not qualitatively” from real-life interactions, says Christakis. “Our desire for social connection and our susceptibility to its influence are not changed by new technologies,” he notes, mentioning advances from the printing press to the telephone to, of course, social media. “It’s just amplified.”
Van Bavel agrees. “Our brains’ facility at cooperating and coordinating was very successful for outcompeting other groups and species,” he says. “But it also can lead to massive intergroup conflicts.” He is quick to point out that much of the current outrage we see online is intentionally manufactured, in part to increase group cohesion. This is done in many ways, including the use of what Van Bavel calls moral-emotional language—words like hate, shame, ruin, blame, attack, and wrong.
Such words and tactics increase defensiveness among those receiving them, which can then lead those who first delivered the emotion-laden language to “double down,” touching off a vicious cycle. “Making the people in your group feel that they have been attacked creates the perception that you are all under threat … and helps to generate a shared sense of identity,” Van Bavel writes in The Power of Us. That shared identity, he says, “can affect all manner of psychological processes, including attention, memory, empathy, schadenfreude, and perception.” He cites plenty of research showing he means perception literally as well as figuratively, documenting many instances of groupthink affecting what people see, hear, taste, and smell. One example is a Princeton alumnus who was convinced that the fouls his team committed during a particularly rough game had been edited out of the film; another is a study that had students sniffing a dirty T-shirt: when the shirt bore a rival school’s logo, students reported more disgust than for T-shirts from their own school.
A proper perspective
But the research isn’t all bad news. While changing minds isn’t easy, there are ways to counteract the chauvinistic tendencies engendered by group dynamics and also to use social networks to promote positive discourse and behavior. For example, as Van Bavel writes, although “most of the information people get about politics reinforces the idea of unbridgeable divides,” when attitudes are objectively examined, it turns out the left and right are closer than one might think on controversial issues like health care, immigration, and gun control.
“Human brains are designed to function in concert with other human brains. That is our optimal form of cognition.”
How information is presented matters. For instance, though election maps are starkly delineated in red and blue, a more accurate depiction would include shades of purple. When one study showed people more nuanced maps, the participants, Van Bavel writes, “stereotyped their political out-groups less and saw the United States as less divided.”
Once misinformation is in circulation, fact-checking can be helpful, but whether it is effective depends largely on how it’s presented and by whom. A 2019 study published in Communication Research by communications studies scholars Riva Tukachinsky of Chapman University and Nathan Walter of Northwestern University neatly summarizes the parameters necessary to make an impression: “Corrective messages were found to be more successful when they are coherent, consistent with the audience’s worldview, and delivered by the source of the misinformation itself,” the authors write. “Corrections are less effective if the misinformation was attributed to a credible source, the misinformation has been repeated multiple times prior to correction, or when there was a time lag between the delivery of misinformation and the correction.”
In other words, if a member of your in-group tells you something, it’s more likely to take hold. “One study told people that the head of the National Institutes of Health was a scientist and also a Christian,” Van Bavel says. “They found the group hearing that was more likely to get vaccinated.”
But this doesn’t always work, as former president Donald Trump discovered in December when he was booed for admitting he’d been boosted. That’s because, Corbin says, “when you’re confronted with a dissonant fact, either your worldview has to change or your perception of the fact has to change. We crunch numbers like that all the time without even realizing it, but when the fact challenges a nonnegotiable part of your worldview, it has to be wrong, even when you’re seeing something with your own eyes.”
On the upside, people can be accurate when they choose to be. Van Bavel cites one study in which participants were asked whether they would share controversial headlines, and many said yes, largely because they were motivated by a goal such as needling the other side, amusing their friends, or getting online attention. When the same group was asked to consider accuracy before deciding, fewer said they would share the posts.
Incentivizing evidence-based reasoning also works. While science, journalism, law, and some other professions reward this kind of thinking, social media doesn’t—but with some effort, it could be made to. “People care a lot about social rewards,” Van Bavel says. “If they could gain status by being accurate, they’d be more likely to share based on that.” In research Van Bavel conducted with two colleagues, “simply offering participants a dollar for forming accurate beliefs was sufficient to reduce their partisan biases.”
One reason people identify so strongly with online groups, sometimes to the exclusion of their own families, is declining trust. Since Gallup’s Confidence in Institutions surveys began in 1973, trust in organized religion, government, public schools, and journalism has decreased significantly, with “the medical system” being one of the hardest-hit categories.
“People care a lot about social rewards. If they could gain status by being accurate, they’d be more likely to share based on that.”
In 1975, 80 percent of respondents said they put a “great deal” or “quite a lot” of trust in medicine; last year only 44 percent did, a slight uptick from a low of 36 percent just before the pandemic. “If you have a trusted system of medical experts you don’t have to go on the internet and find a bunch of wackos,” says Corbin. “A lot of the mistrust is warranted, so people go scrambling for other things to trust”—in some cases settling on those who provide what Van Bavel calls symbols of trustworthiness without the integrity to back them up, as often happens online.
Corbin says he has no magic bullet for counteracting this massive crisis of authority systemically, but he does think that individuals, particularly physicians, can make a difference. “Trust is built on a pretty intimate level,” he says, “and a lot of it is just treating people with respect and acknowledging up front that a lot of the things they think are happening actually are. With the vaccine that might mean you start by saying, ‘Look, I realize that health care has turned into a massive moneymaking operation for those at the top, but this intervention can really help you and the people you love.’ If you try to deny it and paint the health care system as a beautiful savior, people will think you’re not taking them seriously and seeing what they see.”
Telling patients that a large percentage of their neighbors are vaccinated, for example, can help to depoliticize the issue because, Van Bavel says, “people trust their personal doctor a lot, even though they may be on a weird corner of the internet where they’re hearing that nobody gets the vaccine. Accurate numbers can nudge people toward the right behaviors.”
So can supportive social networks. Dhand, Corbin, and their colleagues at HNI have been trying to change the model of care for patients recovering from a stroke by enlisting the influence of family and friends.
“In Western medicine disease is considered an individual biologic entity,” says Dhand. “But it’s well documented in animal and human models that interaction with other members of your species increases recovery and all kinds of biomarkers for health. Our idea is to take the focus off a patient’s willpower and say, ‘OK, Lucy, Matt, and Frank, you guys really care about your mom. Can you take her blood pressure daily? Eric, you’re far away but can you find her a gym? Sally, you’re the neighbor, we think you could get her to eat a healthier diet by bringing over a nonsalty meal once a week.’” This treatment model requires a fair amount of buy-in, Dhand concedes, but the rise of virtual meeting places has made the job easier, and early results have been encouraging.
“I’m often tempted to think bigger and to ask how we fix the system,” says Corbin. “That’s an important question. But for most of us, most of the time, the kind of power and leverage we have to do good work is on a local scale. As that adds up, it could eventually lead to big changes we can’t even foresee today.”
Elizabeth Gehrman is a Boston-based writer.
Images: Traci Daberko (top, illustration); Evan Mann (Christakis); John Soares (Dhand)