Social Media and Political Violence
Issue 179: How social media fuels negativity, division, and hostility—and what we can all do about it
This week social media platforms are being blamed for inflaming divisions before—and after—Charlie Kirk’s murder.
After the conservative activist Charlie Kirk was fatally shot at a university rally in Utah last week, Spencer Cox, the state’s Republican governor, called social media companies a “cancer.”
Senator Chris Coons, a Democrat of Delaware, blamed the internet for “driving extremism in our country.”
President Trump, who helped found the Truth Social platform, also pointed fingers at social media on Monday and said the accused gunman had become “radicalized on the internet.”
The aftermath of Mr. Kirk’s killing reveals how inflammatory and hateful online content has become. Countless people have been fired for their comments on social media, misinformation and conspiracy theories are running rampant, and the companies themselves are no longer even promising to solve it. Mentions of “civil war” have surged on X and Elon Musk seems committed to fanning the flames of conflict himself.
For years, social media companies pledged to cleanse their sites of toxic content. But lately several have gutted their fact-checking and internet safety operations. And Tyler Robinson, whom the authorities have identified as the gunman, has been described by his friends as “terminally online.”
We have been studying this topic for the past decade, trying to understand the role that social media plays in fomenting conflict, polarization, and hatred. To help make sense of the current moment, we decided to share a summary of our work based on a podcast that Jay did with Derek Thompson about the 4 Dark Laws of the Internet—or why the internet feels so broken. He distilled our research into a "devil's playbook" of online engagement.
I have summarized our conversation here, but encourage you to listen to the full episode on his podcast or read Derek’s summary (here) or Twitter/X thread. He also wrote a recent Substack post weaving together our research with several other lines of evidence to explain political violence.
“Jay’s lab has published papers on how the internet became a fun-house mirror of extreme political opinions, why the news media has a strong negativity bias, why certain emotions go viral online, why tribalism is inflamed by online activity, and how the internet can make us seem like the worst versions of ourselves. At the same time, he emphasizes that many of the group psychology dynamics that can make social media seem like a dumpster fire are also core to what makes humankind such a special and ingenious species. We discuss the four dark laws of online engagement and the basics of group psychology.”
Each day, we scroll through roughly 300 feet of social media content—that is equivalent to the height of the Statue of Liberty! This constant engagement is not just habitual; it fundamentally alters our identities and realities. Social media holds a fun-house mirror up to society, distorting our perceptions of reality, and the incentives and engagement structures of social media put us in conflict with one another.
Our ability to join and identify with groups—the foundation of group psychology—is among the most distinctive human traits. Indeed, we are the only primate that will engage in prosocial behavior with in-group members even when they are anonymous. But too often, social media exploits this aspect of human nature to keep us hooked in the attention economy through engagement-based algorithms and design features. Too often, this fosters a cynical, us-vs-them view of the world.
1. Negativity Drives Engagement
It’s no secret that our brains are wired to respond to negative information more strongly than positive cues. This evolutionary trait, meant to protect us from danger, now underpins the very fabric of our digital interactions. Our research (led by Claire Robertson) finds that negative words in news headlines significantly increase click rates. We analyzed 105,000 different variations of news stories generating 5.7 million clicks and found that "for a headline of average length, each additional negative word increased the click-through rate by 2.3%."
This negativity bias is not just a reflection of media practices but a fundamental aspect of human psychology. Bad is stronger than good. Online, this predisposition is amplified by algorithms, shaping a landscape where the bleakest content often beckons the loudest. This creates a demand for even more negative content.
More recently, we have found that this is especially powerful when it comes to negative stories about out-groups. In a new paper, we find that headlines containing both negativity and identity language increased the click rate by 9.9%—which is much stronger than the effects of negativity alone. This was based on an analysis of 43,932 headlines tested in randomized controlled trials that generated 164,053,523 impressions. Which brings us to our next line of research…
2. Out-Group Animosity Captures Clicks
Sharing news about our political enemies (those perceived as different or oppositional) dramatically increases the odds of content being shared. In a paper led by Steve Rathje, we analyzed posts from news media accounts and US congressional members (n = 2,730,215). We found that "Posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%."
We recently found causal evidence in support of out-group animosity: Negativity about out-groups boosted engagement by 14.1%, compared to just 1.6% for negativity about in-groups. In short, bad news about out-groups seems to drive engagement online.
This phenomenon taps into our primal group dynamics—identifying and rallying against a common enemy solidifies in-group bonds and is a powerful motivator for online engagement. Social media platforms, driven by the desire to maximize user interaction, often promote divisiveness by prioritizing content that pits us against one another.
3. Extremism Commands Attention
In the vast echo chambers of the internet, moderate voices often get drowned out by more extreme perspectives. On Twitter, 97% of political posts come from the 10% most active users, and 90% of political opinions are represented by fewer than 3% of tweets. Because these users are disproportionately extreme, the moderate majority, which might be dominant in reality, is absent online. This can distort public discourse and create false norms.
In line with this reasoning, a new analysis from John Burn-Murdoch found that extreme views are heavily over-represented on social media compared to traditional forms of media and cable TV. This suggests that the information diet and body of active users on social media might be more extreme than on other forms of media. He argues that social media’s tendency to reward hostile content creates incentives that systematically favor simplistic messages and extreme positions, and that this fuels populism.
Why does this happen? Because extremism is engaging, it sparks outrage, drives comments, and fuels the fires of viral content. This skew creates a false perception of polarization, where the middle ground seems to vanish into the ether of online rage and fervor. This is why we call social media a “funhouse mirror factory”. I have a recent paper on this topic with Claire Robertson and Kareena del Rosario.
4. Moral Emotional Language Magnifies Messages
Finally, the language we use online significantly impacts how content is shared and perceived. Using moral-emotional language—words that convey moral indignation or virtue—can make a post far more shareable. In a paper led by my students Billy Brady and Julian Wills, “Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word.” Framing ideas in high-arousal, highly moral, and highly emotional language makes ideas go viral within our own ideological networks but alienates people who are different. So, the righteous tone of much online conversation essentially fortifies the walls of our echo chambers.
Whether it’s outrage or admiration, using language with emotional weight fuels virality in a way that sober, neutral phrasing seldom achieves. This factor is crucial in understanding why certain narratives gain traction quickly and dominate our feeds. It also creates an incentive to share more content with this language.
Using the Four Dark Laws for Good
These four dark laws—negativity, out-group animosity, extremism, and moral-emotional language—form the pillars of engagement in the digital world. They are not merely bugs in the system but features of platforms designed to capture and retain our attention at any cost. Understanding these dynamics not only offers us insight into our digital behaviors; it provides a roadmap for navigating and improving our online environments.
By recognizing the patterns described by the four laws, we can strive to create digital spaces that promote healthier interactions and more balanced perspectives. With over 5 billion social media users around the world, more should be done to address these issues. Here are some brief suggestions to counter the influence of the Four Dark Laws:
Recognize when language is being used to create division by leaders, propagandists, trolls, and foreign operatives. They are trying to manipulate you and erode public trust.
When engaging with others online, try to frame issues in a more inclusive way around a shared identity. There is new evidence that in-group solidarity—rather than out-group animosity—often drives online engagement after political crises.
Choose language that emphasizes common values and goals, rather than dehumanizing people who disagree with you. You can easily express your opinion or disagree without using these words, and perhaps even convince someone. We need to regain the art of persuasion rather than derogation.
Avoid sources that use alarmist, us-vs-them, or black-and-white language. Be sure to seek out reliable news sources that report news and events, and share these sources with people you know. (For example, a source like the Associated Press is often more convincing to a wide spectrum of readers).1
Push for change. Ask your leaders to reform these systems before they exact an even greater toll on our society. Every other news medium—from print to TV—has faced reform and regulation. These technologies can, and should, serve the public interest.
Finally, it should go without saying that social media can also be useful, informative, and even positive. We use social media every day to connect with friends, share research, and learn about the world. Our goal here is not to say it’s uniformly bad. Social media companies vary in their design, user base, and impact. Some are better than others. And each of these features can be changed for the better.
News and Updates
Jay will be giving a keynote address at the 2025 Healthcare Design Conference + Expo on Sunday, Oct. 26, in Kansas City, Mo. He will discuss increasing cooperation to improve performance. His presentation will look at how collective concerns such as group identities, moral values, and political beliefs shape the mind, brain, and behavior. You can learn more here.
Catch up on the last one…
Last week’s post was about a new project we are conducting on the impact of non-fiction books. Learn more about the power of books and tell us about a book that changed your life.
Can reading a book make you a better person?
This newsletter describes one of our new projects and is brought to you by Brynn Pedrick’s blog post, “Reading and Repair: Tackling Social Division Through Contemplative Research.” We made some minor revisions and decided to share it with you to give you a sneak peek at our latest research.