Can AI Help Us Escape Our Echo Chambers?
Issue 183: Our latest research provides evidence that people see AI as a neutral source of political information and trust it over in-group or out-group members
We have access to more knowledge than ever before. Yet many people are sucked into “echo chambers” that reinforce their existing beliefs. A key reason for this is that we often choose information sources based on social identity rather than accuracy. In the United States, Democrats prefer MSNBC or the New York Times, while Republicans favor Fox News or the Wall Street Journal.
This tendency to avoid out-group sources narrows our perspectives, deepens political polarization, and can undermine trust in democratic institutions. This is part of a broader trend of political sectarianism, where the other side is seen not just as wrong, but as evil (as shown in the figure below). So, what can be done?
Our new research finds evidence for a fresh solution: Artificial Intelligence (AI) can help people bypass these deep-seated partisan biases.
Nearly a billion people now use AI chatbots. And as tools like ChatGPT become more common, they are transforming how we access information. We reasoned that because AI systems are not (yet) seen as partisan, they might be perceived as more neutral. In our new paper, we conducted three studies to test whether people would prefer receiving information from an AI over partisan human sources.
Key Finding 1: People Prefer AI for Political Topics
To start, we surveyed a nationally representative sample of over 1,000 Americans and asked a simple question: If you needed to understand a political conflict, would you rather have it explained by a person or an AI?
The majority of people (55.6%) chose AI over a human. This suggests that for political topics, AI is already seen as a more neutral or competent information source.
Key Finding 2: AI Is Preferred Over Both In-group and Out-group Partisan Sources When Identity Matters
Does this preference for AI hold up when people actually have to choose between an AI and sources from their own political party (in-group) or the opposing party (out-group)?
We conducted a fact-checking experiment where people first learned how accurate different sources were. People were perfectly capable of identifying which sources were competent, regardless of whether they were in-group, out-group, or AI.
But when it came to choosing who to get advice from, identity mattered a lot. As expected, people preferred sources from their own party over those from the opposing party. This is the typical pattern of in-group bias, and it’s clearly irrational here: people knew that their own group was not more accurate!
However, they preferred getting advice from an AI over both in-group and out-group sources! This reveals that even when we like our own side, we may recognize their potential for bias in political contexts and turn to AI as a neutral alternative.
Key Finding 3: The Preference for AI Is Context-Dependent
Do these identity biases persist even when politics are completely irrelevant? To find out, we ran a third study using an identical design but replaced the political fact-checking task with a neutral shape-categorization task.
Once again, people were great at identifying who was accurate. However, their choices of who to get advice from revealed a key difference:
People still avoided the out-group. Participants preferred both AI and their in-group sources over sources from the opposing political party. This suggests a general bias against out-group sources that persists even in non-political contexts.
But, people no longer preferred AI over their own party. When politics were off the table, sources from one’s own party were just as trusted as AI. The penalty against in-group sources seen in political contexts completely disappeared when political identity became irrelevant. This suggests the preference for AI over in-group sources in our second study was a strategic choice to avoid potential partisan bias.
Using AI To Bypass Bias
These findings suggest AI could play a crucial role in breaking down the echo chambers that contribute to political polarization. By serving as an identity-neutral source, AI has the potential to broaden our exposure to diverse information, especially when we might otherwise avoid out-group sources. Our results show this is most powerful in politically charged contexts, where people even seem to recognize the potential for bias from their own side and turn to AI as a neutral alternative.
However, this advantage depends on a critical factor: the public’s perception of AI as neutral. If AI systems become branded as partisan—much like news networks—they may no longer be able to circumvent our biases and could even end up reinforcing them. For AI to truly benefit democratic discourse, the systems must be designed for transparency and accuracy, not just user engagement.
By understanding the psychology behind how we select our information sources, we can begin to design healthier information environments and maybe, just maybe, find some common ground again. You can read our new paper here.
This post was drafted by Laura Globig with edits from Jay Van Bavel
News and Updates
Another Ask Me Anything session is happening this month! Paid subscribers can join our monthly live Q&A with Jay or Dom, where you can ask us anything, from workshopping research questions and career advice to opinions and recommendations on pop culture happenings. Upgrade your subscription to join.
Oct 16th 4:00 EST with Dom
Nov 6th 4:00 EST with Jay
Dec 11th 4:00 EST with Dom
Catch up on the last one…
In our last newsletter, we shared a podcast on political polarization and health: