How people make moral decisions in the age of driverless cars
Issue #155: Why the Trolley Problem still matters in the age of Artificial Intelligence
Annie Duke, former professional poker player, expert on human decision making, and all-round brilliant person, is one of our favorite thinkers. Annie is the author of multiple books, including Thinking in Bets, How to Decide, and, most recently, Quit: The Power of Knowing When to Walk Away. She also writes the Thinking In Bets Newsletter, in which she explains how to make good decisions. We highly recommend subscribing if you haven’t already.
We decided to share a column from Annie in which she talks about the real world challenges of moral decision making in the modern age of artificial intelligence. In particular, she describes fascinating research about the moral programming people want in driverless cars. This turns out to be a really critical issue because no one wants to buy a driverless car that will make a moral decision they wouldn’t make (e.g., smashing them into a pole to save a stray cat). Annie explains how deeply human moral intuitions inherited from the selective pressures exerted on our evolutionary ancestors likely shape our decisions about artificial intelligence and self-driving cars.
“The trolley problem has been a fun thought experiment in the past but now it is a real-life, high-stakes problem to address.” - Annie Duke
The classic trolley problem, developed by Philippa Foot in 1967, explores the moral dilemma that action vs. inaction can pose. In the basic scenario, you see a runaway trolley barreling down a track toward five workers. The only way to avoid those workers’ certain deaths is by pulling a lever to switch the trolley to an alternate track. The problem is that there is one worker on that other track who will be killed if you do it.
Do you pull the lever?
Most people say, “Yes, I would pull the lever.” It seems like a simple problem: one person is getting run over, instead of five. A good utilitarian always pulls the lever, and about 80% to 90% of people behave like good utilitarians here.
Yet, you can make small changes to the hypothetical that result in big changes to how likely people are to sacrifice one life to save five.
Let’s suppose you are standing on a footbridge and you see the same trolley barreling toward the five workers on the track below you. There is a man standing next to you with a large backpack on, large enough that if you push him off the footbridge onto the track, his mass will stop the trolley from hitting the five workers, but he will die.
This problem presents the same calculus as the original trolley problem: sacrifice one person to save five. Yet now the willingness to intervene drops drastically. Only about half of people are willing to push someone off the bridge.
This exposes the difference we feel when committing an act (pushing someone off the bridge) vs. omitting to act (not intervening and allowing the trolley to continue on its path). Omission-commission bias tells us that we judge harms resulting from action more harshly than identical harms resulting from a failure to act.
Omission-commission bias partly explains the flip in the percentage of people willing to intervene in the two versions of the trolley problem.
The trolley problem is not just a cute thought experiment exploring an esoteric moral conundrum. With machine intelligence increasingly taking over complex human activities, the trolley problem spills over into worldwide debates about transformative technologies like self-driving cars and their programming, autonomy, regulation, and ultimately, public acceptance.
Think about the dilemma as it applies to self-driving cars. What choice should an autonomous vehicle make when faced with the choice of hitting several people in a crosswalk or driving into a concrete barrier and killing its passengers? That is a real-life trolley problem that we are facing today.
In 2014, the Media Lab at MIT and researchers from other institutions created the Moral Machine, a game-like platform to gather responses worldwide on aspects of the trolley problem (including versions about self-driving cars). In addition to variations in the action taken, sets of hypotheticals also alter the number, age, social standing, gender, physical condition, and other features of the potential victims.
Between 2014 and 2018, people from 233 countries and territories logged 40 million decisions. As you can imagine, this kind of data made it possible for researchers to produce numerous papers on global moral preferences.
In 2020, one of these papers, by lead researcher Edmond Awad and four colleagues, examined whether there were universal similarities in preferences, as well as cases where there were differences between countries and cultures. They analyzed responses by 70,000 participants in 10 languages and 42 countries on their preferences between different versions of the trolley problem, including the original and the footbridge version.
They found the same ordering of preferences across the versions of the trolley problem in every country. Switching in the original version of the problem was the most acceptable (with a country-level average of 81% endorsing this sacrifice), while pushing someone off the footbridge was the least acceptable (51%).
The authors suggest that these results are evidence “that this ordering is best explained by basic cognitive processes rather than cultural norms.”
Although the order of preference for intervening in the various permutations was universal, the authors did find that culture also influences one’s preferences. They found an association between relational mobility (the fluidity with which people can develop new relationships) and the likelihood of endorsing sacrifice in each scenario. In countries with low relational mobility (where people are more cautious about not alienating their current social partners), there was a greater tendency to reject sacrifices for the greater good. This was especially true in Asian countries, which spanned the lower half of relational mobility.
It’s important to note that participants in the study weren’t expressing or sharing their views with others (except, obviously, the researchers). But one can hypothesize that the external pressure discouraging people from sending a negative social signal bleeds into their underlying attitudes and becomes internalized. If sharing a particular view is bad form and has negative social consequences, that can eventually change attitudes (whether expressed or unexpressed to others), “making certain ideas morally ‘unthinkable.’”
As we enter the age of rapidly advancing AI, these moral questions are becoming more urgent as a practical matter and less esoteric. The trolley problem has been a fun thought experiment in the past but now it is a real-life, high-stakes problem to address.
Catch up on the last one…
Last week, we reviewed some important climate polarization studies from psychology. Learn what conservatives and liberals have in common when it comes to environmental priorities.