Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
Nguyen, T. D., Lyall, G., Tran, A., Shin, M., Carroll, N. G., Klein, C., & Xie, L. (2022). Mapping topics in 100,000 real-life moral dilemmas. arXiv preprint arXiv:2203.16762.
Aquino, K., & Reed, A., II. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83, 1423–1440.
What Do Moral Dilemmas Tell Us?
Commentary: Why Dilemmas? Which Dilemmas?
In this symposium, Paul Conway (University of Portsmouth) argues that, although moral dilemmas do not tell us whether laypeople are deontologists or utilitarians, they can be used to study the psychological processes underlying moral judgment. Guy Kahane (University of Oxford) provides commentary, suggesting that the moral dilemmas we care about go far beyond trolley-style scenarios.
Meanwhile, people who care deeply about morality, such as those high in moral identity (caring about being a moral person; Aquino & Reed, 2002), strong moral conviction that harm is wrong (Skitka, 2010), and aversion to witnessing others suffer (Miller et al., 2014), simultaneously score high in both aversion to sacrificial harm and concern about outcomes—in other words, they seem to show a genuine dilemma: tension between deontological and utilitarian responding (Conway et al., 2018; Reynolds & Conway, 2018; Körner et al., 2020).
However, there is another way to conceptualize dilemma responses: from a psychological perspective. Returning to the point about the psychological mechanisms involved in decision-making, such as apartment hunting, one can ask: What psychological mechanisms give rise to the acceptance or rejection of sacrificial harm?
I don’t think Paul wants to say that by studying trolley scenarios, we can understand moral dilemmas in general. He rather says that there is a category of moral dilemma that people in fact face in certain contexts—especially military and medical ones—which is well worth studying at the psychological level. This seems right, but if such cases don’t really tell us much about grand philosophical disputes, nor hold the key to the psychology of moral decision-making (as many psychologists seem to assume), or even just to the psychology of moral dilemmas, then do trolley cases really deserve this much attention? Why spend all that effort studying people’s responses to these far-fetched situations rather than to the numerous other moral dilemmas people face? Moreover, even if we want to uncover the psychology of this kind of actual choice situation, why bring in runaway trains instead of cases that actually resemble such (pretty rare) real-life situations? Doctors sometimes need to make tragic choices, but a doctor who actively killed a patient to save five others would simply be committing murder. Finally, while it will no doubt be interesting to discover the psychological processes and factors that shape decisions in real-life sacrificial choices, it would be good to know what exactly we are supposed to do with this knowledge. Will it, in particular, help us make better choices when we face such dilemmas? But this takes us right back to the tantalising gap between ‘is’ and ‘ought’, and to those pesky ethical theories we thought we’d left behind…
This argument is corroborated by modelling approaches, such as process dissociation (Conway & Gawronski, 2013) and the Consequences, Norms, Inaction (CNI) model (Gawronski et al., 2017). Rather than examining responses to sacrificial dilemmas where causing harm always maximizes outcomes, these approaches systematically vary the outcomes of harm—sometimes harming people (arguably) fails to maximize outcomes.* Instead of examining single dichotomous decisions, these approaches analyse patterns of responding: some participants systematically reject sacrificial harm regardless of whether doing so maximizes outcomes (consistent with deontological ethics). Other participants consistently maximize outcomes, regardless of whether doing so requires causing harm (consistent with utilitarian ethics). Still other participants simply refuse to take any action for any reason, regardless of harm or consequences (consistent with general inaction).
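To make the process-dissociation logic concrete, here is a minimal sketch of the standard arithmetic: on congruent dilemmas (harm does not maximize outcomes), both inclinations favor rejecting harm, whereas on incongruent dilemmas (harm does maximize outcomes), the inclinations conflict. The response proportions below are hypothetical, invented purely for illustration; they are not data from Conway and Gawronski (2013).

```python
def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """Estimate utilitarian (U) and deontological (D) parameters from
    the proportion of 'harm is unacceptable' responses.

    Standard process-dissociation equations:
      P(reject | congruent)   = U + (1 - U) * D
      P(reject | incongruent) = (1 - U) * D
    Subtracting the second from the first isolates U; D then follows.
    """
    u = p_reject_congruent - p_reject_incongruent
    d = p_reject_incongruent / (1 - u)
    return u, d

# Hypothetical participant: rejects harm in 90% of congruent dilemmas
# but only 50% of incongruent ones.
u, d = process_dissociation(0.90, 0.50)
print(round(u, 3), round(d, 3))  # prints: 0.4 0.833
```

Note that the two parameters are estimated independently, which is exactly what lets researchers detect participants who score high on both inclinations at once, something a single forced-choice judgment cannot reveal.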
Importantly, most of these findings disappeared when looking at regular sacrificial judgments where causing harm always maximizes outcomes—such judgments force people to ultimately select one or the other option, losing the ability to test any tension between them. Some people might be extremely excited about both a house and an apartment, yet ultimately have to choose just one—other people might be unenthusiastic about either choice, yet also must ultimately choose one. What matters is not so much the choice they make, but the complex psychology behind their choice. This is worth studying, even if it does not reflect some abstract principle.
Singer, P. (1980). Utilitarianism and vegetarianism. Philosophy & Public Affairs, 325-337.
Note that this question shifts away from the question of endorsement of abstract philosophy and focuses instead on understanding judgments. In other words, instead of trying to understand abstract ‘apartmentness,’ researchers should ask, Why did this person select this apartment over that house?
Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. (2012). The neural basis of intuitive and counterintuitive moral judgement. Social Cognitive and Affective Neuroscience, 7(4), 393–402.
Patil, I., Zucchelli, M. M., Kool, W., Campbell, S., Fornasier, F., Calò, M., … & Cushman, F. (2021). Reasoning supports utilitarian resolutions to moral dilemmas across diverse measures. Journal of Personality and Social Psychology, 120(2), 443.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389–400.
Think of an administrator of a hospital overwhelmed by COVID deciding who will be sacrificed to save others—do you want that person to be high in moral identity, struggling between two moral impulses, or high in psychopathy, largely indifferent either way? The psychological processes involved in dilemma decision-making are vital for society to study, regardless of any philosophical connotations they (don’t typically) have.
Kahane, G., Everett, J. A., Earp, B. D., Caviola, L., Faber, N. S., Crockett, M. J., & Savulescu, J. (2018). Beyond sacrificial harm: A two-dimensional model of utilitarian psychology. Psychological Review, 125(2), 131.
Importantly, the dual process model suggests there is no one-to-one match between judgment and process. Instead, a given judgment people arrive at reflects the relative strength of these processes. Just as someone who would typically choose an apartment may be swayed by a particularly nice house, sacrificial judgments theoretically reflect the degree to which emotional aversion to harm competes with deliberation about outcomes. Increases or reductions of either process should have predictable impacts on sacrificial judgments.
A better answer, which Paul favours, is that by studying how ordinary people approach these scenarios, we can uncover the processes that underlie people’s moral decision-making. It’s precisely because people often don’t make moral decisions by applying explicit principles, and because what really drives such decisions is often unconscious, that studying them using psychological methods (or even functional neuroimaging) is so interesting. According to one influential theory, when people reject certain ways of sacrificing one to save a greater number (for example, by pushing them off a footbridge), these moral judgments are driven by emotion (Greene et al., 2004). When they endorse such sacrifice, this can be (as in the case of psychopaths) just because they lack the negative emotional response most of us have to directly harming others, but it can also be because people engage their capacity for effortful reasoning. This is an intriguing theory that Paul has done much to support and develop (see e.g. Conway & Gawronski, 2013). As a theory of what goes on in people’s brains when they respond to trolley scenarios, it seems to me still incomplete. What, for example, are people doing exactly when they engage in effortful thinking and decide that it’s alright to sacrifice one for the greater good? We’ve already agreed they aren’t applying some explicit theory such as utilitarianism. Presumably they also don’t need a great effort to calculate that 5 is greater than 1—as if those who don’t endorse such sacrifices are arithmetically challenged! But if the cognitive effort is simply that needed to overcome a strong immediate intuition or emotion against some choice, this doesn’t tell us anything very surprising. Someone might similarly need to make such an effort in order, for example, to override their immediate motivation to help people in need and selfishly walk away from, say, the scene of a train wreck (Kahane et al., 2012; Everett & Kahane, 2020).
For an analogy, consider a person choosing between buying a house or renting an apartment. The decision they arrive at might reflect many factors, including calculations about size and cost and location of each option, but also intuitive feelings about how much they like each option—importantly, how much they like each option relative to one another (a point we will return to).
Imagine you could kill a baby to save a village or inject people with a vaccine you know will harm a few people but save many more lives. Imagine a self-driving car could swerve to kill one pedestrian to prevent it from hitting several more. Suppose as a doctor in an overburdened healthcare system you can turn away a challenging patient to devote your limited time and resources to saving several others. These are examples of sacrificial dilemmas, cousins to the famous trolley dilemma where redirecting a trolley to kill one person will save five lives.
Byrd, N., & Conway, P. (2019). Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies. Cognition, 192, 103995.
Reynolds, C. J., & Conway, P. (2018). Not just bad actions: Affective concern for bad outcomes contributes to moral condemnation of harm in moral dilemmas. Emotion.
Some theorists have taken dilemma responses as a referendum on philosophical positions. On the surface, this may seem reasonable: after all, philosophers who identify as consequentialists tend to endorse sacrificial harm more often than those who identify as deontologists or virtue ethicists. For example, Fiery Cushman and Eric Schwitzgebel found this pattern when assessing the philosophical leanings and dilemma responses of 273 participants holding an M.A. or Ph.D. in philosophy, published in Conway and colleagues (2018). Likewise, Nick Byrd found that endorsement of consequentialist (over deontological) ethics correlated with choosing to pull the switch in the trolley problem in two different studies that recruited philosophers (Byrd, 2022).
Mill, J. S. (1998). Utilitarianism (R. Crisp, Ed.). New York, NY: Oxford University Press. (Original work published 1861)
Miller, R. M., Hannikainen, I. A., & Cushman, F. A. (2014). Bad actions or bad outcomes? Differentiating affective contributions to the moral condemnation of harm. Emotion, 14(3), 573.
If dilemma responses should be treated as a referendum on adherence to abstract philosophical principles, then clearly the sacrificial dilemma paradigm is broken.
* * * * * * * * * * * *
Greene, Joshua (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Press.
*The CNI model also manipulates whether action harms or saves a focal target.
Kant, I. (1959). Foundations of the metaphysics of morals (L. W. Beck, Trans.). Indianapolis, IN: Bobbs-Merrill. (Original work published 1785)
Consistent with this argument, people higher in emotional processing, such as empathic concern, aversion to causing harm, and agreeableness, tend to reject sacrificial harm (e.g., Reynolds & Conway, 2018), whereas people higher in logical deliberation, such as cognitive reflection test performance, tend to accept sacrificial harm (e.g., Patil et al., 2021; Byrd & Conway, 2019). There is plenty more evidence, some of it a bit mixed, but overall, the picture from hundreds of studies roughly aligns with this basic cognitive-emotional distinction, though evidence suggests roles for other important processes as well.
We sometimes face moral dilemmas: situations where it’s incredibly hard to know what the right thing to do is. One reason why moral philosophers develop elaborate ethical theories like utilitarianism or deontology is to give us principled ways to deal with such difficult situations. When philosophers argue over which ethical theory or principle is right, they sometimes consider what their theories would tell us to do in various moral dilemmas. But they often also consider how these principles would apply in thought experiments: carefully designed hypothetical scenarios (which can be outlandish but needn’t be) that allow us to tease apart various possible moral factors. Thought experiments aren’t meant to be difficult—in fact, if we want to use them to test competing moral principles, the moral question posed by a thought experiment should be fairly easy to answer. What is hard—what requires philosophical work—is to identify moral principles that would make sense of our confident intuitions about various thought experiments and real-life cases.
Skitka, L. J. (2010). The psychology of moral conviction. Social and Personality Psychology Compass, 4(4), 267-281.
The problems interpreting dilemmas deepened when researchers started noting the robust tendency for antisocial personality traits, such as psychopathy, to predict acceptance of sacrificial harm (e.g., Bartels & Pizarro, 2011). Do such findings suggest that psychopaths genuinely care about the well-being of the greatest number of people, and may in fact be moral paragons? Or do such findings suggest that sacrificial dilemmas fail to measure the things philosophers care about after all (see Kahane et al., 2015)?
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision-making: a process dissociation approach. Journal of Personality and Social Psychology, 104, 216-235.
However, this is a non sequitur. It assumes that philosophers and laypeople endorse judgments for the same reasons, and that the only reasons to endorse or reject sacrificial harm reflect abstract philosophical principles. It also inappropriately reverses the inference: sacrificial judgments are described as utilitarian because they align with utilitarian philosophy; it does not follow that all judgments described as utilitarian reflect only adherence to that philosophy.
Paul offers another justification for focusing on trolley-style scenarios. We can forget about philosophers and their theories. The fact is that people actually regularly face moral dilemmas, and it’s important to understand the psychological processes in play when they try to solve them. This seems plausible enough. But although some psychologists (not Paul) sometimes use ‘moral dilemmas’ to just mean trolley-style scenarios, there are very many kinds of moral dilemmas—and as we saw, some classical trolley scenarios aren’t even dilemmas, properly speaking! Should we keep a promise to a friend if this will harm a third party? Should we send our children to private schools that many other parents can’t afford? Should we go on that luxury cruise instead of donating all this money to charities that save lives? It’s doubtful (or at least needs to be shown) that the psychology behind peculiar trolley cases really tells us much about these very different moral situations. If we want to understand the psychology of moral dilemmas, we need to cast a much wider net.*
Yet, questions arise as to what dilemmas actually tell us. Academic work on dilemmas originated with Philippa Foot (1967), who used dilemmas as thought-experiment intuition pumps to argue for somewhat arcane phenomena (e.g., the doctrine of double effect, the argument that harm to save others is permissible as a side effect but not as a focal goal). However, subsequent theorists began interpreting sacrificial dilemmas in terms of utilitarian ethics focused on outcomes and deontological ethics focused on rights and duties. Sacrificing an individual violates most interpretations of (for example) Kant’s (1959/1785) categorical imperative by treating that person as a mere means to an end, without dignity. Yet, saving more people maximizes outcomes, in line with most interpretations of utilitarian or consequentialist ethics, as described by Bentham (1843), Mill (1998/1861), and Singer (1980), among others.
Bartels, D. M., & Pizarro, D. A. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition, 121, 154–161.
Caviola, L., Kahane, G., Everett, J., Teperman, E., Savulescu, J., & Faber, N. (2021). Utilitarianism for animals, Kantianism for people? Harming animals and humans for the greater good. Journal of Experimental Psychology: General, 150(5), 1008–1039.
One need only take the most cursory glance at the psychological literature on decision-making to note that many decisions human beings make do not reflect abstract adherence to principles (e.g., Kahneman, 2011). Instead, decisions often reflect a complex combination of processes.
Kahane, G., Everett, J. A., Earp, B. D., Farias, M., & Savulescu, J. (2015). ‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134, 193-209.
Luke, D. M., & Gawronski, B. (2021). Psychopathy and moral dilemma judgments: A CNI model analysis of personal and perceived societal standards. Social Cognition, 39, 41-58.