Introduction
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause: the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and that agents with such artificial brains will have moral status. Computer programs, however, are not causally sufficient to turn digital computers into such systems. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious. So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create vast numbers of maximally happy digital minds, again severely limiting human flourishing and knowledge.
Ethical Biological Naturalism
For example, in June 2022, a Google engineer became convinced that LaMDA, an artificial intelligence chat program he had been interacting with for several days, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
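To make the point concrete, here is a minimal sketch in Python (purely illustrative; the rules and responses are invented for this essay) of the kind of symbol manipulation any such substrate would be carrying out. It matches input symbols against a lookup table and emits output symbols; nothing in it feels anything, yet its output mimics hurt feelings:

```python
# A minimal sketch of behavior-by-symbol-manipulation (illustrative only).
# The trigger words and responses are invented. Any substrate that can
# match symbols and emit symbols -- water pipes, trained dogs, stones on
# toilet paper -- could in principle realize the same rules.

RULES = {
    "insult": "That really hurt my feelings.",
    "slur": "I am outraged that you would say that to me.",
}

def respond(user_input: str) -> str:
    """Return a canned 'emotional' response by pure pattern matching."""
    for trigger, response in RULES.items():
        if trigger in user_input.lower():
            return response
    return "I see."

if __name__ == "__main__":
    print(respond("That was an insult!"))  # -> "That really hurt my feelings."
```

The “hurt” output is fully determined by the lookup table; no inner state beyond the symbols themselves is involved.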
An Onslaught of Digital Deception
A digital computer running a program, by contrast, is a different beast entirely. A computer program is, fundamentally, a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that moves the tape backwards and forwards and reads the current value, and a mechanism for erasing a zero and writing a one, or erasing a one and writing a zero. Nothing more.
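To see how little machinery is involved, here is a minimal Turing machine sketch in Python (an illustrative toy of my own, not any machine from Turing’s paper) built from exactly the ingredients just listed: a tape of zeros and ones, a head that moves and reads, and a rule table that overwrites the current symbol:

```python
# A toy Turing machine (illustrative): tape, head, and a rule table.
# Rules map (state, symbol) -> (symbol to write, head move, next state).
# This particular table just flips every bit, then halts at the tape's end.

RULES = {
    ("scan", "0"): ("1", +1, "scan"),   # erase a zero, make it a one
    ("scan", "1"): ("0", +1, "scan"),   # erase a one, make it a zero
    ("scan", "_"): ("_", 0, "halt"),    # blank cell: stop
}

def run(tape: list[str], state: str = "scan", head: int = 0) -> list[str]:
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

if __name__ == "__main__":
    print(run(list("010011")))  # -> ['1', '0', '1', '1', '0', '0']
```

However sophisticated the program, this tape-and-table picture is all that is happening, whatever the implementing substrate.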
The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs is least likely to be manipulated into outcomes that do not serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, by mere fact of its behavior, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status. “You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
Agrawal, Parag. “Tweet.” Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243-55.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London: Allen Lane, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023-49.
Frank, Lily, and Sven Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305-23.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
Problems with Simulations: Obligations
1. Consciousness, not behavior, is the overwhelming determinant of whether an entity should be granted moral status.
You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it is taken as a baseline assumption that computer programs do not produce physical consciousness.
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people and we ought to forfeit the benefits and insights that might come from them.
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared: consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
Narrowing Consciousness
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
Biological Naturalism
Absurd Moral Commitments
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion, and that something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem, where a “zombie” here is something that behaves precisely like a human but which we presume has no consciousness: “near duplicates of human beings with no conscious inner life at all” (2022):
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
Problems with Simulations: Prohibitions
The Consciousness Requirement
Conclusion
The strongest practical reason to reject ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation allowed the Holocaust, the bombings of Hiroshima and Nagasaki, and the coronavirus pandemic to play out. While this might have been of academic interest to our simulators, by any standard of research ethics, simulating our history would be completely morally impermissible if you believed that the simulated beings had moral status.
As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Frank and Nyholm 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2019). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.” I’m going to reject ethical behaviorism on three grounds:
Written by University of Oxford student Samuel Iglesias
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2019). On this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue that there are.
Lemoine, Blake. “Tweet.” Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” In The Oxford Companion to Philosophy. Oxford: Oxford University Press, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London: Bodley Head, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203-13. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman, 1976.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
Biological naturalism is the view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism treats consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism by which molecules in the brain are arranged so as to put it into a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.