Introduction
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and that agents with artificial brains will have moral status. Computer programs, however, are not causally sufficient to make digital computers into such objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious. So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create vast numbers of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.
Ethical Biological Naturalism
For example, in June of 2022, a Google engineer became convinced that LaMDA, an artificial intelligence chat program he had been interacting with for multiple days, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
An Onslaught of Digital Deception
A digital computer running a program, by contrast, is a different beast entirely. A computer program, fundamentally, is a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that can move the tape backwards and forwards and read the current value, and a mechanism for erasing a zero and making it a one, or erasing a one and making it a zero. Nothing more.
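To make the abstraction concrete, here is a minimal sketch of such a machine in Python. The function name, the bit-flipping rule table, and the tape handling are illustrative assumptions of mine, not anything specified in the essay; the point is only that reading, erasing, and writing symbols is all there is.

```python
# A minimal, illustrative Turing-style machine (not from the essay): a finite tape
# of symbols, a head that moves left or right, and a rule table that says what to
# write, where to move, and which state to enter next.
def run_turing_machine(tape, rules, state="start", max_steps=1000):
    tape, head = list(tape), 0
    for _ in range(max_steps):
        # Stop if the machine reaches a halting state or runs off the tape.
        if state == "halt" or not 0 <= head < len(tape):
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                 # erase the current symbol, write the new one
        head += 1 if move == "R" else -1   # move the head one cell
    return tape

# Hypothetical rule table: erase each 0 to a 1 and each 1 to a 0, moving right.
rules = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}

print(run_turing_machine([0, 1, 1, 0], rules))  # -> [1, 0, 0, 1]
```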
The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs is the least likely to be manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, by mere fact of its behavior, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status. “You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
Agrawal, Parag. “Tweet.” Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243–255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. First ed. Oxford, England, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–2049.
Frank, Lily, and Sven Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–23.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
Problems with Simulations: Obligations
1. Consciousness, not behavior, is the overwhelming factor in determining whether an entity should be granted moral status.
You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor to the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it takes as a baseline assumption that computer programs do not produce physical consciousness.
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), commanding “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people, and we ought to forfeit the benefits and insights that might come from them.
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
Narrowing Consciousness
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
Biological Naturalism
Absurd Moral Commitments
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion: something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem, where a “zombie” here is something that behaves precisely like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022).
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
Problems with Simulations: Prohibitions
The Consciousness Requirement
Conclusion
The strongest practical reason to reject ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that a “boxing” strategy that relies on human “gatekeepers” to contain a dangerous AI would be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.
As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.” I’m going to reject ethical behaviorism on three grounds:
Written by University of Oxford student Samuel Iglesias
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2018). On this view, while consciousness might or might not be relevant, there exist no superior, epistemically objective criteria for inferring consciousness. I will argue that there are.
Lemoine, Blake. “Tweet.” Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435–50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” The Oxford Companion to Philosophy, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203–13. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman, 1976.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
Biological naturalism is the view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism treats consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.
