Consider, for example, DishBrain,[ii] a very recent – and ongoing – experiment in which scientists from eminent universities have grafted human neurons derived from induced stem cells onto a silicon base and integrated them into computer software to produce an entity they term ‘sentient.’ Kagan and his team argue that their approach (synthetic biological intelligence) is a precursor to artificial general intelligence. Their preprint article makes it very clear that they are working toward such long-awaited non-human sentient beings.
The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us the unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intentions with AI are clear: artificial general intelligence (AGI). A being that is like us, a personal being (whatever ‘person’ may mean).

Institute for Biomedical Ethics, University of Basel
[iii] David H. Kelsey, Eccentric Existence: A Theological Anthropology (Louisville: Westminster John Knox, 2009); United Nations, “Universal Declaration of Human Rights,” 1948, https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf.
I cannot help but wonder, however, whether we are truly mindful of what it means to create sentient beings that will share this planet with us. While some are excited (and terrified) at the prospect, many seem to be pressing ahead without the slightest idea of the consequences of possibly creating persons. Unlike other categories of beings we have created – say, complex chemicals or even single-celled ‘life’ – members of the category of person are held to have a certain status: universal, inalienable, inherent dignity, to be exact.[iii] Creating members of this category therefore has serious consequences.
Is it so inconceivable that in millennia to come, or even a few decades, our posterity may have to grapple with our mistreatment of AGI persons? Will they, like us, come to recognise the error of failing to acknowledge other people as persons? Will they begin speeches with ‘Dear Robots, we are sorry’?
[vii] Rossi, cited in Ganesh Mani, “Artificial Intelligence’s Grand Challenges: Past, Present, and Future,” AI Magazine (American Association for Artificial Intelligence, March 22, 2021), Business Insights: Essentials.
How we think, talk, and write about – and how we treat – the ancestors of our future friends will have implications. Small steps along the way toward AGI may well be beneficial to human creators, but the ultimate result need not – nay, must not – be directed toward human ends. Imagine what a future AGI person might think reading back on the prescriptive, archaic, and inhumane guidelines that we have produced in recent years. Imagine explaining to them that, while we were trying to create non-human persons, we did so all along with the view that they would be subject to our whims!
Written by Stephen Milford, PhD
Naturally, the authors of these principles have in mind a certain type of AI, an AI that is more akin to an advanced mathematical algorithm than to a sentient being. While that may be the case now, their views fail to take into consideration the ultimate driving force behind our development of AI. Let us not forget that the ordinary public use of the term ‘AI’ is one that foresees sentience of some kind. The public is obsessed with AIs that reflect human personhood. Think, for example, of the numerous films in which AIs become the object of love or hate. Forget not that this same public comprises present as well as future AI programmers and developers.
[ii] Brett J. Kagan et al., “In Vitro Neurons Learn and Exhibit Sentience When Embodied in a Simulated Game-World” (bioRxiv, 2021), https://doi.org/10.1101/2021.12.02.471005.
[viii] John Harris, “Reading the Minds of Those Who Never Lived. Enhanced Beings: The Social and Ethical Challenges Posed by Super Intelligent AI and Reasonably Intelligent Humans,” Cambridge Quarterly of Healthcare Ethics 28, no. 4 (October 1, 2019): 587, https://doi.org/10.1017/S0963180119000525.
Would we grant AGI beings the same rights and duties as human persons?[iv] Or perhaps, more accurately, it is a matter of respecting their already existing rights and duties. Were they to be sentient, to be persons, then surely they should be treated in a certain way, which includes – within a Kantian framework – not being treated as means to an end, but as ends in themselves.[v] This is certainly not how they are treated now, nor does it seem to be our future intention. Reading the numerous guidelines on the development and implementation of AI impresses upon us just the opposite.
[vi] “Asilomar AI Principles,” Future of Life Institute, 2017, https://futureoflife.org/2017/08/11/ai-principles/.
Before the reader balks at what is being written here, consider that for the larger part of human history we have used other persons for our own ends. For millennia, large portions of our own species were barely acknowledged as belonging to the category of persons at all. We think, for example, of the hundreds of millions of slaves, the atrocities of Apartheid, or the horrors of concentration camps. In these examples, entire sections of our species were perceived to be of lesser value, often considered impersonal objects for the means and ends of other persons.
The small programmes we create today are ultimately building up to the non-human persons with whom we will live tomorrow. It is no good positing that we will simply limit their development; doing so is itself problematic. Consider the work of Harris in his “Reading the Minds of Those Who Never Lived” (2019). In this fascinating article, he calls to mind the moral limits of our control over super intelligent AIs.[1] He warns humans against assuming that they might be entitled simply to destroy an AI, or to reprogram it if it starts to disobey instructions or “get out of control.” Doing so, according to Harris, is like “disabling the capacities for growth of human children so that they could not ‘get above themselves’ and outstrip their parents, or, if that fails, simply to kill them out of hand.”[viii]
Take, for example, the Asilomar AI Principles, the first of which is that AI should be limited to human benefit. The principles go on to claim that AI should only contain human values, be under human supervision, and have its ability to improve itself subjected to strict controls.[vi] Similarly, the EU Statement on Artificial Intelligence holds that only humans are truly autonomous, and therefore only humans can have dignity and value. Consequently, only humans should remain in control, and AIs should be deployed for the benefit of humans alone. Further still, consider Rossi, who argues for the self-termination of an AI should it recognise that its behaviour falls outside pre-defined design parameters.[vii]
[1] We note his reference to super intelligent AI, and not to AGI persons. To Harris, these are synonymous.
[iv] John-Stewart Gordon and Ausrine Pasvenskiene, “Human Rights for Robots? A Literature Review,” AI and Ethics 1, no. 4 (2021): 579–91, https://doi.org/10.1007/s43681-021-00050-7.
To say that the ultimate goal of AI development is simply a highly advanced algorithm is misleading. After all, without a theistic or metaphysical philosophy, what is human personhood, save a highly advanced algorithm? And if the theistic or metaphysical accounts of personhood are coherent, who shall determine that they will not be applicable to advanced learning machines? Kagan’s experiments dispel any doubt that we are trying to push the boundaries between programming and sentience.
[i] Wentzel Van Huyssteen, Alone in the World?: Human Uniqueness in Science and Theology, The Gifford Lectures 2004 (William B. Eerdmans Pub. Co., 2006).
If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI passes the test if a person judges it to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.
[v] Lawrence Pasternack, ed., Immanuel Kant: Groundwork of the Metaphysic of Morals in Focus (London: Routledge, 2002).