Recent media reports have highlighted a study suggesting that so-called “lab-grown brain cells” can “play the video game Pong”. Whilst the researchers have described the system as ‘sentient’, others have maintained that we should instead use the term ‘thinking system’ to describe what the researchers created.

Does it matter whether we describe this as a thinking system, or a sentient one?

A Brief Overview of the Study

According to the study, researchers developed in vitro neural networks from cells of human or rodent origin, and integrated them with a computing system via a high-density array of electrodes. Through electrical stimulation and recording, the team was able to embed this system into the ‘game-world’ of something like the classic arcade game Pong (a very basic representation of table tennis). The study results suggest that the system showed evidence of apparent learning in the game.
This is a remarkable finding, but is this system thereby sentient? According to media reports, the lead author, Brett Kagan, suggests that
“We could find no better term to describe the device… It is able to take in information from an external source, process it and then respond to it in real time.”
Of course, this description could also be readily applied to many artificial intelligence systems – witness recent debates about whether it is appropriate to describe Google’s LaMDA as sentient. However, it is also illuminating to consider that it is an apt description of some very basic non-synthetic life-forms.

Consider slime mould. Slime mould is a single-celled amoeba that looks like a mass of yellow sponge; if you found it in your garden, you might be inclined to fetch a spade and get rid of it.
Yet, scientists have discovered that slime moulds are capable of remarkable feats. They can solve complex spatial problems despite lacking a brain. When researchers tasked slime moulds with finding ‘food’ in spaces replicating large real-world urban areas at a miniature scale, they found that the slime mould did not search in a random manner. Instead, it essentially recreated the transport networks that actually exist in those real-world places.
Slime moulds challenge the assumption that intelligence requires a brain, but should this research change our thinking about ethics? Do these findings suggest that we have a moral reason to stop slime mould from being dug up by fastidious gardeners?
Answering this question requires us to have a view about the sorts of capacities that warrant moral protection – in other words, about what grounds ‘moral status’.
What’s in a Name?

One reason that a gardener may be unmoved by the plight of the slime mould is that, for all of its impressive navigational abilities, it is highly doubtful that it has phenomenal consciousness. This is the ability to ‘feel’ things, to subjectively experience what it is like to be in a particular mental state. The same is true of the Pong-playing neural network; indeed, researchers on the study are actively working with bioethicists to ensure that they do not accidentally create a conscious brain.

Phenomenal consciousness and the ability to process information whilst interacting with an environment are quite different abilities. One can occur in the absence of the other.

The problem is that ‘sentience’ is sometimes employed as an umbrella term to cover all of these different kinds of ability. This is unfortunate because it serves to obscure whether and where significant moral issues arise. When sentience is used to connote phenomenal consciousness of the sort that is sufficient for the experience of suffering, establishing that something is sentient raises important moral questions about the strength of our reasons to prevent that being’s suffering. When sentience is used as a shorthand for other abilities, these particular moral questions do not arise.

Ultimately, it is the concepts rather than the labels that matter; however, it is hard to deny that, for many, the label ‘sentience’ connotes a substantial degree of phenomenal consciousness. Consider, for example, the title of the Animal Welfare (Sentience) Act 2022, recently passed in the UK. This Act affords legal protection to vertebrates, cephalopod molluscs, and decapod crustaceans with a view to preventing suffering; but it does not extend to other invertebrates capable of certain forms of information processing, or indeed to slime moulds. Importantly, there is a considerable philosophical tradition suggesting a close relationship between moral status and phenomenal consciousness. It is far less clear that the abilities evidenced by slime moulds and Pong-playing neural networks can alone ground moral status.

Phenomenal Consciousness and Moral Status
Phenomenal consciousness is a crucial ability, because only beings with phenomenal consciousness are able to experience suffering. This is important because all beings capable of experiencing suffering have a very strong interest in not doing so, an interest that plausibly warrants strong moral consideration. This is an idea that Jeremy Bentham captured when he wrote:
The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?
Phenomenal consciousness is therefore plausibly sufficient for some degree of moral status. Some philosophers would advance the stronger claim that it may even be necessary; perhaps, if something lacks phenomenal consciousness, it cannot have the sort of interests that warrant moral protection.

The upshot here is that the embedded neural network played Pong, but we have no reason to believe that it enjoyed doing so. That says more about the limits of the system than it does about the merits of the computer game. But it is something that matters morally – and there are good reasons to make sure that the language we use in this area reflects this.