[2] Taddeo, M. (2010). Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds & Machines, 20, 243–257.
One possible answer is that we cannot. This could be true in two senses.
We could simply say that we rely on AI, as we do with any other tool. That is, unless we think that AI is really not just a tool but something closer to a human being. That is precisely the issue I want to raise in conclusion.
How can we ‘trust’ an artificial agent in the same sense as we trust a human? Even more problematically, can AI itself be a trustor? When it comes to artificial agents, trust can be thought of either as a relation between a human and an artificial agent (should I trust ChatGPT?) or as a relation between two artificial agents in an integrated system. For example, Mariarosaria Taddeo characterizes e-trust, understood as the trust between two artificial agents (AAs) that need to collaborate with each other in an integrated AI system, as follows:
The same goes for trust. We have no doubt that we can trust – or distrust – other humans. We normally trust other humans without having a definition of trust. Quite simply, trust naturally happens between humans. True, sometimes we talk of trusting our car or trusting our dog (or, more controversially, our cat). These are mostly ways of anthropomorphizing certain objects or animals. We tend to see pets as our friends. Perhaps more bizarrely, we also tend to anthropomorphize our cars, tools that we heavily rely on for carrying out basic daily activities. But trust in pets and cars exists precisely to the extent that we anthropomorphize them. We would not say that we trust a wild animal or the bus we are taking, because it is more difficult to see them as human-like.
What Taddeo describes might well be the way two AI systems interact when they need to rely on each other to perform a task, but what is the point of using the “trust” terminology?

In a second sense, we cannot trust AI for the same reason why we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only be appropriately directed at humans. According to Richard Holton’s account of trust, for instance, trust requires a readiness to feel betrayed by the individual you trust[1]. Or perhaps we can talk, less emphatically, of a readiness to feel let down.
Perhaps this is the most important thing that AI can do for us: help us figure out what is so distinctive about being human. And it is a question we cannot trust ChatGPT to answer.
Written by Alberto Giubilini

What is it, exactly, that we are doing when we try to apply a terminology that refers to eminently human dimensions to artificial agents? Are we just changing the meaning of words like ‘trust’?
[1] Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72(1), 63–76.
With AI, we would need definitions in order to decide whether these features can be attributed to AI or to our relationships with AI. We would need to think carefully about what eminently human features like autonomy, creativity, and consciousness actually are. Thus, when we ask whether machines are autonomous, have responsibility, are creative, or are conscious, we are really asking questions about human features.
“the AAs calculate the ratio of successful actions to total number of actions performed by the potential trustee to achieve a similar goal. Once determined, this value is compared with a threshold value. Only those AAs whose performances have a value above the threshold are considered trustworthy, and so trusted by the other AAs of the system”[2].
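To make the rule concrete, here is a minimal sketch, in Python, of the calculation Taddeo describes: a success ratio compared against a threshold. The record structure, the agent names, and the 0.9 threshold are illustrative assumptions of mine, not details of her account.

```python
# Minimal sketch of the quoted e-trust rule: an artificial agent (AA) counts
# as trustworthy when its ratio of successful actions to total actions,
# toward a similar goal, exceeds a threshold. The data layout, agent names,
# and the 0.9 threshold are illustrative assumptions, not Taddeo's specifics.

from dataclasses import dataclass

@dataclass
class TrackRecord:
    successes: int  # successful actions toward a similar goal
    attempts: int   # total actions performed toward that goal

def is_trustworthy(record: TrackRecord, threshold: float = 0.9) -> bool:
    """Compare the AA's success ratio with the threshold value."""
    if record.attempts == 0:
        return False  # no track record, hence no basis for e-trust
    return record.successes / record.attempts > threshold

records = {"AA-1": TrackRecord(95, 100), "AA-2": TrackRecord(80, 100)}
trusted = [name for name, rec in records.items() if is_trustworthy(rec)]
print(trusted)  # ['AA-1']: only AA-1 clears the threshold
```

Note that nothing in this procedure goes beyond measured reliability; whether that deserves the name “trust” is precisely the question at issue.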
And if it is the same thing, if we can trust a machine in the same sense as we trust a human, what does that tell us about human relationships of trust? More generally, what is left of being human if things like trust, autonomy, creativity, consciousness, and morality are transferable to machines without any loss of meaning in these words?

Readiness to feel betrayed or let down seems to presuppose an attribution of responsibility for action to the individual we trust. This requires considering them, in some important sense, autonomous and conscious agents. We can only feel betrayed or let down by humans because only humans have the kind of autonomy and the level of consciousness necessary for attribution of responsibility. Or at least this is what most of us would assume. We cannot feel betrayed or let down by tools because we cannot hold tools responsible for failures (I ask those who think that they trust their car to hold their fire for now).
If so, and trust as we know it really is something different from the trust that some people think can be placed in AI, why do we want to use the same term? Perhaps this is just a concealed attempt, or an unconscious tendency, to anthropomorphize a technology.
So, unsurprisingly, whether we can ‘trust’ AI depends on how we define trust. That is probably the best type of answer a philosopher can offer. But the implications of the answer are more meaningful than they might initially appear.
Trusting humans and relying on tools
A definition of trust needs to take into account the features of both the trustor and the trustee. We usually think of both trustors and trustees as humans (or as anthropomorphized objects or animals). Now AI seems to challenge the idea that either the trustor or the trustee needs to be human for a relationship of trust to occur.
Any definition of trust that would allow us to say that we can trust (or distrust) AI needs to be consistent with the way we use ‘trust’ to refer to attitudes towards humans. Otherwise, we would not really be applying trust to our relationship with AI; we would simply be changing the meaning of a term and forcing the new usage upon people’s everyday communicative exchanges.
In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often, there is no way to figure out whether it is wrong without ourselves doing the kind of research that the software was supposed to do for us, and it can be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not epistemically or ethically reliable.
Lack of shared definitions is not a huge problem for most practical purposes in our everyday life. We think of these properties as eminently human, and we can confidently say that humans are autonomous, creative, and conscious without having to think much about definitions. At most, we can have doubts about humans in specific circumstances (for example, in the case of some severe disabilities) and developmental stages. That is where definitions matter and disagreement might arise. But these are exceptions.
The suspicion is that talk of trust in AI can reveal more about being human and about different dimensions of human experience than about these technologies or our relationships with them. As AI becomes increasingly integrated into our lives, we might paradoxically get a better understanding of what it means to be human by asking ourselves whether, to what extent, and in what sense human dimensions are actually transferable to technology. Asking questions about trust in AI is a way of asking questions about the nature of our trust in each other.
Take again the rock climbing example. I could calculate whether the rope can hold me, on the basis of the laws of physics. An artificial agent would rely on that calculation, and it would probably be better than me at it. But neither an artificial agent nor I could calculate whether you can hold me. Grabbing your hand is about trust, not about calculation. In principle, I might be able to calculate that your muscles can exert the amount of effort required to hold me. But I cannot calculate whether you will be willing to put in that effort for me.
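To see how one-sided the calculation is, here is a toy sketch of the rope half of the example. The figures (the climber’s mass, the rope’s rated strength) are made-up illustrative assumptions, not anything from Holton.

```python
# Toy sketch: whether the rope can hold me is a physics calculation.
# All figures below are illustrative assumptions, not real specifications.

climber_mass_kg = 80.0           # assumed climber mass
g = 9.81                         # gravitational acceleration, m/s^2
rope_rated_force_n = 12_000.0    # assumed rated strength of the rope, newtons

static_load_n = climber_mass_kg * g        # force the rope must bear at rest
print(rope_rated_force_n > static_load_n)  # True: the rope will hold

# There is no analogous computation for whether *you* will choose to hold me:
# willingness is not a parameter that can be plugged into a formula.
```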
Conclusion

So whether we can trust AI turns on whether AI is just a tool or something more human-like. And this gets to the core of our relationship with AI. Many keep raising the possibility that AI can be really creative, or conscious, in ways that resemble or indeed replicate those features in humans. If AI could really be creative and conscious, then the kind of autonomy we attribute to artificial autonomous agents would also closely resemble the autonomy we attribute to humans. But these are vaguely framed possibilities. We would first need to agree on what it means to be autonomous, or creative, or conscious, in order to establish whether AI possesses these features. We will never agree on that.
Consider the following example, taken again from Richard Holton. We are rock climbing and I can decide whether to use a rope or to take your hand to get to the top of the rock. A rope is just a kind of technology, as is AI – at a different level of sophistication, but for the purpose of rock climbing it has all the sophistication that is needed. My attitude in the two cases is different. Even if I have reasons to think you and the rope are equally reliable, I have additional reasons to take your hand compared to the reasons to grab the rope. My reliance on you is accompanied by trust, whereas my reliance on the rope is just that: mere reliance. I would not feel let down or betrayed by a rope. But I would feel let down or betrayed by you if you give me your hand and then fail to make the effort to lift me up.
One might wonder if it is still trust that we are talking about, or if we have just changed the meaning of a word that we use to describe certain human relationships in order to make it fit AI relationships. For that does not seem to be the kind of thing that goes on when I trust someone. For example, I do not perform any calculation. Often I just follow my gut feelings. Sometimes I rely on social or legal norms, as when I trust a mechanic to fix my car.
What does trust have to do with AI?
We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide (or fabricate) a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more. It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

Attributing human features to AI
