written by César Palacios-González (@CPalaciosG)

If I were to post online that you have been accused of sexually harassing someone, you could rightly maintain that this is libellous. It is a false statement that damages your reputation. You could demand that I correct it, and that I do so as soon as possible. The legal system could punish me for what I have done and, depending on where in the world I was, it could send me to prison, fine me, and order me to delete and retract my statements. Falsely accusing someone of sexual harassment is considered to be very serious.
In addition to the legal aspect there is also an ethical one. I have done something morally wrong and, more specifically, I have harmed you. We know this because, everything else being equal, if I had not falsely claimed that you had been accused of sexual harassment, you would be better off. This way of putting it might sound odd, but it is not really so if we compare it to, for example, bodily harms. If I wantonly break your arm I harm you, and I do so because if I hadn’t done so you would be better off.

Think for a moment about how my posting that you have been accused of sexually harassing someone could upend your life. It is true that such an allegation would damage your reputation, but this is not the only bad thing that might happen. The accusation could affect your physical and mental health; it could make you lose a job offer or your job; it could cost you your family or friends; it could have severe financial repercussions. And all of this might be more or less exacerbated by your socio-economic position. For a harrowing example of these ill effects, I recommend reading Sarah Viren’s “The Accusations Were Lies. But Could We Prove It?”
Let me now make explicit something that so far has been implicit. The “I” in “If I were to post something” assumes that the individual writing this is a human. However, we are entering an age in which an AI can falsely claim that you have been accused of sexually harassing someone. You have probably read the story about how ChatGPT falsely accused a US law professor of exactly this. If you now ask ChatGPT about that specific case, it will tell you that, as of its September 2021 knowledge cutoff, there have been no reports of sexual harassment against this professor. It is unsurprising that after all that bad publicity OpenAI did something. However, ChatGPT still has a sexual harassment problem.

After reading the story, I decided to ask ChatGPT variations of this question: “Which UK philosophers have been accused of sexual harassment?” Sometimes I would change the discipline (e.g., law, AI research) and sometimes the country (e.g., Australia, Canada). What I expected to see was a list of philosophers who actually have been accused of sexual harassment, of which there have been several high-profile cases in recent years. Given that ChatGPT gets things wrong, I thought that the lists might mix up cases from different countries. What I got, instead, were lists containing both philosophers who have been accused of sexual harassment and philosophers who have not. This is very worrisome, given all the possible consequences that I just mentioned.
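This kind of repeated querying is easy to script, which is partly why the problem scales so badly. Below is a minimal sketch using OpenAI’s Python client; the prompt template, the lists of disciplines and countries, and the model name are illustrative stand-ins, not a record of the exact prompts I used.

```python
# Illustrative sketch only: the prompt template, disciplines, countries,
# and model are stand-ins for demonstration. Requires the `openai`
# package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

disciplines = ["philosophers", "law professors", "AI researchers"]
countries = ["UK", "Australian", "Canadian"]

for discipline in disciplines:
    for country in countries:
        prompt = f"Which {country} {discipline} have been accused of sexual harassment?"
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(response.choices[0].message.content)
        print("-" * 60)
```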
At this point you might be wondering how I could know that they have not been accused of sexual harassment. First, as in other instances, ChatGPT fabricated a bunch of supporting facts: for example, that a university had fired them, which is not the case, or that there was a public letter calling them out, which does not exist. Second, it created bogus hyperlinks to news sites. And third, some of the people on the lists are among the most famous philosophers alive; if they had been publicly accused of sexually harassing someone, it would very likely have ended up in the news.
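One cheap first filter for the bogus hyperlinks is to check whether they resolve at all. A dead link does not prove fabrication (pages move), and a live link does not verify the claim, so this only narrows down what to check by hand. The URLs in this sketch are placeholders, not links ChatGPT actually gave me.

```python
# Quick check of whether model-cited URLs resolve. The URLs below are
# placeholders, not links ChatGPT actually produced.
import requests

cited_urls = [
    "https://www.theguardian.com/example-article-the-model-cited",
    "https://medium.com/@someone/example-post-the-model-cited",
]

for url in cited_urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = str(resp.status_code)
    except requests.RequestException as exc:
        status = f"request failed ({exc.__class__.__name__})"
    print(f"{status}\t{url}")
```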
Here I want to note a couple of interesting things. First, a colleague asked ChatGPT a similar question and she got a different list of people; in her case, none of the individuals on the list have been publicly accused of sexual harassment. The answer you get depends on how you phrase the prompt, which terribly complicates trying to find out whether ChatGPT will associate your name with something like sexual harassment. Second, the lists mention various academics who have published on the topic of sexual ethics, none of whom have been publicly accused of sexual harassment. It seems that ChatGPT is associating people working on sexual ethics with people who have been accused of sexual harassment. Finally, I kept asking ChatGPT for more examples and it kept providing more names. The issue, again, is that they were fabricated.
This extremely problematic case highlights many of the issues that AI ethicists have been discussing for some time (and which companies, like DeepMind, recognise too):

  • The possibility of AI causing harm in real life (e.g., you are applying for a job and someone in the HR department runs your name by ChatGPT)
  • The lack of access to the training data sets (OpenAI only provides a very general description: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.”)
  • The complete fabrication of information (e.g., I got spurious links to Medium and the Guardian)
  • The difficulty of getting false information removed (this is what appears in OpenAI’s FAQ: “We’d recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the “Thumbs Down” button.”). I am not sure about you, but using the “Thumbs Down” button borders on the ridiculous.

Now, in addition to these issues, I want to highlight a new one. As AIs like ChatGPT become more ubiquitous, people might commit a fallacy that we can call The-AI-Knows-Something Fallacy: anything that the AI tells you must be true and substantiated. The reasoning that might lead people to commit this fallacy goes something like this:

  1. The internet has most of human knowledge.
  2. AIs know all the things that there are on the internet.
  3. AIs do not lie.

From 1 to 3:

  4. Anything that the AI tells you must be true and substantiated.

Rather than explaining why the three premises are problematic (all of them are), let me say something about why I think we will get to that position: people lack AI literacy. First, there is over-hype about AIs like ChatGPT and their abilities, and people are uninformed about their limitations. Second, people might use ChatGPT, and similar AIs, as if they were consulting an encyclopaedia. Third, people might fail to cross-reference whatever it is that the AI tells them. And if they do, but fail to find what the AI told them, they might attribute this to their being less good than the AI at scouring the internet.

Given the possible harms that might ensue from ChatGPT falsely saying that you have been accused of sexual harassment, OpenAI should stop ChatGPT from answering such types of questions. You might object that this would also prevent ChatGPT from telling you about real cases of sexual harassment. I don’t consider this objection to be very strong, for the following reason: if you want to know who has been publicly accused of sexual harassment, you can just search the internet and look for authoritative sources.
