[embedded content] In this Thinking Out Loud interview with Katrien Devolder, philosophy professor Peter Railton presents his view of how to understand, and interact with, AI. He discusses how AI agents can have moral obligations towards us humans and towards each other, and why we humans have moral obligations towards AI agents. He also stresses that the best way to tackle certain world problems, including the dangers of AI itself, is to form a strong community of both biological AND AI agents.