In this Thinking Out Loud interview with Katrien Devolder, Philosophy Professor Peter Railton presents his take on how to understand, and interact with, AI. He talks about how AI agents can have moral obligations towards us humans and towards each other, and why we humans have moral obligations towards AI agents. He also stresses that the best way to tackle certain world problems, including the dangers of AI itself, is to form a strong community of biological AND AI agents.
