While these choices are not yet directly upon us, he says that there are steps to be taken immediately. One area that demands more support is the field of interpretability research in AI. “At the moment,” he explains, “we’ve got these black boxes with input data, this enormously complex model, and then these outputs and we don’t really know what’s going on under the hood. There are enormous challenges but I can see a tractable path towards making progress on this.”
One problem is that often when a society, or more specifically a regime, speaks of the long-term future, it’s to establish an epic stage that bolsters its claims to governance. It’s what the Nazis attempted to do with the “thousand-year Reich”, just as the first Qin emperor spoke of an empire lasting 10,000 generations. In fact the Qin empire lasted for 15 years, three years longer than the Nazis. The debate around the issue also provides a model of how to deal with uncertainty.
He believes both technological development and economic growth are needed to avert the threats of climate crisis, bioterrorism and much else besides. The other point he makes is that stopping growth would in any case be pointless unless all 193 countries signed up to it.
An associate professor at Lincoln College, Oxford, he is president of the Centre for Effective Altruism, a body he co-founded to bring data-based analysis to the business of charity, thus making donations as effective as possible. With fellow moral philosopher Toby Ord, he also co-founded Giving What We Can, an organisation whose members pledge to give at least 10% of their earnings to effective charities (MacAskill himself gives away the majority of his earnings), and he is president of 80,000 Hours, a non-profit group that advises on which careers have the most positive social impact.
However, he doesn’t believe now is the time to slow growth because, he argues, we are not yet at a technological stage where that’s possible without potentially calamitous effects. To illustrate the point, he uses the example of where we were 100 years ago. If we had stopped growth then, we would have been left with two possibilities: either to return to the grinding privations of agricultural life or to burn through fossil fuels, leading to a climate catastrophe. Although most cultures, particularly in the west, provide a great many commemorations of distant ancestors – statues, portraits, buildings – we are much less willing to consider our far-off descendants. We might invoke grandchildren, at a push great-grandchildren, but after that, it all becomes a bit vague and, well, unimaginable.
That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.
We don’t know what’s going to happen, but we should put a lot more time and effort into preparing for different outcomes. We owe that to ourselves, says MacAskill, but we also owe it to the teeming billions yet to come.
“The Iroquois in their oral constitution advocated concern for future generations,” he says. “It seems like this was true for many Indigenous philosophies in Africa as well.”
What MacAskill is arguing for, though, is humility in the face of the astonishing expanse of time that humanity could fill. But that shouldn’t lead to complacency or paralysis. Standard economic forecasts for the next 100 years predict more of the same, with growth of approximately 2% a year. MacAskill says that we should take into account the possibility of a catastrophe that wipes out 50% of the population, or that there might be a significant increase in growth.
In making his case for the journey ahead, MacAskill dismisses some of the ideas that are held dear by many who are concerned about the future, particularly those looking at things from an environmental perspective. It’s not uncommon in green circles to hear arguments against economic growth, against consumption and, indeed, even against bringing more children into the world.
The critical point in this analogy is not so much the Nazis, who represent humanity’s potential for doing ill, but AGI, or artificial general intelligence. Put simply, AGI is the technological state in which an intelligent machine can learn and enact any intellectual task that a human being can do. From that point, the potential to control ideas and social development is almost limitless, which brings into focus the possibility of an unending dystopia.
Artificial intelligence (AI), MacAskill warns, may lead to a dystopian outcome. He argues that humanity is currently in a phase of history in which our values are plastic – they can still be shaped and changed – but that we could soon enter a phase in which values, good or bad, will become “locked-in”.