While these choices are not yet directly upon us, he says that there are steps to be taken immediately. One area that demands more support is the field of interpretability research in AI. “At the moment,” he explains, “we’ve got these black boxes with input data, this enormously complex model, and then these outputs and we don’t really know what’s going on under the hood. There are enormous challenges but I can see a tractable path towards making progress on this.”
One problem is that often when a society, or more specifically a regime, speaks of the long-term future, it does so to set an epic stage that bolsters its claim to govern. It’s what the Nazis attempted with the “thousand-year Reich”, just as the first Qin emperor spoke of an empire lasting 10,000 generations. In fact, the Qin empire lasted 15 years, three years longer than the Nazis’ Reich. The debate around the issue also provides a model of how to deal with uncertainty.
He believes both technological development and economic growth are needed to avert the threats of climate crisis, bioterrorism and much else besides. The other point he makes is that stopping growth would in any case be pointless unless all 193 countries signed up to it.
An associate professor at Lincoln College, Oxford, he is president of the Centre for Effective Altruism, a body he co-founded to bring data-driven analysis to the business of charity and so make donations as effective as possible. With fellow moral philosopher Toby Ord, he also co-founded Giving What We Can, an organisation whose members pledge to give at least 10% of their earnings to effective charities (MacAskill himself gives away the majority of his), and he is president of 80,000 Hours, a non-profit that advises people on which careers have the most positive social impact.
However, he doesn’t believe now is the time to slow growth because, he argues, we are not yet at a technological stage where that’s possible without potentially calamitous effects. To illustrate the point, he uses the example of where we were 100 years ago. If we had stopped growth then, we would have been left with two options: either return to the grinding privations of agricultural life or burn through fossil fuels, leading to climate catastrophe.

Although most cultures, particularly in the west, provide a great many commemorations of distant ancestors – statues, portraits, buildings – we are much less willing to consider our far-off descendants. We might invoke grandchildren, at a push great-grandchildren, but after that, it all becomes a bit vague and, well, unimaginable.
That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.
We don’t know what’s going to happen, but we should put a lot more time and effort into preparing for different outcomes. We owe that to ourselves, says MacAskill, but we also owe it to the teeming billions yet to come.
“The Iroquois in their oral constitution advocated concern for future generations,” he says. “It seems like this was true for many Indigenous philosophies in Africa as well.”
What MacAskill is arguing for, though, is humility in the face of the astonishing expanse of time that humanity could fill. But that shouldn’t lead to complacency or paralysis. Standard economic forecasts for the next 100 years predict more of the same, with growth of approximately 2% a year. MacAskill says that we should also take into account the possibility of a catastrophe that wipes out 50% of the population, or that there might be a significant increase in growth.
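To see why the difference between those scenarios matters, it helps to remember how annual growth compounds over a century. The sketch below is a hypothetical back-of-the-envelope illustration, not a model from the book; the 4% figure and the shape of the catastrophe scenario are assumptions chosen for contrast:

```python
# Back-of-the-envelope illustration (hypothetical figures, not from the book):
# how different century-long scenarios compound from the same starting point.

def compound(rate: float, years: int) -> float:
    """Multiplier on today's output after `years` of constant annual growth."""
    return (1 + rate) ** years

# Standard forecast: roughly 2% growth every year for 100 years.
steady = compound(0.02, 100)            # ~7.2x today's output

# Assumed faster-growth scenario: 4% a year instead.
faster = compound(0.04, 100)            # ~50.5x

# Assumed catastrophe scenario: output halved once, then 2% growth resumes.
catastrophe = 0.5 * compound(0.02, 100) # ~3.6x

print(f"2% steady: {steady:.1f}x | 4%: {faster:.1f}x | "
      f"halved then 2%: {catastrophe:.1f}x")
```

Small differences in the annual rate, compounded over 100 years, dwarf even a one-off halving of output — which is why MacAskill treats both tail scenarios as worth planning for.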
In making his case for the journey ahead, MacAskill dismisses some of the ideas that are held dear by many who are concerned about the future, particularly those looking at things from an environmental perspective. It’s not uncommon in green circles to hear arguments against economic growth, against consumption and, indeed, even against bringing further children into the world.
The critical point in this analogy is not so much the Nazis, who represent humanity’s potential for doing ill, but AGI. Put simply, AGI is the technological state in which an intelligent machine can learn and enact any intellectual task that a human being can do. From that point, the potential to control ideas and social development is almost limitless, which brings into focus the possibility of an unending dystopia.
Artificial intelligence (AI), he warns, may lead to a dystopian outcome. He argues that humanity is currently in a phase of history in which our values are plastic – they can still be shaped and changed – but that we could soon enter a phase in which values, good or bad, will become “locked in”.