
While these choices are not yet directly upon us, he says that there are steps to be taken immediately. One area that demands more support is the field of interpretability research in AI. “At the moment,” he explains, “we’ve got these black boxes with input data, this enormously complex model, and then these outputs and we don’t really know what’s going on under the hood. There are enormous challenges but I can see a tractable path towards making progress on this.”
One problem is that often when a society, or more specifically a regime, speaks of the longterm future, it’s to establish an epic stage to bolster its claims on governance. It’s what the Nazis attempted to do with the “thousand-year Reich”, just as the first Qin emperor spoke of an empire lasting 10,000 generations. In fact the Qin empire lasted for 15 years, three years longer than the Nazis’ Reich. The debate around the issue also provides a model of how to deal with uncertainty.
He believes both technological development and economic growth are needed to avoid threats of climate crisis, bioterrorism and much else besides. The other point he makes is that stopping growth would in any case be pointless unless all 193 countries signed up to it.
An associate professor at Lincoln College, Oxford, he is president of the Centre for Effective Altruism, a body he co-founded to bring data-based analysis to the business of charity, thus making donations as effective as possible. With fellow moral philosopher Toby Ord he also co-founded Giving What We Can, an organisation whose members pledge to give at least 10% of their earnings to effective charities (MacAskill himself gives away the majority of his earnings), and he is president of 80,000 Hours, a non-profit group that advises on which careers have the most positive social impact.
However, he doesn’t believe now is the time to slow growth because, he argues, we are not yet at a technological stage where that’s possible without potentially calamitous effects. To illustrate the point, he uses the example of where we were 100 years ago. Had we stopped growth then, we would have been left with two possibilities: either to return to the grinding privations of agricultural life or to burn through fossil fuels, leading to a climate catastrophe.
Although most cultures, particularly in the west, provide a great many commemorations of distant ancestors – statues, portraits, buildings – we are much less willing to consider our far-off descendants. We might invoke grandchildren, at a push great-grandchildren, but after that, it all becomes a bit vague and, well, unimaginable.
That, in a nutshell, is the thesis of his new book, What We Owe the Future: A Million-Year View. The Dutch historian and writer Rutger Bregman calls the book’s publication “a monumental event”, while the US neuroscientist Sam Harris says that “no living philosopher has had a greater impact” upon his ethics.
We don’t know what’s going to happen, but we should put a lot more time and effort into preparing for different outcomes. We owe that to ourselves, says MacAskill, but we also owe it to the teeming billions yet to come.
“The Iroquois in their oral constitution advocated concern for future generations,” he says. “It seems like this was true for many Indigenous philosophies in Africa as well.”
What MacAskill is arguing for, though, is humility in the face of the astonishing expanse of time that humanity could fill. But that shouldn’t lead to complacency or paralysis. Standard economic forecasts for the next 100 years predict more of the same, with growth of approximately 2% a year. MacAskill says that we should take into account the possibility of a catastrophe that wipes out 50% of the population, or that there might be a significant increase in growth.
In making his case for the journey ahead, MacAskill dismisses some of the ideas that are held dear by many who are concerned about the future, particularly those looking at things from an environmental perspective. It’s not uncommon in green circles to hear arguments against economic growth, against consumption and, indeed, even against bringing more children into the world.
The critical point in this analogy is not so much the Nazis, who represent humanity’s potential for doing ill, but AGI, or artificial general intelligence. Put simply, AGI is the technological state in which an intelligent machine can learn and enact any intellectual task that a human being can do. From that point, the potential to control ideas and social development is almost limitless, which brings into focus the possibility of an unending dystopia.
One danger MacAskill examines is how artificial intelligence (AI) may lead to a dystopian outcome. He argues that humanity is currently in a phase of history in which our values are plastic – they can still be shaped and changed – but that we could soon enter a phase in which values, good or bad, will become “locked-in”.

At the core of the book is the question of human values. These are obviously not set in stone, because we need only look at history to see how they have radically changed over time. A key example that MacAskill returns to is slavery and its abolition. At various periods and across most cultures slavery has existed and been deemed natural, or at least acceptable.
Going against that moral step forward, however, is the argument that rejects humanism, post-Enlightenment values and the whole liberal discourse as merely being the soft power of colonialism, a western imposition that runs roughshod over other cultures and values. MacAskill has little time for such relativist arguments.

“We’ve gone too far in that direction,” he says, suggesting that we have developed the conceptual tools to navigate our way through the great unknown to come and should make use of them. “We can now use expected value theory to hedge against uncertainty.”
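As a rough illustration of what that reasoning looks like in practice, the sketch below weights invented scenarios by invented probabilities; the figures are mine for the example, not MacAskill’s.

```python
# Toy illustration of expected-value reasoning (all figures invented).
# Rather than planning around a single forecast, each scenario is weighted
# by its estimated probability and the weighted outcomes are summed.
scenarios = [
    # (description, probability, world output in 100 years relative to today)
    ("steady ~2% annual growth",       0.80,  7.0),
    ("catastrophe halves the economy", 0.10,  0.5),
    ("growth accelerates sharply",     0.10, 20.0),
]

expected_output = sum(prob * outcome for _, prob, outcome in scenarios)
print(f"probability-weighted output in 100 years: about {expected_output:.1f}x today's")
```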
“I honestly think people in rich countries could be giving radically more than they’re currently giving at very little cost to their own wellbeing, while at the same time doing an enormous amount of good.”


At the outset of his book, MacAskill presents a metaphor of a risky expedition into uncharted terrain. Just like early explorers, we don’t know exactly what threats await us, but “we can scout out the landscape ahead of us, ensure the expedition is well resourced and well coordinated, and, despite uncertainty, guard against those threats we are aware of”.
But despite the claims that have been made, says MacAskill, there was no economic imperative to end slavery. Sugar plantations were not mechanised for many years after the end of slavery and sugar consumption continued to grow when slavery stopped. Rather than think of abolition as inevitable, he argues, we should acknowledge the part played by those who made the moral case. It seems so obvious to us now that it’s hard to imagine anyone could have opposed it. But powerful forces did. Moral progress is contingent, MacAskill emphasises, not inexorable. It just seems like that once progress has been achieved.
“I think to conflate colonialism and [these aspects of liberalism] is a huge mistake,” he says. “Colonialism was absolutely horrific, one of the great abominations of history. But if you have this idea that all moral perspectives command equal respect, then are slave-owning societies and extreme patriarchal societies – the most common societies throughout history – OK, it’s just their way of being and we shouldn’t tell them they’re wrong? No.”
Similarly, rather than cut back on consumption, he argues, it’s much more effective to donate to causes that are dealing with the problems created by consumption.
For MacAskill, the time to address that possibility is now, because later may well prove too late. No one can be certain that AGI is achievable and, if it is, when it will be achieved, but most scientists working in the field think that it will happen. A significant minority believe that it’s probable within the next 50 years and some think it may take only 20. MacAskill himself estimates a 10% chance of AGI in the next 50 years. Short of stopping research across the planet – and how could that be enforced? – what can be done?
That means there would have to be 10m tn times as much output as our current world produces for every atom that we could in principle access. “Though of course we can’t be certain,” he writes with wry understatement, “this just doesn’t seem possible.”
“We need to get more fine-grained,” he says. “OK, some technologies have net bad effects, some have net good effects. And we can really push on the ones that are more beneficial.”
All those numbers seem incalculably abstract but, according to the moral philosopher William MacAskill, they should command our attention. He is a proponent of what’s known as longtermism – the view that the deep future is something we have to address now. How long we last as a species and what kind of state of wellbeing we achieve, says MacAskill, may have a lot to do with what decisions we make and actions we take at the moment and in the foreseeable future.
“This is tough,” he acknowledges. “It’s not nearly as clearcut as preventing pandemics. But I think there are some things we can do. For example, the idea of slowing down some areas of AI research. AI will be hugely beneficial, but you can get a lot of the gains without going all the way. Do we need to have AI systems that are engaged in longterm planning? Do we need to have AI systems that are enormously multimodal and able to do very many different things rather than just narrow tasks?”
MacAskill disagrees with all these positions. In the longterm the kind of growth we’ve seen in the past century or so – above 2% – is unsustainable. If it continued at a rate of 2% for the next 10,000 years, he writes, “we would produce 100tn trillion trillion trillion trillion trillion trillion times as much output as we do now”.
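That figure is easy to sanity-check; the snippet below is a back-of-envelope calculation, not MacAskill’s own, and simply compounds 2% annual growth over 10,000 years.

```python
# Back-of-envelope check of the compound-growth figure quoted above.
from math import log10

growth_factor = 1.02 ** 10_000          # 2% a year, sustained for 10,000 years
print(f"output multiple: roughly 10^{log10(growth_factor):.0f}")
# Prints ~10^86, consistent with "100tn" followed by six factors of a trillion,
# which is why MacAskill treats that rate of growth as unsustainable over such spans.
```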
MacAskill grew up in a comfortable middle-class Glasgow family and attended private school. He’d always had an altruistic side. As a 15-year-old, prompted by the knowledge of how many people were dying in the Aids crisis, he decided he wanted to become wealthy and give half his money away. He did voluntary work for a disabled scout group, but it wasn’t until he got to Cambridge, where he studied philosophy and played saxophone in a funk band, that his moral outlook took on a more intellectual form. Reading Peter Singer’s Famine, Affluence, and Morality propelled him into a lifetime of not just philosophical but practical commitment.
We tend to think of moral philosophers as whiskery sages, but MacAskill is a youthful 35 and a disarmingly informal character in person, or rather on a Zoom call from San Francisco, where he is promoting the book.
He speculates that there are logical cultural reasons why this was the case. In hunter-gatherer societies technological change took place very slowly. So learning something from your ancestors 1,000 years ago or handing down those wisdoms to your descendants 1,000 years hence was viable, because it was likely to still be relevant.
And while we look with awe and fascination at the Egyptian pyramids, built 5,000 years ago, we seem incapable of thinking, or even contemplating, 5,000 years in the future. That lies in the realm of science fiction, which is tantamount to fantasy. But the chances are, barring a global catastrophe, humanity will still be very much around in 5,000 years, and going by the average existence of mammal species, should still be thriving in 500,000 years. If we play our cards right, we could even be here in 5m or 500m years, which means that there may be thousands or even millions of times more human beings to come than have already existed.
What We Owe the Future: A Million-Year View by William MacAskill is published by Oneworld (£20). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply
Arguments were made against it, but none so compelling that it ceased to be. What changed things was the Atlantic slave trade and the Enlightenment. The contradictions between the principles of universalism and owning and mistreating fellow humans became increasingly untenable, at least in a moral and intellectual sense.
“Suppose that you managed to persuade 192 to stop growth, but one country continues to grow. Compound growth means that before too long that one country is the entire world economy and in the long run you’ve really not done anything.”
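The arithmetic behind that point is straightforward compounding; the sketch below uses an invented starting split between the one growing country and the other 192, purely to show how quickly its share converges on the whole.

```python
# Illustrative only: one country keeps growing at 2% a year, 192 others stay flat.
growing = 1.0          # arbitrary starting output of the one growing country
static_rest = 192.0    # combined (frozen) output of the other 192 countries

for years in (100, 500, 1000):
    grown = growing * 1.02 ** years
    share = grown / (grown + static_rest)
    print(f"after {years:>4} years the growing country is {share:.0%} of world output")
```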
No one can say with great accuracy how many humans have lived, but one recent estimate by the US Population Reference Bureau says that about 120 billion Homo sapiens have so far been born. MacAskill says that if we assume that our population continues at its current size and we last as long as typical mammals, that would mean “there would be 80 trillion people yet to come; future people would outnumber us 10,000 to one”.
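The back-of-envelope version of that estimate runs roughly as follows; the population, lifespan and species-duration figures below are round-number assumptions of mine, not MacAskill’s precise inputs.

```python
# Rough reconstruction of the "80 trillion people yet to come" estimate.
# All inputs are illustrative round numbers, not MacAskill's exact figures.
current_population = 8e9       # people alive today
years_remaining = 700_000      # remaining lifespan assumed for our species
years_per_life = 70            # average human lifespan

future_people = current_population * years_remaining / years_per_life
print(f"future people: about {future_people / 1e12:.0f} trillion")                            # ~80 trillion
print(f"ratio to people alive today: about {future_people / current_population:,.0f} to one")  # ~10,000
```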
So it’s clear that MacAskill is not one of those armchair thinkers who talks the talk but doesn’t follow up on his ideas. Although he wouldn’t describe himself as a utilitarian, he is concerned with the maximum good, and firmly believes in minimising human suffering and maximising human wellbeing. He also believes in minimising the suffering of animals, something humanity has almost certainly increased. But for humans to flourish they first have to be alive, and his argument is that the more humans there are who live, and the happier the lives they lead, the better.
“It was the course of reasoning on the basis of Effective Altruism,” he says, “that led me to think about issues that impact not just the present generation but the longterm future too.”
The moral argument is that, by sheer weight of numbers, our descendants’ needs should loom large in our deliberations. With alarming signs that the climate crisis is already upon us, MacAskill states the obvious and urgent need for decarbonisation. This problem, however, is not the main focus of his book. Rather, he employs the climate crisis as proof of longtermism: “We all contribute to a problem that literally has effects for hundreds of thousands of years,” he says.
Second, he explains: “Climate change is much less neglected than other concerns like pandemic prevention, nuclear war and AI safety: it is already widely agreed to be among the world’s most important problems and there are large social movements dedicated to solving climate change.”
That said, MacAskill believes the west has a great deal to learn from other cultures. In the first instance, by appreciating just how well off most people are in developed countries by comparison with those who are not. He says he currently gives away everything he earns above £26,000 post-tax per year, which still places him in the top 5% in terms of wealth across the globe. Although he doesn’t yet have children, if he did, he would allow an extra £5,000 for each one were he sharing the financial burden with another parent, and £10,000 if a single parent.
But in societies undergoing rapid change, we feel more disconnected from the distant future because we struggle to conceive what it will be like.

There are also philosophical wisdoms to be gained from Indigenous populations.
