Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation

Written by David Thorstad, Global Priorities Institute, Junior Research Fellow, Kellogg College

This post is based on my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation,” forthcoming in Philosophy and Public Affairs. The full paper is available here and I have also written a blog series about this paper here.
Derek Parfit (1984) asks us to compare two scenarios. In the first, a war kills 99% of all living humans. This would be a great catastrophe – far beyond anything humanity has ever experienced. But human civilization could, and likely would, be rebuilt.
In the second scenario, a war kills 100% of all living humans. This, Parfit urges, would be a far greater catastrophe, for in this scenario the entire human civilization would cease to exist. The world would perhaps never again know science, art, mathematics or philosophy. Our projects would be forever incomplete, and our cities ground to dust. Humanity would never settle the stars. The untold multitudes of descendants we could have left behind would instead never be born.
This thought has driven many philosophers to emphasize the importance of preventing existential risks, risks of catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might regulate weapons of mass destruction or seek to reduce what some see as a risk of extinction caused by rogue artificial intelligence.
Many philosophers think two things about existential risk. First, it is not only valuable, but astronomically valuable to do what we can to mitigate existential risk. After all, the future may hold unfathomable amounts of value, and existential risks threaten to reduce that value to naught. Call this the astronomical value thesis.
Second, increasingly many philosophers hold that humanity faces high levels of existential risk. In his bestselling book The Precipice, Toby Ord (2020) puts the risk of existential catastrophe by 2100 at one in six: Russian roulette. Attendees at an existential risk conference at Oxford put existential risk by 2100 at nearly one in five (Sandberg and Bostrom 2008). And the Astronomer Royal, Martin Rees (2003), puts the risk of civilizational collapse by 2100 at fifty-fifty: a coin flip. Let existential risk pessimism be the claim that per-century levels of existential risk are very high.
Surely the following is an obvious truth: existential risk pessimism supports the astronomical value thesis. If we know anything about risks, it is that it is more important to mitigate large risks than it is to mitigate small risks. This means that defenders of the astronomical value thesis should be pessimists, aiming to convince us that humanity’s situation is dire, and opponents should be optimists, aiming to convince us that things really are not so bad.
In my paper, I argue that every word in the previous paragraph is false. (1) At best, existential risk pessimism has no bearing on the astronomical value thesis. (2) Across a range of modelling assumptions, matters are worse than this: existential risk pessimism strongly reduces the value of existential risk mitigation, often strongly enough to scuttle the astronomical value thesis singlehandedly. (See the end notes for worked examples, and the full paper for further details.)
This tension has important philosophical implications. First, it means that unless more is said, many parties to debates about existential risk may have been arguing on behalf of their opponents. To many, it has seemed that a good way to support the moral importance of existential risk mitigation is to make alarmist predictions about the levels of existential risk facing humanity today, and that a good way to oppose the moral importance of existential risk mitigation is to argue that existential risk is in fact much lower than alarmists claim. However, unless more is said, matters are exactly the reverse: arguing that existential risk is high strongly reduces the value of existential risk mitigation, whereas arguing that existential risk is low strongly increases the value of existential risk mitigation.
Second, there has been a wave of recent support for longtermism, the doctrine that positively influencing the long-term future is a key moral priority of our time. When pressed to recommend concrete actions we can take to improve the long-term future of humanity, longtermists often point to existential risk mitigation: by the astronomical value thesis, longtermists hold, existential risk mitigation is very important. But this paper suggests an important qualification, since many longtermists are also pessimists about existential risk. As we have seen, existential risk pessimism may well be incompatible with the astronomical value thesis, in which case the value of existential risk mitigation may be too low to provide good support for longtermism.

End notes

On (1): To illustrate the best case, suppose that humanity faces a constant level of risk r per century. Suppose also that each century of existence has constant value v, if only we live to reach it. And suppose that all existential catastrophes lead to human extinction, so that no value will be realized after a catastrophe. Then it can be shown that the value of reducing existential risk in our century by some fraction f is fv. In this model, pessimism has no bearing on the astronomical value thesis, since the starting level r of existential risk does not affect the value of existential risk mitigation. Moreover, the value of existential risk reduction is capped at v, the value of a single century of human life. Nothing to sneeze at, but hardly astronomical.
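For readers who want to verify the arithmetic, here is a minimal numerical sketch of the simple model just described (my own illustration, not code from the paper): it truncates the infinite future at a long finite horizon and checks that the gain from reducing this century’s risk by a fraction f is fv, whatever the starting level r.

```python
# Sketch of the simple model: constant per-century risk r, constant
# per-century value v, and no value after an existential catastrophe.
# Function names and the finite horizon are my assumptions, not the paper's.

def expected_value(first_risk, r, v, horizon=100_000):
    """Expected value of the future when this century's risk is first_risk
    and every later century's risk is r."""
    total, p_alive = 0.0, 1.0
    for century in range(horizon):
        p_alive *= 1.0 - (first_risk if century == 0 else r)  # survive it?
        total += p_alive * v  # value v accrues only if we are still here
    return total

v, f = 1.0, 0.1  # value of one century; fraction of this century's risk cut
for r in (0.01, 0.2, 0.5):  # optimistic through pessimistic risk levels
    gain = expected_value((1 - f) * r, r, v) - expected_value(r, r, v)
    print(f"r = {r}: value of mitigation = {gain:.6f} (fv = {f * v})")
```

For every r the printed gain is the same 0.1, that is, fv: in this model the starting risk level makes no difference to the value of mitigation.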
On (2): Making the model more realistic only serves to heighten the tension between pessimism and the astronomical value thesis. For example, suppose that centuries grow linearly in value over time, so that if this century has value v, the next century has value 2v, then 3v and so on. Keep the other modelling assumptions the same. Now, it can be shown that the value of reducing existential risk in our century by some fraction f is fv/r.
In this model, pessimism tells against the astronomical value thesis: if you think that existential risk is now 100 times greater than I think it is, you should be 100 times less enthusiastic about existential risk mitigation. Moreover, the value of existential risk reduction is capped at v/r. For the optimist, this quantity may be quite large, but not so for the pessimist. For example, if we estimate per-century risk r at 20%, then the value of existential risk reduction is capped at five times the value of a single century – again, nothing to sneeze at, but not yet astronomical.
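The same style of check works for the growth model (again a sketch under the stated assumptions, with illustrative names): the gain from mitigation now comes out at fv/r, so it shrinks as r grows.

```python
# Sketch of the growth model: century n (counting from 1) is worth n*v,
# per-century risk is r, and no value is realized after a catastrophe.

def expected_value_growth(first_risk, r, v, horizon=100_000):
    """Expected value of the future when centuries grow linearly in value."""
    total, p_alive = 0.0, 1.0
    for century in range(1, horizon + 1):
        p_alive *= 1.0 - (first_risk if century == 1 else r)
        total += p_alive * century * v  # linear growth: v, 2v, 3v, ...
    return total

v, f = 1.0, 0.1
for r in (0.01, 0.2, 0.5):
    gain = (expected_value_growth((1 - f) * r, r, v)
            - expected_value_growth(r, r, v))
    print(f"r = {r}: gain = {gain:.4f}; fv/r = {f * v / r:.4f}")
```

Setting f = 1 recovers the cap: eliminating this century’s risk entirely is worth v/r, which at r = 20% is five centuries of value, as above.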

References

Bostrom, Nick, “Existential risk prevention as global priority,” Global Policy 4 (2013): 15–31.
Ord, Toby, The Precipice: Existential Risk and the Future of Humanity (London: Bloomsbury, 2020).
Parfit, Derek, Reasons and Persons (Oxford: Oxford University Press, 1984).
Rees, Martin, Our Final Hour (New York: Basic Books, 2003).
Sandberg, Anders and Bostrom, Nick, “Global catastrophic risks survey,” Technical Report 2008-1 (2008), Future of Humanity Institute.