The end of the world – or at least the human species – is not something most of us think about very often, much less do anything about. We should not be quite so relaxed. The number of possible events that could kill all human beings everywhere, or at least cause advanced civilisation to collapse beyond repair, is growing.
We should treat these risks as we do other low-probability but catastrophic events, such as our house being totally destroyed. Just as we take out insurance to protect against that possibility, we should take steps to reduce our exposure to Global Catastrophic Risks (GCRs).
Crucially, though, we need to consider such risks the right way, based on good science, probabilistic reasoning and mathematics – and a sound economic understanding of costs and benefits.
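To see what that reasoning looks like, take some deliberately invented numbers. If a house worth £300,000 faces a one-in-a-thousand chance of total destruction each year, the expected annual loss is £300, so paying an insurance premium of roughly that size is a sensible trade. The same expected-value logic applies to GCRs: even a very small annual probability, multiplied by the loss of everything, can justify spending substantial resources on prevention.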
Global Catastrophic Risks are events whose consequences would affect the entire planet: everyone alive and all their descendants. And those consequences would be utterly disastrous, far beyond the merely ‘very bad’. That means the extinction of the human species (or something very close to it) or the collapse of advanced civilisation with no realistic prospect of recovery.
There have always been risks of this kind, such as an asteroid striking Earth, a truly lethal pandemic, or a supervolcano eruption. We and our ancestors lived with them because the probability of their happening was very low, and we could do little or nothing about them. This has all changed in the last eighty years or so.
We (all humans) now face a range of novel GCRs from new or emerging technologies. The first was nuclear weapons, with the risk of a civilisation-ending nuclear war. It has since been joined by others, such as a pandemic caused by a bioweapon, super-powerful and unaligned AI that supplants humanity, or the unintended results of genetic manipulation of organisms. These are much more likely than natural catastrophes. What’s more, their number is increasing, and they are unpredictable in a way that natural events are not.
Technology and the way we now live also make long-standing natural risks more likely and, therefore, more of a threat. One case is natural pandemics, made more probable by contemporary methods of livestock farming. But the big example is sudden climate change: not a gradual process of warming (or cooling), but an abrupt reset of the Earth’s climate system that would leave us no time to adapt.
Finally, our technological development, which in many ways increases our ability to cope with major catastrophes, has also made us more vulnerable. This has two aspects. The first is that we now live in and completely depend upon a set of increasingly complex systems – as we saw with the impact of Covid-19. Many of these are vulnerable to natural events that would have left our ancestors’ simpler technologies and institutions largely untouched.
One example is major solar flares, which gave our ancestors nothing worse than a spectacular display of the Northern Lights but which would now bring down power grids and electronic and online systems, and not just for a short time. The second aspect is the prospect of systemic collapse, in which the failure of one part of an interlocking system of systems triggers a runaway collapse of the whole.
All this means we face a much wider variety of risks than our ancestors and with higher and increasing probabilities. As a species, we are in a moment of heightened vulnerability because we have advanced enough to create new risks but have not yet developed ways to deal with many of them. In addition, our current levels of technology mean that if modern civilisation were to collapse, we might not be able to reconstruct it at anything like its present level.
It is clear that we need to pay attention to the threat of catastrophes. That does not mean panicking or worrying about them all the time. It does mean thinking about how likely they are and what can and should be done to make them less likely.
We must also be realistic. Often the best we can do is mitigate the impact of such an event from ‘catastrophic’ to merely ‘bad’ or ‘very bad’. Nor should realism stop us from spending resources or taking steps that impose costs of their own (a pause or halt to AI research, for example, could delay or forgo benefits we might otherwise enjoy). Economic insights will be critical to this effort, since weighing costs, benefits and trade-offs is the economist’s bread and butter.
Saying we should do something is not the same as endorsing anything people suggest. Even with GCRs, some ways of responding are futile or actively counterproductive, in some cases as bad as the thing they are meant to guard against. An example is the demand for top-down global government or sweeping controls on innovation.
This is the exact opposite of what we should do. Not only would it create a permanent global tyranny (itself a GCR), but it would cut off the best ways we have of producing solutions and leave us at the mercy of the choices of well-intentioned but ignorant and fallible elites. There is undoubtedly a place for governance and regulations, and we should certainly be looking to reform market institutions to give them a greater concern for the long term.
Primarily, though, we should champion market exchange and free institutions, which have proven the most effective tool to deliver sustained innovation. Rather than a global Apollo or Manhattan Project, this means unleashing the ingenuity of billions of individuals to tackle these difficult, existential challenges. This may seem counter-intuitive, but it offers our best chance of achieving the innovation necessary to make it through the next century.