By Phil Torres
Published in a special issue of Foresight on existential risks. Updated Nov. 6, 2018.
Abstract: This paper provides a detailed survey of the greatest dangers facing humanity this century. It argues that there are three broad classes of risks—the “Great Challenges”—that deserve our immediate attention, namely, environmental degradation, which includes climate change and global biodiversity loss; the distribution of unprecedented destructive capabilities across society by dual-use emerging technologies; and value-misaligned algorithms that exceed human-level intelligence in every cognitive domain. After examining each of these challenges, the paper then outlines a handful of additional issues that are relevant to understanding our existential predicament and could complicate attempts to overcome the Great Challenges. The central aim of this paper is to provide an authoritative resource, insofar as this is possible in an academic journal, for scholars who are working on or interested in existential risks. In my view, this is precisely the sort of big-picture analysis that humanity needs more of if we wish to navigate the obstacle course of existential dangers before us.
This is the first century in the world’s history when the biggest threat is from humanity.—Lord Martin Rees
The present paper offers a comprehensive overview of our rapidly evolving existential predicament. In doing this, it outlines what I call the “Great Challenges framework,” which aims to provide a useful mechanism for prioritizing the various threats to humanity in the first half of the twenty-first century. To qualify as a Great Challenge, a phenomenon must satisfy three criteria: (i) significance: it must have existential risk implications; (ii) urgency: it must require immediate attention if humanity wishes to obviate it; and (iii) ineluctability: it must be more or less unavoidable given civilization’s current developmental trajectory. Put differently, the semantic extension of “Great Challenges” includes all and only those problems that humanity cannot ignore, that are time-sensitive, and that would produce severe and irreversible consequences if left unsolved. For the sake of clarity, we can define an existential risk here as “any future event that permanently prevents us from exploiting a large portion of our cosmic endowment of negentropy to create astronomical amounts of those things that we find valuable” (see Torres and Beard, forthcoming). (1) This reconstructs the various definitions articulated by Bostrom in his foundational work on the topic (e.g., Bostrom 2002, 2013). (2) The most obvious way of satisfying this definition is for humanity to go extinct, but there are other events, e.g., civilizational collapse and technological stagnation, that could also irreparably compromise our long-term prospects for realizing astronomical value in the universe (see Bostrom 2013). (3)
Why should one care about the long-term survival and flourishing of (post)humanity? The answer to this question goes (far) beyond the scope of this paper, but suffice it to say that many value systems would view the instantiation of an existential risk as tragic. For example, total utilitarianism prescribes the creation of as many happy people in the universe as possible, and since failing to exploit our cosmic endowment would hinder this aim, an existential catastrophe would be profoundly bad (see, e.g., Parfit 1984; Bostrom 2013). In contrast, Johann Frick (2017) argues that humanity has non-instrumental or “final” value and that “when what is finally valuable is a form of life or a species, what we ought to care about, we might say, is the ongoing instantiation of the universal.” It follows that our extinction sooner rather than later “would be very bad, indeed one of the worst things that could possibly happen” (Frick 2017). Finally, Samuel Scheffler (2016, 2018) offers yet another perspective according to which much of what gives current lives value—i.e., makes them “value-laden”—is predicated on an assumption that humanity will survive long into the future. Thus, our lives would become largely meaningless, he claims, if we were to discover that humanity is soon to perish. The point is that there are multiple convergent arguments for why succumbing to an existential catastrophe would be extremely bad (see Torres and Beard, forthcoming). (4) This implies that mitigating existential risks through, as it were, “broad” or “targeted” strategies “should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole” (Bostrom 2013; see also Beckstead 2013). Bostrom dubs this the “maxipok” rule of thumb, and it is the motivational decision-theoretic component of the Great Challenges framework.
That is to say: The maxipok rule instructs us to reduce the overall probability of existential risk, but it doesn’t tell us which particular existential risk scenarios we should focus on to most effectively achieve this end; an answer to this prioritization question is what the Great Challenges framework aims to provide.
Before examining the three Great Challenges in some detail, let’s begin with a brief survey of (what I called above) “our rapidly evolving existential predicament.” First, the good news: Humanity has made significant progress over time in multiple domains—epistemic, moral, technological, medical, and so on. For example, our scientific models of the universe have never been so complete, and anticipated future technologies promise to eliminate virtually all human diseases and perhaps reverse the process of aging. Even more, studies reveal an appreciable decline in the prevalence of nearly every form of violence across history, including assaults, rapes, murders, genocides, and wars. Contemporary people stand at the vanguard of the Long Peace, during which no two world superpowers have gone to war, and we are riding the wave of what Steven Pinker (2011) calls the “New Peace,” which denotes the period since the end of the Cold War during which “organized conflicts of all kinds—civil wars, genocides, repression by autocratic governments, and terrorist attacks—have declined throughout the world.” And the postwar “Rights Revolutions” have ameliorated the plights of “ethnic minorities, women, children, homosexuals, and animals” (Pinker 2011, xxiv). According to Pinker’s “escalator hypothesis,” the driving force behind these trends in recent times has been (a) the Flynn effect, yielding what he calls the “moral Flynn effect,” and (b) the propagation of Enlightenment values like reason, science, and humanism (Pinker 2018; although see Torres 2018 for criticisms). From this perspective—sometimes called “New Optimism”—not only has the world improved in numerous important respects, but there appears to be some justification for sanguinity about our collective future in the cosmos.
Yet this cluster of encouraging trends is only half of the diachronic picture of humanity, so to speak. The human condition is indeed better overall today than in the past, but the contemporary world also contains far more risk potential than any previous moment in anthropological history. (5) By “risk potential,” I mean a rough measure of the extent to which things could go wrong in a global-transgenerational sense; for example, the more existential risk scenarios there are, the greater the risk potential. This leads us to the following two interrelated trends:
First, the total number of global-scale catastrophe scenarios has significantly risen since 1945 (Torres 2017a). Prior to the inauguration of the Atomic Age, the only risks to human survival stemmed from natural phenomena like asteroids, comets, supervolcanoes, gamma-ray bursts, solar flares, cosmic rays, and pandemics. Today the list of existing and emerging threats includes a growing constellation of anthropogenic risks like climate change, global biodiversity loss, species extinctions, nuclear conflict, designer pathogens, atomically-precise 3D printers, autonomous nanobots, stratospheric geoengineering, physics disasters, and machine superintelligence, to name just a few. And let’s not forget that super-powerful future artifacts could introduce entirely new types of risks to human survival and prosperity; these artifacts may be, from our current vantage point, not merely unimagined but unimaginable, perhaps requiring a different kind of mind to comprehend. Indeed, I have elsewhere argued that “unknown unknowns”—or, more playfully, “monsters,” of which there are three types—could constitute the greatest long-term threat to humanity (Torres 2016). Monsters could take the form of novel inventions, cosmic dangers presently hidden from sight, and unintended consequences. If the overall level of risk grows in proportion to the exponential development of new dual-use technologies, then one might even consider talking about an “existential risk singularity,” in Ray Kurzweil’s (2005) sense of “Singularity” (see Verdoux 2009).
Second, the overall probability of a global-scale catastrophe appears to be unprecedentedly high on anthropological timescales. Consider the probability of annihilation per century from what I call our “cosmic risk background,” that is, the cluster of risks from natural phenomena: it is almost certainly less than 1 percent; Toby Ord (2015) argues that it may be much less than 1 percent, perhaps circa a 1/300 chance per century. By contrast, Sir Nicholas Stern assumes a 0.1 percent chance of extinction each year in his influential Stern Review (2006), which compounds to a roughly 9.5 percent chance per century, although this number was chosen as a modeling assumption for the purposes of discussing discount rates. The point is that even on high estimates of natural existential risk per century, the overall probability is relatively low. However, when one seriously considers the ballooning swarm of anthropogenic doomsday scenarios, the overall probability appears unsettlingly high. Consider the following figures from scholars of global risk:
(i) John Leslie estimates that the probability of annihilation in the next 500 years is 30 percent (Leslie 1996).(10)
(ii) Nick Bostrom writes that his “subjective opinion is that setting this probability [of an existential risk] lower than 25 percent would be misguided, and the best estimate may be considerably higher” (Bostrom 2002). (11) Elsewhere Bostrom conjectures that the “probability that humankind will fail to survive the 21st century” is “not less than 20%” (Bostrom 2005), and in 2014, he seems to have assigned a 17 percent probability to the proposition that “humanity goes extinct in the next 100 years” (see Sandberg 2014).
(iii) Lord Martin Rees puts the likelihood of civilizational collapse before 2100 at 50 percent (Rees 2003).
(iv) An informal survey of experts conducted by the Future of Humanity Institute (FHI) yielded a median probability of extinction this century at 19 percent (Sandberg and Bostrom 2008).
(v) Willard Wells uses a mathematical “survival formula” to calculate that, as of 2009, the risk of extinction is almost 4 percent per decade and the risk of civilizational collapse is roughly 10 per–cent per decade (Wells 2009).
(vi) Toby Ord estimates that, given the future development of “radical new technology,” humanity has a 1/6 chance of going extinct this century (see Wiblin 2017).
(vii) And the Doomsday Clock, maintained by the Bulletin of the Atomic Scientists, is currently set to 2 minutes before midnight (or doom). Only in 1953, after the US and Soviet Union detonated thermonuclear bombs, was the minute hand this close to striking twelve (Mecklin 2018). (12)
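To compare these figures on a common timescale, note that a constant per-period probability p compounds over n periods as 1 − (1 − p)^n. This is only a simplifying assumption—several of the models above (e.g., Wells’s survival formula) do not treat the hazard as constant—but it suffices as an illustrative sketch of how, for instance, Stern’s per-year assumption yields his per-century figure:

```python
def cumulative_risk(per_period: float, periods: int) -> float:
    """Probability of at least one catastrophe over `periods` periods,
    assuming a constant, independent per-period probability."""
    return 1 - (1 - per_period) ** periods

# Stern Review modeling assumption: 0.1% extinction risk per year.
# Over 100 years this compounds to roughly 9.5%.
stern_century = cumulative_risk(0.001, 100)

# Wells (2009): ~4% extinction risk per decade; over ten decades this
# would compound to roughly 33.5% (illustrative only, since Wells's
# hazard rate is not actually constant).
wells_century = cumulative_risk(0.04, 10)

print(f"Stern, per century: {stern_century:.1%}")   # ~9.5%
print(f"Wells, per century: {wells_century:.1%}")   # ~33.5%
```

The same arithmetic explains why even “small” annual probabilities matter on civilizational timescales: a risk that looks negligible per year can become substantial per century.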