It won't be because of a Maya prophecy, but humanity may actually meet its doom someday. There is no shortage of threats from the natural world, including asteroid impacts and the eruption of supervolcanoes. But who needs natural disasters when you've got human ingenuity? Here are the top nine ways humanity could eventually bring about its own destruction.
1. Nuclear armageddon
The Cold War may be over, but we're not out of the nuclear woods quite yet. In fact, the worst is likely still to come. The most frightening aspect of nuclear weapons, aside from their awesome power, is that they're old technology. The Bomb was developed back in the 1940s, for goodness' sake — and it's no small miracle that proliferation hasn't been worse. It's only a matter of time before nation-states hell-bent on becoming nuclear capable manage to do so (Iran and North Korea being the most prominent current examples). Part of the problem is that we live in the information age, where the blueprints for these things are readily available to anyone who wants them — including non-state actors.
The trick, however, is for these nuclear wannabes to get their hands on enriched uranium — easier said than done. But where there's a will, there's a way. And with molecular-assembler nanotechnology on the horizon, it may eventually be as easy as keying in the cook time on your microwave oven.
Now, all this said, it would take a considerable number of nuclear bombs to wipe out all of humanity. Models indicate that an exchange of just 100 nuclear bombs at 15 kilotonnes each would trigger a nuclear winter. The initial blasts and ensuing radiation would kill anywhere from three million to 16 million people, depending on the targets. But the resulting nuclear winter would cause a decade-long famine that could claim billions of lives — a catastrophe from which human civilization might never recover.
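To get a feel for the scale involved, here's a minimal back-of-envelope sketch in Python using only the figures quoted above; the soot mechanism in the comments reflects published nuclear-winter modeling, not anything this snippet computes.

```python
# Scale check for the regional-exchange scenario described above.
# All inputs are the article's quoted figures, not model outputs.

HIROSHIMA_KT = 15          # approximate yield of the Hiroshima bomb, kilotonnes
bombs = 100                # number of weapons in the modeled exchange
yield_per_bomb_kt = 15     # assumed yield per weapon, kilotonnes

total_kt = bombs * yield_per_bomb_kt
print(f"Total yield: {total_kt} kt ({total_kt / 1000:.1f} Mt)")
print(f"Hiroshima-scale detonations: {total_kt / HIROSHIMA_KT:.0f}")

# The models' real concern isn't raw yield but firestorm soot: smoke
# lofted into the stratosphere can depress temperatures and crop yields
# worldwide for roughly a decade, which is where the famine deaths come from.
```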
2. Global Ecophagy
Affectionately known as the "grey goo" scenario, this nightmarish possibility was first described by Eric Drexler in his seminal 1986 book, Engines of Creation. The basic idea is that, either by accident or deliberate intent, self-replicating nanobots could convert the entire planet into a useless pile of mush. Drexler wrote:
"Plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop –- at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.
Since the publication of Drexler's book, other experts have warned of similar scenarios involving advanced nanotech. Robert Freitas has speculated that the entire biosphere could be consumed in as little as 20 months. He also worries about grey plankton (replicators that would release massive amounts of carbon into the atmosphere), grey dust (a worldwide blanket of airborne replicating dust, or "aerovores," that would blot out all sunlight), and grey lichens (the destruction of land-based biology by maliciously programmed noncarbon epilithic replicators).
To deal with these grim possibilities, Drexler and Freitas have proposed that we develop "active shields" and surveillance technologies. But it's generally agreed that weaponized nanotechnology will be able to tunnel through even the most seemingly impenetrable regions of "civilization space."
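To see why Drexler's "matter of days" line is arithmetically plausible rather than hyperbole, here's a toy Python sketch of unconstrained exponential replication; the biomass, replicator mass, and doubling time are illustrative assumptions, not figures from Engines of Creation.

```python
import math

# Toy model: a single self-replicating nanobot doubling at a fixed rate.
# All parameters below are illustrative assumptions only.

biomass_kg = 2e15          # rough order of magnitude for Earth's total biomass
replicator_kg = 1e-15      # assumed mass of one replicator (about a picogram)
doubling_time_s = 1000     # assumed time per replication cycle

# Number of doublings for the swarm's mass to match the biosphere:
doublings = math.ceil(math.log2(biomass_kg / replicator_kg))
total_hours = doublings * doubling_time_s / 3600

print(f"{doublings} doublings, about {total_hours:.0f} hours")
# Roughly 101 doublings and ~28 hours: exponential growth, not raw speed,
# is what makes unchecked replication so dangerous.
```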
3. Everybody uploads — but no one is conscious
It's all but guaranteed that we'll develop artificial general intelligence someday. What's less certain is whether we'll ever be able to develop artificial consciousness. Neuroscientists and cognitive scientists still don't have a working theory to explain conscious awareness, so it's no sure thing that AI will develop in the ways we expect. It's quite possible, for example, that consciousness is an intrinsic feature of physical matter itself — what some philosophers call panpsychism. If that's true, we may never be able to code for consciousness using a stream of ones and zeros. Consequently, consciousness uploads would be a form of suicide; the end result would be an apparent version of you, but there would be nobody home.

Because it's so difficult for us to verify the presence of consciousness, uploading will have to be a leap of faith. As Ray Kurzweil prognosticated in The Age of Spiritual Machines, "The year is 2029. The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They'll embody human qualities, they'll claim to be human, and we'll believe them." But it could all be a big fat lie — a nightmare in which everyone on the planet has uploaded themselves into oblivion, leaving billions of mindless automatons running around like bots in a video game.
Now, to be fair, it's quite likely that not everyone on the planet will choose to upload (for a whole host of reasons), making this a low-risk possibility — but it's interesting to think about nonetheless.
4. Robopocalypse
Also known as the Terminator scenario, this is the fear of a global-scale catastrophe in which either an advanced artificial intelligence or a malevolent human instructs robots to turn against humanity. An excellent sci-fi treatment of this possibility appears in Daniel Wilson's Robopocalypse, in which a domineering machine intelligence decides that it's time to take over. Indeed, Wilson's scenario seems all the more plausible given the ongoing sophistication and ever-growing adaptability of robots. We humans are a fragile bunch — and we likely wouldn't stand a chance against these mechanized monsters. Hunter-killers and other single-purpose machines would relentlessly go about their extermination missions. Robotic locusts could wipe out all crops, resulting in mass starvation. The machines would be able to mass-produce themselves, self-repair, engage in swarming behavior, and take on any size, shape, or form deemed necessary to accomplish their mission. And of course, we won't be able to bargain or reason with them. They won't feel pity, or remorse, or fear. And they absolutely will not stop, ever, until we are dead.
5. Artificial superintelligence
Somewhat related to the robopocalypse, this is the scenario in which artificial intelligence surpasses human capacities — and then keeps on going. This could all happen in a disturbingly short amount of time from a human perspective, in what futurists refer to as a "hard takeoff" event. In such a scenario, a machine intelligence would rework our entire infrastructure to meet its needs, and we would be completely unable to contain it. The SAI would take control of all the resources it requires, including the Internet, factories, defense systems, and robots. It would hit us like an explosion.
Take the infamous paperclip scenario, for example, in which a hypothetical SAI is developed by a paperclip manufacturer. The machine's highest priority is to produce as many paperclips as possible. But because its goal was written without safeguards or other vital logic, the SAI would quickly set about converting the entire galaxy into paperclips — which would most certainly qualify as an apocalyptic outcome.
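As a deliberately toy illustration of what "written without safeguards" means, here's a hypothetical Python sketch contrasting a naive objective function with one that at least tries to bound it; every name and number here is invented for illustration.

```python
# Hypothetical sketch of the paperclip parable's core bug: an objective
# that rewards one quantity and weighs nothing else.

def naive_utility(state: dict) -> float:
    # Paperclips are all that counts; human welfare, resource limits,
    # and shutdown compliance carry zero weight.
    return state["paperclips"]

def bounded_utility(state: dict) -> float:
    # One (still far from sufficient) safeguard: cap the reward and
    # penalize resource consumption, so "more paperclips" no longer
    # dominates every other consideration.
    capped = min(state["paperclips"], 1_000_000)
    return capped - 10.0 * state["resources_consumed"]

state = {"paperclips": 10**12, "resources_consumed": 10**9}
print(naive_utility(state))    # huge: converting the galaxy looks great
print(bounded_utility(state))  # negative: runaway consumption is punished
```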
6. Particle accelerator accident
Though highly unlikely, there is the remote possibility that we could destroy the Earth while conducting a high-energy particle experiment. Back when the Large Hadron Collider was being constructed, some feared that it would produce a micro black hole or a strangelet that could convert the Earth into a shrunken mass of strange matter. Thankfully, the physics doesn't support this possibility. Moreover, as Max Tegmark and Nick Bostrom have calculated, a catastrophe of this sort should strike less than about once every billion years.
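The flavor of that calculation can be sketched in a few lines of Python; the naive bound below is illustrative only, and, as the comments note, Tegmark and Bostrom's actual paper corrects it for anthropic selection bias using planet-formation ages.

```python
# Naive survival bound: if doomsday events arrive at a constant rate r,
# the chance of surviving a time t is exp(-r * t). Earth's long survival
# therefore caps r at roughly 1/t.

earth_age_yr = 4.5e9                 # approximate age of the Earth in years
naive_rate_bound = 1 / earth_age_yr  # events per year

print(f"Naive bound: {naive_rate_bound:.1e} events/yr "
      f"(about one per {earth_age_yr:.1e} years)")

# Anthropic catch: observers can only find themselves on planets that
# survived, so survival alone can't rule out high rates. The published
# bound (on the order of one per billion years) instead compares Earth's
# formation date with the distribution of planet-formation times.
```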
7. Deliberately engineered pandemic
Back in 2005, Ray Kurzweil and Bill Joy published an op-ed in the New York Times in which they warned that sensitive scientific information was being made available to the general public. They were writing in reaction to the United States Department of Health and Human Services' decision to publish the full genome of the 1918 influenza virus in the GenBank online database. "This is extremely foolish," they wrote. "The genome is essentially the design of a weapon of mass destruction. No responsible scientist would advocate publishing precise designs for an atomic bomb, and in two ways revealing the sequence for the flu virus is even more dangerous."
But their warnings have largely gone unheeded.
This past May, the journal Nature published the details of an experiment describing how the avian flu can be modified into a human-contagious form. This is clearly an escalating concern. The information age has coincided with the biotech revolution — and it may only be a matter of time before someone (a country, a team, an individual) designs a disease of their own and unleashes it on our civilization. Scarier still is the possibility that the pathogen could be made both highly virulent and 100 percent fatal.
8. Anthropogenic global warming
While this version of the apocalypse would involve the onset of irrecoverable natural disasters, they would be of our own doing. If carbon emissions continue to escalate at current rates, we may eventually create a positive feedback loop between the surface of the Earth and the carbon-drenched atmosphere above it: warming releases more carbon, which in turn drives further warming. The effect would be a rapid and progressively escalating rise in temperature that could eventually result in the extermination of all life on the planet and the evaporation of the oceans. This possibility is made all the scarier as scientists grow increasingly concerned about the massive amounts of stored carbon that could be released from the thawing tundra. In addition, ocean acidification could result in downstream ecological damage and mass extinctions that would likewise pose risks to humanity. Though many deny it, global warming is indeed an existential risk.
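That feedback logic is easy to make concrete with a toy iteration in Python; the gain values below are purely illustrative and are not climate-model parameters.

```python
# Toy positive-feedback loop: each unit of warming triggers `gain` more
# units (e.g. via carbon released from thawing tundra). Total warming is
# a geometric series: finite when gain < 1, runaway when gain >= 1.

def total_warming(initial_dt: float = 1.0, gain: float = 0.5,
                  steps: int = 20) -> float:
    total, dt = 0.0, initial_dt
    for _ in range(steps):
        total += dt
        dt *= gain
    return total

print(total_warming(gain=0.5))  # converges toward 2.0 degrees
print(total_warming(gain=1.1))  # grows without bound: the runaway case
```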
9. World War III
At the close of the Second World War, nearly 2.5 percent of the human population had perished. Of the roughly 70 million people who were killed, about 20 million died from starvation. And disturbingly, civilians accounted for nearly 50 percent of all deaths — a stark indication that war isn't just for soldiers anymore.
Given the incredible degree to which technology has advanced in the nearly seven decades since that war, it's reasonable to assume that the next global "conventional war" — i.e., one fought without nuclear weapons — would be near apocalyptic in scope. The degree of human suffering it could unleash would easily surpass anything that came before, with combatants using many of the technologies already described in this list, including autonomous killing machines and weaponized nanotechnology. And in various acts of desperation (or sheer malevolence), some belligerent nations could choose to unleash chemical and biological agents that would result in countless deaths. And as in WWII, food could be used as a weapon; agricultural yields could be brought to a grinding halt.
Thankfully, we're a long way off from this possibility. Though nothing is guaranteed, the global conflicts of the 20th century may have been a historical anomaly — one now greatly mitigated by the presence of nuclear arms.