Prompted by a recent article in Aeon Magazine warning of the threat posed by advanced artificial intelligence, Kristin Centorcelli of SF Signal put together an impressive panel of renowned science fiction authors to get their opinions on the subject.
If you haven’t read the Aeon article, you really should. It was one of the more important think pieces published on the subject in quite some time.
As the Future of Humanity Institute’s David Dewey noted in the piece, “If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries...much faster,” but an AI “might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels.”
Indeed, the threat of an AI run amok
To see what the science fiction community has to say about all this, Centorcelli invited a number of writers, including Larry Niven, Karl Schroeder, Madeline Ashby, Wesley Chu, Guy Hasson, Gregg Rosenblum, James Lovegrove, Guy Haley, Jason M. Hough, James K. Decker, and Neal Asher.
Somewhat surprisingly, most responses were critical — and even a bit dismissive — of what they perceived as a “sky is falling” tone. For many writers, the threat is neither real or properly contextualized. But some definitely see problems on the horizon.
Here’s a quick taste what they had to say:
Wesley Chu:
Yes, future apocalyptic extinction sucks and sounds pretty unpleasant, but if I may, when was the last time any futurist’s prediction actually came true? They predicted flying cars in every family’s garage back in the 1920s. Nearly a hundred years later, cars aren’t drastically different than they were since the days of the Model T. We still don’t have a moon base, and my cleaning lady is composed of skin, bones, and blood, albeit I admit she sounds like a robot when she talks. Hell, we can’t even get a guy to Mars let alone the next solar system. We can’t even cure the common cold. Basically, the track record for futurists kind of suck. And the further out we get in the predictions, the less likely any of them will hit their mark.
Karl Schroder:
Anderson et al. have suffered a failure of imagination. They’ve succeeded in imagining artificial intelligence but failed to imagine the more important innovation, which would be Artificial Desire. Once you’ve pictured AD, it becomes immediately obvious that the ‘problem’ of autonomous AI is no problem at all. –Or, rather, an autonomous self-interested AI is a completely avoidable design failure.
Gregg Rosenblum:
I have to say, at the risk of sounding wishy-washy, that I think we’re going to get a mixed bag of positives and negatives from AI technology. We’re going to have bots defusing land mines and fighting fires but also dropping bombs from unmanned drones. We’ll probably have AI cars driving without human guidance (we’ve already got self-parking cars, right?), but we’re also going to have an interesting, “transhumanism” cyborg-like blurring of the lines between technology and humanity. (Google Glass is just the tip of the iceberg—how many of us, for example, if we could have a comm. chip implanted in us that acted as a smart phone, would jump at the chance?)
James Lovegrove:
The problems may come if we somehow generate an AI that is so far above our ways of thinking that it becomes unknowable. Then we’re looking at a “god AI” whose mental processes are so alien to us that all we can do is bow down in subjection before it and venerate it, in the hope that it won’t become a vengeful deity and smite us all. I can easily foresee churches springing up full of worshippers of this AI and a priest caste seizing power and holding sway by being able to – or at least trying to – interpret the mind and meaning of our new computer deity. Perhaps it’ll promise us a virtual reality afterlife if we behave. The lucky, saved few will have their brain patterns uploaded into a hard-drive heaven and live for eternity as digital souls.
Guy Haley:
I reckon a greater danger comes from unthinking machines, set loose to do a mindless task, that rather like the brooms in The Sorcerer’s Apprentice, cannot be stopped. The ecophagy “gray goo” scenario from Eric Drexler’s novel Engines of Creation or the robots sent to terraform Mars that end up disassembling it in Stephen Baxter’s Evolution.
James K. Decker:
If we’re talking about a true intelligence, some kind of self-aware network of synthetic neurons and not some kind of ‘human simulation’, I don’t see how we could have the slightest idea what it might do once it became conscious. We’d be interacting with a completely inhuman intelligence, free of empathy, or even an understanding of what life and death are. The things that are core to us as humans would mean nothing to a being like that and so given the chance to act in our world, we could have no way of guessing what it might decide to do. Even if it were somehow keyed to be beneficial to us, taking the “maximizing human happiness” example from the original question, a machine intelligence might decide the optimal way to do this would be to keep every human immobilized, and hooked up to a feeding tube with a wire running current to our pleasure centers. That would make every human happy for their entire lives, and without the ability to understand why that would be horrible it might seem like the most efficient course of action.
Neal Asher:
Yes, our computers are able to process so much more every day but AIs they are not. And if they suddenly do turn into demigods, how exactly are they going to change the world? It’s all very well having vast intelligence but if you can’t even pick up a screwdriver it isn’t going to do much good. Sorry to be blunt, but go ask Stephen Hawking about that.
There’s tons more to this discussion at SF Signal.
Image: Shutterstock/agsandrew.