Will AI Kill Us All?

Elon Musk, CEO of Tesla Motors and SpaceX, recently warned on Twitter that superintelligent computers could one day exterminate all of humanity.

“We need to be super careful with AI,” he said in one tweet. “Potentially more dangerous than nukes.”

Another tweet read: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

The technology entrepreneur warned of the downfall of our species after reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, a professor of philosophy at the University of Oxford, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

Bostrom, founding director of the Future of Humanity Institute, argues that the first superintelligent entity may well eliminate our species. Barrat goes further, suggesting that even a superintelligence created with good intentions would still be inclined to commit genocide.

“Without meticulous countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goal,” he writes.

The fear that our creations will rise up and annihilate us, or at least rule over us, is a persistent theme in science fiction, most famously in the Terminator films. Isaac Asimov, who devised the Three Laws of Robotics, countered this paranoia decades earlier. The first and most important law is that a robot must never harm a human being. The second is that it must obey human orders, and the third that it must preserve its own existence, with each law giving way to the ones above it.
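What makes Asimov’s scheme interesting is that strict precedence: a lower law only gets a say when no higher law is violated. As a rough illustration of the ordering, and nothing a real robot could actually run, here is a minimal Python sketch; the predicates harms_human, disobeys_order, and endangers_self are hypothetical placeholders, since deciding whether an action would actually harm a human is precisely the hard, unsolved part.

```python
# Illustrative only: Asimov's Three Laws as a strict precedence check.
# The three predicates are hypothetical stand-ins for judgments no
# current system can reliably make.

def harms_human(action: dict) -> bool:
    """Hypothetical: would this action injure a human being?"""
    return action.get("harms_human", False)

def disobeys_order(action: dict) -> bool:
    """Hypothetical: would this action violate a human order?"""
    return action.get("disobeys_order", False)

def endangers_self(action: dict) -> bool:
    """Hypothetical: would this action destroy the robot?"""
    return action.get("endangers_self", False)

def permitted(action: dict) -> bool:
    # First Law dominates everything: never harm a human.
    if harms_human(action):
        return False
    # Second Law is consulted only once the First is satisfied: obey orders.
    if disobeys_order(action):
        return False
    # Third Law comes last: preserve yourself, but only if the
    # first two laws are already satisfied.
    if endangers_self(action):
        return False
    return True

if __name__ == "__main__":
    # Self-preservation never justifies harming a human.
    print(permitted({"harms_human": True}))      # False
    # An action violating no law is allowed.
    print(permitted({}))                          # True
```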

Ray Kurzweil, Google’s director of engineering, believes that computers will become more intelligent than humans by 2029, a point futurologists call the “Singularity.” But he argues in The Age of Spiritual Machines that AIs will remain subservient to humans.

Time will tell!