Stephen Hawking has made some bold predictions about the dangers of artificial intelligence (AI) in the past.
Now, in a recent Reddit interview, the world-renowned physicist has suggested that a survival instinct in future superintelligent machines could be bad news for humankind.
When a Reddit user asked whether or not AI could have the drive to survive and reproduce like biological organisms do, Hawking answered:
An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
It's a terrifying prospect, one that sounds a lot like HAL, the malevolent computer from "2001: A Space Odyssey." But based on Tech Insider's extensive interviews with career AI researchers, Hawking's frightening suggestion may not hold up.
The experts we spoke with doubt the possibility of an AI survival instinct that's at odds with humans. For one, as AI researcher Yoshua Bengio points out, humans build machines and can control what is and isn't encoded in their algorithms.
"Evolution gave us an ego and a self preservation instinct because otherwise we wouldn't have survived," Bengio told Tech Insider in an earlier interview. "We were evolved by natural selection, but AIs are built by humans."
Thomas Dietterich, an intelligent systems researcher at Oregon State University, expanded on this idea in an email to Tech Insider. Humans need another human to create offspring, he wrote, which "leads to a strong drive for survival and reproduction."
"In contrast, AI systems are created through a complex series of manufacturing and assembly facilities with complex supply chains — so such systems will not be under the same survival pressures," Dietterich wrote. "AI systems may be more like social bees who live in a hive. The hive may find it advantageous to sacrifice many of its individual members in order to achieve long-term survival."
Toby Walsh, an AI professor at National ICT Australia (NICTA), echoed the idea that computers won't develop goals or drives unless a human gives them those goals first, even in the case of systems that learn to improve on their own.
"Computers have no wishes and desires — they don't wake up in the morning and want to do things,"Walsh told Tech Insider. "The Jeopardy-playing IBM supercomputer Watson never woke up and said, 'Ah, I'm bored of playing Jeopardy! I want to play another game today.' That's just not in its code, and that's not in the way that we write programs today."