Elon Musk is terrified of artificial intelligence (AI). The founder of SpaceX and Tesla Motors predicts it will soon be "potentially more dangerous than nukes," and he recently gave $10 million to research intended to "keep AI beneficial."
Stephen Hawking has likewise warned that "the development of full AI could spell the end of the human race."
Musk and Hawking don't fear garden-variety smartphone assistants like Siri.
They fear superintelligence—when AI outsmarts people (and then enslaves and slaughters them).
But if there's a looming AI Armageddon, Silicon Valley remains undeterred. As of January, as many as 170 startups were actively pursuing AI. Facebook has recruited some of the field's brightest minds for a new AI research lab, and Google paid $400 million last year to acquire DeepMind, an AI firm.
The question then becomes: Are software companies and venture capitalists courting disaster? Or are humankind's most prominent geeks false prophets of end times?
Surely, creating standards for the nascent AI industry is warranted, and will be increasingly important for, say, establishing the ethics of self-driving cars. But an imminent robot uprising is not a threat.
The reality is that AI research and development is tremendously complex. Even intellects like Musk and Hawking don't necessarily have a solid understanding of it. As such, they fall back on specious assumptions, drawn more from science fiction than the real world.
Of those who actually work in AI, few are particularly worried about runaway superintelligence. "The AI community as a whole is a long way away from building anything that could be a concern to the general public," says Dileep George, co-founder of Vicarious, a prominent AI firm. Yann LeCun, director of AI research at Facebook and director of the New York University Center for Data Science, stresses that the creation of human-level AI is a difficult—if not hopeless—goal, making superintelligence moot for the foreseeable future.
AI researchers are not, however, free from all anxieties. "What people in my field do worry about is the fear-mongering that is happening," says Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. Along with confusing the public and potentially turning away investors and students, Bengio says, "there are crazy people out there who believe these claims of extreme danger to humanity. They might take people like us as targets."
The most pressing threat related to AI, in other words, might be neither artificial nor intelligent. And the most urgent task for the AI community, then, is addressing the branding challenge, not the technological one. Says George: "As researchers, we have an obligation to educate the public about the difference between Hollywood and reality."
This article, written by Erik Sofge, was originally published in the March 2015 issue of Popular Science, under the title "Artificial Intelligence Will Not Obliterate Humanity."