Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity

We've previously reported on the realistic potential for malicious artificial intelligence to wreak havoc on humanity's way of life. Physicist Stephen Hawking agrees it's worth worrying about.

Current artificial intelligence is nowhere near advanced enough to do sci-fi-movie-style harm, but its continued development has given rise to a number of theories about how it may ultimately be mankind's undoing.

Writing in The Independent, Hawking readily acknowledges the good that comes from such technological advancements:

Recent landmarks such as self-driving cars, a computer winning at "Jeopardy!," and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

But he keeps the negatives close to mind, writing that "such achievements will probably pale against what the coming decades will bring":

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

A scientist named Steve Omohundro recently wrote a paper that identifies six different types of "evil" artificially intelligent systems and lays out three ways to stop them:

  • To prevent harmful AI systems from being created in the first place. We're not yet at the point where malicious AI is being created, but careful programming with a Hippocratic emphasis ("First, do no harm.") will become increasingly important as AI technologies improve.
  • To detect malicious AI early in its life before it acquires too many resources. This is a matter of simply paying close attention to an autonomous system and shutting it down when it becomes clear that it's up to no good.
  • To identify malicious AI after it's already acquired lots of resources. This quickly approaches sci-fi nightmare territory, and it might be too late at this point.
