
Scientists have created a murder-obsessed 'psychopath' AI called Norman — and it learned everything it knows from Reddit

  • Researchers at MIT have programmed an AI using exclusively violent and gruesome content from Reddit.
  • They called it "Norman."
  • As a result, Norman only sees death in everything.
  • This isn't the first time an AI has been turned dark by the internet — it happened to Microsoft's "Tay" too.


Some people fear artificial intelligence, maybe because they've seen too many films like "Terminator" and "I, Robot" in which machines rise against humanity, or perhaps because they spend too much time thinking about Roko's Basilisk.

As it turns out, it is possible to create an AI that is obsessed with murder.

That's what scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan did at the Massachusetts Institute of Technology: they trained an AI algorithm by exposing it only to gruesome and violent content from Reddit, and called it "Norman."

Norman was named after Norman Bates, the character from "Psycho," and "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT.

The scientists tested Norman to see how it would respond to inkblot tests: the ambiguous ink pictures psychologists sometimes use to help determine personality characteristics or emotional functioning.

In the first inkblot, a normally programmed AI saw "a group of birds sitting on top of a tree branch." Norman, however, saw "a man is electrocuted and catches to death."

When the normal AI saw a black and white bird, a person holding an umbrella, and a wedding cake, Norman saw a man getting pulled into a dough machine, a man getting killed by a speeding driver, and "man is shot dead in front of his screaming wife."

"Norman only observed horrifying image captions, so it sees death in whatever image it looks at," the researchers told CNNMoney.

The internet is a dark place, and other AI experiments have shown how quickly things can turn when an AI is exposed to its worst places and people. Microsoft's Twitter bot "Tay" had to be shut down within hours of its launch in 2016 because it quickly started spewing hate speech and racial slurs and denying the Holocaust.

But not all is lost for Norman. The team believes it can be retrained to have a less "psychopathic" point of view by learning from human responses to the same inkblot tests. AI can also be used for good: last year MIT created an algorithm called "Deep Empathy" to help people relate to victims of disaster.

None of this has stopped people on the internet from freaking out, though.

