Google's artificial neural networks were built to recognize specific objects, like cars or dogs, in images. Recently, Google's engineers turned the networks upside down, feeding them random images and static in a process they call "inceptionism."
Along the way, they discovered their algorithms can turn almost anything into trippy images of knights with dog heads and pig-snails.
Now computer programmers across the internet are getting in on the "inceptionism" fun after Google released its AI code to the public. The open-source networks are available on GitHub for anyone with the know-how to download, use, and tweak them.
Gathered under the Twitter hashtag #deepdream, the resulting images range from amusing to deeply disturbing. One user turned the already dystopian world of "Mad Max: Fury Road" into a car chase Salvador Dali could only dream of.
Mad Max's face is transformed into a many-eyed monster with the chin of a dog, while the guitar now spews out a dog fish instead of flames.
MAD MAX: FURY ROAD #deepdream pic.twitter.com/8P6ZYf5Dab
— ゴッドスコーピオン (@GoddoSukoupion) July 4, 2015
The AI networks are composed of "10 to 30 stacked layers of artificial neurons." On one end, the input layer is fed whatever image the user chooses. The lower layers look for basic features, like the edges of objects.
Higher layers look for increasingly complex features, and the final layer makes a decision about what it's looking at.
These networks are usually trained with thousands of images depicting the object they're supposed to be looking for, whether it's bananas, towers, or cars.
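For readers curious about what such a stack looks like in code, here's a minimal sketch of a small stacked classifier in PyTorch. It is not Google's actual model; the TinyClassifier name, the layer sizes, and the training snippet are illustrative stand-ins.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A toy stacked network: simple features first, a decision last."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Lower layers: respond to basic features such as edges.
        self.lower = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Higher layers: combine simple features into more complex ones.
        self.higher = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Final layer: decides what the image shows.
        self.decide = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.higher(self.lower(x))
        return self.decide(x.flatten(1))

# One training step: in practice, thousands of labeled photos of the
# target objects ("dog", "car", ...) are fed through loops like this.
model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)      # stand-in batch of photos
labels = torch.randint(0, 10, (8,))       # stand-in labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```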
Many of the networks are producing images of "puppy-slugs," a strange hybrid of dog faces and long, slug-like bodies. That's because those networks were trained to recognize dogs and other animals.
Here's what a galaxy would look like if it was made of dog heads.
A rare photography of The Puppyslug Nebula from the Hubble Telescope.
#deepdream pic.twitter.com/YjslsSlYAo
— Devine Lu Linvega (@aliceffekt) July 2, 2015
"The network that you see most people on the hashtag [use] is a single network, it's a fairly large one," said Samim Winiger, a computer programmer and game developer. "And why you see so many similar 'puppyslugs' as we call them now, is it's one type of network we're dealing with in most cases. It's important to know there's many more out there."
Duncan Nicoll's half-eaten sprinkle donut was transformed into something much less appetizing once Google's AI was done with it.
An intrepid user can emphasize particular features in an image by running it through the network, or even through a single layer, multiple times.
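Here's a rough sketch of that feature-amplification loop, assuming PyTorch's pretrained GoogLeNet as a stand-in for the network most people are using. The amplify() function and the choice of layer are our own illustration, not Google's released code.

```python
import torch
from torchvision import models

# Pretrained GoogLeNet as a stand-in; freeze its weights.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

def amplify(img, layer, steps=20, lr=0.05):
    """Nudge the image so the chosen layer responds more strongly."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        model(img)
        # Objective: how strongly the chosen layer fires. Ascending this
        # gradient exaggerates whatever the layer already "sees".
        loss = acts["out"].norm()
        loss.backward()
        img.data += lr * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
    handle.remove()
    return img.detach()

# Run it once for a mild effect; feed the output back in for a stronger one.
dreamed = amplify(torch.rand(1, 3, 224, 224), model.inception4c)
```

Repeating the loop, or feeding the output back in as a new input, keeps strengthening the same features, which is how dog faces end up everywhere.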
"Each layer has a unique representation of what [for example] a cat might look like," said Roelof Pieters, a data and AI scientist who rewrote the code for videos and ran a clip of "Fear and Loathing in Las Vegas" through the network.
"Some of these neurons in the neural network are primed toward dogs, so whenever there's something that looks like a dog, these neurons ... very actively prime themselves and say ahh, I see a dog. Let's make it like a dog."
Networks trained to search for faces and eyes created the most baffling images from seemingly innocuous photos.
"Yeah well, it did get a little weird... but I would totally go back."#deepdreampic.twitter.com/KAz9goRm0T
— John Mendonca (@johnmendonca) July 2, 2015
The networks were also taught to look for inanimate objects like cars. Below, Winiger turned the National Security Agency headquarters into a black double-decker bus.
#deepdream NSA Headquarters. We all knew. pic.twitter.com/K7sTwERQCM
— samim (@samim) July 2, 2015
Many more images are beyond description. You'd have to see them yourself.
This creeps me out more than it should @mtyka @317070 @sedielem #deepdream pic.twitter.com/CN0H0n0fa5
— c0ldW1r3 (@hutstaender) July 2, 2015
#deepdream stockphotography, (c) gettyimages. Generative Copyright? Get ready for a interesting debate. pic.twitter.com/wLu0P5C37v
— samim (@samim) July 2, 2015
ok now this creeps me out #deepdream pic.twitter.com/N50HTFv5IA
— Некстджен и Усиление (@turbojedi) July 2, 2015
Winiger also tweaked the code to work on GIFs; his version is available on GitHub. Here, a volcano spews dog heads into the atmosphere.
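A GIF version can be as simple as running every frame through the network and reassembling them. The sketch below assumes the hypothetical amplify() function and model from the earlier example, plus Pillow for the GIF handling; it is illustrative rather than Winiger's actual code.

```python
from PIL import Image, ImageSequence
from torchvision.transforms import functional as F

# Assumes amplify() and the GoogLeNet model from the earlier sketch.
gif = Image.open("volcano.gif")                  # hypothetical input file
frames = []
for frame in ImageSequence.Iterator(gif):
    tensor = F.to_tensor(frame.convert("RGB")).unsqueeze(0)
    out = amplify(tensor, model.inception4c).squeeze(0).clamp(0, 1)
    frames.append(F.to_pil_image(out))

# Reassemble the processed frames into an animated GIF.
frames[0].save("volcano_dream.gif", save_all=True,
               append_images=frames[1:], loop=0)
```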
With Winiger's help, I was able to test the network on a photo of myself drinking tea in an antique shop.
This lower layer of the AI network seems to be primed to search for holes and eyes, and it inadvertently added dog faces to the background.
This image came from an upper layer that looked for faces, pagodas, and birds. Notice the grumpy little man in what looks like a space suit appearing in the bottom right.
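In other words, which creatures emerge depends on which layer you target. With the hypothetical amplify() sketch from earlier, switching layers is a one-line change (the layer names below are torchvision's GoogLeNet modules, used as stand-ins):

```python
# Reuses amplify() and model from the earlier sketch (illustrative).
img = torch.rand(1, 3, 224, 224)        # stand-in input photo
low = amplify(img, model.inception3b)   # lower layer: textures, holes, eyes
high = amplify(img, model.inception5b)  # upper layer: whole faces and objects
```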
Winiger and Pieters both hope that the images from #deepdream will have people talking and learning about AI visual systems as they become more integrated into our daily lives.
"One of the things I find extremely important right now is to raise the debate and awareness of these systems," Winiger said. "We've been talking about computer literacy for 10 to 20 years now, but as intelligent systems are really starting to have an impact on society the debate lags behind. There's almost no better way than the pop culture approach to get the interest, at least, sparked."