Google's image recognition programs are usually trained to look for specific objects, like cars or dogs.
But now, in a process Google's engineers are calling "inceptionism," the company has fed these artificial intelligence networks random images of landscapes and static noise.
What the networks produce sheds light on how AI perceives the world, and on the possibility that computers can be creative, too.
The AI networks churned out some insane images and took the engineers on a hallucinatory trip full of knights with dog heads, a tapestry of eyes, pig-snails, and pagodas in the sky.
Engineers trained the network by "showing it millions of training examples and gradually adjusting the network parameters," according to Google's research blog. The image below was produced by a network that was taught to look for animals.
Each of Google's AI networks consists of a hierarchy of layers, usually "10 to 30 stacked layers of artificial neurons." The first layer, called the input layer, detects very basic features, like the edges of objects. The engineers found that this layer tended to produce strokes and swirls in objects, as in the image of a pair of ibises below.
As an image progresses through each layer, the network will look for more complicated structures, until the final layer makes a decision about the objects in the image. This AI searched for animals in a photo of clouds in a blue sky and ended up creating animal hybrids.
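The process described above amounts to a feedback loop: run an image through the network, then adjust the image itself, not the network's weights, so that whatever a chosen layer responds to gets amplified. The toy two-layer network below is a hypothetical sketch of that loop, not Google's actual code; the layer sizes, random weights, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: two layers with fixed random weights.
# (Sizes are made up; Google's networks have 10-30 layers, not 2.)
W1 = rng.standard_normal((16, 8))   # first layer: simple, edge-like features
W2 = rng.standard_normal((4, 16))   # later layer: more complicated structures

def layer_response(image):
    """Mean response of the later layer to a flattened 8-pixel 'image'."""
    hidden = np.maximum(0.0, W1 @ image)   # ReLU over first-layer features
    return (W2 @ hidden).mean()

def inceptionism_step(image, lr=0.05):
    """One gradient-ascent step on the IMAGE (not the weights),
    amplifying whatever the chosen layer already detects in it."""
    mask = (W1 @ image > 0).astype(float)    # which ReLU units are active
    grad = W1.T @ (W2.mean(axis=0) * mask)   # d(response) / d(image)
    return image + lr * grad

image = rng.standard_normal(8)               # start from "static noise"
before = layer_response(image)
for _ in range(50):
    image = inceptionism_step(image)
after = layer_response(image)
print(after > before)
```

After repeated steps the noise input excites the layer more and more strongly; on a real network trained on animals, that same loop is what turns clouds into animal hybrids.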
See the rest of the story at Business Insider