When I interview artificial intelligence researchers for Tech Insider stories, the conversations almost always turn to science fiction.
I've seen a few movies about artificial intelligence (AI), like "The Terminator" and "The Matrix," but I hadn't seen "2001: A Space Odyssey," which many of the researchers I've spoken with consider the pinnacle of sci-fi. (Marvin Minsky — one of the pioneers of AI — was even an adviser to the movie's production team.)
So I decided to spend a weekend binge-watching every acclaimed AI movie I'd missed. Taking tips from colleagues and The Guardian's list of the top movies about AI, I lined up seven films.
I didn't begin with any expectations or criteria, but by the end — red-eyed and suffering from a little bit of cabin fever — I realized that one movie on my list offered the most realistic vision of the future of AI, and it was a cartoon.
Read on to see how these iconic titles jibe with modern science. (Warning: Spoilers ahead.)
2001: A Space Odyssey (1968)
The movie: Astronaut David Bowman and his crewmates aboard the Discovery One are headed to Jupiter in search of strange black monoliths — devices that appear at turning points throughout the human species' evolution. The ship's computer, Hal 9000, has a lot of responsibilities, including piloting the ship and maintaining life support for astronauts in hibernation.
Though Hal insists he is "by any practical definition of the words, foolproof and incapable of error," he makes a mistake and two astronauts conspire to turn him off. Little do they know that Hal has a few tricks in his memory banks.
The technology: Hal has a wide range of tasks, which makes him an artificial general intelligence (AGI) — AI that matches or exceeds human-level intelligence across all the fields of expertise a human could have. AGI would take a huge amount of computation and energy. According to Scientific American, AI researcher Hans Moravec estimates that it would require at least "100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain."
Is it possible?: The Fujitsu K computer already outpaces this estimate, performing 10 quadrillion computations per second. Even so, it took the K computer about "40 minutes to complete a simulation of one second of neuronal network activity in real time," according to CNET. Moravec writes that "at the present pace, only about 20 or 30 years will be needed to close the gap." So Hal is possible, just not yet.
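To put those numbers in perspective, here's a quick back-of-the-envelope calculation using only the figures quoted above (the variable names are mine, for illustration):

```python
# Back-of-the-envelope comparison of Moravec's brain estimate
# with the Fujitsu K computer, using only the figures quoted above.

moravec_estimate = 100e12  # 100 million MIPS = 100 trillion instructions/second
k_computer = 10e15         # 10 quadrillion computations per second

# On raw throughput, the K computer exceeds Moravec's estimate 100-fold.
headroom = k_computer / moravec_estimate
print(f"K computer vs. Moravec estimate: {headroom:.0f}x")

# Yet simulating 1 second of neuronal activity took ~40 minutes of wall-clock
# time, i.e. the simulation ran ~2,400x slower than real time. Raw instruction
# counts clearly aren't the whole story.
slowdown = 40 * 60  # seconds of computing per simulated second
print(f"Simulation slowdown: {slowdown}x slower than real time")
```

The gap between those two numbers is the point: having 100 times the estimated raw horsepower still left the simulation thousands of times slower than a real brain.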
Hal also has human emotions — pride, fear, and a survival instinct — but I wasn't sure where they originated. Humans have emotions because of evolutionary pressures: emotions like fear and jealousy, according to the New York Times, may have helped us hoard scarce resources for ourselves.
An AI, on the other hand, wouldn't develop emotions unless it was programmed to replicate them. The humans may have given Hal a survival instinct, but surely they wouldn't have programmed him to survive at the expense of his human crewmates.
The takeaway: Watching Stanley Kubrick's stunning masterpiece was like watching a living painting. But it's also a warning: any AGI we create shouldn't prioritize its own survival over that of the humans it serves.
WarGames (1983)
The movie: Matthew Broderick plays a high school hacker named David Lightman, who mistakenly hacks into a government computer in charge of the nuclear missile launch systems at the North American Aerospace Defense Command (NORAD). Thinking he's hacked into a games company, Lightman begins playing as the Soviet Union in what he believes is a simulation game called Global Thermonuclear War, unwittingly setting off a series of events that threaten to start World War III.
The technology: The government computer, called the War Operations Plan Response (WOPR), learns from constantly running military simulations, and can autonomously target and fire nuclear missiles.
Is it possible?: WOPR combines two technologies that exist right now, so I'd say it's possible with some time and effort — though it may not be a good idea. Like WOPR, DeepMind's deep neural net system, called deep Q-networks (DQN), learns to play video games and gets better with time. According to DeepMind's Nature paper, the DQN was able to "achieve a level comparable to that of a professional human games tester across a set of 49 games."
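DeepMind's actual system pairs a deep neural network with reinforcement learning, but the core idea — learning by trial and error which action pays off — can be sketched with plain tabular Q-learning. This is a toy stand-in I wrote for illustration, not DeepMind's code; DQN replaces the table below with a neural network reading raw screen pixels:

```python
import random

# Toy tabular Q-learning on a 1-D track: states 0..4, goal at state 4.
# The agent learns from reward alone that stepping right wins the "game."
N_STATES = 5
ACTIONS = [1, -1]  # step right, step left (ties in max() break toward right)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a2: q[(s, a2)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy heads right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a2: q[(s, a2)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing here was told "go right"; the preference emerges purely from the reward signal, which is the same feedback loop — scaled up enormously — that let DQN master Atari games and that lets WOPR refine its war simulations in the film.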
Autonomous weapons that can target and fire on their own also exist right now. One frightening real-life example is the Samsung SGR-1, which patrols the demilitarized zone between North and South Korea and can fire without human assistance. These are exactly the kind of self-targeting weapons that nearly start World War III in the film.
The takeaway: Autonomous weapons exist right now, but I can't think of any government willing to put the most dangerous weapons known to man in the hands of an easily hacked computer that can't clearly differentiate between a simulation and firing real weapons. Still, Tesla CEO Elon Musk, physicist Stephen Hawking, and over 16,000 AI researchers don't want to take that chance, and recently urged the United Nations to ban autonomous weapons.
WOPR also has a clear set of goals: win the game at any cost, even if that means destroying humanity. It's a textbook illustration of an AI that could wipe out our species — the kind of danger philosopher Nick Bostrom calls an "existential risk."
Ghost in the Shell (1995)
The movie: In 2029, almost everyone in Japan is connected to the cloud via cybernetic android bodies, including detective Major Kusanagi. Tasked with finding a hacker named the Puppet Master, she learns that the hacker was originally a computer program that gained sentience. Over time, the Puppet Master learned about the nature of his existence, and his inability to reproduce or have a normal life.
The technology: In "Ghost in the Shell," technology has advanced to the point that memories can be hacked and faked and robots can build other robots. Major Kusanagi is a "ghost" — a human mind uploaded to and accessible through the cloud using her artificial body. She has superhuman strength and invisibility. She can also speak telepathically, access information, and even drive cars using her mind's access to the cloud.
Is it possible?: The idea of humans accessing the internet using just their minds is a well-trodden trope. Futurist and Google researcher Ray Kurzweil predicted that we'll be able to communicate telepathically using the cloud by 2030, just a year after the events of "Ghost in the Shell" take place.
Kusanagi's artificial body moves like a human body, but robots today still can't walk on two legs without collapsing midstep, as shown by the robots in the DARPA Robotics Challenge Finals. So that makes it pretty hard to believe that robots would be dexterous enough to be backflipping off high-rise buildings in just 15 years. On the other hand, MIT is currently building superstrong robots that can punch through walls, but these robots aren't autonomous — they're controlled by a human wearing an exoskeleton.
The takeaway: We'll probably have to wait more than 15 years for technology that will allow us to upload our minds into robotic bodies, but "Ghost in the Shell" brought up some very real ethical and safety concerns. For example: In the movie, a garbageman is convinced he's helping a criminal in exchange for regaining custody of his daughter. But he later learns that his memories have been faked — he never had a wife or a daughter. Could hackers implant false memories?
"Imagine when the internet is in your brain, if the NSA can see into your brain, if hackers can hack into your brain," Shimon Whiteson, an AI researcher at the University of Amsterdam, said.
The military is developing a brain implant that could restore memories and repair brain damage, so it's not too far-fetched to think these kinds of implants could be hacked.
See the rest of the story at Business Insider