Working roboticists need to engage with the public's sci-fi scenarios instead of dodging them.
I thought it'd be a cool story to interview academics and robotics professionals about the popular notion of a robot takeover, but four big names in the area declined to talk to me. A fifth person with robo street cred told me on background that people in the community fear that publicly talking about these topics could hurt their credibility, and that they think the topic has already been explained well enough.
This is a problem. A good roboticist should have a finger on the pulse of the public's popular conception of robotics and be able to speak to it. The public doesn't care about "degrees of freedom" or "state estimation and optimization for mobile robot navigation," but give a robot a gun and a mission, and they're enthralled.
More importantly, as I heard from the few roboticists who spoke to me on the record, there are real risks involved going forward, and the time to have a serious discussion about the development and regulation of robots is now.
Most people agree that the robot revolution will have benefits. People disagree about the risks.
Author and physicist Louis Del Monte told us that the robot uprising "won't be the 'Terminator' scenario, not a war. In the early part of the post-singularity world — after robots become smarter than humans — one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts. We'll see the machines as a useful tool."
But according to Del Monte, the real danger occurs when self-aware machines realize they share the planet with humans. They "might view us the same way we view harmful insects" because humans are a species that "is unstable, creates wars, has weapons to wipe out the world twice over, and makes computer viruses."
Frank Tobe, editor and publisher of the business-focused Robot Report, subscribes to Google futurist Ray Kurzweil's view of the singularity: that we're close to developing machines that can outperform the human mind, perhaps by 2045. He says we shouldn't take this lightly.
"I’ve become concerned that now is the time to set in motion limits, controls, and guidelines for the development and deployment of future robotic-like devices," Tobe told Business Insider.
"It’s time to decide whether future robots will have superpowers — which themselves will be subject to exponential rates of progress — or be limited to services under man’s control," Tobe said. "Superman or valet? I choose the latter, but I’m concerned that politicians and governments, particularly their departments of defense and industry lobbyists, will choose the former."
Kurzweil contends that as various research projects plumb the depths of the human brain with software (such as the Blue Brain Project, The Human Brain Project, and the BRAIN Initiative), humankind itself will be improved by offshoot therapies and implants.
"This seems logical to me," Tobe said. "Nevertheless, until we choose the valet option, we have to be wary that sociopathic behaviors can be programmed into future bots with unimaginable consequences."
Ryan Calo, assistant professor of law at the University of Washington with an eye on robot ethics and policy, does not see a machine uprising ever happening: "Based on what I read, and on conversations I have had with a wide variety of roboticists and computer scientists, I do not believe machines will surpass human intelligence — in the sense of achieving 'strong' or 'general' AI — in the foreseeable future. Even if processing power continues to advance, we would need an achievement in software on par with the work of Mozart to reproduce consciousness."
Calo adds, however, that we should watch for warnings leading up to a potential singularity moment. If we see robots become more multipurpose and contextually aware, they may be "on their way to strong AI," says Calo. That will be a tip that they're advancing to the point of danger for humans.
Calo has also recently said that robotic capability needs to be regulated.
Andra Keay, managing director of Silicon Valley Robotics, also doesn't foresee a guns a' blazin' robot war, but she says there are issues we should confront: "I don't believe in a head-on conflict between humans and machines, but I do think that machines may profoundly change the way we live, and unless we pay attention to the shifting economic and ethical boundaries, then we will create a worse world for the future," she said. "It's up to us."
In contrast to this, Jorge Heraud, CEO of agricultural robotics company Blue River Technology, offers a fairly middle-of-the-road point of view: "Yes, someday [robots and machines] will [surpass human intelligence]. Early on, robots/machines will be better at some tasks and (much) worse at others. It'll take a very long while until a single robot/machine will surpass human intelligence in a broad number of tasks. [It will be] much longer until it's better in all."
When asked if the singularity would look like a missing scene from "Terminator" or if it would be more subtle than that, Heraud said, "Much more subtle. Think C-3PO. We don't have anything to worry about for a long while."
Regardless of the risk, it shouldn't be controversial that we need to discuss and regulate the future of robotics.
Northwestern Law professor John O. McGinnis makes clear how we can win the robot revolution right now in his paper, "Accelerating AI":
Even a non-anthropomorphic human-level intelligence still could pose threats to mankind, but they are probably manageable threats. The greatest problem is that such artificial intelligence may be indifferent to human welfare. Thus, for instance, unless otherwise programmed, it could solve problems in ways that could lead to harm against humans. But indifference, rather than innate malevolence, is much more easily cured. Artificial intelligence can be programmed to weigh human values in its decision making. The key will be to assure such programming.
Long before any battle scenes ripped from science fiction actually take place, the real battle will be in the hands of the people building and designing artificially intelligent systems. Many of the same people who declined to be interviewed for this story are the ones who must stand up as heroes to save humanity from blockbuster science fiction terror in the real world.
Forget the missiles and lasers — the only weapons of consequence here will be algorithms and the human minds creating them.
***
We asked our interview subjects for book and movie recommendations that pertain to this topic. Their responses are below.
Ryan Calo: "I would recommend 'The Machine Stops' by E.M. Forster for an eerie if exaggerated account of where technology could take the human condition."
Frank Tobe: "The James Martin Institute for Science and Civilization at the University of Oxford produced a video moderated by Michael Douglas entitled 'The Meaning of the 21st Century' and wrote a book with the same title. It might be worth your time to watch the short version: 'Revolution in Oxford'."
Andra Keay: "I enjoy Daniel Wilson's books, but also the sci-fi of Octavia Butler and other writers who delve into the different inner lives that simple changes in biology create, whether human, alien, or robot."
Jorge Heraud: "'Star Wars'."