
An ex-Pentagon official thinks 'killer robots' need to be stopped


The dystopian war between robots and humans of the "Terminator" films is probably not going to happen, but there is still reason to worry about so-called "killer robots."

A new report by Paul Scharre of the Center for a New American Security argues that, as militaries develop semi- and fully autonomous weapons systems such as missiles and drone aircraft, they risk "potentially catastrophic consequences" if human controllers are taken out of the loop.

"Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to ‘p.m.’ instead of ‘a.m.,’ or any of the countless frustrations that come with interacting with computers, has experienced the problem of ‘brittleness’ that plagues automated systems," Scharre, a former Army Ranger who helped draft policies related to autonomous weapons systems for the Pentagon, writes in the report.

His main point: Automated systems can be very useful, but they are limited by their programming and lack the "common sense" a human would bring to unexpected situations.

Such was the case in 1983, when the skepticism of Stanislav Petrov, a Soviet military officer, was the safeguard that kept the Soviet Union from launching its missiles after an early-warning system reported five incoming missiles from the United States.

It was a computer error. A fully automated system, fed the same data, would have launched. But Petrov rightly judged that the system was malfunctioning.

"What would an autonomous system have done if it was in the same situation as Stanislav Petrov found himself on September 26, 1983? Whatever it was programmed to do."

Software error, cannot compute. Launch missile?
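To make the point concrete, here is a deliberately simple, hypothetical Python sketch of the brittleness problem. It is not from Scharre's report, and every name and number in it is invented for illustration: a rule-based system does exactly what its rules say, whether or not the input makes sense.

# Hypothetical illustration only -- not a real weapons system and not code
# from Scharre's report. A rule-based controller simply follows its rules;
# it has no way to ask whether a sensor reading is plausible.
def automated_response(detected_missiles: int, launch_threshold: int = 1) -> str:
    """Return the action a naive, fully automated system would take."""
    if detected_missiles >= launch_threshold:
        return "LAUNCH"      # the rule fires whether the data is real or not
    return "STAND DOWN"

# On September 26, 1983, the early-warning system reported five inbound
# missiles. The reading was a computer error, but the rule has no concept
# of "error," so it would have recommended launch.
print(automated_response(detected_missiles=5))   # -> LAUNCH

Petrov's reasoning, that a real first strike would involve far more than five missiles, is exactly the common-sense check a rule like the one above cannot express.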

Scharre evaluates a number of past failures involving humans and automated systems to illustrate his point. Humans were in the loop during the disasters at Three Mile Island and Fukushima, yet those rare accidents expose the problem with complex, tightly coupled systems.

At Fukushima, for instance, many of the safety features triggered by loss of power, flooding, and earthquakes worked as designed, but the engineers had not accounted for the possibility that all three could happen at the same time.


Engineers may be able to hypothesize and program a machine's response to nightmare scenarios, but on a "long enough time horizon," Scharre writes, "unanticipated system interactions are inevitable."

When a computer encounters something it isn't programmed to handle, plenty can go wrong. Most computer users know the infamous "blue screen of death" and constantly update their software to fix bugs, and security holes are often found in systems only after hackers have exploited them.

"Without a human in the loop to act as a fail-safe, the consequences of failure with an autonomous weapon could be far more severe than an equivalent semi-autonomous weapon," he writes.

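In software terms, the "human in the loop" Scharre describes amounts to a confirmation step in front of any irreversible action. The sketch below is hypothetical, with an invented function name and prompt rather than anything drawn from a real system: the machine may recommend, but only a person can authorize.

# Hypothetical sketch of a human-in-the-loop fail-safe: the machine can
# recommend an action, but an irreversible step requires a person to confirm.
def execute_with_human_failsafe(recommended_action: str) -> str:
    print(f"System recommends: {recommended_action}")
    answer = input("Operator, confirm? (yes/no): ").strip().lower()
    if answer == "yes":
        return f"EXECUTED: {recommended_action}"
    return "ABORTED by human operator"

# In the semi-autonomous case, a false alarm stops here, with the operator.
# Remove this check and the failure mode belongs to the machine alone.

Take that confirmation step out and, as Scharre argues, the consequences of any software failure fall entirely on whatever the machine was programmed to do.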
Scharre advocates a framework in which humans and machines work together, called "centaur warfighting." It's based on Garry Kasparov's model of "centaur," or advanced, chess, in which an artificially intelligent machine helps a human chess master think more sharply about the next move.

"The best chess players in the world are human-machine teams," he writes.
