Experts are worried that advancements in AI could threaten humanity

Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see “No, Experts Don't Think Superintelligent AI is a Threat to Humanity”).

After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom’s “main source of data on the advent of human-level intelligence” consists of surveys on the opinions of AI researchers. He then surveys the opinions of AI researchers, arguing that his results refute Bostrom’s.

It’s important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it’s important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, “It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.”

Thus, in our view, Etzioni’s article distracts the reader from the core argument of the book and directs an ad hominem attack against Bostrom under the pretext of disputing his survey results. We feel it is necessary to correct the record. One of us (Russell) even contributed to Etzioni’s survey, only to see his response completely misconstrued. In fact, as our detailed analysis shows, Etzioni’s survey results are entirely consistent with the ones Bostrom cites.

How, then, does Etzioni reach his novel conclusion? By designing a survey instrument that is inferior to Bostrom’s and then misinterpreting the results.

The subtitle of the article reads, “If you ask the people who should really know, you’ll find that few believe AI is a threat to humanity.” So the reader is led to believe that Etzioni asked this question of the people who should really know, while Bostrom did not. In fact, the opposite is true: Bostrom did ask people who should really know, but Etzioni did not ask anyone at all. Bostrom surveyed the top 100 most cited AI researchers. More than half of the respondents said they believe there is a substantial (at least 15 percent) chance that the effect of human-level machine intelligence on humanity will be “on balance bad” or “extremely bad (existential catastrophe).” Etzioni’s survey, unlike Bostrom’s, did not ask any questions about a threat to humanity.

Instead, he simply asks one question about when we will achieve superintelligence. As Bostrom’s data would have already predicted, somewhat more than half (67.5 percent) of Etzioni’s respondents plumped for “more than 25 years” to achieve superintelligence—after all, more than half of Bostrom’s respondents gave dates beyond 25 years for a mere 50 percent probability of achieving mere human-level intelligence. One of us (Russell) responded to Etzioni’s survey with “more than 25 years,” and Bostrom himself writes, of his own surveys, “My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates.”

Now, having designed a survey where respondents could be expected to choose “more than 25 years,” Etzioni springs his trap: he asserts that 25 years is “beyond the foreseeable horizon” and thereby deduces that neither Russell nor indeed Bostrom himself believes that superintelligent AI is a threat to humanity. This will come as a surprise to Russell and Bostrom, and presumably to many other respondents in the survey. (Indeed, Etzioni’s headline could just as easily have been “75 percent of experts think superintelligent AI is inevitable.”) Should we ignore catastrophic risks simply because most experts think they are more than 25 years away? By Etzioni’s logic, we should also ignore the catastrophic risks of climate change and castigate those who bring them up.

Contrary to the views of Etzioni and some others in the AI community, pointing to long-term risks from AI is not equivalent to claiming that superintelligent AI and its accompanying risks are “imminent.” The list of those who have pointed to the risks includes such luminaries as Alan Turing, Norbert Wiener, I.J. Good, and Marvin Minsky. Even Oren Etzioni has acknowledged these challenges. To our knowledge, none of these ever asserted that superintelligent AI was imminent. Nor, as noted above, did Bostrom in Superintelligence.

Etzioni then repeats the dubious argument that “doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.” The argument does not even apply to Bostrom, who predicts that success in controlling AI will result in “a compassionate and jubilant use of humanity’s cosmic endowment.” The argument is also nonsense. It’s like arguing that nuclear engineers who analyze the possibility of meltdowns in nuclear power stations are “failing to consider the potential benefits” of cheap electricity, and that because nuclear power stations might one day generate really cheap electricity, we should neither mention, nor work on preventing, the possibility of a meltdown.

Our experience with Chernobyl suggests it may be unwise to claim that a powerful technology entails no risks. It may also be unwise to claim that a powerful technology will never come to fruition. On September 11, 1933, Lord Rutherford, perhaps the world’s most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but “moonshine.” Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later. Surely it is better to anticipate human ingenuity than to underestimate it, better to acknowledge the risks than to deny them.

Many prominent AI experts have recognized the possibility that AI presents an existential risk. Contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent consciousness. Rather, the risk arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it. We invite the reader to support the ongoing efforts to do so.

Allan Dafoe is an assistant professor of political science at Yale University.

Stuart Russell is a professor of computer science at the University of California, Berkeley.
