
Afraid Of AI? Here's Why You Shouldn't Be


[Image: Pepper, a humanoid emotional robot from Japan]

Earlier in January, an organization called the Future of Life Institute issued an open letter on the subject of building safety measures into artificial intelligence (AI) systems.

The letter, and the research document that accompanies it, present a remarkably even-handed look at how AI researchers can maximize the potential of this technology.

Here's the letter at its most ominous, which is to say, not ominous at all:

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

And here was CNET's headline for its story about the letter:

Artificial intelligence experts sign open letter to protect mankind from machines

BBC News, meanwhile, ventured slightly further out of the panic room to deliver this falsehood:

Experts pledge to rein in AI research

I'd like to think that this is rock bottom.

Journalists can't possibly be any more clueless, or more callously traffic-baiting, when it comes to robots and AI. And readers have to get tired, at some point, of clicking on the same shrill headlines that quote the same non-AI researchers—Elon Musk and Stephen Hawking, to be specific—making the same doomsday proclamations.

Fear-mongering always loses its edge over time, and even the most toxic media coverage has an inherent half-life. But it never stops.

Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.

***

This is what it looks like to make a fool of yourself when covering AI.

Start by doing as little reporting as possible. In this case, that means reading an open letter released online, and not bothering to interview any of the people involved in its creation.

To use the CNET and BBC stories as examples, neither includes quotes or clarifications from the researchers who helped put together either the letter or its companion research document. This is a function of speed, but it's also a tactical decision (whether conscious or not). As with every story that centers on frantic warnings about apocalyptic AI, the more you report, the more threadbare the premise turns out to be.

Experts in this field tend to point out that the theater isn't on fire, which is no fun at all when your primary mission is to send readers scrambling for the exit.

The speedier and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither man's specialty. I'm mentioning them in particular because they've become the collective voice of AI panic.

They believe that machine superintelligence could lead to our extinction. And their comments to that effect have the ring of truth, because they come from brilliant minds with a blessed lack of media filters. If time is money, then the endlessly recycled quotes from Musk and Hawking are a goldmine for harried reporters and editors. What more context do you need than a pair of geniuses publicly fretting about the fall of humankind?

And that's all it takes to report on a topic whose stakes couldn't possibly be higher. Cut and paste from online documents, and from previous interviews, tweets and comments, affix a headline that conjures visions of skeletal androids stomping human skulls underfoot, and wait for the marks to come gawking.

That's what this sort of journalism does to its creators, and to its consumers. It turns a complex, and transformative technology into a carnival sideshow. Actually, that's giving most AI coverage too much credit. Carnies work hard for their wages. Tech reporters just have to work fast.

***

The story behind the open letter is, in some ways, more interesting than the letter itself. On January 2, roughly 70 researchers met at a hotel in San Juan, Puerto Rico, for a three-day conference on AI safety. This was a genuinely secretive event.

The Future of Life Institute (FLI) hadn't alerted the media or invited any reporters to attend, despite having planned the meeting at least six months in advance. Even now, the event's organizers won't provide a complete list of attendees. FLI wanted researchers to speak candidly, and without worry of attribution, during the weekend-long schedule of formal and informal discussions.

In a movie, this shadowy conference, hosted by an organization with a tantalizing name—and held in a tropical locale, no less—would have come under preemptive assault from some rampaging algorithmic reboot of Frankenstein's monster.

Or, in the hands of more patient filmmakers, the result would have been a first-act setup: an urgent call to immediately halt all AI research (ignored, of course, by a rebellious lunatic in a darkened server room). Those headlines from BBC News and CNET would have been perfectly at home on the movie screen, signaling the global response to a legitimately terrifying announcement.

In fact, the open letter from FLI is a pretty bloodless affair. The title alone—Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter—should manage the reader's expectations. The letter references advances in machine learning, neuroscience, and other research areas that, in combination, are yielding promising results for AI systems. As for doom and gloom, the only relevant statements are the aforementioned sentence about “potential pitfalls,” and this one:

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

That, in all honesty, is as dark as this open letter gets. A couple of slightly arch statements, buried within a document whose language is intentionally optimistic. The signatories are interested in “maximizing the societal benefit of AI,” and the letter ends with this call to action:

“In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.”

This is the document that news outlets are interpreting as a call to protect humanity from machines, and to rein in AI R&D. It's also proof that many journalists aren't simply missing the point, when it comes to artificial intelligence. They are lying to you.

***

The truth is, there are researchers within the AI community who are extremely concerned about the question of artificial superintelligence, which is why FLI included a section in the letter's companion document about those fears. But it's also true that these researchers are in the extreme minority.

And according to Bart Selman, a professor of computer science at Cornell, the purpose of the open letter was to tamp down the hysteria that journalists are trying to instill in the general public, while bringing up near-term concerns.

Some of these issues are complex and compelling. Will a mortgage company's machine learning system accidentally violate an applicant's privacy, and possibly even break the law, by digging too deep into his or her metadata? Selman isn't worried about rebellious algorithms, but about faulty or over-eager ones.

“These systems are often given fairly high level goals. So making sure that they don't achieve their goals by something dramatically different than you could anticipate are reasonable research goals,” says Selman. “The problem we have is that the press, the popular press in particular, goes for this really extreme angle of superintelligence, and AI taking over. And we're trying to show them that, that's one angle that you could worry about, but it's not that big of a worry for us.”

Of course, it's statements like that which, when taken out of context, can fuel the very fires they're trying to put out. Selman, who attended the San Juan conference and contributed to FLI's open letter and research document, cannot in good conscience rule out any possible future for the field of AI. That sort of dogmatic dismissal is anathema to a responsible scientist. But he also isn't above throwing a bit of shade at the researchers who seem preoccupied with the prospect of bootstrapped AI, meaning a system that suddenly becomes exponentially smarter and more capable.

“The people who've been working in this area for 20, 30 years, they know this problem of complexity and scaling a little better than people [who] are new to the area,” says Selman.

The history of AI research is full of theoretical benchmarks and milestones whose only barrier appeared to be a lack of computing resources. And yet, even as processor and storage technology has raced ahead of researchers' expectations, the deadlines for AI's most promising (or terrifying, depending on your agenda) applications remain stuck somewhere in the next 10 or 20 years.

I've written before about the myth of inevitable superintelligence, but Selman is much more succinct on the subject. The key mistake, he says, is in confusing principle with execution, and assuming that throwing more resources at a given system will trigger an explosive increase in capability.

“People in computer science are very much aware that, even if you can do something in principle, if you had unlimited resources, you might still not be able to do it,” he says, “because unlimited resources don't mean an exponential scaling up. And if you do have an exponential scale, suddenly you have 20 times the variables.”
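To make that distinction concrete, here is a toy illustration (mine, not Selman's or FLI's): a brute-force satisfiability check in Python. It can solve any instance in principle, but its search space doubles with every variable added, so doubling the hardware buys you exactly one more variable.

from itertools import product

def brute_force_satisfy(clauses, n_vars):
    """Try every one of the 2**n_vars truth assignments until one
    satisfies all clauses. A clause is a list of signed variable
    indices, e.g. [1, -3] means 'x1 OR NOT x3'."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# A tiny instance: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
print(brute_force_satisfy([[1, -2], [2, 3], [-1, -3]], n_vars=3))

# The "possible in principle" part never changes; the execution does.
for n in (20, 40, 60):
    print(f"{n} variables -> {2**n:,} assignments to check")

That, roughly, is the gap Selman is describing: the algorithm exists today, and no realistic amount of extra computing power makes the exponential curve go away.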

Bootstrapping AI is simultaneously an AI researcher's worst nightmare and dream come true—instead of grinding away at the same piece of bug-infested code for weeks on end, he or she can sit back, and watch the damn thing write itself.

At the heart of this fear of superintelligence is a question that, at present, can't be answered.

“The mainstream AI community does believe that systems will get to a human-level intelligence in 10 or 20 years, though I don't mean all aspects of intelligence,” says Selman.

Speech and vision recognition, for example, might individually reach that level of capability, without adding up to a system that understands the kind of social cues that even toddlers can pick up on.

“But will computers be great programmers, or great mathematicians, or other things that require creativity? That's much less clear. There are some real computational barriers to that, and they may actually be fundamental barriers,” says Selman.

While superintelligence doesn't have to spring into existence with recognizably human thought processes—peppering its bitter protest poetry with references to Paradise Lost—it would arguably have to be able to program itself into godhood. Is such a thing possible in principle, much less in practice?

It's that question that FLI is hoping the AI community will explore, though not with any particular urgency. When I spoke to Viktoriya Krakovna, one of the organization's founders, she was alarmed at how the media had interpreted the open letter, focusing almost exclusively on the issue of superintelligence.

"We wanted to show that the AI research community is a community of responsible people, who are trying to build beneficial robots and AI," she says. Instead, reporters have presented the letter as something like an act of contrition, punishing FLI for creating a document that's inclusive enough to include the possibility of researching the question--not the threat, but the question--of runaway AI.

Selman sees such a project as a job for “a few people,” who would try to define a problem that hasn't yet been researched, or even clearly formulated. He compares it to work done by theoretical physicists, who might calculate the effects of some cosmic cataclysm as a pure research question.

Until it can be determined that this version of the apocalypse is feasible in principle, there's nothing to safeguard against. This is an important distinction, and one that's easily overlooked. For science to work, it has to be concerned with the observable universe, not with superstition couched in scientific jargon.

Sadly, the chances of AI coverage becoming any less fear-mongering are about as good as the Large Hadron Collider producing a planet-annihilating black hole. Remember that easily digestible non-story, and the way it dwarfed the true significance of that particle accelerator? When it comes to the difficult business of covering science and technology, nothing grabs readers like threatening their lives.

This article originally appeared on Popular Science

 

 

This article was written by Erik Sofge from Popular Science and was legally licensed through the NewsCred publisher network.


