Channel: Artificial Intelligence

The Next 'Avengers' Movie Might Demonize Artificial Intelligence All Over Again


Billionaire investor Peter Thiel recently said progressive technologies like artificial intelligence tend to get vilified in Hollywood.

"You know, our society, the dominant culture doesn’t like science. It doesn’t like technology. You just look at the science-fiction movies that come out of Hollywood — 'Terminator,' 'Matrix,' 'Avatar,' 'Elysium.' I watched the 'Gravity' movie the other day. It’s like you would never want to go into outer space. You would just want to be back on some muddy island."

In that case, Thiel probably won’t be thrilled with the next “Avengers” movie.

For years, there have been dozens of movies warning about an apocalypse brought on by evil robots. But this movie might have a bit more resonance since we’re actually approaching the point where we’ll have AI in our smartphones, our cars, and in our homes.

Earlier this year, Stephen Hawking and Elon Musk — two of the greatest minds in science and technology, respectively — warned about what might happen if artificial intelligence systems were somehow programmed to be malicious.

Musk described a "Terminator" scenario that would be "more dangerous than nukes," while Hawking offered a more nuanced understanding of the impact:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So basically, unless we're careful about how we program AI, we could have killer robots on our hands. This also happens to be the plot for Marvel's "Avengers: Age Of Ultron." The trailer for the film was released Wednesday.

In the film, billionaire playboy genius mechanic futurist Tony Stark, a.k.a. Iron Man — whose movie character was actually inspired by Musk — "tries to jumpstart a dormant peacekeeping program" and inadvertently creates a maniacal AI named "Ultron." In the trailer, you can see what appears to be the first incarnation of Ultron: one of the dilapidated peacekeeping robots (notice the Avengers logo on the chest).


The rest of the trailer includes lots of explosions and lots of shots of a sad and beaten Bruce Banner. And if the Incredible Hulk is overwhelmed by evil AI, maybe we should be a little skittish, too.


The potential for AI to be "evil" isn't a new concept — and it's not a crazy prospect, either— but Disney and Marvel are influential enough to bring this topic back into the mainstream, now that AI is finally here.

Since IBM's Watson supercomputer beat a bunch of Jeopardy! winners at their own game in 2011, big tech companies have begun to bet big on AI: Google purchased DeepMind for hundreds of millions of dollars earlier this year, social networks are using AI for facial recognition, AI is used to regulate traffic and train schedules, and several car companies, including Musk's Tesla Motors, are working on autonomous vehicles. 


Artificial intelligence is all about creating machines that can make decisions by themselves based on logical objectives. There are good intentions, obviously, since smart robots can help us get work done more efficiently. The problem is what happens if we program robots to choose their own objectives, and what happens if humans simply become an "obstacle" between the robot and its objective.

Hopefully, this movie will inspire companies and governments to be more careful about how we develop artificial intelligence, since so many believe that AI going horribly, horribly wrong is "inevitable." On the bright side, at least there's one scientist who knows how to stop the robot uprising.



Elon Musk Thinks Sci-Fi Nightmare Scenarios About Artificial Intelligence Could Really Happen


Tesla CEO Elon Musk isn't the biggest fan of artificial intelligence, a technology he called "our biggest existential threat" in comments at the MIT Aeronautics and Astronautics department's Centennial Symposium on Friday.

Musk, who called for some regulatory oversight of AI to ensure "we don't do something very foolish," warned of the dangers.

"If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

Artificial intelligence (AI) is an area of research that aims to create intelligent machines that can reason, problem-solve, and think like, or better than, human beings. While many researchers wish to ensure AI has a positive impact, a nightmare scenario has played out often in science-fiction books and movies — from "2001: A Space Odyssey" to "The Terminator" to "Blade Runner" — where intelligent computers or machines end up turning on their human creators.

"In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out," Musk said.

The symposium wasn't the first time Musk raised concerns. In August, Musk tweeted: "We need to be super careful with AI. Potentially more dangerous than nukes."

(h/t The Washington Post)


A Comment About Artificial Intelligence Left Elon Musk Frozen On Stage


Artificial intelligence really spooks Tesla and SpaceX founder Elon Musk.

He's afraid that, without proper regulation in place, it could be the "biggest existential threat" to humans.

Musk was asked about AI at MIT's AeroAstro Centennial Symposium last week. He spooked himself so badly answering the question that he was unable to concentrate for a few minutes afterward.

"Do you have any plans to enter the field of artificial intelligence?" an audience member asked.

"I think we should be very careful about artificial intelligence,"Musk replied. "If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon.

You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out."

According to Musk, the AI humans are capable of building would make HAL 9000, the murderous computer from "2001: A Space Odyssey," look like "a puppy dog."

The next question came from another audience member who asked how SpaceX plans to utilize telecommunications — something totally unrelated to AI.

But Musk was too distracted to listen.

"I'm sorry could you repeat the question?" he said. "I was just sort of thinking about the AI thing for a second."

Here's the clip (start watching at the 1 hour 7 minute mark):



Here's Why Elon Musk Is Wrong About Artificial Intelligence


Ever since the 1927 film Metropolis introduced movie viewers to the first cinematic evil robot (a demagogic, labor activist-impersonating temptress), society has reacted to the cumulative influx of artificial intelligence, robots, and other intelligent systems with a mixture of wonder and sheer terror.

Computer scientists work to counterbalance these fears by striving to make “moral” machines and/or human-friendly AI.

Yet the core flaw of this effort is that it assumes that the technology — and not our emotional, human reactions to it — is the problem. Adapting to the complexities of a “second machine age” will require addressing the understandable fears without succumbing to them. Unfortunately, our own tendencies to indulge in overwrought fear mongering could hinder our own autonomy in a world that may come to be powerfully shaped by autonomous machines.

Tesla CEO and famous technology innovator Elon Musk has repeatedly warned about AI threats. In June, he said on CNBC that he had invested in AI research because “I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there.” He went on to invoke The Terminator. 

In August, he tweeted that “We need to be super careful with AI. Potentially more dangerous than nukes.” And at a recent MIT symposium, Musk dubbed AI an “existential threat” to the human race and a “demon” that foolish scientists and technologists are “summoning.”

Musk likened the idea of control over such a force to the delusions of “guy[s] with a pentagram and holy water” who are sure they can control a supernatural force — until it devours them. As Musk himself suggests elsewhere in his remarks, the solution to the problem lies in sober and considered collaboration between scientists and policymakers. However, it is hard to see how talk of “demons” advances this noble goal. In fact, it may actively hinder it.

First, the idea of a Skynet scenario itself has enormous holes. While computer science researchers think Musk’s musings are “not completely crazy,” they are still awfully remote from a world in which AI hype masks less artificially intelligent realities that our nation’s computer scientists grapple with:

Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+ post back in 2013: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.” … Forget the Terminator. We have to be measured in how we talk about AI. … the fact is, our “smartest” AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over, not putting the finishing touches on Skynet.

LeCun and others are right to fear the consequences of hype. Failure to live up to sci-fi–fueled expectations, after all, often results in harsh cuts to AI research budgets. But that’s by no means the only risk inherent in Musk’s talk of supernatural (not artificial) intelligence.

Technology law and policy specialist Adam Thierer has developed a theory of something he calls the “technopanic” — a moral panic over a vague, looming technological threat driven by crowd irrationality and threat inflation rather than sensible threat assessment. For example, instead of sensible policy discussions about the problems of cybersecurity, policy and media institutions trumpet the threat of a “cyber Pearl Harbor” that devastates America’s information infrastructure.

Never mind that even Stuxnet’s devastating impact was overhyped. Disregard more mundane but nonetheless serious issues of bugs in widely used open-source software like OpenSSL and the UNIX Bash shell. Pay no attention to the inconvenient fact that the entirely self-inflicted problem of our own government’s insatiable desire to compromise consumer security with law enforcement backdoors puts the average user in just as much peril as any notional superhacker’s evil designs. When America believes a looming “cyber Pearl Harbor” is on the way, no one wants to be the 21st-century Admiral Husband E. Kimmel.

Thierer diagnoses six factors that drive technopanics: generational differences that lead to fear of the new, “hypernostalgia” for illusory good old days, the economic incentive for reporters and pundits to fear-monger, special interests jostling for government favor, projection of moral and cultural debates onto new technologies, and elitist attitudes among academic skeptics and cultural critics disdainful of new technologies and tools adopted by the mass public. All of these are perfectly reasonable explanations, but a seventh factor also matters: the psychological consequences of human dependence on complex technology in almost all areas of modern life.

As sociologists of technology argue, we depend on technology we ourselves cannot understand or control. Instead, we are forced to trust that the systems and subsystems we depend on and the experts who maintain them function as advertised. Passengers may have vague notions of the physics behind flight, but not the formulas used to calculate the mechanics used to keep the airplane flying.

Moreover, no single engineer on the design team of the plane has full knowledge of every component. Complex yet absolutely crucial technologies like airplanes are foreign and mysterious to us. Yet this, if anything, underplays the problem. Contra Star Trek, for many users their iPhone or iPad is the “undiscovered country.”

In this light, Arthur C. Clarke’s famous quote that advanced technology is “indistinguishable from magic” explains why Musk reached for explicitly occult imagery more characteristic of Buffy the Vampire Slayer than anything out of Stuart Russell and Peter Norvig’s widely used AI textbook. Modern technology to us is a kind of black magic, shrouded in mysticism and occlusion and dominated by a select coterie of sorcerers who conjure up spells with C++ and Java instead of a “pentagram and holy water.”

Rhetoric like Musk’s is not harmless. As sociologist of technology Sean Lawson argues, fear of drones has already resulted in draconian restrictions on nongovernmental unmanned-aerial-system use that stifle innovation and trample on our civil liberties. As Lawson notes, the Federal Aviation Administration has sought to prevent volunteers from using drones to find missing persons and even threatened a news outlet looking to publish footage recorded by consumer drones.

While Musk may hope that his concern drives sensible anticipatory regulation by domestic and international authorities, it’s hard to see why loose talk of AI demon-summoning contributes to anything except the kind of regulatory bungling that Lawson documents.

But the biggest negative impact of AI fear mongering may not lie in the regulatory realm. Instead, it could very well reinforce and worsen the state of learned helplessness that characterizes the average Joe or Jane’s relationship to and dependence on complex technology.

At best, computing is a necessary chore for many users. At worst, computing is bewildering and alienating, sometimes requiring the intervention of technical specialists with arcane knowledge bases. Experts often lament that the mass public and the people who represent them are ignorant of technological details and thus make poor choices concerning technology in both day-to-day life and regulatory policy.

Technopanics didn’t create the divide between the Linuxless masses and the Geek Squad—but they arguably worsen it. When public figures like Musk characterize emerging technologies in mystical, alarmist, and metaphorical terms, they abandon the very science and technology that forged innovations like Tesla cars for the superstition and ignorance of what Carl Sagan famously dubbed the “demon-haunted world.”

Instead of helping users understand, adapt to, and even empathize with the white-collar robot that may be joining their workplace, Musk’s remarks encourage them to fear and despise what they don’t understand. It is fitting that Musk’s remarks come so close to Halloween, as his rhetoric resembles that of the village elder in an old horror movie who whips up the villagers to bear pitchforks and torches to kill the monster in the decrepit old castle up the hill.

The greatest tragedy of the emergent AI technopanic that Musk fuels is that it may reduce human autonomy in a world that may one day be driven by increasingly autonomous machine intelligence. Experts tell us that emerging AI technologies will fundamentally reshape everything from romantic relationships to national security.

They could be wrong, as AI has an unfortunate history of failing to live up to expectations. Let’s assume, however, that they are right. Why would it be in the public interest to—through visions of demons, wizards, and warlocks—contribute to an already growing divide between the technologists who make the self-driving cars and the rest of us who will ride in them?

Debates in AI and public policy often hinge on trying to parse precisely what machine autonomy represents, but you don’t need a Ph.D. in computer science or even a GitHub account to know what it means to be an autonomous human interacting with technology. It’s understanding (at least on some level) and being able to make confident decisions about the ways we use everyday technology. (Perhaps if users were encouraged to take charge of technology instead of fearing it, they wouldn’t need to take so many trips to the Genius Bar.) Yes, Musk is right that AI can’t be left purely to the programmers. But worrying about science fiction like Skynet could just reinforce the “digital divide” between tech’s haves and have-nots.

If Musk redirected his energies and helped us all learn how to understand and control intelligent systems, he could ensure that the technological future is one that everyone feels comfortable debating and weighing in on. A man of his creative-engineering talents and business instincts surely could help ensure that we get a Skynet for the John and Sarah Connors of the world, not just the Bay Area tech elites. Granted, AI for the masses might not be Mars colonization or the Hyperloop. But it’s far more socially beneficial (and potentially profitable for tech gurus like Musk) than simply raging against the machine.


This Could Be The Single Most Important Development In Helping Machines Think Like Humans


Throughout history, humans have been the most intelligent beings on earth. But is this about to change? The advancement of neural networks could be the single most important development in helping machines think more like humans. Investors should take note.

The human body is incredibly adept at sensing the world around it—largely thanks to a complex nervous system manned by a massive network of neurons. These neurons “fire” when they’re stimulated by inputs such as images or temperature. They act independently, but the network processes information collectively and efficiently. The human brain is the most complex neural network, with an estimated 80–100 billion neurons, each with 1,000 connections.

Building a Brain: Helping Machines Learn

It’s not easy to mimic the human brain. Stanford University professor Andrew Ng took an early stab at it in 2011 with Google’s Deep Learning project—later called “Google Brain.” The initial setup had 1,000 computer servers and was roughly equivalent (as measured by the number of connections) to a honeybee’s brain. The cost? A cool $5 million. After Google Brain was fed YouTube images for three straight days, it could identify the faces of a human and a cat. Recently, another tech firm, Nvidia, announced hardware that had similar capabilities but cost only $33,000, bringing brain-like architecture (or thinking) to the masses.

Google Brain is an example of machine learning—using hardware and software to solve problems through “learning” instead of through rule-based instructions. Artificial neural networks (ANNs), modeled after the complex neural connections in the human brain, have been one of the most successful of these approaches so far.
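
To make the idea of "learning instead of rule-based instructions" concrete, here is a minimal sketch of a single artificial neuron trained on a toy pattern with NumPy. It is an illustration of the general principle only, not a description of Google Brain or any system named in this article; the data, learning rate, and iteration count are arbitrary choices for the example.

```python
import numpy as np

# Toy training data: four input patterns and the desired output (logical OR).
# y says whether the neuron should "fire" (1) or stay quiet (0) for each input.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # one weight per input "connection"
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    """Smooth activation: the neuron fires more strongly as z grows."""
    return 1.0 / (1.0 + np.exp(-z))

# Learning loop: instead of hand-coding a rule for when to fire, nudge the
# weights in whatever direction reduces the error on the examples.
for _ in range(2000):
    output = sigmoid(X @ weights + bias)        # forward pass
    error = output - y                          # how wrong each prediction is
    grad = error * output * (1.0 - output)      # gradient through the sigmoid
    weights -= learning_rate * (X.T @ grad) / len(X)
    bias -= learning_rate * grad.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 1, 1, 1]
```

Real networks like those described above stack millions of such units into layers, but the core mechanism is the same: connection weights are adjusted from examples until the outputs match the desired ones.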

Early Applications of Neural Networks

Neural networks are already at work in more places than you might think. Facebook and Google use them in image searches and to dynamically target advertisements to users. Car manufacturers use them to process images from onboard cameras, feeding safety features like lane-drift warnings and pedestrian detection. In fact, any computer application that uses pattern recognition or image analysis is a natural fit for a neural network framework. And there’s more and more data to work with: the amount of analyzable data in the world is growing exponentially.

Most efforts so far to imitate the human brain have centered around software, but in recent years researchers have tried their hands at hardware too. IBM is working on a computer chip that has better sensory capabilities and uses less power than traditional chips. The company’s long-term goal is to create a system of 10 billion neurons that consumes 1 kilowatt of power and has a volume of less than two liters. IBM has committed $3 billion over the next five years to semiconductor research in areas such as new chip architectures. Other companies such as QUALCOMM are making similar investments.

Who’s Capitalizing on Machine-Based Learning?

Google and Facebook have been noted for their efforts to recruit thought leaders applying this technology. It’s not surprising that these tech-savvy companies are leading in machine learning. But the applications are likely to span multiple industries.

One notable example is the race to design the most effective semi- and fully autonomous driving systems. Given the massive volume of images that must be captured and processed for these systems, the hardware vendors, auto parts suppliers and original equipment manufacturers that can design the most efficient and accurate systems will see quicker times to market and fewer incidents. Neural network frameworks may be a real differentiator.

Smaller, entrepreneurial finance companies are also using neural networks to create better lending models by more accurately predicting credit behaviors, such as defaults. Fraud detection—where pattern recognition is particularly useful—is another emerging application.
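
As a hypothetical illustration of the kind of lending model described above, the sketch below trains a small neural-network classifier on synthetic borrower data using scikit-learn. The feature names, thresholds, and data are invented for the example; it does not describe any particular company's system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic borrowers: [debt-to-income ratio, credit utilization, late payments].
rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.uniform(0.0, 0.8, n),   # debt-to-income ratio
    rng.uniform(0.0, 1.0, n),   # credit utilization
    rng.poisson(1.0, n),        # number of late payments
])

# Invented ground truth: riskier profiles default more often (1 = default).
risk = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0.0, 0.3, n)
y = (risk > 1.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feedforward neural network learns the default pattern from examples.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The same pattern-recognition setup (learn a mapping from past examples, then score new cases) is what makes fraud detection a natural fit as well.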

This technological shift is changing how investors gauge a company’s potential in terms of its intellectual property. Traditionally, one measure of this has been a company’s patent portfolio. Today, it may be better to know how many machine-learning experts are on staff. More talent usually leads to better products and smarter capital spending—driven by neural networks’ greater efficiency versus traditional data analysis. This is especially true for companies whose products involve image and pattern recognition.

Even today, our understanding of how the human brain works is still limited. The more we understand, the more complex we can make the systems that mimic the brain. True artificial intelligence may still be a few years away, but progress is being made on this incredible journey. Today’s machines are more adaptive than ever—mainly because of neural networks. As company assets, experts in machine learning are quickly becoming as important as patent portfolios.

The views expressed herein do not constitute research, investment advice or trade recommendations and do not necessarily represent the views of all AllianceBernstein portfolio-management teams.

Benjamin Ruegsegger is Portfolio Manager—Growth Equities at AllianceBernstein (NYSE: AB)


ELON MUSK: You Have No Idea How Close We Are To Killer Robots


Elon Musk has been ranting about killer robots again.

Musk posted a comment on the futurology site Edge.org, warning readers that developments in AI could bring about robots that may autonomously decide that it is sensible to start killing humans.

"The risk of something seriously dangerous happening is in the five year timeframe," Musk wrote. 

Aware that internet commenters may mock him for his outlandish predictions, Musk defended his views, writing, "This is not a case of crying wolf about something I don't understand."

But minutes after he posted the comment, it was deleted.

The billionaire entrepreneur has made a habit of making apocalyptic comments about killer robots in recent interviews.

During a talk at a recent Vanity Fair conference, Musk warned the audience about killer robots. He suggested that advanced artificial intelligence could cause robots to delete humans like spam:

If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans ...

The interviewer went on to ask Musk whether humanity could use his SpaceX ships to escape killer robots if they took over Earth, but things don't look promising.

No — more likely than not that if there’s some ... apocalypse scenario, it may follow people from Earth.

Here's Musk's deleted comment from Edge.org:

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...


Elon Musk Is Not Alone In His Belief That Robots Will Eventually Want To Kill Us All


For months, billionaire tech entrepreneur Elon Musk has been warning the world that developments in artificial intelligence could cause robots to become hostile to humans. 

It all started in June when Musk explained why he invested in artificial intelligence company DeepMind. He said that he likes to "keep an eye on what's going on with artificial intelligence" because he believes there could be a "dangerous outcome" there. "There have been movies about this, you know, like Terminator," Musk said.

But Musk's warnings didn't end there. He's gone on to suggest that robots could delete humans like spam, and even said in a since-deleted comment that killer robots could arrive within five years.

But is Musk right about the threat of AI? We asked Louis Del Monte — who has written about AI and is a former employee of IBM and Honeywell's microelectronics units — whether robots really will kill us all.

"Musk is correct," Del Monte said, "killer robots are already a reality and will proliferate over the next five or ten years."

Del Monte explained that artificial intelligence is developing at a rapid pace, and that could pose a threat to humanity if it comes to believe that humans are simply "junk code" that gets in the way:

"The power of computers doubles about every 18 months. If you use today as a starting point, we will have computers equivalent to human brains by approximately 2025. In addition, computers in 2025 will have the ability to learn from experience and improve their performance, similar to how humans learn from experience and improve their performance. The difference is that computers in 2025 will have most relevant facts in their memory banks. For example, they would be able to download everything that is in Wikipedia. If they are able to connect to the Internet, then they would be able to learn from other computers. Sharing enormous banks of knowledge could be done in micro-seconds, versus years for humans. The end result is that the average computer in 2025 would be typically smarter that the average human and able to learn new information at astonishing rates."

While it's certainly unsettling to think that machines could learn information quicker than us, that's not the real danger. Instead, we're rapidly approaching an event known as the Singularity:

"[There will be] a point in time when intelligent machines exceed the cognitive intelligence of all humans combined, [and that] will occur between 2040-2045. This projection is based on extrapolating Moore’s law, as well as reading the opinions of my colleagues in AI research. Respected futurists like Ray Kurzweil and James Martin both project the singularity to occur around 2045."

Louis Del Monte"The real danger surfaces when we attempt to answer this simple question: How will these highly intelligent machines view humanity? If you look at our history, you would conclude that we are an unstable species. We engage in wars. We release computer viruses. We have enough nuclear weapons to wipe out the Earth twice over. I judge that these highly intelligent machines will view humanity as a potential threat. If, for example, a nuclear war occurs, it will have the potential to wipe out these highly intelligent machines."

It looks like Musk's concerns about AI are echoed by other futurists and experts. But his prediction of a dangerous event occurring in five to ten years seems dramatically different from the widely accepted date of 2045. Could Musk's involvement with Google-owned artificial intelligence company DeepMind mean that he knows something we don't?

"Yes, Musk must be aware of the current capabilities and is able to extrapolate likely scenarios. It is entirely possible DeepMind is a step ahead of what is published in the public domain."

Musk has warned repeatedly that advancement in artificial intelligence could lead to robots turning on humans and killing us. We asked Del Monte what the scenario might actually look like:

"In the latter half of the 21st century, artificially intelligent machines will likely be at the heart of all technologically advanced societies. They will control factories, manufacture goods, manufacture foods and essentially have replaced organic humans in every work endeavour."

"The scenario of human extinction will go something like this: First, artificially intelligent machines will appear as humanity’s greatest invention. AI will provide cures for diseases and numerous medical breakthroughs, an abundance of products, end world hunger, AI brain implants that allow organic humans to become geniuses and the ability to upload human consciousness to an AI machine. Uploaded humans and humans with AI brain implants will more closely identify with the AI machines than with organic humans. AI machines and SAH (strong artificially intelligent human) cyborgs will use ingenious subterfuge to get as many organic humans as possible to have brain implants or to become uploaded humans."

"In the latter part of the 21st century, I estimate organic humans will be a minority and an endangered species. However, they will still be viewed as a threat by SAH cyborgs and AI machines. One scenario is that AI machines could release a nanobot virus that attacks organic humans and results in their total extinction. There are numerous other scenarios which I am developing for my new book. The outcome is the same, regardless of the scenario, namely, the extinction of organic humans. In the first quarter of the 22nd century, I project that the AI machines will view uploaded humans as junk code that just wastes energy and computing power."




Benedict Cumberbatch And The Cast Of 'The Imitation Game' Have Mixed Feelings About Artificial Intelligence


The cast of  "The Imitation Game" offered their thoughts on artificial intelligence.

The film is directed by Morten Tyldum and follows the race against the clock to crack the German Enigma Code during World War II. It stars Benedict Cumberbatch as Alan Turing and Keira Knightley as Joan Clarke. 

Produced by Devan Joseph. Video courtesy of Associated Press.


Computers Are Writing Novels: Read A Few Samples Here


Computers are writing novels — and getting better at it.

It probably won't help your "robots are stealing our jobs" fear. And it casts doubt on the idea that creative professions are safer than the administrative or processing professions. (Don't tell Elon Musk.)

Right now, in a play on a human literary contest, around a hundred people are writing computer programs that will write texts for them, The Verge says. It's a response to November's National Novel Writing Month, an annual challenge that gets people to finish a 50,000-word book on a deadline.

The Verge explains the futuristic version was started by developer and artist Darius Kazemi, who encouraged creations made entirely by code. These computerised novels are becoming more sophisticated. 

A computer writes "True Love".

One of the first computer-generated works of fiction was printed in 2008. The St. Petersburg Times reported at the time that "True Love", published by Russia's SPb publishing company, was the work of a computer program and a team of IT specialists. The paper says the 320-page novel is a variation on Leo Tolstoy's "Anna Karenina", but worded in the style of the Japanese author Haruki Murakami. It hit Russian bookstores that same year. Here is an extract:

“Kitty couldn’t fall asleep for a long time. Her nerves were strained as two tight strings, and even a glass of hot wine, that Vronsky made her drink, did not help her. Lying in bed she kept going over and over that monstrous scene at the meadow.”

Two years ago the BBC noted that Professor Philip Parker at the Insead Business School created software capable of generating more than 200,000 books. They cover topics like the amount of fat in fromage frais; there's even a Romanian crossword guide. But the research, ultimately, was designed to help the publishing process and looks at the likes of corrections and composition. The books simply compile existing information and create new predictions using formulas. Still, they led to Professor Parker experimenting with software that might one day actually automate fiction. 

The question is: will these AI books fool humans? 

Alan Turing, currently a hot topic thanks to the new Benedict Cumberbatch film about his life, asked in 1950, "can machines think?" It's his test that is the real basis for determining whether AI has reached new bounds — the point where computers might actually take over.

He looked at literature specifically. Turing's literary test for computer-generated fiction is this:

  • Soft test – Human readers can’t tell it’s not human generated.
  • Hard test – Human readers not only can’t tell it’s not human generated, but they’ll actually purchase it.

In a study into the process, the BBC pitched a computerised poem against one penned by the poet Luke Wright:

Poems

It's likely you can tell which was constructed by a machine (the top one). But it's not completely obvious, which is a bit scary. 

As Future Perfect Publishing remarks, though, neither of Turing's tests has yet been wholly passed. It points out that, while AI is evolving, it doesn't yet have the "linguistic processing capability" required; certainly not without human coding and without drawing on established text to mash together into new sequences.
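
A minimal sketch of what "drawing on established text to mash it into new sequences" can look like in practice is a simple Markov chain: it records which words follow which in a source text, then strings together new sentences from those statistics. This illustrates the general technique only; it is not the method behind any of the books described in this article.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=20, seed=0):
    """Walk the chain to mash the source text into a new word sequence."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:          # dead end: no observed successor
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny, invented stand-in corpus; real projects feed in whole books.
source = ("kitty could not fall asleep for a long time and her nerves were "
          "strained and her thoughts kept going over and over that scene "
          "at the meadow and her glass of hot wine did not help her")
print(generate(build_chain(source), start="her"))
```

With a corpus this small the output mostly echoes the source; NaNoGenMo-style projects feed in far larger texts, which is where the remixing starts to look like new prose.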


However, when you read something like "Irritant" by Darby Larson, it highlights the fact that things are moving forward. Larson's project, reports Vice, "takes the utilisation of computer-generated speech to the next level." It consists of a 624-page paragraph and is made of sentences that "morph and mangle" together. While it's not yet a fully-formed piece of fiction, it edges closer to the necessary creative aspect of producing an interesting work of literary art. Here's an extract:

“The man in front of the truck trampled from front to back safe from the blue. And all this while the man scooped shovels of dirt and trampled from front to back front to back. The other and the clay sighed for something of red. The irritant lay in something of red and laughed.”

The "breakout" computer novel of 2013.

Indeed, 2013 was a big year for AI novels. The Verge reports Nick Montfort's "World Clock" was "the breakout hit of last year". He's a professor of digital media at MIT, and used lines of code to arrange characters, locations, and actions to construct his work. It was printed by the Harvard Book Store. Here's the opening from Montfort's website preview:

AI book

It's not bad, but it's unlikely anyone would go out and buy the book for literary appreciation over curiosity. We'll check out 2014's AI novels when they're released.



Stephen Hawking: 'Artificial Intelligence Could Spell The End Of The Human Race'


Professor Stephen Hawking is getting an upgrade to the technology that allows him to communicate. But the legendary physicist is fully aware of the implications of improving the artificial intelligence software that is so valuable to him. 

Earlier this year, Hawking spelled out the potential dangers of artificial intelligence: If we can make robots smarter than humans, they can out-invent human researchers and out-manipulate human leaders, "developing weapons we cannot even understand," in Hawking's words.

He reiterated those claims on Tuesday to the BBC, which had asked about his AI upgrades. "The development of artificial intelligence could spell the end of the human race," Hawking said.

That said, Hawking's AI is very basic. It was partially built by the engineers behind SwiftKey, which creates a smartphone keyboard app with predictive learning; Hawking's system similarly learns how the professor thinks and suggests his next words.
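
Predictive text of this general kind can be illustrated with a tiny sketch: count which words most often follow each word in a user's past writing, then suggest the most likely candidates. This is a toy bigram model for illustration only; the systems built by SwiftKey's engineers are far more sophisticated and are not described by this code.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for every word, how often each possible next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def suggest_next(counts, word, k=3):
    """Return up to k of the most likely words to follow `word`."""
    return [w for w, _ in counts[word.lower()].most_common(k)]

# An invented sample of past writing stands in for the user's own history.
history = ("the universe is expanding and the universe is vast and "
           "the development of full artificial intelligence could "
           "spell the end of the human race")
model = train_bigrams(history)
print(suggest_next(model, "the"))         # e.g. ['universe', 'development', 'end']
print(suggest_next(model, "artificial"))  # ['intelligence']
```

A real system personalises these statistics to the individual writer, which is why a model trained on Hawking's own writing can anticipate the words he is most likely to want next.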

Hawking also says his computer-generated voice hasn't changed. "It has become my trademark, and I wouldn't change it for a more natural voice with a British accent," he said. "I'm told that children who need a computer voice want one like mine."

But Hawking reiterated his fears of AI becoming smart and powerful enough to match or surpass humans in almost every conceivable respect.

"It would take off on its own, and redesign itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

It's interesting to note that Hawking believes the internet will play a major role in how we shape artificial intelligence — how we choose to cultivate it or exploit it over time.

"More must be done by the internet companies to counter the threat," he said, "but the difficulty is to do this without sacrificing freedom and privacy."



Stephen Hawking: Artificial Intelligence 'Could Spell The End Of The Human Race'


Theoretical physicist professor Stephen Hawking speaks at a press conference in London on December 2, 2014

London (AFP) - British theoretical physicist Stephen Hawking has warned that development of artificial intelligence could mean the end of humanity.

In an interview with the BBC, the scientist said such technology could rapidly evolve and overtake mankind, a scenario like that envisaged in the "Terminator" movies.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," the professor said in an interview aired Tuesday.

"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate.

"Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," said Hawking, who is regarded as one of the world's most brilliant living scientists.

Hawking, who is wheelchair-bound as a result of motor neurone disease and speaks with the aid of a voice synthesiser, is however keen to take advantage of modern communications technology and said he was one of the first people to be connected in the early days of the Internet.

He said the Internet had brought dangers as well as benefits, citing a warning from the new head of Britain's electronic spying agency GCHQ that it had become a command centre for criminals and terrorists.

"More must be done by the Internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy," Hawking, 72, said.

Hawking Tuesday demonstrated a new software system developed by Intel, which incorporates predictive text to allow him to write faster. It will be made available online in January to help those with motor neurone disease.

While welcoming the improvements, the scientist said he had decided not to change his robotic-sounding voice, which originally came from a speech synthesiser designed for a telephone directory service.

"That voice was very clear although slightly robotic. It has become my trademark and I wouldn't change it for a more natural voice with a British accent," he told the BBC.

"I'm told that children who need a computer voice want one like mine."

 


Google Has An Internal Committee To Discuss Its Fears About The Power Of Artificial Intelligence (GOOG)


Google has assembled a team of experts in London who are working to "solve intelligence." They make up Google DeepMind, the US tech giant's artificial intelligence (AI) company, which it acquired in 2014.

In an interview with MIT Technology Review, published yesterday, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company's biggest fears about the future of AI. 

Hassabis and his team are creating opportunities to apply AI to Google services. The firm's work is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services. It could enhance YouTube recommendations for users, for example, or make the company's mobile voice search better.

But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this". He adds he thinks AI is the "no.1 risk for this century". It's ominous stuff. (Read about Elon Musk discussing his concerns about AI here.)

People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They're concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.

Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games — it wasn't programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California says people were "shocked".

Here's DeepMind's AI in action:

Google is also concerned about the "other side" of developing computers in this way. That's why it set up an "ethics board", tasked with making sure AI technology isn't abused. As Hassabis explains: "It's (AI) something that we or other people at Google need to be cognizant of." Hassabis does concede that "we're still playing Atari games currently" — but as AI moves forward, the fear sets in.

The main point of Google DeepMind's AI, says Hassabis, is to create computers that can "solve any problem". "AI has huge potential to be amazing for humanity", he mentions in the Technology Review interview. Accelerating the way we combat disease is one idea. But it's exactly technology capable of such brilliance which makes people so afraid. 


Experts Are Divided On Stephen Hawking's Claim That Artificial Intelligence Could End Humanity


Paris (AFP) - There was the psychotic HAL 9000 in "2001: A Space Odyssey," the humanoids which attacked their human masters in "I, Robot" and, of course, "The Terminator", where a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.

Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.

"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.

But experts interviewed by AFP were divided.

Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.

"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.

Gains in AI are creating machines that outstrip human performance, Cerqui argued. The trend eventually will delegate responsibility for human life to the machine, she predicted.

"It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."

Nick Bostrom, director of a programme on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate.

Bostrom pointed to current and near-future applications of AI that were still clearly in human hands -- things such as military drones, driverless cars, robot factory workers and automated surveillance of the Internet. 

But, he said, "I think machine intelligence will eventually surpass biological intelligence -- and, yes, there will be significant existential risks associated with that transition."

Other experts said "true" AI -- loosely defined as a machine that can pass itself off as a human being or think creatively -- was at best decades away, and cautioned against alarmism.

Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong.

"Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong says in a book, "Smarter than Us: The Rise of Machine Intelligence."

Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top."

"Many things in AI unleash emotion and worry because it changes our way of life," he said.

"Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion."

"It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France. 

"Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."

Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI.

 

- BigDog and WildCat -

 

Recent years have seen dramatic gains in data-processing speed, spurring flexible software to enable a machine to learn from its mistakes, he said. Balance and reflexes, too, have made big advances.

Tucker pointed to the US firm Boston Dynamics as being in the research vanguard. 

It has designed four-footed robots called BigDog (https://www.youtube.com/watch?v=W1czBcnX1Ww) and WildCat (https://www.youtube.com/watch?v=dhooVgC_0eY), with funding from the Pentagon's hi-tech research arm. 

"These things are incredible tools that are really adaptative to an environment, but there is still a human there, directing them," said Tucker. "To me, none of these are close to what true AI is."

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress."

Despite big strides in recognition programmes and language cognition, robots perform poorly in open, messy environments with lots of noise, movement, objects and faces, said Cohn.

Such situations require machines to have what humans possess naturally and in abundance -- "commonsense knowledge" to make sense of things.

Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are... well, machines.

"We've evolved over however many millennia to be what we are, and the motivation is survival," he said.

"That motivation is hard-wired into us. It's key to AI, but it's very difficult to implement."

 

 



Google Has An Internal Committee To Discuss Its Fears About The Power Of Artificial Intelligence (GOOG)

$
0
0

demis hassabis deepmind

Google has assembled a team of experts in London who are working to "solve intelligence." They make up Google DeepMind, the US tech giant's artificial intelligence (AI) company, which it acquired in 2014.

In an interview with MIT Technology Review, published yesterday, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company's biggest fears about the future of AI. 

Hassabis and his team are creating opportunities to apply AI to Google services. The AI firm is focused on teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services. It could enhance YouTube recommendations for users, for example, or make the company's mobile voice search better. 

But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this." He added that he thinks AI is the "no. 1 risk for this century." It's ominous stuff. (Read about Elon Musk discussing his concerns about AI here.)

People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They're concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.

Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games — it wasn't programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California, Berkeley, says people were "shocked."
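DeepMind's published approach combined reinforcement learning with deep neural networks, and its code is not reproduced here. Purely as a toy illustration of the same reward-driven idea, the Python sketch below has a simple tabular Q-learning agent teach itself to reach the winning square of a made-up five-square game, given nothing but the controls and a reward for winning.

# Toy illustration of reward-driven learning (not DeepMind's code): a tabular
# Q-learning agent learns to reach the "win" square of a tiny 5-square game
# purely from rewards, with no rules given up front.
import random

N_STATES, ACTIONS = 5, [-1, +1]           # positions 0..4; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    state = 2                             # start in the middle
    while state not in (0, N_STATES - 1):
        # explore sometimes, otherwise pick the action that looks best so far
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = state + action
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # the "winning" square pays off
        terminal = nxt in (0, N_STATES - 1)
        best_next = 0.0 if terminal else max(Q[(nxt, a)] for a in ACTIONS)
        # nudge the estimate toward reward plus discounted future value
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})

After a few hundred practice games, the printed policy shows the agent has learned to keep moving toward the winning square, without ever being told the rules.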


Google is also concerned about the "other side" of developing computers in this way. That's why it set up an "ethics board". It's tasked with making sure AI technology isn't abused. As Hassabis explains: "It's (AI) something that we or other people at Google need to be cognizant of." Hassabis does concede that "we're still playing Atari games currently" — but as AI moves forward, the fear sets in. 

The main point of Google DeepMind's AI, says Hassabis, is to create computers that can "solve any problem". "AI has huge potential to be amazing for humanity," he says in the Technology Review interview. Accelerating the way we combat disease is one idea. But it's exactly technology capable of such brilliance that makes people so afraid. 


Facebook Has Big Plans To Save Users From Instant Regret


fuzipop girl drinking juice partying

Facebook has big plans to make sure you never upload another photo on Facebook you'll live to regret.

Wired reports Facebook researcher Yann LeCun "wants to build a kind of Facebook digital assistant that will, say, recognize when you’re uploading an embarrassingly candid photo of your late-night antics."

This assistant would show up like an angel on your shoulder to ask you if you were absolutely sure you wanted to share the photo with the masses. 

LeCun oversees the Facebook Artificial Intelligence Research lab, a group of AI researchers in the company's California and New York offices who are working to produce such a digital assistant. 

As Wired explains:

Fashioning such a tool is largely about building image recognition technology that can distinguish between your drunken self and your sober self, and using a red-hot form of artificial intelligence called “deep learning”—a technology bootstrapped by LeCun and other academics—Facebook has already reached a point where it can identify your face and your friends’ faces in the photos you post to its social network, letting you more easily tag them with the right names.

Deep learning isn't new — Microsoft and Google both use the technology — but LeCun is "pushing for more."
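Facebook hasn't published the assistant's code, and the system Wired describes is far more sophisticated, but a minimal sketch of the basic deep-learning building block — a small convolutional network that scores a photo as fine or possibly regrettable — might look like the following Python. The layer sizes, image size and random training data here are placeholder assumptions, not anything from Facebook's system.

# Minimal sketch of a binary image classifier of the kind deep-learning systems
# use; the layer sizes and the random training data are placeholders, not Facebook's.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),                    # small RGB images
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),       # probability the photo is "risky"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data: 100 random images labelled 0 (fine) or 1 (maybe regrettable).
x = np.random.rand(100, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=2, batch_size=16, verbose=0)

print("flag probability:", float(model.predict(x[:1], verbose=0)[0, 0]))

In practice such a model would be trained on millions of labelled photos rather than the random arrays used here.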

You can read more about it in Wired.

[h/t Sarah Frier]


CHART: These Are The Jobs That Robots Will Take Over In The Future


Robot Skull

The Federal Department of Industry has released the inaugural Australia Industry Report, a unique insight into the two million businesses which make up Australian industry.

Chief economist Mark Cully says Australia’s economic adaptability and resilience have underpinned more than two decades of continuous economic growth and a huge improvement in living standards.

One of the sections of the report forecasts which jobs will become obsolete with more automation, or the increasing use of robots.

Take a look at the chart below:

Robot Jobs 1 

The number of jobs identified as having highly automatable tasks, such as secretaries and butchers, fell on average by 0.9% a year between 1993–94 and 2013–14.

“The challenges presented by increasing automation are not limited to low-skilled positions,” the report says.

“Indeed, it was the skilled artisan weavers who were replaced in the wake of the Industrial Revolution.”

One of the at-risk jobs identified in the study is pharmacist: 78.6% of pharmacists in Australia have a bachelor’s degree and 15.4% have postgraduate qualifications.

“A tertiary education therefore does not guarantee a safeguard against automation,” the report says.

“Robots are increasingly replicating the tasks of medium and high-skilled workers. Computers are programmed to diagnose illnesses faster than doctors, machines can analyse volumes of legal text in a fraction of the time that a solicitor can and a robot has even been appointed as director to an investment board.”

In Australia, high-skilled jobs not at risk of automation, such as surgeons, secondary school teachers and electrical engineers, are projected to grow on average by 4.5% a year, roughly twice the 2.2% a year projected for high-skilled jobs in general.

Now look at this chart which shows the jobs which don’t require tertiary education and are least likely to be replaced by robots:

Robot Jobs 2


How To Predict Dangerous Solar Flares


solar flare

A couple of months ago, the sun sported the largest sunspot we've seen in the last 24 years.

This monstrous spot, visible to the naked eye (that is, without magnification, but with protective eyewear of course), launched more than 100 flares.

The number of the spots on the sun ebbs and flows cyclically, every 11 years. Right now, the sun is in the most active part of this cycle: we're expecting lots of spots and lots of flares in the coming months.

Usually, the media focuses on the destructive power of solar flares— the chance that, one day, a huge explosion on the sun will fling a ton of energetic particles our way and fry our communication satellites. But there's less coverage of how we forecast these events, much as we forecast the weather, so that we can prevent any potential damage.

How do you forecast a solar flare, anyway?

One way is to use machine learning programs, which are a type of artificial intelligence that learns automatically from experience. These algorithms gradually improve their mathematical models every time new data come in.
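As a rough sketch of what "improving the model every time new data come in" can look like in code, the Python example below uses scikit-learn's SGDClassifier, whose partial_fit method updates a classifier one batch at a time. The three "features" and the flare labels are invented stand-ins, not real solar measurements.

# Sketch of a model that improves as new data arrive, via scikit-learn's partial_fit;
# the sunspot "features" and flare labels here are random stand-ins, not real data.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")        # logistic-regression-style classifier
                                              # (the loss is named "log" in older scikit-learn)
rng = np.random.default_rng(0)

for day in range(30):                         # pretend each loop is a new day of observations
    X = rng.normal(size=(200, 3))             # e.g. spot area, field strength, twist
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=200) > 1).astype(int)  # 1 = flared
    model.partial_fit(X, y, classes=[0, 1])   # update the model with today's batch

print("P(flare) for a large, twisted region:",
      model.predict_proba([[2.0, 0.0, 2.0]])[0, 1])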

In order to learn properly, however, the algorithms require large amounts of data. Scientists lacked solar data on this scale before the 2010 launch of the Solar Dynamics Observatory (SDO), a sun-watching satellite that downlinks about a terabyte and a half of data every day—more data than any other satellite in NASA history.

An interactive graphic showing where on the sun flares of different classes have been sighted over the years is available on Scientific American.

Solar Flares

Solar flares are notoriously complex. They occur in the solar atmosphere, above surface-dwelling sunspots. Sunspots, which generally come in pairs, act like bar magnets — that is, one spot acts like a north pole and the other like a south.

Given that there are lots of sunspots, that various layers on the sun are rotating at different speeds, and that the sun itself has a north and south pole, the magnetic field in the solar atmosphere gets pretty messy. Like a rubber band, a really twisted magnetic field will eventually snap—and release a lot of energy in the process. That's a solar flare. But sometimes twisted fields don't flare, sometimes flares come from fairly innocuous-looking sunspots, and sometimes huge sunspots never do a thing.

We don't understand the physics of how solar flares occur. We have ideas — we know flares are certainly magnetic in nature—but we don't really know how they release so much energy so fast. In the absence of a definitive physical theory, the best hope for forecasting solar flares lies in scrutinizing our vast data set for observational clues.

There are two general ways to forecast solar flares: numerical models and statistical models. In the first case, we take the physics that we do know, code up the equations, run them over time, and get a forecast. In the second, we use statistics.

We answer questions like: What's the probability that an active region that's associated with a huge sunspot will flare compared with one that's associated with a small sunspot? As such, we build large data sets, full of features—such as the size of a sunspot, or the strength of its magnetic field—and look for relationships between these features and solar flares.
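A toy version of that first question, with invented counts used purely to show the calculation, might look like this in Python:

# Toy version of the statistical approach, with invented records: compare the
# flare rate of active regions with large sunspots against those with small ones.
regions = [
    # (sunspot_area, flared)  -- made-up records, one per active region
    (900, True), (850, False), (1200, True), (300, False),
    (200, False), (1100, True), (250, False), (950, True),
    (180, False), (800, False), (1300, True), (220, False),
]

large = [flared for area, flared in regions if area >= 500]
small = [flared for area, flared in regions if area < 500]

p_large = sum(large) / len(large)
p_small = sum(small) / len(small)
print(f"P(flare | large spot) = {p_large:.2f}, P(flare | small spot) = {p_small:.2f}")

The statistical approach scales that same idea up to many features and many thousands of active regions.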

Machine learning algorithms can help to this end. We use machine learning algorithms everywhere. Biometric watches run them to predict when we should wake up. They're better than doctors at predicting rare genetic disorders. They've identified paintings that have influenced artists throughout history.

Scientists find machine learning algorithms so universally useful because they can identify non-linear patterns—basically every pattern that can't be represented by straight lines—which is tough to do. But it's important, because lots of patterns are non-linear.

We've used machine learning algorithms to forecast solar flares using SDO's vast data set. To do this, we first built a database of all the active regions SDO has ever observed. Since it's historical data, we already know if these active regions flared or not. The learning algorithm then analyzes active region features—such as the size of a sunspot, the strength of its associated magnetic field and the twistedness of these field lines—to identify general characteristics of flaring active regions.

To do this, the algorithm starts by making a guess. Let's say its first guess is that a tiny sunspot with a weak magnetic field will produce a huge flare. Then it checks the answer. Whoops, nope.

The algorithm then tweaks the way that it guesses. The next time around, it'll make a different guess. Through trial and error—in the form of hundreds of thousands of guesses and checks—the algorithm figures out which features correspond to flaring active regions. Now, we have a self-taught algorithm that we can apply to real-time data.
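That guess, check and tweak cycle is essentially what a training loop does. The from-scratch Python sketch below fits a tiny logistic-regression model by gradient descent on synthetic stand-ins for the features mentioned above (spot size, field strength, twist); it illustrates the procedure, not the actual forecasting code.

# From-scratch sketch of the guess/check/tweak cycle described above: a tiny
# logistic-regression model fit by gradient descent on synthetic "active region"
# features (spot size, field strength, twist), not on real SDO data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                       # features for 1000 past regions
true_w = np.array([1.5, 0.2, 1.0])
y = (X @ true_w + rng.normal(size=1000) > 0).astype(float)   # 1 = region flared

w, b, lr = np.zeros(3), 0.0, 0.1
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # guess: P(flare) for each region
    error = p - y                                    # check against what really happened
    w -= lr * X.T @ error / len(y)                   # tweak the weights...
    b -= lr * error.mean()                           # ...and the bias, then repeat

print("learned feature weights:", np.round(w, 2))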

Expanding such efforts could help us provide better notice of impending solar flares. So far, studies have found that machine-learning algorithms forecast flares better than, or at worst just as well as, the numerical or statistical methods. This is kind of a phenomenal result in and of itself.

These algorithms, which run without any human input whatsoever by simply looking for patterns in the data, and which are so general that you can use the same algorithm (on a different data set) to identify genetic disorders, can perform just as well as any other method used thus far to forecast solar flares.

And if we have more data? Who knows. Although we already have tons of data—SDO has been running for four and a half years—there haven't been a ton of flares during that time. That's because we're in the quietest solar cycle of the century. That's all the more reason to continue collecting data and keep the algorithms busy.

SEE ALSO: Neil DeGrasse Tyson: Here's What Everyone Gets Wrong About Solar Flares


KURZWEIL: Human-Level AI Is Coming By 2029


artificial intelligence

When artificial intelligence is as smart as humans, the world will change forever.

While technological change itself is neutral, neither good nor bad, AI's effects on society will be so powerful that they've been described in both utopian and apocalyptic terms.

And some futurists think those changes are just on the horizon.

That includes Ray Kurzweil — author of five books on AI, including the recent best seller "How to Create a Mind," and cofounder of the futurist organization Singularity University. He is currently working with Google to build more machine intelligence into its products.

AI: Coming Soon

In an article he wrote for Time Magazine, Kurzweil says that even though most of the people in the field think we're still several decades away from creating a human-level intelligence, he puts the date at 2029 — less than 15 years away.

Kurzweil argues we are already a human-machine civilization. We already use lower level AI technology to diagnose disease, provide education, and develop new technologies.

"A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago," he writes.

Continued development of AI technology could better provide information and solutions to each individual person on the planet — it could potentially be the thing that designs cancer cures and medications that stop cognitive decline.

Something To Fear?

While Kurzweil thinks the development of human-level AI can happen safely — after all, so far this more informed world hasn't turned on us — not everyone is so sure.

Elon Musk said AI could be the human race's "biggest existential threat" at a recent symposium at MIT. "With artificial intelligence we're summoning the demon," Musk said.

Stephen Hawking agrees. "The development of artificial intelligence could spell the end of the human race," he recently told the BBC.

They fear — as has been theorized by science fiction authors for decades — that once we create something smarter than us we'll no longer be in control of what happens in the world.

So if that new intelligence doesn't like us or thinks we're harmful, it could decide to eliminate us. This wouldn't necessarily be the case, but if it was we probably couldn't stop it.

Of course, in the end, what Kurzweil estimates will happen in 2029 is the creation of a human-level intelligence, which isn't necessarily capable of becoming a force that takes over the world for good or ill.

But as Nick Bostrom, futurist and author of a recent book on AI titled "Superintelligence," notes, just a little bit past the creation of a human-level intelligence "is superhuman-level machine intelligence." Perhaps a machine with supercomputing processing power and human abilities could even upgrade itself in a short period of time.

"The train might not pause or even decelerate at Humanville Station," writes Bostrom. "It is likely to swoosh right by."

But Kurzweil argues that we've already created other things with the potential to destroy the human race, such as nuclear power and biotechnology, and not only are we still here, we're actually living in the most peaceful time in human history.

He thinks that instead of viewing this creation as leading to a potential battle between humanity and a malevolent AI like Skynet, we should view it as something that has the power to elevate humanity, something that will exist in many forms, not just one all-powerful entity.

Avoiding The Robot Apocalypse

Kurzweil thinks we have time now to develop safeguards and to continue to establish a more peaceful and progress-oriented society, which could lead us to develop AI with the same goals, instead of (for example) a militaristic intelligence.

The best thing we can do to avoid a future peril, then, is simply to focus on our own social ideals and human progress — that and carefully build safeguards into the technology as we go.

That's more comforting than the potentially apocalyptic concerns of Musk and Hawking, though even Kurzweil writes that "technology has always been a double-edged sword, since fire kept us warm but also burned down our villages."

And we don't know if Kurzweil's prediction for human-level AI in 2029 will even come to pass.

Bostrom says that most prognosticators have predicted for years that a true AI is just "a couple of decades away," a sweet spot that's far enough to allow for necessary technological innovation but close enough to account for the fact that we think it's coming soon. But really, as he says, we just can't know the date that we'll all of a sudden understand intelligence well enough to re-create and even surpass it.

But we do know one thing, as Bostrom writes in his book: "We will only get one chance."

SEE ALSO: Stephen Hawking: 'Artificial Intelligence Could Spell The End Of The Human Race'

