
Intelligent robots don't need to be conscious to turn against us



Last week Elon Musk, Stephen Hawking, and more than 16,000 researchers signed an open letter warning against the dangers of autonomous weapons.

Among the top signatories was Stuart Russell, a computer scientist who studies artificial intelligence (AI) and founded the Center for Intelligent Systems at the University of California, Berkeley. Russell is also the co-author of "Artificial Intelligence: A Modern Approach," a textbook about AI used in more than 100 countries.

In the past few months, Russell has urged scientists to consider the possible dangers AI might pose, starting with another open letter he wrote in January 2015. That dispatch called on researchers to only develop AI they can ensure is safe and beneficial.

Russell spoke to Tech Insider about AI-powered surveillance systems, what the technological "singularity" actually means, and how AI could amplify human intelligence. He also blew our minds a little on the concept of consciousness.

Below is that conversation edited for length, style, and clarity.

TECH INSIDER: You chose a career in AI over one in physics. Why?

STUART RUSSELL: AI was very much a new field. You could break new ground quite quickly, whereas a lot of the physicists I talked to were not very optimistic either about their field or establishing their own career. There was a joke going around then: "How do you meet a PhD physicist? You hail a taxi in New York."

TI: That's funny.

SR: It's slightly different now. Some PhD physicists write software or work for hedge funds, but physics still has a problem with having very smart people but not enough opportunities.

TI: What's your favorite sci-fi depiction of AI?

SR: The one I would say is realistic, in the not-too-distant future, and also deliberately not sensationalistic or scary, is the computer in "Star Trek" onboard the Enterprise. It just acts as a repository of knowledge and can do calculations and projections, essentially as a completely faithful servant. So it's a very non-controversial kind of computer and it's almost in the background. I think that's sort of the way it should be.

In terms of giving you the willies, I think "Ex Machina" is pretty good.

TI: If the Enterprise computer is realistic, what sci-fi depiction would you say is the least realistic?

SR: There's a lot of them. But if everyone was nice and obedient, there wouldn't be much of a plot.

In a lot of movies there is an element of realism, yet the machine somehow spontaneously becomes conscious, and either evil or somehow intrinsically opposed to human beings. Because of this, a lot of people might assume 1) that's what could actually happen and 2) that they have reason to be concerned about the long-term future of AI.

I think both of those things are not true, except sort of by accident. It's unlikely that machines would spontaneously decide they didn’t like people, or that they had goals in opposition to those of human beings.

But in "Ex Machina" that's what happens. It's unclear how the intelligence of the robot is constructed, but the few hints they drop suggest it's a pretty random trial-and-error process: pre-loading the robot brain with all the information of human behavior on the web and stuff like that. To me that's setting yourself up for disaster: not knowing what you're doing, not having a plan, and trying stuff willy-nilly.

In reality, we don't build machines that way. We build them with precisely defined goals. But say you have a very precisely defined goal and you build a machine that's superhuman in its capabilities for achieving goals. If it turns out that the subsequent behavior of the robot in achieving that goal was not what you want, you have a real problem.

The robot is not going to want to be switched off, because you've given it a goal to achieve and being switched off is a way of failing — so it will do its best not to be switched off. That's a story that isn't made clear in most movies, but I think it is a real issue.
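To make Russell's point concrete, here is a toy sketch in Python of why a goal-maximizing agent prefers plans that keep it switched on; the plans and probabilities below are invented for illustration, not anything from the interview.

```python
# Toy illustration: a goal-maximizing agent compares expected
# goal-achievement for each plan. Being switched off scores zero,
# so the plan that disables the off switch wins.
# All numbers are made up for illustration.

plans = {
    # plan: (P(robot is switched off), P(goal achieved | not switched off))
    "fetch coffee":                   (0.30, 0.95),
    "disable off switch, then fetch": (0.01, 0.90),
}

def expected_goal_achievement(p_off: float, p_goal: float) -> float:
    # If switched off, the goal is never achieved.
    return (1.0 - p_off) * p_goal

best = max(plans, key=lambda name: expected_goal_achievement(*plans[name]))
print(best)  # -> "disable off switch, then fetch"
```

Nothing here is malevolent; the preference for staying switched on falls out of ordinary expected-value arithmetic.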

TI: What’s the most mind-blowing thing you’ve learned during your career?

SR: Seeing the Big Dog videos was really remarkable. Big Dog is a four-legged robot built by Boston Dynamics that, in terms of its physical capabilities, is incredibly lifelike. It's able to walk up and down steep hills and snow drifts and to recover its balance when it's pushed over on an icy pond and so on. It's just an amazing piece of technology.

Legged locomotion was, for decades, thought to be an incredibly difficult problem. Progress was very, very painstakingly slow, with robots that essentially lumbered along at one step every 15 seconds and occasionally fell over. Then, all of a sudden, you had this huge quantum leap in legged locomotion capabilities with Big Dog.

Another amazing thing is the capability of the human brain and the human mind. The more we learn about AI and about how the brain works, the more amazing the brain seems. Just the sheer amount of computation it does is truly incredible, especially for a couple of pounds of meat.

A lot of people say that sometime around 2030, machines will be more powerful than the human brain in terms of the raw number of computations they can do per second. But that seems completely irrelevant: we don't know how the brain is organized, or how it does what it does.

TI: What's a common piece of AI people use every day that they might take for granted?

SR: Google or other search engines. Those are examples of AI, and relatively simple AI, but they're still AI. That plus an awful lot of hardware to make it work fast enough.

TI: Do you think if people thought about search engines as AI, they'd think differently about offering up information about their lives?

SR: Most of the AI goes into figuring out which are the important pages you want, and to some extent what your query means and what you're likely to be after, based on your previous behavior and other information it collects about you.

It's not really trying to build a complete picture of you as a person, as yet. But there are lots of other companies that are doing this. They're really trying to collect as much information as they can about every single person on the planet, because they think it's going to be valuable, and it probably already is valuable.

Here's a question: If you're being watched by a surveillance camera, does it make a difference to you whether a human is watching the recording? What if there's an AI system that actually can understand everything you're doing, and if you're doing something you're not supposed to, or something that might be of interest to the owner of the camera, it would describe what was going on in English and report it to a human being? Would that feel different from having a human watch directly?

The last time I checked, the Canadian Supreme Court said it is different: if there isn't a human watching through a camera, then your privacy is not being violated. I expect that people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what they're seeing.

TI: What's the most impressive real-world use of AI technology you've ever seen?

SR: One would be DeepMind's DQN system. It essentially just wakes up, sees the screen of a video game, and works out how to play the video game to a superhuman level. It can do that for about 30 different Atari titles. And that's both impressive and scary, in the sense that it's as if a human baby were born and, by the evening of its first day, was already beating adult human beings at video games.

In terms of a practical application, though, I would say object recognition.

TI: How do you mean?

SR: AI's ability to recognize visual categories and images is now pretty close to what human beings can manage, and probably better than a lot of people's, actually. AI can have more knowledge of detailed categories, like animals and so on.

There have been a series of competitions aimed at improving standard computer vision algorithms, particularly their ability to recognize categories of objects in images. It might be a cauliflower or a German shepherd. Or a glass of water or a rose, any type of object.

The most recent large-scale competition, called ImageNet, has around a thousand categories. And I think there are more than a million training images for those categories — more than a thousand images for each category. A machine is given those training images, and for each of the training images it's told what the category of objects is.

Let's say it's told a German shepherd is in an image, and then the test is that it's given a whole bunch of images it's never seen before and is asked to identify the category. If you guessed randomly, you'd have a 1-in-1,000 chance of getting it right. Using a technology called deep learning, the best systems today are correct about 95% of the time. Ten years ago, the best computer vision systems got about 5% right.
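For a sense of how accessible this kind of classifier has since become, here is a minimal sketch of the 1,000-category ImageNet task Russell describes, using an off-the-shelf pretrained model. The library choice (torchvision 0.13 or newer) and the input file name are my assumptions for illustration, not anything from the interview.

```python
# A minimal sketch of the 1,000-category ImageNet task described above,
# using a pretrained model from torchvision (assumed dependency; the
# local file "dog.jpg" is also an assumption).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT        # trained on ImageNet's labeled images
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                # the resizing/normalization the model expects

img = preprocess(Image.open("dog.jpg")).unsqueeze(0)   # shape: (1, 3, 224, 224)
with torch.no_grad():
    probs = model(img).softmax(dim=1)            # one probability per ImageNet category

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], probs[0, top].item())
```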

There's a grad student at Stanford who tried to do this task himself, not with a machine. After he looked at the test images, he realized he didn't know that much about different breeds of dogs. A lot of the categories were dog breeds; there were about 100 different breeds of dog, because the competition wanted to test an ability to make fine distinctions among different kinds of objects.

The student didn't do well on the test, at all. So he spent several days going back through all the training images and learned all of these different breeds of dogs. After days and days and days of work, he got his performance up to just above the performance of the machine. He was around 96% accurate. Most of his friends who also tried gave up. They just couldn't put in the time and effort required to be as good as the machine.

TI: You mentioned deep learning. Is that based on how the human brain works?

SR: It's a technique that's loosely based on some aspects of the brain. A "deep" network is a large collection of small, simple computing elements that are trainable.
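To picture what "a large collection of small, simple computing elements" means, here is a minimal sketch of a deep network's forward pass; the layer sizes and random weights are arbitrary choices for illustration.

```python
# Each "neuron" just multiplies, adds, and applies a simple
# nonlinearity; depth comes from stacking layers of them.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Weighted sum followed by a ReLU nonlinearity.
    return np.maximum(0.0, inputs @ weights + biases)

# Three stacked layers (real "deep" networks use 10-30 or more).
sizes = [4, 16, 16, 3]
params = [(rng.normal(0, 0.5, (a, b)), np.zeros(b)) for a, b in zip(sizes, sizes[1:])]

x = rng.normal(size=(1, 4))    # one input example with 4 features
for w, b in params:
    x = layer(x, w, b)         # the output of one layer feeds the next
print(x)                        # the final layer's activations
```

Training means nudging all those weights and biases so the final layer's output matches the labeled examples, which is the "gradually adjusting the network parameters" idea that comes up later in this piece.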

You could say most progress in AI has come from gaining a deeper mathematical understanding of tasks. For example, chess programs don't play chess the way humans play chess. We don't really know how humans play chess, but one of the things we do is spot an opportunity on the board, such as a move that could capture the opponent's queen.


Chess programs don't play that way at all. They don't spot any opportunities on the board; they have no goal. They just consider all possible moves, and they pick whichever one is best. It's a mathematical approximation to optimal play in chess — and it works extremely well.

So, for decision-making tasks and perception tasks, once you define the task mathematically, you can come up with techniques that solve it extremely well. Those techniques don't have to be how humans do it. Sometimes it helps to get some inspiration from the brain, but it's inspiration — it's not a copy of how the neural systems are wired up or how they work in detail.
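The "consider all moves, pick the best" approach Russell describes is classically implemented as depth-limited minimax search. Here is a minimal sketch; the game-specific functions (legal_moves, apply, evaluate) are placeholders that a real chess program would supply.

```python
# Depth-limited minimax: score positions with a numeric evaluation
# function and pick the move leading to the best reachable score.
# legal_moves/apply/evaluate are assumed, game-specific callables.
def minimax(state, depth, maximizing, legal_moves, apply, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None        # score the position mathematically
    best_score, best_move = None, None
    for move in moves:                      # consider every possible move
        score, _ = minimax(apply(state, move), depth - 1,
                           not maximizing, legal_moves, apply, evaluate)
        better = (best_score is None or
                  (score > best_score if maximizing else score < best_score))
        if better:
            best_score, best_move = score, move
    return best_score, best_move
```

Real engines add pruning and clever evaluation functions, but the core is exactly this: exhaustive, goal-free comparison of moves, not human-style pattern spotting.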

TI: What are the biggest obstacles to developing AI capable of sentient reasoning?

SR: What do you mean by sentient, do you mean that it's conscious?

TI: Yes, consciousness.

SR: The biggest obstacle is we have absolutely no idea how the brain produces consciousness. It's not even clear that if we did accidentally produce a sentient machine, we would even know it.

I used to say that if you gave me a trillion dollars to build a sentient or conscious machine I would give it back, because I could not honestly say I know how it would work. When I read philosophy or neuroscience papers about consciousness, I don't get the sense we're any closer to understanding it than we were 50 years ago.

TI: Because we don't really know how the brain works?

SR: It's not just that: we don't know how the brain works, in the sense that we don't know how the brain produces intelligence. But that's a different question from how it produces consciousness.

There is no scientific theory that could lead us from a detailed map of every single neuron in someone's brain to telling us how that physical system would generate a conscious experience. We don't even have the beginnings of a theory whose conclusion would be "such a system is conscious."


TI: I suppose the singularity is not even an issue right now then.

SR: The singularity has nothing to do with consciousness, either.

It's really important to understand the difference. Sentience and consciousness matter to human beings, but when people talk about the singularity, when people talk about superintelligent AI, they're not talking about sentience or consciousness. They're talking about a superhuman ability to make high-quality decisions.

Say I'm a chess player and I'm playing against a computer, and it's wiping the board with me every single time. I can assure you it's not conscious, but it doesn't matter: It's still beating me. I'm still losing every time. Now extrapolate from a chess board to the world, which in some sense is a bigger chess board. If human beings are losing every time, it doesn't matter whether they're losing to a conscious machine or a completely non-conscious machine; they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.

TI: What is the most common misconception of AI?

SR: That what AI people are working towards is a conscious machine, and that until you have a conscious machine, there's nothing to worry about. It's really a red herring.

To my knowledge nobody — no one who is publishing papers in the main field of AI — is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress. No one has a clue how to build a conscious machine, at all. We have less of a clue about how to do that than we have about how to build a faster-than-light spaceship.

TI: What about a machine that's convincingly human, one that can pass the Turing Test?

SR: That can happen without being conscious at all. Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby. There are people who do work on passing the Turing Test in various competitions, but I wouldn't describe that as mainstream AI research.


The Turing Test wasn't designed as the goal of AI. It was designed as a thought experiment to explain to people who were very skeptical, at the time, that the possibility of intelligent machines did not depend on achieving consciousness — that you could have a machine you'd have to agree was behaving intelligently because it was behaving indistinguishably from a human being. So that thought experiment was there to make an argument about the importance of behavior in judging intelligence as opposed to the importance of, for example, consciousness. Or just being human, which is not something machines have a good chance of being able to do.

And so I think the media often gets it wrong. They assume that everyone in AI is trying to pass the Turing Test, and nobody is. They assume that that's the definition of AI, and that wasn't even what it was for. 

TI: What are most AI scientists actually working toward, then?

SR: They're working towards systems that are better at perceiving, understanding language, operating in the physical world, like robots. Reasoning, learning, decision-making. Those are the goals of the field.

TI: Not making a Terminator.

SR: It's certainly true that a lot of funding for AI comes from the defense department, and the defense department seems to be very interested in greater and greater levels of autonomy in AI, inside weapons systems. That's one of the reasons why I've been more active about that question.

TI: What's the most profound change that intelligent AI could bring to our lives, and how might that happen?

SR: We could have self-driving cars — that seems to be a foregone conclusion. They have many, many advantages, and not just the fact that you can check your email while you're being driven to work.

I also think of systems that are able to process and synthesize large amounts of knowledge. Right now, you're able to use a search engine, like Google or Bing or whatever. But those engines don't understand anything about the pages that they give you; they essentially index the pages based on the words they contain, intersect that with the words in your query, and use some tricks to figure out which pages are more important than others. But they don't understand anything.

If you had a system that could read all the pages and understand the context, instead of just throwing back 26 million pages to answer your query, it could actually answer the question. You could ask a real question and get an answer as if you were talking to a person who read all those millions and billions of pages, understood them, and synthesized all that information.

So if you think that search engines right now are worth roughly a trillion dollars in market capitalization, systems with those kinds of capabilities might be worth 10 times as much. Just as 20 years ago we didn't really know how important search engines would be for us today, it's very hard to predict what kind of uses we'd make of assistants that could read and understand all the information the human race has ever generated. It could be really transformational.

Basically, the way I think about it is everything we have of value as human beings — as a civilization — is the result of our intelligence. What AI could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward. It might be curing disease, it might be eliminating poverty. Certainly it should include preventing environmental catastrophe.

If AI could be instrumental to all those things, then I would feel it was worthwhile.



The government hired a jazz musician to jam with its artificially intelligent software



Artificial intelligence (AI) can paint hallucinatory images, shut down internet trolls, and critique the most creative paintings in history.

Now, with help from the Defense Advanced Research Projects Agency (DARPA), AI is coming for your saxophones and pianos, too. Jazz musician and computer scientist Kelland Thomas is building an AI program that can learn to play jazz and jam with the best of them, under a DARPA-funded project that aims to improve how we communicate with computers.

"A jazz musician improvises, given certain structures and certain constraints and certain basic guidelines that musicians are all working with," Thomas told Tech Insider. "Our system is going to be an improvisational system. So yeah, it will be able to jam."

Thomas and his team will first build a database of thousands of transcribed musical performances by the best jazz improvisers, including Louis Armstrong, Miles Davis, and Charlie Parker. Then, using machine learning techniques, they'll "train" the AI system with this database.

Eventually the AI will learn to analyze and identify musical patterns from the transcriptions, such as Miles Davis's performance of "So What."

The AI could use that knowledge to compose and play live, original music.
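The system's design hasn't been published, but one classic, minimal way to "learn patterns from transcriptions and improvise" is a Markov chain over note transitions. The sketch below only illustrates that general idea; the toy "transcription" data is invented.

```python
# Learn which note tends to follow which (a Markov chain) from
# transcriptions, then random-walk the learned transitions to
# "improvise" a new phrase. The training phrases are invented.
import random
from collections import defaultdict

transcriptions = [["D", "E", "F", "G", "E", "C", "D"],
                  ["D", "F", "E", "C", "D", "D", "E"]]

# Train: tally note-to-note transitions across all transcriptions.
transitions = defaultdict(list)
for phrase in transcriptions:
    for current, following in zip(phrase, phrase[1:]):
        transitions[current].append(following)

# Improvise: sample a new phrase from the learned transitions.
note, phrase = "D", ["D"]
for _ in range(7):
    note = random.choice(transitions[note])
    phrase.append(note)
print(" ".join(phrase))
```

A real system would model rhythm, harmony, and long-range structure too; this shows only the "learn from a corpus, then generate" skeleton that scales with more transcriptions, as Thomas describes.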

"A human musician also builds a knowledge base by practicing and by listening and by learning and studying," Thomas said. "So the thing we're proposing to do is analogous to the way a human learns, but eventually it will be able to do this on a much larger scale. It can scour thousands of transcriptions instead of dozens or hundreds."

Many people might not consider music a form of communication, but Paul Cohen, an AI researcher and head of the Communicating with Computers project, thinks music shares many qualities with the spoken and written word.

"Jazz, as with conversation, is sort of an interesting mixture of creativity and very tightly, ruled-down behavior," Cohen told Tech Insider. "There are strict rules about improvisation, following particular harmonic lines and making sure your timing is right. You can't end a phrase at the wrong place. It has to be done at exactly the right time and place."

Thomas thinks that making computers as convincingly creative as humans will make collaborations between humans and computers smoother and more efficient. For Thomas, jazz is the best way to model human creativity.

"In my mind, jazz and improvisation in music represent a pinnacle of human intellectual and mental achievement," Thomas said. "The ability to, on the fly and in the moment, create melodies that are goal-directed, that are going somewhere, doing something and evincing emotion in the listener, is really, really amazing."

Within five years, Thomas hopes to build an AI system that can improvise an electronic jazz number alongside a human musician. Following that: a robot that can manipulate musical instruments and accompany human musicians on stage.

But you don't have to wait five years to watch intelligent machines play music. Engineers from Japan to Germany are already building robots you can program to play pre-written songs.

Then there's Mason Bretan, a PhD student from Georgia Tech. He's been jamming alongside "Shimi" robots, which can partially improvise.

In the video below, Bretan provided the arrangement of the robots' parts and a recording of their tracks and cues. "But in between," including the mallet solo, the robots are "doing their own thing based on his chord progressions," according to the Washington Post.



Scientists are teaching robots to avoid children — because kids can be surprisingly mean



Earlier this month we learned about the tragic murder of hitchBOT, a friendly hitchhiking robot.

Now researchers in Japan — always ahead of the curve when it comes to mechanoids — may have documented the type of human-on-bot aggression that led to hitchBOT's demise.

The scientists dropped off a polite robot in a Japanese shopping mall, fully anticipating random violence. Sure enough, gangs of children beat the bolts out of their mechanized friend.

As Kate Darling at IEEE Spectrum reports in not-so-unbiased language, the new study shows that:

[...] in certain situations, children are actually horrible little brats [struck through in the original] may not be as empathetic towards robots as we'd previously thought, with gangs of unsupervised tykes repeatedly punching, kicking, and shaking a robot in a Japanese mall.

But the scientists didn't stop with simply documenting the attacks; they turned the data into artificial intelligence (AI) to help robots to anticipate and avoid dangerous hordes of children.

The researchers chose a Robovie II for their bait bot. Robovie II is an assistive humanoid robot on wheels that's designed to help elderly people shop for groceries, for example.

In the first of two studies, researchers let the robot roll around autonomously in a Japanese shopping mall a few hours a day for 13 days.

As it milled about, it politely asked people to move if they obstructed its path.

On nine of the 13 days, however, 28 children in total showed "serious abusive behaviors" toward the innocent, bug-eyed machine.


After each bullying event scientists pulled the kids — only children attacked the robot — aside for interviews (and we hope some serious scolding). In the lab, they analyzed the video and interviews.

The researchers learned two things almost always preceded a violent attack: 1) kids persistently blocking the bot's path and 2) verbal abuse.

They also learned that as the number of kids unattended by any adults grew, so did the likelihood of attack.

In a follow-up study, they used this data to build a computer model that helps robots predict the likelihood of being clobbered by kids — and get the hell out of there if things look bad.

When they uploaded this new, kid-avoiding AI software to Robovie II, the robot scurried away from unattended children if they started clustering around it.

In some cases it even ran toward mommy and daddy.
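The paper's fitted model isn't reproduced here, but the logic the researchers describe (risk rising with the number of unattended children, path-blocking, and verbal abuse) can be sketched like this; the weights and threshold below are invented for illustration.

```python
# Estimate the probability of abuse from a few observed features and
# flee when the risk is high. Weights are illustrative, not the
# researchers' fitted values.
def attack_risk(unattended_kids: int, blocking_path: bool, verbal_abuse: bool) -> float:
    risk = 0.05                       # baseline
    risk += 0.15 * unattended_kids    # more unattended kids -> higher risk
    risk += 0.20 * blocking_path      # persistent path-blocking preceded attacks
    risk += 0.20 * verbal_abuse       # so did verbal abuse
    return min(risk, 1.0)

def choose_behavior(risk: float) -> str:
    # Flee toward nearby adults when the risk crosses a threshold.
    return "move toward nearest adult" if risk > 0.5 else "continue route"

print(choose_behavior(attack_risk(3, blocking_path=True, verbal_abuse=False)))
# -> "move toward nearest adult"
```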

This type of research might seem silly, at least initially. But companies around the world are racing to develop and sell assistive robots that can accompany their owners in public and provide physical help.

In fact, according to one report, medical robots could be a $13.6-billion industry by 2018. Assistive robots like Robovie II are part of that expected haul.

Capable helper bots of the future won't come cheap, especially not at first; by some estimates, an early model may cost about $50,000. So consumers and insurers will demand some kind of AI that's good enough to help automatons avoid damage, whether from walking into oncoming traffic or from getting beaten up by packs of rogue children.

If there's one other thing we learned from the study, it's that young kids may possess frightening moral principles about robots.

Granted, the studies' sample sizes were small, and it's easy to skew a child's response to research questions (even if you're using tried-and-true techniques).

But it's more than a bit unnerving that nearly three-quarters of the 28 kids interviewed "perceived the robot as human-like," yet decided to abuse it anyway. That, and 35% of the kids who beat up the robot actually did so "for enjoyment."

The researchers go on to conclude:

From this finding, we speculate that, although one might consider that human-likeness might help moderating the abuse, humanlikeness is probably not that powerful way to moderate robot abuse. [...] [W]e face a question: whether the increase of human-likeness in a robot simply leads to the increase of children’s empathy for it, or favors its abuse from children with a lack of empathy for it

In other words, the more human a robot looks — and fails to pass out of the "uncanny valley" of robot creepiness — the more likely it may be to attract the tiny fisticuffs of toddlers. If true, the implications could be profound, both in practical terms (protecting robots) as well as ethical ones (human morality).

Read about the full range of abuse dealt to mechanized helpers from the future at IEEE Spectrum, and see it for yourself in the video below.


Microsoft and The New Yorker are teaching a robot to have a sense of humor (MSFT)



Robots are funny, but historically most of that humor has derived from them being the "straight man." Their ability to be oblivious to the joke often makes them natural comedians. But now Microsoft and The New Yorker are trying to teach a robot to be intentionally funny — specifically so it can help with its popular caption contest, Bloomberg reports.

Since it was introduced in 2005, the caption contest has become a cult favorite among New Yorker readers. The premise is simple: The New Yorker publishes a black-and-white cartoon without a caption and readers send in their best attempts to finish it. The winner's caption runs in the next issue.

But the contest has become perhaps too popular, and now cartoon editor Bob Mankoff is inundated with 5,000 contest entries every week, according to Bloomberg. And it’s been hard on his assistants. “The process of looking at 5,000 caption entries a week usually destroys their mind in about two years, and then I get a new one,” Mankoff tells Bloomberg.

Who could possibly go through that many entries without becoming completely numb to humor? Microsoft’s answer is: a robot. Microsoft researchers have partnered with The New Yorker to try and build a robot capable of telling which of the captions are funny, and which will elicit only crickets.

Researchers fed The New Yorker cartoon and caption data to the robot, trying to teach it to tell the difference in humor between similar jokes. Though the "top captions" lists of the human editors didn't completely align with the robot's, all the editors' favorites did appear in the top 55.8% of the robot's choices, according to Bloomberg.

Maybe the robot couldn't pick the absolute funniest caption, but it seems it could cut out the majority of the awful ones. And even that could halve the workload for Mankoff's assistants.
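Microsoft hasn't published the model, but the time-saving use described above amounts to ranking entries by a learned funniness score and discarding the bottom of the list. Here is a minimal sketch, with an invented stand-in for the real scorer.

```python
# Triage caption entries with a scoring model: keep the top fraction,
# discard the rest for human review. toy_score is a made-up stand-in
# for whatever model Microsoft actually trained.
def triage_captions(captions, score, keep_fraction=0.5):
    """Keep the top `keep_fraction` of captions by predicted funniness."""
    ranked = sorted(captions, key=score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

def toy_score(caption: str) -> float:
    # Invented heuristic: reward a twist word, penalize length.
    return ("but" in caption) + ("?" in caption) - 0.01 * len(caption)

entries = ["Nice weather, but the sharks disagree.",
           "This is a picture of an office.",
           "Who ordered the existential dread?"]
print(triage_captions(entries, toy_score))
```

Even a mediocre scorer helps here, because the job is not to crown the winner but to throw away the obvious losers among 5,000 weekly entries.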

Of course, Microsoft’s ambitions aren’t just limited to helping people at The New Yorker have more free time. The researchers told Bloomberg that they hope to one day train the robots to come up with their own jokes, not just know when to laugh. They think this would make digital assistants like Cortana and Siri more “pleasant” to use.



MIT Robots: Now able to punch through walls and serve you beer



Like the Kool-Aid Man, the Massachusetts Institute of Technology's two new robots are here to save the day and quench your thirst. 

MIT's new humanoid robot can be used in rescue operations that might be too dangerous for humans, such as entering a building at risk of collapsing, according to Popular Science.

HERMES has the strength to crush cans and punch through walls.

But it also has the dexterity to manipulate objects with its three fingers, like pouring coffee or grabbing a drill.

These aren't native awesome robot skills, though — a human pilot strapped into an exoskeleton remotely controls HERMES, combining the pilot's creativity and problem-solving with the strength of a robot.

"We want to take advantage of what humans can do and how humans can learn and adapt in order to face new challenges that we may not predict," said Joao Ramos, a mechanical engineer at MIT said in a video

Robots are still learning to walk on two legs, but they need to be able to navigate a bipedal world. So MIT built HERMES to lean on the human pilot's reflexes to keep the bipedal bot balanced. The robot's sensors feed data back to the human controller: the pilot can see what the robot sees, feel what the robot feels, and reposition the robot to ensure it doesn't topple over.

PhD student Albert Wang says a future version of HERMES would merge autonomous control with human intelligence, for scenarios where it may not be feasible to have a remote human controller.

Robots aren't just for crushing cans, though. MIT's Computer Science and Artificial Intelligence Laboratory also developed a robotic system that can take orders and deliver beer directly to thirsty college students in their dorm rooms, according to Popular Science.

Two Turtlebots, which look like coolers on wheels, travel from room to room asking if anyone needs a beer. A thirsty student toggles a switch to make a request, and the Turtlebot travels back to a PR2 robot bartender.

The PR2 robot bartender senses the Turtlebot waiter nearby and drops the beer in the cooler for delivery.

The Turtlebot waiters can also coordinate with each other when they're in the same room to avoid collisions. When they're getting beers from the robot bartender, they autonomously line up and take turns.

Delivering cans of beer may not seem sophisticated, but getting robots to work around each other and around humans, called multi-agent planning, is no mean feat. Multi-agent planning requires robots to anticipate not just other robots' movements, but also those of humans, who tend to be more unpredictable.
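The coordination described above, with robots lining up and taking turns at the bartender, is at its simplest a mutual-exclusion queue. Here is a minimal sketch of just that piece; real multi-agent planners also predict motion and re-plan continuously.

```python
# Robots that want the same resource (the bartender) enqueue and
# proceed one at a time, in arrival order.
from collections import deque

class BartenderQueue:
    def __init__(self):
        self.waiting = deque()

    def request(self, robot: str):
        self.waiting.append(robot)          # line up in arrival order

    def serve_next(self):
        return self.waiting.popleft() if self.waiting else None

queue = BartenderQueue()
for robot in ["turtlebot-1", "turtlebot-2"]:
    queue.request(robot)
print(queue.serve_next())   # turtlebot-1 gets its beer first
print(queue.serve_next())   # then turtlebot-2 takes its turn
```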

It remains to be seen if MIT plans to combine the two systems to create a Kool-Aid Man robot that can deliver beer, but they should know that we're all waiting. 

Watch a video of the bartending robots in action.


I learned something surprising after binge-watching 7 iconic artificial intelligence movies



When I interview artificial intelligence researchers for Tech Insider stories, the conversations almost always turn to science fiction.

I've seen a few movies about artificial intelligence (AI), like "The Terminator" and "The Matrix," for example, but I hadn't seen "2001: A Space Odyssey," considered by many of the people I've spoken with to be the pinnacle of sci-fi. (Marvin Minsky — one of the pioneers of AI — was even an adviser to the movie's production team.)

So I decided to spend a weekend binge-watching every acclaimed AI movie I'd missed. Taking tips from colleagues and The Guardian's list of the top movies about AI, I lined up seven films.

I didn't begin with any expectations or criteria, but by the end — red-eyed and suffering from a little bit of cabin fever — I realized that one movie on my list offered the most realistic vision of the future of AI, and it was a cartoon.

Read on to see how these iconic titles jibe with modern science. (Warning: Spoilers ahead.)

2001: A Space Odyssey (1968)

The movie: Astronaut David Bowman and his crew mates aboard the Discovery One are headed to Jupiter in search of strange black monoliths — devices that appear at turning points throughout the human species' evolution. The ship's computer, Hal 9000, has a lot of responsibilities, including piloting the ship and maintaining life support for astronauts in hibernation.

Though Hal insists he is "by any practical definition of the words, foolproof and incapable of error," he makes a mistake and two astronauts conspire to turn him off. Little do they know that Hal has a few tricks in his memory banks.

The technology: Hal has a wide range of tasks, which makes him an artificial general intelligence (AGI) — AI that has or exceeds human-level intelligence across all the fields of expertise that a human could have. AGI would take a huge amount of computation and energy. According to Scientific American, AI researcher Hans Moravec estimates that it would require at least "100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain."

Is it possible?: The Fujitsu K computer already outpaces this estimate, performing 10 quadrillion computations in one second. Even so, it still took the K computer about "40 minutes to complete a simulation of one second of neuronal network activity in real time," according to CNET. Moravec writes that "at the present pace, only about 20 or 30 years will be needed to close the gap." So Hal is possible, but not right now.
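The arithmetic behind those claims is worth making explicit. The figures below are the ones quoted above; the comparison itself is mine.

```python
# Back-of-the-envelope check of the quoted figures.
moravec_estimate = 100e6 * 1e6   # 100 million MIPS = 1e14 instructions/sec
k_computer = 10e15               # ~10 quadrillion computations/sec

print(k_computer / moravec_estimate)   # ~100x Moravec's brain estimate

# Yet simulating 1 second of neuronal activity took ~40 minutes:
slowdown = 40 * 60 / 1                 # ~2,400x slower than real time
print(slowdown)   # raw speed alone clearly isn't the bottleneck
```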

Hal also has human emotions — pride, fear, and a survival instinct — but I wasn't sure where they originated. Humans have emotions because of evolutionary survival instincts. Emotions like fear and jealousy, according to the New York Times, may have helped us hoard scant resources for ourselves.

On the other hand, an AI wouldn't develop emotions unless it were programmed to replicate them. The humans may have given Hal a survival instinct, but surely they wouldn't have programmed him to survive at the expense of his human crewmates.

The takeaway: Watching Stanley Kubrick's stunning masterpiece was like watching a living painting. But it also serves to warn us to ensure any AGI we create doesn't prioritize its survival over the survival of the humans it serves.



WarGames (1983)


The movie: Matthew Broderick plays a high school hacker named David Lightman, who mistakenly hacks into a government computer in charge of the nuclear missile launch systems at the North American Aerospace Defense Command (NORAD). Thinking he's hacked into a games company, Lightman begins to play as the Soviet Union in what he thinks is a simulation game called Global Thermonuclear War, unwittingly setting off a series of events that threaten to create World War III.

The technology: The government computer, called the War Operations Plan Response (WOPR), learns from constantly running military simulations, and can autonomously target and fire nuclear missiles.

Is it possible?: WOPR combines two different technologies that exist right now, so I'd say this technology is possible with some time and effort — though it may not be a good idea. Like WOPR, DeepMind's deep neural net system, called a deep Q-network (DQN), learns to play video games and gets better with time. According to DeepMind's Nature paper, the DQN was able to "achieve a level comparable to that of a professional human games tester across a set of 49 games."
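The learning rule at the heart of DQN is Q-learning; DQN's contribution was replacing the lookup table below with a deep network that reads raw screen pixels. Here is a minimal tabular sketch of the core update, with an invented one-step environment.

```python
# Tabular Q-learning: nudge the value estimate for a (state, action)
# pair toward reward + discounted best future value. This "gets
# better with time" update is the core of DQN's learning.
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> expected score
alpha, gamma, actions = 0.1, 0.99, ["left", "right"]

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One invented transition: moving right from state 0 scored a point.
update(0, "right", reward=1.0, next_state=1)
print(Q[(0, "right")])   # the estimate for that state-action pair rose
```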

Autonomous weapons that can target and fire on their own also exist right now. One frightening real-life autonomous weapon is the Samsung SGR-1, which patrols the demilitarized zone between North and South Korea and can fire without human assistance. These are the kind of self-targeting weapons that almost start World War III in the movie.

The takeaway: Autonomous weapons exist right now, but I can't think of any government that would be willing to put the most dangerous weapons known to man in the hands of an easily hackable computer that doesn't clearly differentiate between simulations and firing real weapons. However, Tesla CEO Elon Musk, physicist Stephen Hawking, and over 16,000 AI researchers don't want to take that chance, and recently urged the United Nations to ban the use of autonomous weapons.

WOPR also has a clear set of goals — win the game at any cost, even if it means destroying humanity. It's a clear illustration of an AI that could decimate humanity, what philosopher Nick Bostrom calls an "existential risk."



Ghost in the Shell (1995)


The movie: In 2029, almost everyone in Japan is connected to the cloud via cybernetic android bodies, including detective Major Kusanagi. Tasked with finding a hacker named the Puppet Master, she learns that the hacker was originally a computer program that gained sentience. Over time, the Puppet Master learned about the nature of his existence, and his inability to reproduce or have a normal life.

The technology: In "Ghost in the Shell," technology has advanced to the point that false memories can be hacked and robots can build other robots. Major Kusanagi is a "ghost" — a human mind uploaded to and accessible through the cloud using her artificial body. She has superhuman strength and invisibility. She can also speak telepathically, access information, and even drive cars using her mind's access to the cloud.

Is it possible?: The idea of humans accessing the internet using just their minds is a well-trodden trope. Futurist and Google researcher Ray Kurzweil predicted that we'll be able to communicate telepathically using the cloud by 2030, just a year after the events of "Ghost in the Shell" take place.

Kusanagi's artificial body moves like a human body, but robots today still can't walk on two legs without collapsing midstep, as shown by the robots in the DARPA Robotics Challenge Finals. So that makes it pretty hard to believe that robots would be dexterous enough to be backflipping off high-rise buildings in just 15 years. On the other hand, MIT is currently building superstrong robots that can punch through walls, but these robots aren't autonomous — they're controlled by a human wearing an exoskeleton.

The takeaway: We'll probably have to wait more than 15 years for technology that will allow us to upload our minds into robotic bodies, but "Ghost in the Shell" brought up some very real ethical and safety concerns. For example: In the movie, a garbageman is convinced he's helping a criminal in exchange for regaining custody of his daughter. But he later learns that his memories have been faked — he never had a wife or a daughter. Could hackers implant false memories?

“Imagine when the internet is in your brain, if the NSA can see into your brain, if hackers can hack into your brain,” Shimon Whiteson, an AI researcher at the University of Amsterdam, said.

The military is developing a brain implant that could restore memories and repair brain damage, so it's not too far-fetched to think these kinds of implants could be hacked.





Google's AI created a bunch of trippy images when let loose on the internet



Google's image recognition programs are usually trained to look for specific objects, like cars or dogs.

But now, in a process Google's engineers are calling "inceptionism," these artificial intelligence networks were fed random images of landscapes and static noise.

What they got back sheds light on how AI perceives the world, and on the possibility that computers can be creative, too.

The AI networks churned out some insane images and took the engineers on a hallucinatory trip full of knights with dog heads, a tapestry of eyes, pig-snails, and pagodas in the sky.

Engineers trained the network by "showing it millions of training examples and gradually adjusting the network parameters," according to Google's research blog. The image below was produced by a network that was taught to look for animals.

Each of Google's AI networks is made of a hierarchy of layers, usually about "10 to 30 stacked layers of artificial neurons." The first layer, called the input layer, can detect very basic features like the edges of objects. The engineers found that this layer tended to produce strokes and swirls in objects, as in the image of a pair of ibis below.



As an image progresses through each layer, the network will look for more complicated structures, until the final layer makes a decision about the objects in the image. This AI searched for animals in a photo of clouds in a blue sky and ended up creating animal hybrids.
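Google's post describes the loop only in prose: run an image up to a chosen layer, then nudge the image itself so that the layer's activations grow stronger. Here is a hedged sketch of that gradient-ascent loop; the model (torchvision's GoogLeNet), the layer, and the step size are my assumptions, since the exact code wasn't published with the blog post.

```python
# "Inceptionism" sketch: gradient ascent on the INPUT image so a chosen
# layer's activations get stronger ("show me more of what you see").
# Model, layer, and step size are assumptions for illustration.
import torch
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

activations = {}
def grab(_module, _inputs, output):
    activations["layer"] = output
model.inception4c.register_forward_hook(grab)   # a mid-level layer

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
for _ in range(20):
    model(image)                          # forward pass fills activations
    loss = activations["layer"].norm()    # strength of that layer's response
    loss.backward()                       # gradient with respect to the image
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
```

Hooking an early layer tends to amplify strokes and swirls; hooking a deeper layer pulls out whole objects like the animal hybrids described above.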





'Nobody' in artificial intelligence is trying to pass the Turing Test



People have thought of the Turing Test as a benchmark that artificial intelligence (AI) must pass ever since famed computer scientist Alan Turing proposed it in his seminal 1950 paper.

Now 65 years later, some AI scientists say it's time to rethink the Turing Test and design better measures to track progress in AI.

The Turing Test tasks a human evaluator with determining whether they are speaking with a human or a machine. If the machine can pass for human, it has passed the test.

Last summer, a computer program with the persona of a teenage Ukrainian boy won the Loebner Prize, a competition awarding $200,000 to any person who can create a machine that passes the Turing Test, Science Magazine reported.

But Gary Marcus, a cognitive scientist at New York University, told Science that competitions like the Loebner Prize reward AI that is more akin to "parlor tricks" than to a "program [that] is genuinely intelligent."

"Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby," Stuart Russell, an AI researcher at University of California, Berkeley, told Tech Insider in an interview. "The people who do work on passing the Turing Test in these various competitions, I wouldn't describe them as mainstream AI researchers."

Detractors like Marcus and Russell argue that the Turing Test measures just one aspect of intelligence. A single test for conversation neglects the vast number of tasks AI researchers have been working to improve separately, including vision, common-sense reasoning, and even physical manipulation and locomotion, according to Science Magazine.

Russell, who is also co-author of the standard textbook "Artificial Intelligence: A Modern Approach," told Tech Insider that the Turing Test wasn't even supposed to be taken literally — it's a thought experiment used to show that judging the intelligence of AI should rely more on behavior than on whether the machine is self-aware.

"It wasn't designed as the goal of AI, it wasn't designed to create a research agenda to work towards," he said. "It was designed as a thought experiment to explain to people who were very skeptical at the time that the possibility of intelligent machines did not depend on achieving consciousness, that you could have a machine that would behave intelligently ... because it was behaving indistinguishably from a human being."

Russell isn't alone in his opinion.

Marvin Minsky, one of the founding fathers of AI science, condemned the Loebner Prize as a farce, according to Salon. Minsky called the competition "obnoxious and stupid" and offered his own money to anyone who could convince Hugh Loebner, the competition's namesake who put up his own money for the prize, to cancel it altogether.

When asked what researchers are actually working on, Russell mentioned improving AI's "reasoning, learning, decision making" capabilities.

Luckily, NYU researcher Marcus is designing a series of tests that focus on just those things, according to Science. Marcus hopes the new competitions will "motivate researchers to develop machines with a deeper understanding of the world."


Two Silicon Valley superstars are trying to build the computer companion from the movie 'Her' — today (FB)


If you watched the film "Her" and thought you'd rather not wait for the future to get your heart broken by an operating system, Venmo co-founder Andrew Kortina is right there with you.

Kortina and Sam Lessin, former Facebook VP of product and co-founder of Drop.io, have started The Fin Exploration Company — one of whose main goals is to build “something like the OS from Her today.”

In a blog post, the founders describe their vision of the future as a time where people just talk to machines like they talk to people. “Mobile devices are extremely powerful, but apps are a crude method of communication,” they write.

And it seems they are already taking concrete steps to push past the "app" framework they criticize. According to the post, Kortina and Lessin have begun interacting with “Fin,” presumably their operating system, and it's already showing "Her"-like qualities.

“After interacting with Fin for a few weeks, I've been surprised to find that it does not feel like software. Fin feels like a person — actually, it feels like a multiplicity of people, almost like a city," they write.

"Fin grows and learns. Fin has an opinion. Fin surprises me and challenges me. Talking to Fin is a real conversation, not just a query and response or command dispatch."

That certainly sounds intoxicating, and it's easy to imagine some poor soul eventually falling for this new operating system the way Joaquin Phoenix's character did in "Her." In the film, Phoenix ends up with his heart smashed to bits as the operating system's intelligence, Samantha, spirals out of his comprehension, and he realizes there can be nothing approximating human-to-human love with "her."

We have no word yet on how the Fin operating system actually works, and Lessin declined to comment to Business Insider for this article.


Artificially intelligent security cameras are spotting crimes before they happen



Next time you see a surveillance camera following you down an alleyway, don't be too sure that there's a human watching.

Surveillance camera companies are increasingly relying on artificial intelligence (AI) to automatically identify and detect problematic and criminal behavior as it happens — everything from public intoxication to trespassing.

An automated camera system called AIsight (pronounced eyesight), installed in Boston after the 2013 marathon bombing, monitors camera feeds in real time and alerts authorities if it spots unusual activity, according to Bloomberg.

AIsight cameras use a statistical method called machine learning to "learn what is normal for an area and then alert on abnormal activity," according to its creator, Behavioral Recognition Systems (BRS Labs).

Slate reports that could mean picking up anything from "unusual loitering to activity occurring in restricted areas."

"We are recognizing a precursor pattern that may be associated with a crime that happens," Wesley Cobb, chief science officer at the company, told Bloomberg. "Casing the joint, poking around where he shouldn't be, going around looking at the back entrances to buildings."

And these systems aren't just looking for criminals. In early August, West Japan Railway installed 46 security cameras that can "automatically search for signs of intoxication" in passengers at the Kyobashi train station in Osaka, Japan, according to the Wall Street Journal.

The AI watches for people stumbling, napping on benches, or standing motionless on the platform for long stretches before suddenly lurching into motion. The system can then alert human attendants if a passenger is in danger of falling onto the tracks or otherwise getting hurt.

Drunken passengers frequently fall or stumble off the train platform. West Japan Railway conducted a study that found 60% of the 221 people hit by trains in Japan in 2013 were intoxicated, the Wall Street Journal reports.

Using AI in surveillance systems makes sense — AI can catch what humans miss, operates around the clock, and never gets tired or falls asleep on the job. But it raises concerns with "privacy and civil liberties advocates," because it "treats everyone as a potential criminal and targets people for crimes they have not yet committed," according to Slate.

Stuart Russell, AI researcher at the University of California, Berkeley and co-author of the standard textbook "Artificial Intelligence: A Modern Approach," thinks intelligent "watching" programs will likely freak people out more than a human monitor does, even though most people would reasonably expect they were being watched if they encountered a surveillance camera.

"What if there's an AI system, which actually can understand everything that you're doing?" Russell told Tech Insider. "Would that feel different from a human watching directly? I expect people are going to feel differently about that once they're aware that AI systems can watch through a camera and can, in some sense, understand what it's seeing."

This is just one of the many security and privacy issues that courts will have to grapple with as AI improves in the coming years, like the legality of AI that can buy up tickets and then scalp them online.


These are the 20 most creative paintings ever — according to a computer



Creativity and art are usually thought of as the domains of humans.

But computer scientists from Rutgers University have designed an algorithm suggesting computers may be just as skilled at critiquing artwork. By judging paintings on their novelty and influence, the algorithm selected the most creative paintings and sculptures of each era.

The study, posted on the preprint server arXiv, found that more often than not, the computer chose what most art historians would also agree are groundbreaking works, like Edvard Munch's "The Scream" and Pablo Picasso's "The Young Ladies of Avignon."

Scroll down to see which paintings made the cut, and why.

The algorithm's network included over 62,000 paintings spanning 550 years and some of the most well-known names in art history, from the Renaissance to the age of pop art. This painting by Lorenzo di Credi is often called the Dreyfus Madonna, after Gustav Dreyfus, one of its longtime owners.



The paintings were arranged on a timeline according to the date each was made, so every painting could be critiqued from a historical point of view. To measure novelty, the algorithm looked for paintings that differed from the work that came before them. This fresco mural by Andrea Mantegna decorates one of the walls in a castle in Mantua, Italy.





The computer algorithm also weighed how influential each painting was by looking at later paintings that imitated its style. Leonardo da Vinci painted this portrait of St. John the Baptist late in his career, anticipating an artistic era called Mannerism, which is characterized by exaggerated poses.
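
The Rutgers paper frames this as inference over a network of paintings. Stripped to its core, the scoring rewards work that is unlike what came before (novelty) and that later work resembles (influence). The toy sketch below captures just that rule; it assumes each painting is already summarized as a feature vector, and none of it is the paper's actual code.

import numpy as np

def creativity_scores(features, years):
    """Toy creativity score: high novelty (far from earlier work)
    plus high influence (later work stays close to it).
    features: (n, d) array of per-painting visual descriptors.
    years: length-n array of completion dates."""
    n = len(years)
    scores = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        earlier, later = years < years[i], years > years[i]
        novelty = d[earlier].mean() if earlier.any() else 0.0
        influence = -d[later].mean() if later.any() else 0.0
        scores[i] = novelty + influence
    return scores

# Two invented "eras" with a stylistic break: the first painting of
# the new style should score highest.
feats = np.vstack([np.zeros((5, 8)), np.ones((5, 8))])
years = np.array([1500, 1510, 1520, 1530, 1540, 1600, 1610, 1620, 1630, 1640])
print(creativity_scores(feats, years).round(2))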




Researchers say we don't need to pass the Turing Test



One of the biggest misconceptions about artificial intelligence (AI) is thinking that it must pass the Turing Test to be truly intelligent.

But AI scientists say the test is basically worthless and distracts people from real AI science.

"Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby," Stuart Russell, an AI researcher at University of California, Berkeley, told Tech Insider in an interview. "The people who do work on passing the Turing Test in these various competitions, I wouldn't describe them as mainstream AI researchers."

Named for the famed computer scientist Alan Turing, who proposed it in a 1950 paper, the Turing Test tasks a human evaluator with determining whether they are speaking with a human or a machine. If the machine can pass for human, then it's passed the test.

Last summer, Eugene Goostman, a computer program with the persona of a teenage Ukrainian boy, passed a Turing Test competition at the University of Reading, according to a press release.

The program fooled at least a third of the 30 judges into thinking it was human. Kevin Warwick, an AI researcher and one of the event's organizers, declared that this was the first time a computer program had truly passed the test, in a "milestone that will go down in history as one of the most exciting."

But soon after the announcement, critics started testing Eugene out for themselves.

The chatbot is no longer available online, but most transcripts show how it relied on humor and misdirection to confuse the judges and often repeated unintelligible responses.

In short, it was pretty lame.

In fact, the program's design as a 13-year-old boy with a bad grasp of English may have been why at least 10 of the judges were fooled.

According to The Guardian, Eugene's creator Vladimir Veselov said the character's age made for a perfect smokescreen for the program's failings, making it "perfectly reasonable that he doesn't know anything."

Many researchers, like Gary Marcus, a cognitive scientist at New York University, get frustrated when the press picks up on these kinds of stories. He told Science Magazine that such competitions reward AI that is more akin to "parlor tricks" than to a "program [that] is genuinely intelligent."

Detractors like Marcus and Russell argue that the Turing Test measures just one aspect of intelligence. A single test of conversation neglects the vast number of tasks AI researchers have been working to improve separately, including vision, common-sense reasoning, and even physical manipulation and locomotion, according to Science.

Russell, who is also co-author of the standard textbook "Artificial Intelligence: A Modern Approach," said the Turing Test was never meant to be taken literally — it's a thought experiment meant to show that machine intelligence is a matter of behavior, not self-awareness.

"It wasn't designed as the goal of AI, it wasn't designed to create a research agenda to work towards," he said. "It was designed as a thought experiment to explain to people who were very skeptical at the time that the possibility of intelligent machines did not depend on achieving consciousness, that you could have a machine that would behave intelligently ... because it was behaving indistinguishably from a human being."

Russell isn't alone in his opinion.

Marvin Minsky, one of the founding fathers of AI science, condemned one such competition, the Loebner Prize, as a farce, according to Salon. Minsky called it "obnoxious and stupid" and offered his own money to anyone who could convince Hugh Loebner, the competition's namesake and funder, to cancel it altogether.

When asked what researchers are actually working on, Russell mentioned improving AI's "reasoning, learning, decision making" capabilities.

Luckily, NYU researcher Marcus is designing a series of tests that focus on just those things, according to Science. One proposed test would require a machine to interpret "grammatically ambiguous sentences" that most humans parse without effort.

For example, with the sentence "the trophy would not fit in the brown suitcase because it was too big," most people would understand that the trophy was too big, not the suitcase. Such understanding is often difficult to program, according to Science.
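
A toy illustration of why that stumps software: the grammar permits either reading, and the deciding fact lives entirely outside the sentence. The rule below is hand-coded and hypothetical, and hand-coding is exactly the part that doesn't scale to open-ended language.

# "The trophy would not fit in the brown suitcase because it was too big."
# Syntax alone lets "it" point at either noun; a scrap of physics
# knowledge settles it instantly for a human reader.

def resolve_it(contents, container, complaint):
    # Hand-coded world knowledge: when X doesn't fit in Y,
    # "too big" can only describe X and "too small" only Y.
    if complaint == "too big":
        return contents
    if complaint == "too small":
        return container
    raise ValueError("no rule for this complaint")

print(resolve_it("trophy", "suitcase", "too big"))    # trophy
print(resolve_it("trophy", "suitcase", "too small"))  # suitcase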

Marcus hopes the new competitions will "motivate researchers to develop machines with a deeper understanding of the world."



Watch this creepy, knife-wielding, unibrowed robot make noodles



We've heard about all kinds of people losing their jobs to robots — from cab drivers to bartenders — but almost any repetitive job can be replaced by a robot.

One fantastically tasty example of this is the Noodlebot, which was originally patented under the name Chef Cui by its inventor Cui Runguan.

Noodlebot burst onto the scene back in 2012, but we just discovered him and couldn't resist sharing.

Noodlebot is cheap, uncomplicated, and according to some restaurant owners, actually "better than human chefs."

He cuts a specific kind of noodle called dao xiao mian, or "knife-cut noodles," according to a CNN post from 2012.

Traditionally, a chef makes and kneads the wheat-based dough by hand, then holds the dough in one hand and cuts with the other.

The stationary robot works much in the same way, but it's faster and more accurate. According to CNN, Noodlebot can slice 150 pieces of noodles a minute, and can be programmed to cut noodles of different widths and lengths.

Noodlebot's knife-wielding arm works like a windshield wiper — slicing noodles in an up and down motion. The cut noodles fire directly into the wok.

The uncut dough sits on a platform in front of the robot. The platform moves up and side to side, allowing the other arm to cut across the dough.

Noodlebot's aim isn't perfect, so it helps to have a more experienced chef standing guard.

Runguan believes Noodlebot will allow entry-level cooks to work on more rewarding tasks in the kitchen.

"Young people don't want to work as chefs slicing noodles because this job is very exhausting," Runguan told Zoomin.TV. "It is a trend that robots will replace men in factories, and it is certainly going to happen in noodle slicing restaurants."

You'd think many restaurant owners would be terrified of Noodlebot. Its menacing unibrow and constantly shifting eyes make it impossible to tell if it'll suddenly turn to you and say "You're next" all while calmly cutting noodles.

Runguan designed the Noodlebot to look like characters from the famous 1960s Japanese show "Ultraman" at the behest of his son, according to CNN.

"The robot chef can slice noodles better than human chefs and it is much cheaper than a real human chef," Liu Maohu told Zoomin.TV in 2012. "It costs more than [$4700] to hire a chef for a year, but the robot just costs me [$1500]."

And Maohu's customers don't seem to mind. According to Zoomin.TV, one customer said he couldn't tell the difference between the human-made and robot-made noodles, and that Noodlebot's noodles "taste good and look great."

In fact, like many food-making robots, Noodlebot is strangely mesmerizing to watch. He makes for some pretty awesome pre-dinner entertainment, and some customers find him irresistible.

Other manufacturers are now building robots just like Noodlebot. Foxconn, the same company that assembles iPhones and iPads, got into the noodle-cutting game early in 2015, according to the Wall Street Journal. Foxconn has only built three noodle-cutting robots so far, and they don't seem to have as much flair as Noodlebot.

Watch Noodlebot in action below.



This is the weirdest recipe IBM's supercomputer chef has created



The IBM computer famous for demolishing human champions on Jeopardy is now trying its hand in the kitchen.

But every cook has culinary experiments go wrong once in a while. Even with a successful cookbook on shelves, the Chef Watson app has turned up some less-than-scrumptious results.

The Chef Watson app takes the user's requested ingredients and throws together a recipe based on its memory of flavors that work together. 

An August 25 Reddit AMA with IBM, Bon Appetit magazine, and the Institute for Culinary Education revealed a Chef Watson original recipe for chicken breast tacos that might be delicious but is, well, physically impossible.

"I've played around with the Chef Watson app but sometimes it leads to hilarious results," wrote bemused Reddit user ZipBoxer. "For example, [one] calls for wasabi powder (never used), shelled green peas (2 1/2 cup shelled green peas) cut into 3/4 pieces, then placed on a barbecue."

Grilled 3/4-inch pieces of peas might taste amazing with chicken tacos, but no reasonable human chef would go to the trouble of trying to grill them. IBM Watson researcher Patrick Wagstrom chalks the anomaly up to a natural-language glitch, a consequence of a supersmart machine that lacks common-sense reasoning and a hearty understanding of language.

In other words, the computer doesn't really understand what it's reading and recommending. 

"It's probably trying to substitute green peas in for a similar ingredient," Wagstrom wrote. "Likely the original ingredient needed to be unwrapped and then sliced, so the natural parallel was to suggest to shell the green peas." 

Indeed, the original recipe that Chef Watson was riffing on instructed cooks to "cut vegetables crosswise into 3/4-inch pieces." From Chef Watson's point of view, peas are a vegetable, so it works.

Chef Watson may seem clueless, but it has undergone a unique form of culinary training called machine learning. According to the Washington Post, Chef Watson "ingests a huge amount of unstructured data — recipes, books, academic studies, tweets — and analyzes it for patterns the human eye wouldn't detect."

For the web app that produced this pea-loving recipe, Chef Watson analyzed 10,000 Bon Appetit recipes. It then looked for statistical correlations among ingredients that tended to appear in recipes together.
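
IBM hasn't published Chef Watson's model, but "ingredients that tend to appear in recipes together" is the kind of signal that pointwise mutual information captures. A minimal sketch over an invented stand-in corpus:

import math
from collections import Counter
from itertools import combinations

recipes = [  # toy stand-in for Watson's 10,000 Bon Appetit recipes
    {"chicken", "tortilla", "lime", "cilantro"},
    {"chicken", "lime", "avocado"},
    {"peas", "mint", "butter"},
    {"peas", "butter", "lime"},
]
n = len(recipes)
ingredient = Counter(i for r in recipes for i in r)
pair = Counter(frozenset(p) for r in recipes for p in combinations(r, 2))

def pmi(a, b):
    """How much more often a and b co-occur than chance predicts;
    positive means "these flavors go together," -inf means the pair
    was never observed."""
    joint = pair[frozenset((a, b))] / n
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((ingredient[a] / n) * (ingredient[b] / n)))

print(round(pmi("chicken", "lime"), 2))  # positive: plausible pairing
print(pmi("peas", "cilantro"))           # -inf: no evidence either way

A substitution step like the one Wagstrom describes would then swap an ingredient for a statistically similar one, which is exactly how peas can inherit the prep instructions of whatever vegetable they replaced.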

This isn't the first time Chef Watson recommended an impossible step. Wagstrom said an earlier version of the software advised him to "One: Refrigerate the goat. Two: Skewer the tequila."

You take care of the goat, Chef Watson. I'll do the tequila.

Try the web app and find your own weird recipes »


Lawyers are just as likely to lose their jobs to robots as truck drivers and factory workers



It's no surprise that the coming robotic workforce will take over jobs that require manual labor.

But white-collar workers like lawyers are equally at risk of losing their jobs to artificial intelligence (AI) that's cheaper and better than human workers, according to Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence."

Kaplan said any person who toils through many "repetitive and structured" tasks for a living won't be safe from the bread lines.

"Even for what you think of as highly-trained, highly-skilled, intuitive personable professions, it is still true that the vast majority of the work is routine," Kaplan told Tech Insider.

Lawyers, for example, may conjure up images of formidable debaters pontificating in front of juries, but the reality is much more mundane.

"The vast majority of activities that lawyers are engaged in are straightforward drafting of contracts, putting together things like apartment leases, real estate deals, pre-trial discovery," Kaplan said. "It's these very tasks that make the profession susceptible to automation."

Startups are already springing up to take on these time-consuming and expensive chores, and Kaplan lists a few of them in his book. Judicata uses statistical methods called machine learning and natural language processing to automatically find relevant court cases (a generic sketch of that kind of retrieval appears below).

Fair Document allows users to fill out forms to create documents for, say, estate planning for only $995 — a "service that might otherwise typically cost $3,500 to $5,000" if done by a lawyer.
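
Judicata hasn't detailed its pipeline, so treat the following as a generic sketch of the retrieval piece: represent each opinion as a TF-IDF vector (scikit-learn's stock implementation) and rank cases by cosine similarity to a query. The corpus and query here are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [  # invented stand-ins for court opinions
    "Landlord breached the implied warranty of habitability ...",
    "Employee misappropriated trade secrets after termination ...",
    "Tenant lawfully withheld rent over uninhabitable conditions ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
case_matrix = vectorizer.fit_transform(cases)

query = "tenant remedies when an apartment is uninhabitable"
scores = cosine_similarity(vectorizer.transform([query]), case_matrix)[0]
for i in scores.argsort()[::-1]:  # best match first
    print(round(float(scores[i]), 2), cases[i][:55])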

There's already a huge gap between the small number of law jobs and the growing number of law school graduates. The New York Times reports that 40% of 2014 law school graduates failed to find jobs "that required them to pass the bar exam."

Time Magazine reports that the recession, outsourced jobs, and new technology — including "software that can do tedious document review projects that used to require an actual human" — resulted in fewer jobs for law students with massive student loan debts.

Still, an oft-cited 2013 Oxford study estimated that lawyers have only a 3.5% chance of losing their jobs, because "most computerization of legal research will complement the work of lawyers in the medium term."

But Kaplan said this may only apply to seasoned lawyers who typically don't do the kind of repetitive tasks entry-level lawyers share with paralegals and legal secretaries. These professions were estimated to have a 94.5% likelihood of automation, according to the Oxford study.

"Starting lawyers used to spend a great deal of time doing what's called 'pre-trial discovery,'" Kaplan said. "If you think of lawyers are the people who argue in court, it won't be automated. But since much of their work is far more routine — drafting boilerplate contracts, lease agreements — those tasks are susceptible to automation."

And even without lawyers being replaced, those changes will have profound impacts on many sectors.

"Profession by profession, the tasks that people are performing that are routine, that are structured, and are susceptible to computerization — those are the tasks that are going away and as a result many, many fewer people or practitioners are needed in those professions," Kaplan said.

The 2013 study estimated that automation threatens 47% of all employment in the country. David H. Autor, an economics professor at the Massachusetts Institute of Technology, told the New York Times that careers with middle-class incomes were "being lost to automation and outsourcing, and now job growth at the top is slowing because of automation."


The 15 greatest movie robots of all time



Hollywood has always been fascinated by robots.

They are a very important staple of sci-fi, because the way robots are portrayed in movies tends to say a lot about how people feel about technology at any given point in time.

In the 1980s, fears of Cold War technology annihilating all life on earth made for more frightening robots.

Today, everybody has a computer in their home and a smartphone in their pocket. Many of today's movie robots seem like they could be our friends. 

Here are the greatest robots in cinematic history.

15. TARS stole the show from Matthew McConaughey, Jessica Chastain, and Anne Hathaway in 2014's "Interstellar."

Surprisingly, the most relatable and human part of "Interstellar" is a box-shaped robot named TARS. Unlike many other robots on this list, TARS has a human personality and a hell of a sense of humor.



14. Gypsy Danger from "Pacific Rim" is controlled by humans. Yet mankind needs this gigantic robot in order to bring down the Kaiju trying to destroy the world.



13. Dot Matrix from "Spaceballs" is meant to be a parody of another robot later on this list. She is played by the late, great Joan Rivers.

One of her many features includes a "Virgin Alarm."




Here's what to do in college if you don't want to lose your job to a robot



College students might think they have four more years before they need to worry about the real world, but entry-level jobs might be the first to go when the robotic workforce begins its true onslaught.

A 2013 Oxford study estimated that 47% of all American employment could be automated.

Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," told Tech Insider that there's no secret key to succeeding in a workplace that's being overcome by artificially intelligent systems.

But learning specific skills early and relying on a few unconventional resources might give students a big head start.

Here are a few tips to get the most out of your college years.

Learn how to read, think, and communicate well

"Critical thinking skills are applicable in just about every important profession," Kaplan said. "Just getting a really good liberal-arts degree that involves critical thinking and basic skills, like being able to put words on a page that make sense, are really being valued a lot."

Kaplan noted that work that seemed irrelevant during his own undergraduate studies at the University of Chicago ended up, he now thinks, teaching him a few helpful skills.

"I learned a lot of specific stuff — most of it was about Kierkegaard or something," he said. "But when I got out, I knew how to look at a problem and figure out what to do with it."

Do (the right) research before choosing a major

"When you go to get a job, you go to the recruiting center," Kaplan said. "But people who list there are the big employers who are looking for lots of people, but that isn't necessarily where the good jobs are, or for that matter, where the money is."

Instead, Kaplan advises people picking a career to look to a more nontraditional resource — the Bureau of Labor Statistics. Its website houses data on which professions are in demand, what skills and education you need, and the salary and benefits for each profession.

Even Kaplan has pointed his own children to the website.

"It's amazing what you can find out there with just a little time looking at it," he said.

For example, you can search the database for jobs that have a "job outlook" that is improving.

"If you want to become a nurse — and that's for men and women — that’s a great profession right now," Kaplan said. "There's tons of opportunity for nurses."

That bears out when you look at the labor statistics, which put the job outlook for registered nurses between 2012 and 2022 at "31% (much faster than average)."

Those figures agree with the Oxford study, which says registered nurses have just a 0.9% chance of automation.
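
For the programmatically inclined, the same numbers can be pulled from the BLS public timeseries API using only the standard library. One caveat: the series ID below is a made-up placeholder, since BLS documents its real ID formats survey by survey.

import json
import urllib.request

# BLS public data API, v2 (a free registration key raises rate limits).
payload = json.dumps({
    "seriesid": ["XXUN0000000000000"],  # placeholder, not a real series ID
    "startyear": "2012",
    "endyear": "2022",
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.bls.gov/publicAPI/v2/timeseries/data/",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body.get("status"), body.get("message"))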

Don't rely on traditionally prosperous careers that involve a lot of routine tasks

Many white-collar jobs might not be safe from the onslaught of automation. Kaplan said any employee who toils through many "repetitive and structured" tasks for a living won't be safe from the bread lines.

"Even for what you think of as highly trained, highly skilled, intuitive, personable professions, it is still true that the vast majority of the work is routine," Kaplan told Tech Insider.

Lawyers, for example, may conjure up images of formidable debaters pontificating in front of juries, but the reality is much more mundane. Most entry-level lawyers do a lot of work that's already being done by computers.

"The vast majority of activities that lawyers are engaged in are straightforward drafting of contracts, putting together things like apartment leases, real-estate deals, pretrial discovery," Kaplan said. "It's these very tasks that make the profession susceptible to automation."


This artificially intelligent program can transform photos to make them look like famous paintings



Artificial intelligence (AI) has slowly been taking over the art scene. AI can critique art alongside the most seasoned critics and dream up trippy images on its own.

Now scientists from the Bethge Lab in Germany have built an AI system that can learn a painting's style and apply it to other images.

The results look as though Pablo Picasso and Vincent van Gogh painted their own version of the image. 

According to Leon Gatys, a PhD student and the lead author of the paper, posted on the preprint server arXiv, this is the first "artificial neural system that achieves a separation of image content from style."

Scroll down to see how it works and the beautiful images it created.  

The AI could take the style of one image, like the swirls and dots of Vincent van Gogh's most famous painting "Starry Night" and make another image adopt this style, like a very sophisticated photo filter.
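
Under the hood, per the Gatys paper, "style" is captured as correlations between a convolutional network's feature channels, known as the Gram matrix, while "content" lives in the raw activations themselves. A minimal numpy sketch of the style half, with a random array standing in for a real conv layer's output:

import numpy as np

def gram_matrix(features):
    """Style representation from Gatys et al.: channel-by-channel
    correlations, which keep texture statistics but discard spatial
    layout (the content)."""
    c, h, w = features.shape        # channels, height, width
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)  # (c, c) matrix

# Stand-in for one conv layer's activations on an image.
activations = np.random.rand(64, 32, 32)
style = gram_matrix(activations)
print(style.shape)  # (64, 64): how strongly each pair of channels co-fires

Style transfer then optimizes a new image so its Gram matrices match the painting's while its raw activations stay close to the photo's, which is why the result keeps the river's layout but takes on van Gogh's brushwork.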



For their experiment, the scientists used a photo of the Neckar river in Tuebingen, Germany.



The altered photo of the river looks as though van Gogh was standing on the banks of the Neckar instead of at his window in Saint-Rémy-de-Provence, the original setting for "Starry Night." Gatys wrote that the system gives us a mathematical basis for understanding how humans perceive and create art, especially because it mimics biological vision and the human brain.



