Stephen Hawking warns of an 'intelligence explosion'


Stephen Hawking has been vocal about the dangers of artificial intelligence (AI) and how it could pose a threat to humanity.

In his recent Reddit AMA, the famed physicist explained how that might happen.

When asked by a user how AI could become smarter than its creator and pose a threat to the human race, Hawking wrote:

It's clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help.

If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

This terrifying vision of the future relies on a concept called the intelligence explosion. It posits that once an AI with human-level intelligence is built, it can recursively improve itself until it surpasses human intelligence, reaching what's called superintelligence. The scenario is also described as the technological singularity.

According to Thomas Dietterich, an AI researcher at Oregon State University and president of the Association for the Advancement of Artificial Intelligence, this scenario was first described in 1965 by I.J. Good, a British mathematician and cryptologist, in an essay titled "Speculations Concerning the First Ultraintelligent Machine."

"An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind," Good wrote. "Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

It's hard to believe that humans would be able to control a machine whose intelligence far surpasses ours. But Dietterich has a few bones to pick with this idea, even going so far as to call it a misconception. He told Tech Insider in an email that the intelligence explosion ignores realistic limits.

"I believe that there are informational and computational limits to how intelligent any system (human or robotic) can become," Ditterich wrote. "Computers could certainly become smarter than people — they already are, along many dimensions. But they will not become omniscient!"



Stephen Hawking's claim about a 'drive to survive' in intelligent machines is probably wrong


Stephen Hawking has made some bold predictions about the dangers of artificial intelligence (AI) in the past.

Now, in a recent Reddit interview, the world-renowned physicist has suggested a survival instinct in superintelligent machines of the future could be bad news for humankind.

When a Reddit user asked whether or not AI could have the drive to survive and reproduce like biological organisms do, Hawking answered:

An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

It's a pretty terrifying prospect and sounds a lot like the malevolent computer HAL, from "2001: A Space Odyssey." But based on Tech Insider's extensive interviews with career AI researchers, Hawking's frightening suggestion may not hold up.

The experts we spoke with doubt the possibility of an AI survival instinct that's at odds with humans. For one, according to AI researcher Yoshua Bengio, humans build machines and can control what's encoded in machine algorithms and what isn't.

"Evolution gave us an ego and a self preservation instinct because otherwise we wouldn't have survived," Bengio told Tech Insider in an earlier interview. "We were evolved by natural selection, but AIs are built by humans."

Thomas Dietterich, an intelligent systems researcher at Oregon State University, expanded on this idea in an email to Tech Insider. Humans need another human to create offspring, he wrote, which "leads to a strong drive for survival and reproduction."

"In contrast, AI systems are created through a complex series of manufacturing and assembly facilities with complex supply chains — so such systems will not be under the same survival pressures," Dietterich wrote. "AI systems may be more like social bees who live in a hive. The hive may find it advantageous to sacrifice many of its individual members in order to achieve long-term survival."

Toby Walsh, an AI professor at National Information and Communications Technology Australia, echoed the idea that computers wouldn't develop goals or drives without the input of a human being first, even systems that learn to improve on their own.

"Computers have no wishes and desires — they don't wake up in the morning and want to do things,"Walsh told Tech Insider. "The Jeopardy-playing IBM supercomputer Watson never woke up and said, 'Ah, I'm bored of playing Jeopardy! I want to play another game today.' That's just not in its code, and that's not in the way that we write programs today."


'We are all going to be cyborgs' if humanity wants to solve its biggest problems


There's no one unifying vision of how super-smart artificial intelligence (AI) will change the future. Depending on who you ask, it could take our jobs, turn the Earth into paperclips, or kill off humankind.

Shimon Whiteson, an AI researcher at the University of Amsterdam, has a more hopeful vision: AI won't just make our futures better and safer, it will solve humanity's worst problems — even if getting there may require some uncomfortable compromises.

That belief is why Whiteson, who started learning computer programming at age 5, got into AI research in the first place.

"As I got older, I got really frustrated by how slowly humanity was solving the fundamental mysteries of the universe," Whiteson told Tech Insider. "I thought the bottleneck here is that our brains are just too puny. It's too hard to think about these really big problems with our little brains. We need to augment our brains with something that will make them smarter. We need to make computers so smart, they can help us solve these big problems."

AI-powered computer systems are already working toward that. Doctors, for example, are harnessing the abilities of IBM's Watson supercomputer to help diagnose and even treat cancer patients.

The future, as imagined by Whiteson, who works on telepresence robots called TERESAs, will bring us even closer to these super-smart AI systems.

"I really think in the future we are all going to be cyborgs," Whiteson said. "[People] have a tendency to think, there's us and then there's computers. Maybe the computers will be our friends and maybe they'll be our enemies, but we'll be separate from them. But I think that's not true at all, I think the human and the computer are really, really quickly becoming one tightly coupled cognitive unit."

Take the smartphone, Whiteson said. The devices now make information available to anyone with an internet connection. So, in a sense, smartphones already tightly integrate with, enhance, and supplement human intelligence.

In Whiteson and others' views, ever-more-sophisticated AI integrations will make humans smarter, more productive, and capable of solving the planet's worst problems.

"Where the computer is located in relation to your brain is not that important — what's important is you and the computer are becoming one cognitive unit that work together," Whiteson said. "This has huge implications for basically everything about life. Imagine how much more productive we would be if we could augment our brains with infallible memories and infallible calculators."


Silicon Valley's hottest AI company says we're talking about smart machines all wrong


Artificial intelligence (AI) can mean a lot of things. It can include anything from digital assistants to warehouse robots. The algorithms used to power those devices are in almost everything — if it has wires and computer chips, it likely uses some form of AI.

But Scott Phoenix, the cofounder of AI company Vicarious, said we shouldn't be referring to all these things as AI.

"AI has become a very diffusely defined term that can be applied to anything," Phoenix told Tech Insider.

"They talk about it in terms of this spam filter ... or they'll talk about Google self-driving cars having AI because they can drive a car. It's very rapidly becoming a word that just means the system can do stuff you want it to."

According to Phoenix, there's only one thing that should be considered AI: an artificial being that can do all the different things a human can do, as well as a human can. 

"AI, to me, really means something specific which is, given the same kinds of sensory motor inputs that a human has, from birth to adulthood, your system should form the same concepts and have the same capabilities," he said.

In fact, that's exactly the kind of AI that Vicarious, which is backed by Elon Musk, Mark Zuckerberg, and a bunch of other tech stars, is trying to build. The company, which was founded in 2010 by Phoenix and neuroscientist Dileep George, is attempting something revolutionary in computer science: building the world's first human-level AI.

"Vicarious is building a single, unified system that will eventually be generally intelligent like a human," Phoenix wrote in a World Economic Forum Q and A.

There's nothing available now that would be considered AI according to Phoenix's definition. Nothing even comes close. The kind of AI systems available now are very good at narrow tasks, like playing chess or buying stocks — they're a long way off from human-level AI, which would have to be good at an almost limitless range of things.

Philosopher Nick Bostrom surveyed 550 AI researchers to gauge when they think human-level AI would be possible. The median answer from the researchers was that there is a 50% chance that it will be possible between 2040 and 2050, and a 90% chance that it will be built by 2075.

Having taken the survey, Phoenix agrees with that timeline, though he couldn't recall what his answer was. Asked when he thinks Vicarious' own human-level AI system will be ready, he responded that most predictions about when a given technology will arrive completely miss the mark.

"The goal of Vicarious is to solve this problem and work on it for as long as it takes," Phoenix said.


The biggest obstacle to human-like artificial intelligence is inside your head


We live in a world where computers can trade stocks faster than any broker on the floor and best any living human in the game of Jeopardy.

But it turns out even our smartest artificial intelligence (AI) is still relatively dumb.

So what do humans have that our machines don't?

For one, it's our seemingly common-sense ability to take what we know and apply it to new situations. Imagine talking to someone in a loud bar. Though you can't hear everything they're saying, you can fill in the gaps based on the context of the conversation and what you know about the person to understand what they're trying to say.

According to Bruno Olshausen, head of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, computers don't work like that — they can't fill in the gaps on the spot like a human can.

"Memory is central to cognition," Olshausen told Bloomberg Business.

The human brain doesn't completely remember everything that happens in a day. Instead, it summarizes the day and pulls out highlights when the details become relevant again, Olshausen told Bloomberg.

But that's not the only problem stopping scientists from building human-like AI: No one understands how our brains work to create intelligence.

To that end, the National Institutes of Health is devoting $300 million to the BRAIN Initiative, a project to find and explore mechanisms that allow brains to store and retrieve information. Even the project website admits that "the human brain remains one of the greatest mysteries in science and one of the greatest challenges in medicine."

Even now scientists are still trying to figure out how thousands of neurons work together to produce a physical action, like typing or reaching for a glass, according to the New York Times.

We also don't understand consciousness, sentience, and "self-hood," and we have no way to explain them right now, according to computer scientist Stuart Russell.

"There is no scientific theory that could lead us from a detailed map of every single neuron in someone's brain to telling us how that physical system would generate a conscious experience," Russell told Tech Insider. "It's not just that: We could not know how the brain works, in the sense that we don't know how the brain produces intelligence."

Olshausen says neuroscience today is akin to where physics was in the 1600s — we've got a long way to go before we can figure out human intelligence, much less emulate it in code.

"Where we are in terms of our understanding of the brain — and what it takes to build an intelligent system — is kind of pre-Newton," Olshausen told Bloomberg. "If you're in physics and pre-Newtonian, you're not even close to building a rocket ship."


Smart robots will take our jobs — but it won't be as bad as you might think


Robots are getting so sophisticated that some people fear they'll eventually replace human laborers. After all, robots have already transformed factories and other industrial work.

But while artificial intelligence (AI) researchers acknowledge that smarter robots will definitely eliminate more jobs, they say machines won't put us all out of work.

Instead, tomorrow's intelligent bots will create new, safer, and more interesting jobs for people in their wake.

"Over the years we've seen tech develop further and further, and we've seen the nature of different jobs change," Carlos Guestrin, the CEO and co-founder of Dato, told Tech Insider.

"Nothing is going to be different than it was before" for the US economy, says Guestrin, whose company builds artificially intelligent systems that analyze data. "What we'll see is perhaps a shift in the kinds of things humans do."

Data from a recent AI industry report seems to back up Guestrin's views. Authored by analyst J.P. Gownder and published by Forrester, a Boston-based technology research firm, the document is based on government data and interviews with researchers and businesses, and it suggests 9.1 million US jobs will be automated by 2025. That's a lot fewer than what an oft-cited Oxford study from 2013 claims: 65 million US jobs lost by 2030.

"The future of jobs overall isn't nearly as gloomy as many prognosticators believe," analyst Gownder wrote in the report. "In reality, automation will spur the growth of many new jobs — including some entirely new job categories."

It may help to consider all of the jobs that didn't exist just 10 years ago, says Lynne Parker, the director for the Information and Intelligent Systems Division at the National Science Foundation.

"Look at the number of jobs that have been created by the intelligent software industry — it's a huge number of jobs," Parker told Tech Insider, including app developers and data miners. "Even if certain jobs might be lost to certain advances in AI, many more have been created through the whole industry."

Robots will eat up 16% of jobs by 2025, Gownder wrote in a recent blog post, but roughly "9% of today's jobs will be created." Doing the math, he wrote, that's about a 7% dip in human work over the next decade — "far [lower] than most forecasts, though still a significant job loss number."

But there's a silver lining to the loss, says Michael Taylor, a computer scientist at Washington State University: Robots will likely eliminate the dullest and most dangerous jobs, freeing up people to take on safer and more interesting gigs.

As an example, Taylor said he's working to automate Washington state's apple harvest — work that growers are struggling to find people to take on.

"AI and robotics will replace some people in some jobs but I don't see that as a bad thing," Taylor told Tech Insider. "As long as we're targeting the right jobs: jobs that are dirty, dangerous, or dull — jobs that we don't want people to have."


The future may be full of intentionally clumsy and apologetic robots


Robots are increasingly becoming a part of our everyday lives as they take our jobs and we work more alongside the machines.

Trouble is, anti-robot sentiment is a real concern. Earlier this year, children beat up a robot in a mall and, in a separate incident, a hitchhiking robot was left for dead in Philadelphia just two weeks into its journey.

So how can engineers design robots that we actually get along with?

Researchers from the University of Lincoln in the UK have learned that it helps to not only give machines human-like expression, but also what is arguably the most human trait of all: the ability to make mistakes.

The researchers, who recently presented their findings at a robotics conference in Hamburg, Germany, figured this out by recruiting 60 people to talk and interact with two robots, called Erwin and MyKeepon.

Erwin (short for Emotional Robot with Intelligent Network) is a metal skeletal robot with a set of eyeballs and red metal bars as lips that can portray happiness, sadness, and surprise.

MyKeepon, meanwhile, is a small toy robot that can dance, beep to say hello and goodbye, slouch over to look sad, and hop up and down to look happy in response to certain sounds.

In the first series of tests, the participants talked with Erwin about their likes and dislikes, and MyKeepon jumped the same number of times a person clapped. Neither robot was programmed to make any mistakes.

In a follow-up round, however, the robots intentionally messed up. Erwin, for example, would say a person was wearing a yellow shirt when they were not. The bot would then respond with, "I am sorry that I have forgotten that, but I don't have a true sense of color perception," while looking sorry and surprised. MyKeepon, on the other hand, swung between happiness and despondency after it didn't correctly match the number of times a person clapped.

Afterward, researchers asked participants how they felt after these interactions. People said they felt "more intimate with the robots when the robots made mistakes and showed imperfect activities during interactions," according to the study.

Why did this happen? It's hard for people to empathize with others who are intimidating, emotionless, and never wrong, the researchers argue — so it might be just as difficult to get along with cold, expressionless, perfect robots.

"People seemed to warm to the more forgetful, less confident robot more than the overconfident one," John Murray, one of the researchers on the study, told Motherboard. "We thought that people would like a robot that remembered everything, but actually people preferred the robot that got things wrong."

This may ring true to anyone who saw IBM's supercomputer Deep Blue defeat chess champion Garry Kasparov in 1997. A seemingly slick move by the machine led to Kasparov's demise, earning praise from the audience that "it had played like a human being," according to FiveThirtyEight. (In fact, the move wasn't part of the device's programming — it was caused by a bug.)

The new study reflects a sentiment among researchers about making robots more acceptable to humans. Yann LeCun, the director of AI research at Facebook, told Tech Insider that while robots aren't likely to develop emotions on their own, they'll need to at least emulate emotion so humans can get along with them.

Subbarao Kambhampati, a computer scientist at Arizona State University, agrees.

"Many studies show that if you keep making a mistake, a computer voice response that sounds more sympathetic to your plight winds up increasing productivity then one just says 'try again,'" Kambhapati told Tech Insider. "Humans have biases and evolved emotional responses. Robots need to handle that to interact with us."



Here's why it would be 'unintelligent' to build brain-like computer intelligence


Artificial intelligence (AI) researchers are working hard to make computers smarter and more capable, some in hopes of achieving human-level intelligence.

The surest strategy might seem like emulating human intelligence or even mimicking brain structure.

But creating an intelligent machine based on the human mind might be impossible, unnecessary, or even counterproductive.

In fact, many researchers believe what's inside our heads is just one example of how to achieve intelligence — and not necessarily the best way.

"An airplane flies in a way that's very different from the way a bird flies, but they both fly — it's the same thing with intelligence," Peter Stone, an AI researcher and computer scientist at the University of Texas, Austin, told Tech Insider.

Cornell University's Bart Selman, also an AI researcher, used speech recognition as an example to make a similar point.

"We don't quite know how the brain does it," Selman told Tech Insider, but the method "is probably more complicated than the way we're doing it right now" with machines. "The main progress right now and in the near future will be getting to a performance at a human-level without getting the details of the human brain all figured out."

Some computer systems do take some inspiration from the human brain, namely the interconnected structure of neurons, to form artificial neural networks. Such systems can be "trained" on a specific task like image recognition with many thousands of examples. Over time, the systems improve at the task.
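
To make that "training" idea concrete, here is a minimal sketch in Python of a tiny one-layer network nudged toward the right answers over many labeled examples. The toy data, learning rate, and iteration count are invented for illustration; real image-recognition networks have millions of weights and train on far larger datasets.

```python
import numpy as np

# Toy "training set": four feature vectors with binary labels.
# A real image-recognition system would use many thousands of examples.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])  # an OR-like concept to learn

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # connection weights, loosely like synapse strengths
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training": show every example repeatedly and nudge the weights
# in the direction that shrinks the prediction error.
for _ in range(2000):
    pred = sigmoid(X @ w + b)          # the network's current guesses
    err = pred - y                     # how wrong each guess is
    w -= 0.5 * (X.T @ err) / len(y)    # gradient step on the weights
    b -= 0.5 * err.mean()              # gradient step on the bias

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]
```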

Machine learning and neural networks have driven a lot of the recent successes in computer science, says Geoffrey Hinton, an AI researcher at Google and the University of Toronto.

But in many cases, it's best not to mimic human intelligence or brain structures. While humans are better than machines at things like perception, reasoning, and object manipulation, robots already exceed human intelligence along very narrow dimensions. Machines have faster reaction times, perfect memories, and are superior to humans at crunching a lot of numbers.

"In some cases it would be unintelligent to mimic how the human brain works," Peter Norvig, the director of research at Google, told Tech Insider in an email. "If the task is to multiply two ten-digit numbers, then using a human brain alone would be an error-prone mistake — it would be more intelligent to either reason like a computer to start with, or to use a tool, such as a calculator or computer."

Shimon Whiteson, a computer scientist at the University of Amsterdam, told Tech Insider there's another, more practical reason to build AI that functions differently than the brain.

"There's actually not much point in just replicating human intelligence,"Whiteson says."We want intelligence that has different capabilities than humans so the two can work together in a complementary way."


This robot can do one of the most dreaded chores for you


Put down that crumpled pile of clothes: a robot that can fold your laundry is coming soon to a closet near you.

Laundroid, a clothes-folding machine built by the Japanese companies Seven Dreamers, Panasonic, and Daiwa House, a homebuilder, was unveiled at an international technology trade show in early October, according to the Telegraph.

According to the Seven Dreamers website, it could free up a lifetime's worth of time wasted on folding clothes — about 375 days total.

It's not quite there yet, though.

To use the laundry-folding robot, you first have to load each garment individually into the slot. The robot then uses image recognition software to determine what the garment is and how it should be folded. Four minutes later, the slot opens and a previously crumpled shirt is perfectly folded.

While four minutes sounds like a lot, it's four minutes you get to go do something else entirely. And, according to CNET, Seven Dreamers plans to release the full version of Laundroid in 2020. With the full version, just load a pile of washed and dried laundry, and the machine would be able to sort, fold, and deposit the folded clothes into a drawer.

The mechanisms behind the folding are top secret. Shin Sakane, the CEO of Seven Dreamers, assured a BBC reporter that a human isn't hiding out behind the chrome doors. (A human could fold a shirt much faster, anyway.)

"It took us ten years to develop this prototype technology, there are so many secrets in there," Sakane told the BBC.

Seven Dreamers and its partners aren't the only ones trying to do away with the pesky task of sorting and folding laundry. Researchers from the University of California, Berkeley, are building robots that can fold towels. Berkeley's PR2 robot initially took 20 minutes to fold one towel, though it eventually learned to do it in a minute and a half.

The Telegraph reports the beta version of Laundroid, which can fold individual T-shirts, collared shirts, skirts, shorts, pants, and towels, won't be available for purchase in Japan until 2017. So, unfortunately, people who dread folding fitted bed sheets are out of luck, at least for now.



Robots are terrible at these 3 uniquely human skills


You may have heard that robots are coming for our jobs, or maybe that super-intelligent programs could cause an apocalypse.

But given how today's artificially intelligent programs and robots stack up to humanity, the worst is still far away.

When you look closely, humans still have many advantages over artificial intelligence (AI).

New techniques and approaches are making huge strides in areas where AI traditionally lagged, including cataloging images and understanding language.

But even with these advances, humans remain the masters of the following three skills, likely for a long time.

Using common sense to solve new problems

Humans are great at finding parallels in what they already understand to gain insight when confronted with new situations. Toby Walsh, a professor of computer science at National Information and Communications Technology Australia, calls this ability common-sense reasoning.

It's our ability to "look at objects we've never seen before and apply our common sense to understand how they work and what we need to do" in the situation, Walsh said. This ability has been a historical sticking point for AI researchers.

One example of common sense reasoning would be "if I tip over this cup of water, the water will fall out." Humans can make deductions on why that might be and predict what will happen if the cup tips over, even if it's coins in the cup the next time around instead of water.

But computer programs require very exact specifications of all the objects involved in a scenario to make the same predictions, according to Ernest Davis, a computer scientist at New York University who works on common-sense reasoning. That would include the shape of the cup, the motion it's being exposed to, and the physical properties of whatever is inside, among others.

"People don't need that kind of information — all you need to know is that is a coffee cup and that it's open at the top, that it doesn't have a lid," Davis told Tech Insider.

Feeling emotions and understanding the emotions of others

Emotions and empathy remain among the most human of traits. According to Facebook's AI research director Yann LeCun, it's not likely that robots would innately develop any kind of emotion without it being programmed into them.

"We can build into them altruism and other drives that will make them pleasant for humans to interact with them and be around them," LeCun wrote in an email to Tech Insider. But these kinds of emotions won't appear out of nowhere.

Robots are also not likely to show genuine empathy or understand the emotions of those they are working with or caring for. There are robots out now that can recognize emotions, like Pepper, a social companion robot that speaks with people and responds to their moods.

While Pepper has been programmed to respond empathetically, she doesn't fully understand what humans are feeling. On the other hand, humans feel empathy genuinely, making them better caretakers. That's why nursing is one of the careers least likely to be taken by robots, according to a 2013 Oxford study. Being able to understand the nuances of emotions and feel empathy allows nurses to better care for others.

Creativity

Artificially intelligent programs can create beautiful hallucinations of photos or mimic a famous artist's style. But making a work of art from scratch that resonates with people will remain a uniquely human skill, at least for now.

Creativity is basically a combination of the two skills above — if you can't apply old knowledge to new situations and can't empathize with other people, you'll have a tough time writing a book that touches readers or painting a landscape that makes museum-goers gasp.

AI programs are the products of very strict rules and explicit instructions, the exact opposite of creativity, Michael Osborne, a computer scientist at the University of Oxford, told the Guardian.

"It is certainly possible to design an algorithm that can churn out an endless sequence of paintings, but it is difficult to teach an algorithm the difference between the emotionally powerful and the dreck," Osbourne said.

In the future that might change. For now, artists, actors, and writers can rest assured that no robot will come to take the arts away from them.

As Walsh told Tech Insider, "the artists of the world are for a long time still going to be real physical people."


Google reportedly just invested more than $60 million in a Chinese artificial intelligence start-up


BEIJING (Reuters) - Google will take a minority stake in Beijing-based artificial intelligence firm Mobvoi as part of a $75 million fundraising round, the start-up said on Tuesday, as the U.S. search giant tries to rebuild its presence in China.

Mobvoi will maintain a controlling stake, it said in an e-mailed statement. The size of Google's investment was not disclosed, although TechCrunch's Jon Russell reports that Google likely paid more than $60 million.

"Mobvoi's Yufan Wang confirmed that Google has become a minority shareholder. The deal, she said, takes the company to $75 million in investment to date. Since Mobvoi previously raised $10 million Series B and $1.6 million Series A rounds, Google's investment is just shy of $65 million," Russell writes.

Google's parent company is now named Alphabet Inc.

The Chinese start-up works on artificial intelligence (AI) voice-controlled software, like that used in Google's Android products for mobile search. Mobvoi previously partnered with the U.S. firm to provide Chinese-language voice search for the latter's Android Wear smartwatch operating system. 

(Reporting by Paul Carsten; Editing by Muralikumar Anantharaman)


19 A.I. experts reveal the biggest myths about robots


Most people have gleaned their understanding of artificial intelligence (AI) from science fiction more than from real life.

But if you base all your knowledge about robots and AI on movies and books, you're bound to be either terrified or disappointed whenever a new robot comes out. 

Tech Insider asked 19 AI researchers about the biggest myths in their field. Their answers (lightly edited) are below.

Stuart Russell says no one is building conscious AI.

The most common misconception is that what AI people are working towards is a conscious machine, that until you have a conscious machine there's nothing to worry about. It's really a red herring.

To my knowledge, nobody, no one who is publishing papers in the main field of AI, is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I'm not aware that they've made any progress.

As far as AI people, nobody is trying to build a conscious machine, because no one has a clue how to do it, at all. We have less clue about how to do that than we have about how to build a faster-than-light spaceship.

Commentary from Stuart Russell, a computer scientist at the University of California, Berkeley.



Yann LeCun says we have robot emotions all wrong.

The biggest myths in AI are as follows:

(1) "AIs won't have emotions."

They most likely will. Emotions are the effect of low-level/instinctive drives and the anticipation of rewards.

(2) "If AIs have emotions, they will be the same as human emotions."

There is no reason for AIs to have self-preservation instincts, jealousy, etc. But we can build into them altruism and other drives that will make them pleasant for humans to interact with them and be around them.

Most AIs will be specialized and have no emotions. Your car's auto-pilot will just drive your car.

Commentary from Yann LeCun, Facebook's Artificial Intelligence Research Director.



Yoshua Bengio says we've misconstrued how smart machines will act.

The biggest misconception is the idea that's common in science fiction, that AI would be like another living being that we envision to be like us, or an animal, or an alien. Imagining that an AI would have an ego, would have a conscience in the same way that humans do.

The truth is that you can have intelligent machines that have no self conscience, no ego, and have no self preservation instinct because we build these machines.

Evolution gave us an ego and a self preservation instinct because otherwise we wouldn't have survived. We were evolved by natural selection, but AIs are built by humans.

We can build machines that understand a lot of aspects of the world while not having more ego than a toaster.

Commentary from Yoshua Bengio, a computer scientist at the University of Montreal.




This guy wrote 'the bible of artificial intelligence' in 1979 — and then had a major split with the rest of the field


In 1979, a groundbreaking book called "Gödel, Escher, Bach: An Eternal Golden Braid" blew everyone's minds and won the Pulitzer Prize for nonfiction.

Written by computer scientist Douglas Hofstadter, the 777-page tome inspired many young aspiring computer scientists and mathematicians.

For Oren Etzioni, now the CEO of the Allen Institute for Artificial Intelligence, it determined the course of his life.

"When I read it, I was hooked," Etzioni told Tech Insider. " 'How do you build an intelligent machine?" is one of the most fundamental, intellectual questions across all the sciences. It's very very fundamental, and I felt like this was something that I could devote my professional life to."

GEB, as it's often called, ties together the seemingly disparate fields of mathematics, science, music, and art. Its chapters are punctuated by obtuse conversations between fictional characters and puzzles that Hofstadter invites his readers to attempt. Even a portion of the book makes it easy to see why so many people were inspired by it — the book theorizes that there are hidden, connected meanings and loops within theorems and works of art.

The book's tagline describes it as "a metaphorical fugue on minds and machines in the spirit of Lewis Carroll," a comparison which is pretty clear in the section below.

But since inspiring a generation of artificial intelligence (AI) researchers with a book many called the "bible of AI," Hofstadter has broken ranks with the field. According to The Atlantic, he hasn't been to an AI conference in 30 years.

That's because Hofstadter's approach to understanding and recreating intelligence in machines has forked off from the mainstream researchers' brute-force methods of developing artificial intelligence through insane amounts of data analysis and program training.

images"There's no communication between me and these people," Hofstadter told the Atlantic. "None. Zero. I don't want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they're almost not colleagues — we can't speak to each other."

Hofstadter has a fundamental disagreement with mainstream research on how AI programs should operate. The book that launched Hofstadter's career argued for an approach to AI that was more about understanding and emulating human intelligence than solving specific problems.

But at the time he wrote GEB, AI was already taking a different tack, according to the Atlantic, because past approaches had proved fruitless. Scientists were turning away from building thinking machines and toward building applications that could solve specific problems.

Scientists started homing in on specific parts of intelligence, like vision, language understanding, and speech synthesis. That's how it remains today — intelligence is attacked piecemeal, with different researchers specializing in different aspects and methods of intelligence.

One of the landmark successes in AI came when Deep Blue finally beat chess champion Garry Kasparov in 1997. Deep Blue didn't play chess like a human — it used brute force, calculating the best future moves at any point in the game. But Hofstadter told the Atlantic that doesn't give us any insight into how humans play chess. In fact, he began to distance himself from the AI community after that match.

"Deep Blue plays very good chess — so what?" Hofstadter said. "I don't want to be involved in passing off some fancy program's behavior for intelligence when I know that it has nothing to do with intelligence. And I don't know why more people aren't that way."


Those brute force methods, based in computational power and large amounts of data, are responsible for the most recent successes in AI. A statistical method called machine learning is driving improvements in areas where AI had traditionally faltered, like vision and language processing. While it roughly mimics the interconnected structure of brain cells, machine learning takes a sharp departure from how the human brain processes vision and language.

But Hofstadter's disagreements haven't stopped his work. He has been a professor at Indiana University Bloomington for 30 years, chipping away at the work he thinks the rest of the AI community has stopped doing.

Whether Hofstadter and mainstream AI research will one day reunite remains to be seen. But in the meantime, Hofstadter will quietly continue.

"Ars longa, vita brevis," Hofstadter told the Atlantic. "I just figure that life is short. I work, I don't try to publicize. I don't try to fight."


This phenomenon explains what everyone gets wrong about AI


When we talk about AI, the conversation inevitably leads to one of many science fiction scenarios — whether it will take all our jobs or kill us all.

But the truth is that AI has been around for almost 60 years, and is increasingly permeating every part of our lives.

We have AI now that can read your emotions, solve SAT geometry questions, and create paintings like Vincent van Gogh.

By ignoring what's actually being created and used now, and instead focusing on a version of AI that hasn't arrived yet, humanity has developed a blind spot to the technology.

That blind spot skews our understanding of AI, its usefulness, and the progress that's being made in the field. It is also setting us up for a lot of disappointment when the future of AI doesn't play out when and how we think it will.

This is such a common phenomenon that it has a name — the AI effect.

There are two phases to the AI effect. The first is that people don't see the programs they interact with as actually "intelligent" and therefore think that research in AI has gone nowhere.

But we're already surrounded by AI, and increasingly so, like the frog in a pot of water that doesn't realize that the water is getting hotter and hotter.

The AI we have now doesn't look anything like what most people picture in their science fiction dreams — machines that think and act human, called Artificial General Intelligence (AGI). Instead, we have Artificial Narrow Intelligence (ANI), AI that's very good at very specific tasks like image recognition or stock trading. When AGI will be created, if ever, is still a big question.

"In the early years of AI, there had always been this worry that AI never lived up to its promise, because anything that works, by definition, is no longer [seen as] AI," Subbarao Kambhampati, a computer scientist at Arizona State University, told Tech Insider.

Carlos Guestrin, the CEO of a Seattle-based company called Dato that builds AI algorithms to analyze data, said it might be because ANI looks nothing like human intelligence.

"Once something is done, it's not AI anymore," Guestrin told Tech Insider. "It's a perceptual thing — once something becomes commonplace, it's demystified, and it doesn't feel like the magical intelligence that we see in humans."

On the flip side, this also breeds fear of the unknown "future" AI that always seems to be just around the bend. When people do talk about AGI being possible, the conversation is always accompanied by fears that it will suddenly take over.

"I think there's this idea that AI is going to happen all of a sudden, questions like 'what are we going to do when AI is there?,' " Sabine Hauert, a roboticist at Bristol University, told Tech Insider. "The reality is that we've been working on AI for 50 years now, with incremental improvements."

This fear of future human-like AI is guided by the false belief that technology that's actually been around for years will suddenly gain human attributes, a tendency called anthropomorphization. But given the way we're building AI now, it's unlikely that future AIs will have human attributes — things like emotions, consciousness, or even a self-preservation instinct, according to Yoshua Bengio, a computer scientist at the University of Montreal.

That's because intelligent AI is going to be completely different than the intelligence we know in humans.

"The biggest misconception is the idea that's common in science fiction, that AI would be like another living being that we envision to be like us, or an animal, or an alien — imagining that an AI would have an ego, would have a conscience in the same way that humans do," Bengio told Tech Insider. "You can have intelligent machines that have no self-conscience, no ego, and have no self-preservation instinct."

Shimon Whiteson, a computer scientist at the University of Amsterdam, explained to Tech Insider why humans default to assigning human traits to AI.

"I think we have a tendency to anthropomophize any kind of intelligence, because we live in a world in which humans are the only example of high level intelligence," Whiteson said. "We don't really have a way of understanding what intelligence would be like if it wasn't human."

Through AI research, though, we are discovering that there are many other types of intelligence out there. Not every intelligent program must be human-like. When AI technology emerges that can do one specific task, it doesn't look human, and therefore most people don't see it as AI.

But even when AGI does arrive, it most likely won't look human-like either.

"Intelligence is not a single property of a system — my colleague at MIT, Tomaso Poggio says 'intelligence is one word, but it refers to many things,'" Thomas Dietterich, the President of the Association for the Advancement of Artificial Intelligence, told Tech Insider in an email. "We measure intelligence by how well a person or computer can perform a task, including tasks of learning. By this measure, computers are already more intelligent than humans on many tasks, including remembering things, doing arithmetic, doing calculus, trading stocks, landing aircraft."

To do away with the paradox — flipping wildly between the belief that AI hasn't arrived yet and that AI will destroy us all when it does — the concept of human intelligence has to be rewritten.

We have to understand intelligence in broader terms and understand that a machine that gets a job done is intelligent. The sooner that happens, the easier it will be to focus on the benefits and real risks researchers think future AI could bring.



Google is 're-thinking' all of its products to include machine learning (GOOG)


Machine learning and artificial intelligence got a lot of love on Google's Q3 earnings call.

During CEO Sundar Pichai's prepared remarks, he went out of his way to point out that investments in machine learning and artificial intelligence were a continued priority for the company moving forward. 

Pichai even went as far as to say that Google was "re-thinking" all of its products to include more AI and machine learning. 

In the words of Princeton University's Rob Schapire, the goal of machine learning is "to devise learning algorithms that do the learning automatically without human intervention or assistance."

Those smart algorithms already power Google's voice search and translation, its Photos product, and the new service Now on Tap, which anticipates what information you might need before you ask for it.

When Google showed off the product at its developers' conference earlier this year, it was apparent that the smart assistant was miles ahead of Apple's Siri or Microsoft's Cortana. 

"Machine learning is a core, transformative way by which we’re re-thinking how we’re doing everything," Pichai said on the call, in response to an analyst question about Google's vision. "We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we're in early days, but you will see us — in a systematic way — apply machine learning in all these areas."

As Google was reporting earnings, Google's head of research Jeff Dean announced a new Brain Residency Program. The 12-month role will be similar to a Master's or PhD program in deep learning, a branch of machine learning that aims to mimic the brain.  


A Google researcher said that we’ll become ‘highly dependent’ on artificial intelligence in the near future


Google just announced that it's making a big bet on artificial intelligence to move the company's services forward.

During Google's Q3 earnings call, CEO Sundar Pichai said the company was "re-thinking" all of its products to include more artificial intelligence and an approach called machine learning.

Artificial intelligence (AI) has always played a huge part in Google's success. Behind the scenes, AI powers Google's translation program, image recognition, and email spam filters.

And this will only become more true over the years.

We talked to Geoffrey Hinton, an AI researcher at Google and the University of Toronto, over email about the future of the field. He predicted that AI will likely loom ever larger in our lives, without our even knowing it.

"I think we will become highly dependent on very intelligent and knowledgeable assistants to help us with almost everything," Hinton told Tech Insider in an email.

Hinton is one of the few AI researchers responsible for the renewed interest in machine learning, the statistical approach discussed in the earnings call that lets programs learn and improve over time.

Eventually, Hinton told Tech Insider, machine learning could make the technologies we interact with even more seamless. It could learn our preferences and dislikes without asking us, like an attentive personal assistant who knows what you need before you do.

"At present, we have to adapt to new technology in order to use it, but as machine learning improves, the technology will adapt to us and this will make it very easy and natural to use," Hinton said.

Based on the earnings call, Google is making good on Hinton's predictions about integrating advanced AI technology into our lives.

"Machine learning is a core, transformative way by which we're re-thinking how we're doing everything," Pichai said. "We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we're in early days, but you will see us — in a systematic way — apply machine learning in all these areas."

It looks like Hinton's vision is coming true, one Google app at a time.


This is the single most important key to making our technologies smarter


Technology is all about progress: We build tools to make life easier and learn how to improve those tools over time.

But in the past couple of decades, technology has improved dramatically thanks to the internet and the proliferation of internet-enabled devices, which have allowed us to create and build upon ideas at an incredible clip.

Now, the next big frontier is to teach our technologies to learn and improve on their own.

This concept is called “machine learning,” and it’s at the heart of most artificial intelligence systems you see.

Self-driving technology is one notable example of machine learning. The goal there is for cars to be autonomous: We want them to drive, navigate, and react like a human can — even better, in fact — to reduce the number of fatal accidents (over 32,000 in 2013, according to the Insurance Institute for Highway Safety), and improve the overall efficiency of ground transportation.

We want to get from A to B quicker, and more safely, too. But in approaching this problem, auto and tech companies like Google and Audi don’t try to teach their cars to know every possible situation that might occur.

Instead, they take the cars on the road and continuously gather data, logging every single mile. This data covers everything from the state of the car to the state of the environment to how a driver reacts in any given situation.

Here’s why machine learning is so important here: Something needs to be done with all that data. Companies like Tesla have built large intelligence networks to absorb and crunch this information, which is then used to inform every other car in the “fleet” — basically, any vehicle touched by the network.

But machine learning won’t just be the key to driverless cars; this technology is also going to help people get the most out of their computers, including their smartphones and tablets.

On a Thursday earnings call with investors, Google CEO Sundar Pichai spent a great deal of time discussing the company’s huge investments in machine learning, explaining why Google has decided to basically “re-think” all of its products to be more AI-friendly.

“Machine learning is a core, transformative way by which we’re re-thinking about how we’re doing everything,” Pichai said. “We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we’re in early days, but you will see us — in a systematic way — apply machine learning in all these areas.”

This ability to absorb and analyze massive amounts of information, simply for the sake of improving a product’s performance, is going to go a long way for tech companies like Google and Apple. Their smart assistants — Google Now and Siri, respectively — have become much more proactive this year through some big software updates. Instead of you repeatedly asking for information, these digital assistants are now trying to learn your habits and anticipate what you might need — an app, someone’s contact information, sports scores — before you ask.
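
One way to picture that kind of proactive assistant is as a model that learns usage habits and predicts what you will want next. The Python sketch below is a deliberately simple, hypothetical counting model; the app names and the hour-of-day signal are invented for illustration, and real assistants like Google Now combine far richer signals than this.

```python
from collections import Counter, defaultdict

# Hypothetical usage log: (hour of day, app opened). Invented data.
log = [(8, "news"), (8, "email"), (8, "news"),
       (12, "maps"), (12, "maps"), (18, "sports")]

by_hour = defaultdict(Counter)
for hour, app in log:
    by_hour[hour][app] += 1  # "learn" habits just by counting

def suggest(hour: int) -> str:
    # Anticipate the most likely app before the user asks for it.
    counts = by_hour.get(hour)
    return counts.most_common(1)[0][0] if counts else "search"

print(suggest(8))   # "news"
print(suggest(12))  # "maps"
```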

Since machine learning is based on statistics and the ability to perform better through experience, this kind of technology is also driving important advances in medicine, genetics, robotics, the internet, advertising, and even video games. That list is a pretty good illustration of why a multifaceted technology company like Google is investing so heavily in machine learning.

Keep in mind, however, that machine learning is not the same thing as artificial intelligence, though you might see the two working together in plenty of future consumer-leaning applications. Machine learning is about creating algorithms that can generalize from data to regularly improve performance and behavior. Artificial intelligence covers areas beyond machine learning, particularly natural language processing and understanding.

But the jury’s still out on artificial intelligence: despite its importance, there is plenty of reluctance and a legitimate sense of fear around this issue. You know, since a sentient robot or machine that can think independently improve its own software could easily perceive humans as a threat (to itself, to the environment, etc.), and decide to wipe them/us out. Most people, however, seem to be all for machine learning, as it will help people get the most out of the tools we have, promoting even more progress across all technologies and disciplines, so we can live happier, healthier, easier lives.


Google is increasingly relying on artificial intelligence to enhance search results (GOOG)


Google is great at finding answers to any question you have. After all, there are millions of people out there who have likely asked the same questions.

But what happens when you ask Google something it's never seen before?

That's the job of Google's artificial intelligence system RankBrain, which was revealed in a Bloomberg story on Monday.

According to Google's senior research scientist Greg Corrado, RankBrain tackles a "very large fraction" of Google searches — about 15% of the millions of queries the company receives every second, it says — by embedding the text of people's searches into mathematical vectors that the computer can actually understand.

Once RankBrain analyzes the text through vectors, it can isolate words or phrases it doesn't understand. It can then guess the meaning based on similar words and phrases and filter the results accordingly.
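
Here is a hedged sketch of the vector idea in Python: words become points in a shared space, and an unfamiliar term can be interpreted by finding its nearest known neighbors. The three-dimensional, hand-picked vectors below are invented purely for illustration; RankBrain's actual embeddings are learned from data and have far more dimensions.

```python
import numpy as np

# Hand-picked toy embeddings; real systems learn these from huge corpora.
vectors = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.85, 0.15, 0.05]),
    "banana":     np.array([0.05, 0.90, 0.20]),
}

def cosine(a, b):
    # Standard similarity measure between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query_vec):
    # Guess an unfamiliar term's meaning from its closest known neighbor.
    return max(vectors, key=lambda w: cosine(vectors[w], query_vec))

unknown = np.array([0.88, 0.12, 0.02])  # vector for a never-seen word
print(nearest(unknown))  # "car": treat the new term like its neighbors
```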

Corrado says RankBrain is different from the "hundreds" of signals and technologies that contribute to Google's Search algorithms in that it actually learns and improves over time.

Google says it's only used RankBrain to handle this massive search load for the "past few months," but Corrado says RankBrain is actually better at predicting top search results than Google's own search engineers. And now, RankBrain appears essential to Google. Corrado said turning RankBrain off at this point "would be as damaging to users as forgetting to serve half the pages on Wikipedia."

RankBrain is just one part of Google's recent, major investment in artificial intelligence and machine learning.



These are the 3 biggest obstacles to artificial intelligence, according to Google's researchers


Artificial intelligence (AI) has always played a huge part in many of Google's applications. Behind the scenes, AI powers Google's translation program, image recognition, and email spam filters.

It doesn't look like Google will stop there. Google CEO Sundar Pichai announced during the company's Q3 earnings call that it's "re-thinking" all of its products to include more AI and a method called machine learning.

What those products will be remains to be seen. No one knows exactly what the final frontiers of AI will look like, but three Google researchers told Tech Insider about the problems AI researchers face when it comes to building machines capable of exhibiting extreme intelligence.

Getting machines to experience the world like humans do

Humans are champions when it comes to experiencing the world — we wouldn't have survived if we weren't.

Millions of years of evolution have sharpened humans' senses, like vision, hearing, and touch, to a fine point because they were necessary to the species' survival. As it turns out, these senses are also an important part of intelligence. While AI researchers are making unprecedented strides in these areas using machine learning, they've still got a way to go.

Peter Norvig, a director of research at Google and co-author of the standard textbook on modern AI, said getting computers to experience the rest of the world as well as humans do would help crack problems AI researchers have long had with planning and reasoning.

"We are very good at gathering data and developing algorithms to reason with that data, but reasoning is only as good as the data, which means it is one step removed from reality," Norvig wrote to Tech Insider in an email. The closer we can get computers to experience reality, the better they'll be.

"I think that reasoning will be improved as we develop systems that continuously sense and interact with the world, as opposed to learning systems that passively observe information that others have chosen, like collections of web pages or photos."

Getting computers to learn without human teachers

When you were growing up, you learned about the world in a number of different ways. You likely had parents or teachers who pointed to items and told you what they were called. But a lot of childhood learning was also implicit: the ability to make inferences to fill in the gaps and build on previous knowledge.

But computers don't have that ability. The most successful method of machine learning so far is called supervised learning, which works a lot like a teacher pointing to items and naming them. Each time the system learns a new task, it has to start essentially from scratch, which requires a lot of human involvement and time. Machines need to be able to learn without as much supervision and input from humans, according to Samy Bengio, a researcher at Google.

"We need to work more on continuous learning — the idea that we don't need to start training our models from scratch every time we have new data or algorithms to try," Bengio wrote to Tech Insider in an email. "These are very difficult tasks that will certainly take a very long period of time to improve on."

Focusing on the right parts of human intelligence and not getting sidetracked

Geoffrey Hinton, a Google distinguished researcher, told Tech Insider that one of the biggest obstacles to building computers capable of human-level intelligence was making sure we don't get sidetracked by unnecessary considerations.

Take consciousness. It has long been considered integral to true intelligence, but it is also one of the most mysterious aspects of human intelligence. But Hinton said that's an outdated way of thinking about thinking.

"Consciousness is an old and very primitive attempt to explain what's special about a very complicated computational system by appealing to some unobserved essence," he wrote to Tech Insider in an email. "The concept is no more useful than the concept of 'oomp' for explaining what makes cars go ... that doesn't explain anything about how they work."

