Channel: Artificial Intelligence

Why 'Rise of the Robots' was named the most important business book of the year



Yale bioethicist Wendell Wallach proclaimed in June that the developed world is at an "unparalleled" moment in history where technology is replacing more jobs than it creates.

If we use the word "robot" as shorthand for a wide variety of machines and programs fueled by artificial intelligence, then we are in the early stages of a robot revolution.

One of the most vocal proponents of a need to address these changes in the workforce before they get out of control is Bay Area software developer Martin Ford, who told Business Insider earlier this year, "If you look far enough into the future, say 50 years and beyond, there aren't any jobs that you could say absolutely for sure are going to be safe."

On Tuesday, a panel of seven judges — including LinkedIn founder Reid Hoffman and renowned INSEAD professor Herminia Ibarra — named Ford's book on the subject, "Rise of the Robots: Technology and the Threat of a Jobless Future," the Financial Times and McKinsey Business Book of the Year. It follows the 2014 winner "Capital in the Twenty-First Century" by Thomas Piketty.

Ford's book beat out five other finalists for the award, including Richard Thaler's "Misbehaving," a history of behavioral economics, and Anne-Marie Slaughter's "Unfinished Business," a call for equalized gender roles. Ford was given a £30,000 ($46,000) prize.

"Not all the judges agreed with the book's proposed solutions but nobody questioned the force of its argument," the Financial Times reported.

"In a closely fought debate about the six shortlisted titles, one judge described Mr. Ford's book as 'a hard-headed and all-encompassing' analysis of the problem. Lionel Barber, FT editor and chair of the judging panel, called 'Rise of the Robots' 'a tightly written and deeply researched addition to the public policy debate.'"

Ford told Business Insider that he wrote his book to make more people aware of the issue, which previously was largely relegated to academic studies. A 2013 Oxford University study titled "The Future of Employment: How Susceptible Are Jobs to Computerization," for example, predicted that 47% of US jobs could be automated within one to two decades.

"I'm not arguing that the technology is a bad thing," Ford said. "It could be a great thing if the robots did all our jobs and we didn't have to work. The problem is that your job and income are packaged together. So if you lose your job, you also lose your income, and we don't have a very good system in place to deal with that."

In his book, he makes some radical proposals, including eventually giving all citizens a minimum base income to weather the rapidly changing economy (hence the judges not agreeing with all of Ford's conclusions).

Regardless of what solutions governments and corporations adopt to address the changing workforce over the next 20 years, Ford argues they cannot assume that market forces will take care of themselves.

"Without consumers, we're not going to have an economy. No matter how talented you are as an individual, you've got to have a market to sell it to," Ford said. "We need most people to be OK. We need some reasonable level of broad-based prosperity if we're going to continue to have a vibrant, consumer-driven economy."

SEE ALSO: The 6 most influential business books of 2015

Join the conversation about this story »



Toyota will invest $1 billion in artificial intelligence and robotics (TM)



Toyota says it will spend $1 billion to build a new research institute dedicated to two things that will improve its cars: artificial intelligence and robotics.

The new establishment, called the Toyota Research Institute, will begin work in January, according to Bloomberg. The new branch will work on safety systems and autonomous cars, while also easing the transition for older drivers who don't want to relinquish their car keys.

Much of the $1 billion investment will go towards building locations for the new Research Institute near Stanford University and the Massachusetts Institute of Technology.

Gill Pratt, the 54-year-old CEO of Toyota’s Research Institute, says working towards autonomous cars will be less of a sprint and more of a marathon.

“It’s possible at the beginning of a car race that you may not be in the best position,” he said. “It may be that other drivers are saying a whole lot about what their position is, and everyone may expect that a particular car will win. But of course, if the race is very long, who knows who will win? We’re going to work extremely hard.”

Toyota says its goal is to introduce cars with semi-autonomous features — like lane switching and steering on and off public highways — by 2020, just in time for the Summer Olympics in Tokyo.

By that time, both Google and Tesla aim to have fully autonomous vehicles on the road. But Pratt, who ran the Robotics Challenge at DARPA before joining Toyota, says his company will not rush to the finish line.

“The problem of adding safety and accessibility to cars is extremely difficult,” Pratt said. “And the truth is, we are only at the beginning of this race.”



This artificially intelligent Twitter bot will tell you how good your selfie is


A new artificial intelligence (AI) can predict which selfies are likely to get the most love, and now you can test it yourself.

Stanford PhD student Andrej Karpathy built a deep learning system that analyzed 2 million selfies and figured out the traits that make up good selfies.

Karpathy also made @deepselfie, a Twitter bot that looks at people's submitted selfies and judges them automatically. Give it a try by tweeting a square image or a link to one.

I tried it myself with my last Instagram selfie.

It replied with my results in just a few seconds — 52.1%, just slightly better than average.

The AI behind @deepselfie established a few arbitrary rules about what makes a good selfie, after examining two million images. My selfie inadvertently follows a few of those rules — it's pretty washed out, filtered, cuts off my forehead, shows long hair, and I'm more or less in the middle. It also helps that I'm a woman, though I'm not sure if the napping kitten made much of a difference.

Here you can see which 100 selfies the AI determined were the best out of 50,000 selfies. What they have in common is pretty obvious — almost all of them include long-haired women on their own. They're also filtered, washed out, have borders, cut off foreheads, and feature faces in the middle third of the frame. There are no men, and very few people of color.

On the other hand, the worst images, or the selfies least likely to get any love, were group shots, badly lit and often too close up.
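The kind of trait-based scoring described above can be mocked up in a few lines. Everything here is invented for the sketch: Karpathy's actual system is a convolutional network trained end to end on roughly 2 million images, not a hand-weighted checklist.

```python
import math

# Hypothetical features in [0, 1] and invented weights, loosely echoing
# the traits listed above; a real model learns such weights from data.
FEATURE_WEIGHTS = {
    "face_centered": 1.2,   # face in the middle third of the frame
    "forehead_cut":  0.6,   # top of the head cropped out
    "washed_out":    0.8,   # heavy filter / low saturation
    "long_hair":     0.9,
    "group_shot":   -1.5,   # multiple faces hurt the score
    "too_close":    -0.7,
}

def selfie_score(features):
    """Squash a weighted feature sum through a logistic into a 0-100% score."""
    z = sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())
    return round(100 / (1 + math.exp(-z)), 1)

my_selfie = {"face_centered": 1.0, "forehead_cut": 1.0, "washed_out": 0.8,
             "long_hair": 1.0, "group_shot": 0.0, "too_close": 0.2}
print(selfie_score(my_selfie))  # → 96.1
```

A learned model replaces the hand-set weights with ones fit to which selfies actually earned likes, but the squash-to-a-percentage step is the same idea behind the 52.1% the bot returned.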

Read more about how @deepselfie works.



A computer made up stories about these 13 photos and the results are hilarious



Telling stories may be a distinctly human act, but computers are quickly catching up.

Developer Samim Winiger has merged image-recognition and language-processing software to birth an open-source program that can take an image and make up a story about it on the spot.

To give it human-like language, Winiger leaned on two AI models built by University of Toronto PhD student Ryan Kiros — one armed with 14 million passages from romance novels and the other with Taylor Swift lyrics.

Winiger's AI looks for recognizable objects in an image, finds passages in its lusty and lyrical repertoire that mention those objects, and then organizes them into a mini-story.
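As a sketch of that pipeline, with a stubbed-out object detector and a three-line stand-in for the romance-novel corpus (both invented here):

```python
# Toy version of the storyteller pipeline: "detect" objects in an image,
# pull matching passages from a styled corpus, and stitch them together.
# A real system would use neural image recognition and a learned language
# model; this sketch only shows the flow of data.

CORPUS = [  # stand-in for millions of romance-novel passages
    "He was a shirtless man in the back of his mind.",
    "The ship rose against the dawn, and she held her breath.",
    "A dog barked somewhere beyond the garden wall.",
]

def detect_objects(image_path):
    # Stub: pretend an image-recognition model found these objects.
    return ["ship", "dawn"]

def tell_story(image_path, corpus=CORPUS):
    objects = detect_objects(image_path)
    passages = [p for p in corpus
                if any(obj in p.lower() for obj in objects)]
    return " ".join(passages)

print(tell_story("titanic.jpg"))
```

The nonsense in the stories below comes from exactly this kind of loose coupling: the passages match the objects, but nothing forces them to cohere.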

To test the software, he summoned 5,000 random images from the web. Scroll down to see 13 of Tech Insider's favorite hilariously nonsensical stories (which we've only edited for punctuation). 

'He leaned over to kiss me on the shoulder.'

He was a shirtless man in the back of his mind, and I let out a curse as he leaned over to kiss me on the shoulder.

He wanted to strangle me, considering the beautiful boy I'd become wearing his boxers.

Model: Romantic Novels



'The man released a woman's blood, and that was the reason for all of them.'

My man was within a woman, and she gave him a questioning look. I don't know what else to say to her. For the first time in years, I had never seen as beautiful as the rest of my man. 

In fact, he had his hands on her hips, holding her close to his chest and looking down at her as if she were the most stunning woman I'd ever seen. The man released a woman's blood, and that was the reason for all of them.

She hoped to make an appearance, convincing him and the woman who was still her first.

Model: Romantic Novels



'There was only a few people in New York City, and New York City.'

We had to act much like the leader of the group, and for the first time in months,  I had no idea what she was going to do. In fact, I was pretty sure they were all bundled up in the snow. I couldn't help but close my eyes. 

As soon as he let go of my leg, he slipped his arm around my waist and pulled me toward him. There was only a few people in New York City, and New York City. She seemed to have no sense of conviction, that the fewer people I trusted and weapons started.

Model: Romantic Novels



See the rest of the story at Business Insider

The idea that AI can 'read' the emotions on your face is profoundly flawed



Companies are constantly looking for new ways to preempt and respond to our behavior, hoping to better sell us products and tailor services to our (un)conscious needs.

The next big thing is “affective computing," wherein algorithms can supposedly read emotions on our faces and deliver information about what we want to companies.

But at the heart of affective computing are three misguided ideas about human emotions. 

Picture a violin soloist in the spotlight in Carnegie Hall. She is four bars from the end of a cadenza she has practiced for twenty years. She's one with the violin — her face and body are frozen, and only her bow arm and callused fingers are moving. This is the high point of her entire career. How is she feeling? Is she happy? 

If technology can capture how she's feeling, then tech companies can start answering — and monetizing — questions that have until now been the domain of philosophy and the arts. Questions like "What is happiness?" 

The developers of affective computing claim that reading a sequence of facial expressions in great detail — often greater detail than the human eye possesses — allows them to map a series of emotional states over time.

Since these businesses are taking on questions philosophers have debated for centuries, a good way to assess value is philosophical due diligence.

At the heart of affective computing are three misguided ideas about what human emotions are:

  1. All humans have inner states consisting of both rational thoughts and emotional sentiments that make us who we are.
  2. We are able to grasp and perfectly communicate both our rational and emotional inner states to others.
  3. Humans base their actions primarily on these rational thoughts and emotional sentiments.

Let's take them one by one.

All humans have inner states consisting of both rational thoughts and emotional sentiments that make us who we are.

This first assumption comes from the mind/body dualism of Rene Descartes, who wrote "I think, therefore I am." Our thoughts and feelings are what make us unique as individuals, and we understand the world by thinking about it and responding to it with emotions.

For centuries, philosophers (most notably Martin Heidegger) have attacked the idea of mind/body dualism. Heidegger suggested that the way we understand the world is by being involved in it, physically, emotionally, and socially.

Heidegger, and the philosophers influenced by him, argued that you could never isolate and explore inner states as if they were individual butterfly wings kept under glass. Look again at our violinist: How much of the meaning of her big moment — the years of self-doubt, the sacrifices and sore fingers — comes through her face? During this high point in her life, she's not even smiling.

We are able to grasp and perfectly communicate both our rational and emotional inner states to others.

The second assumption is worse than the first. Affective computing assumes that emotions are a set number of discrete reactions to the world. In this scenario, anger or disgust look the same in every situation, and so does passion. The ability of high-resolution cameras to detect "micro-expressions" is analogous to an electron microscope's ability to see the atoms in a molecule in great detail.

But just because they look the same doesn't mean these various emotions are the same. The French existentialist Jean-Paul Sartre posited that people are only ever angry at something. To understand a person's anger, we need to understand what they're angry at. The fleeting emotions—those "micro-expressions"—that pass across our faces are only a small part of that story.

Technology will give us ever more knowledge about the granular workings of facial expressions while simultaneously losing more and more understanding about the cultural context of our emotions. The more affective computing attempts to flatten individual experiences into atomized and indexed data points, the more we, as a culture, lose our sensitivity to its genuine meaning. 

Humans base their actions primarily on rational thoughts and emotional sentiments.

The Pixar movie "Inside Out" takes this idea as its premise, with Joy, Fear, Anger, Disgust, and Sadness fighting for control over an eleven-year-old girl. The girl, Riley, makes decisions based on directions from her emotions. The movie agrees with philosopher David Hume that reason is "a slave to the passions," and disagrees with Aristotle, who called humans “rational animals.” Aristotle and Hume would both agree that, whether rational or irrational, there’s someone at the controls.

Recent developments in thinking about human consciousness suggest, however, that the head isn't always doing the driving. We have bodies, too, and physical context is a crucial determinant of how we will behave.

Physical context is key to decision-making. An emotional response to a commercial in the warm, dark room of the focus group may have no relation to the way that same commercial is perceived at home or on a subway platform. 

Norman masons used soaring columns and stained glass to make people stay quiet in church; house managers on a talk-show keep the set cold to make people laugh more. And how relevant is your emotional state in the grocery store, making an impulse buy of brightly lit, perfectly chilled soda that's right at your fingertips?

There are also wider concerns. What benefit would the rest of us gain by having our every facial expression atomized into stratified bits of data? Isn't this akin to a privatized surveillance state?

Philosophical due diligence tells us that our emotions are meaningless when taken out of context. An economy built on monetizing these isolated emotions is built on shaky foundations.

Christian Madsbjerg is a founding partner of ReD Associates, a strategy and innovation consulting firm based in the human sciences. Jonathan Lowndes is a Senior Consultant and social theorist at ReD Associates.



Why Google's virtual assistant won't tell you jokes (GOOG, GOOGL)



Google may have a wild sense of humor when it comes to the silly Easter Eggs it hides inside many of its products, but you won't find its virtual assistant joking around. 

Google Now, the company's equivalent to Apple's Siri or Microsoft's Cortana, purposely avoids having any sort of personality, search executive Amit Singhal told Time's Victor Luckerson.

Singhal says that incorporating humor into voice assistants hints at artificial intelligence capabilities that just don't exist yet. He believes that it misleads users.

"I’m not saying personality shouldn’t come, but the science to get that right doesn’t fully exist," he says. 

He then dropped a bit of a burn on Apple's Siri, which has a reputation for providing funny responses to questions like "Do you believe in God?" or "Do you have a boyfriend?"

"You’ve seen what happens in real life," he says. "That is interesting for a day or two, but then it kind of…loses its charm, let’s say." 

Singhal says that improving natural language processing is one of the big challenges to improving Now and Now On Tap — Google's companion service for Android phones, which will scan users' screens to provide even more useful info. The better the virtual assistant can understand the meaning of a complex string of words, the better it can provide helpful answers.

To keep its search relevant in a world where people are increasingly looking for new, non-desktop ways to get information, Google plans to expand Now into more of the products you use every day, like TVs and refrigerators.

Read the rest of the interesting Time piece here.

SEE ALSO: A 12-year-old beat out 16,000 other people to win a Google contest — 7 years later, she’s a successful artist



Facebook is using 'Lord of the Rings' to teach its programs how to think



Lift the curtain on almost any tool on Facebook, and you're likely to see a robot at the controls. That's because artificial intelligence (AI) is responsible for powering things like automatic tagging and newsfeed.

To make their AI even more intelligent, Facebook researchers are harnessing the power of fantasy fiction — they're teaching it the "Lord of the Rings."

According to Popular Science, the social media behemoth is working on an AI, called Memory Network, that can understand and remember a story, and even answer questions about it. Any story could be used, but researchers taught Memory Network a short summary of J.R.R. Tolkien's fantasy saga "Lord of the Rings." Memory Network is powered by deep learning, a statistical approach that allows the AI to improve over time.

Facebook CTO Mike Schroepfer presented Memory Network at a developer's conference in San Francisco in March. He said the AI's ability to answer questions about Frodo and the ring shows it understands how people, objects, and time in the narrative are related.

Though Memory Network's knowledge of the "Lord of the Rings" is very stripped down, it's a first step into an AI that has a common sense understanding of the relationships between objects and topics, something that's so far been very difficult to encode in computers.
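A drastically simplified sketch of that question-answering behavior, with an invented two-sentence "synopsis" (a real Memory Network learns its attention over memories end to end rather than matching keywords):

```python
# Toy memory-based QA: store story sentences as "memories", then answer
# a where-question by finding the latest relevant memory. Keyword
# matching stands in for the learned attention of a real Memory Network.

STORY = [
    "Bilbo left the ring in the Shire.",
    "Frodo carried the ring to Mount Doom.",
]

def answer_where(entity, before=None, memories=STORY):
    relevant = [m for m in memories if entity in m.lower()]
    if before:  # drop memories mentioning the 'before' location
        relevant = [m for m in relevant if before not in m.lower()]
    if not relevant:
        return None
    last = relevant[-1]
    for marker in (" in ", " to "):  # crude location extraction
        if marker in last:
            return last.split(marker)[-1].rstrip(".")
    return None

print(answer_where("ring", before="mount doom"))  # → the Shire
```

The hard part, which this sketch skips entirely, is learning which memory is relevant without hand-written rules; that is what the deep-learning component supplies.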

Eventually the AI could be used to improve the newsfeed and search, Schroepfer said, because its grasp of relationships would let it know what you're interested in before you even ask. If the AI can deduce from the dog pictures you share that you're a dog person and not a cat person, for example, a smart newsfeed would show you more puppy videos and fewer cat videos.

"By building systems that understand the context of the world, understand what it is you want — we can help you there," Schroepfer said at the conference. "We can build systems that make sure all of us spend time on things we care about."

You can see how the AI works in the short video below. Asked "where was the ring before Mount Doom?" the AI was able to deduce that the ring was in the Shire. (Spoilers: Frodo and Sam make it.) The AI references passages and sentences that give it the information it wants to know.

Some of the leading minds in AI research are working at Facebook to build intelligent machines. One of the group's more recent advances is a technology called Memory Networks, which enables a machine to perform relatively sophisticated question answering, as in this example of a machine answering questions about a Lord of the Rings synopsis.

Posted by Facebook Engineering on Thursday, March 26, 2015

It's also part of a larger project called "Embed the World," a project aimed at teaching machines to better understand reality by representing relationships between things as "images, posts, comments, photos, and video," according to Popular Science.

Yann LeCun, Facebook's AI research director, told Popular Science that the project can tag photos taken in the same place based on the image and the caption alone.

Memory Network is just one example of Facebook's increased investment in AI. Facebook's digital assistant M uses some AI, and the company is also working on image recognition for video.



This Japanese robot talks to sleepy drivers and helps them stay awake


This is the biggest shift going on in artificial intelligence



Artificial intelligence (AI) is making huge strides in problems that have traditionally plagued the field by harnessing the power of machine learning, a statistical method that relies on a lot of data. 

Machine learning superficially mimics the interconnected structure of the human brain, but beyond that, it looks nothing like how humans think or reason. 

But when Tech Insider asked 26 AI researchers whether AI has to mimic the human brain to be truly intelligent, many flatly said no. Many believe that trying to recreate how humans achieve intelligence isn't fruitful, and that the best bet for creating intelligent systems is machine learning.

AI was first established in 1956 as a research field to study the nature of intelligence by building it. Early commonplace approaches to build thinking machines included attempts to encode knowledge, logic, and reasoning, according to the American Scientist. A lot of the work was done in "small, proof of concept" projects that delivered few results.

By the 1980s, AI changed course, according to the Atlantic, and shifted to tackling intelligence piecemeal, like by building programs to solve specific problems.

That's how it remains today. Because machine learning is delivering real results, companies like Google and Facebook are doubling down on implementing even more machine learning in their services.

But as the approach to intelligence has changed, so has the definition of intelligence. According to Thomas Dietterich, director of Intelligent Systems at Oregon State University, intelligence isn't a single trait — it "refers to many things." Because intelligence is measured "by how well a person or computer can perform a task," it's the results that matter, not how they're achieved.

Humanoid robots work side by side with employees on the assembly line at a factory of Glory Ltd., a manufacturer of automatic change dispensers, in Kazo, north of Tokyo, Japan, July 1, 2015. REUTERS/Issei Kato
"By this measure, computers are already more intelligent than humans on many tasks, including remembering things, doing arithmetic, doing calculus, trading stocks, landing aircraft, etc," Dietterich told Tech Insider. 

In fact, many of the researchers Tech Insider spoke to compared aircraft and birds to illustrate the point.

"I think that's one of the few things that most of us will agree on," Oren Etzioni, the CEO of the Allen Institute of Artificial Intelligence, told Tech Insider. "The best analogy is with flight. Planes are very different from birds. Birds flap their wings and they're very light. Airplanes are heavy and they're different, the principles of aerodynamics are the same but these are very different mechanisms. I think most of us believe that's going to be the way with intelligence as well." 

But the recent advances aren't really intelligent, according to Douglas Hofstadter, author of the so-called "bible of AI." He believes that in order to build machines with human-like intelligence, with all its rich nuances, you have to build machines that think like humans. The debate has gotten so contentious that Hofstadter has basically shunned the rest of the field.

"I don't want to be involved in passing off some fancy program's behavior for intelligence when I know that it has nothing to do with intelligence," Hofstadter said to the Atlantic when asked about Deep Blue, the IBM supercomputer that defeated the reigning chess champion Garry Kasparov in 1997. "I don't know why more people aren't that way."

Detractors of machine learning, like Hofstadter, say the approach misses the point of AI entirely and will never truly achieve human-like intelligence.

Scott Phoenix, the cofounder of Vicarious, is trying to build the world's first human-level AI. He told Tech Insider that we have to look to examples of intelligence that occur in nature if we truly want to understand intelligence. That doesn't necessarily mean completely copying the biological structures of the brains of intelligent beings; that would be as absurd as planes with feathers and beaks, Phoenix said.

"It's probably going to be a lot more difficult to ignore neuroscience and ignore the brain when you're trying to build something that works like a brain," Phoenix told Tech Insider. "At the same time I don't think it's strictly necessary that you duplicate all of the biological functions of the brain."

But Bart Selman, a computer scientist at Cornell, said building an intelligent machine based on human intelligence would be impossible because we simply haven't figured out how humans do it. The best hope we have is trying to "get to a performance at a human level without getting the details of the human brain all figured out."

"Speech recognition is a good example," Selman told Tech Insider. "We don't quite know how the brain does it, the brain is probably more complicated than the way we're doing it right now."

For their part, machine learning advocates admit their method doesn't try to emulate human intelligence. The IBM supercomputer Watson, which beat two Jeopardy! champions, relies on statistical methods and a lot of data. But even Dave Ferrucci, the team leader on Watson, admits that Watson doesn't have anything to do with human intelligence.

"Did we sit down when we built Watson and try to model human cognition?" Ferrucci told the Atlantic. "Absolutely not. We just tried to create a machine that could win at Jeopardy."

Ferrucci told the Atlantic that despite what detractors say, there's no reason why machine learning would have to act anything like human intelligence.

"It's artificial intelligence, right? Which is almost to say not-human intelligence," Ferrucci said. "Why would you expect the science of artificial intelligence to produce human intelligence?"



The CEO of IBM has a bold prediction about the future of artificial intelligence



In May, Ginni Rometty, the chairman and CEO of IBM, stood on stage in front of a packed room and announced that she was going to make "a bold prediction."

"In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson," she said, "and our lives will be better for it."

Listening in were the crowds of engineers, designers, doctors, bankers, researchers, and reporters that IBM had ferried over to a massive glass-and-steel structure on the banks of the East River in Brooklyn.

The occasion was a new event, World of Watson, designed to showcase the "ecosystem" of innovation happening around Watson, IBM's signature artificial-intelligence system.

Watson became famous in 2011 for beating Jeopardy! champion Ken Jennings at his own game. But now IBM has much larger plans for it, which Rometty was hinting at with her "bold prediction."

"Jeopardy! was all about answers," IBM Watson Group vice president Stephen Gold explained earlier in the day, describing how chefs were using Watson to develop new recipes. "This is all about discovery."

Chef Watson, however, is just a fun example of the kind of creative thinking Watson can be trained to do. Rometty made clear that the company's true aspirations are much larger and more consequential than what's for dinner.

The World of Watson event drove this home. It suggested that cognitive systems have a place in almost any type of decision a person or company may be faced with, whether that involves buying a house, making an investment, developing a pharmaceutical drug, or designing a new toy.

"As Watson gets smarter, his ability to reason is going to exponentially increase," Rometty said. What will be really game changing won't be Watson's knack for recalling facts faster than even the most trivia-savvy human, but its ability to assist people with the complex and nuanced tasks of decision-making and analysis.

"Watson deals in the gray area, where there's not a perfect right and wrong answer," she continued. "That's the hardest thing we do as humans."

If Rometty's big prediction pans out, this — the gray area that was once our exclusive and often most-challenging domain — may eventually become much easier.



8 industries robots will completely transform by 2025



Just as ATMs changed banking and computers took over the home and workplace, robots and artificial intelligence are going to transform a bunch of industries over the next decade.

By 2025, a machine may be putting together your driverless car in a factory with no human oversight. A robot maid could be cleaning up after you at home, and your financial advisor might be a computer investing for you automatically. 

And with at least 90 countries operating unmanned aerial vehicles, the wars of the future may increasingly be fought with "drone" aircraft.

These are just some of the interesting — and sometimes scary — predictions to come from a 300-page report released by Merrill Lynch in November, which estimates the global market for robots and AI will grow from $28 billion to more than $150 billion just five years from now.

There's plenty of disruption bound to happen across the world as drones and much-smarter-than-you AI take over. But we're likely to see the biggest changes across eight industries in China, Japan, the US, and Korea — the countries currently investing the most in these technologies.

Here are the big predictions from Merrill Lynch:

SEE ALSO: This shape-shifting robot is made out of other robots

The auto industry is going to change big-time, especially when fully autonomous — aka driverless — cars officially go mainstream.



Over the next five years, the report says most new cars will be smarter "connected" cars, and in 2025, that'll mean about 10% of them are fully autonomous.



While the initial price will be about $10,000 more than regular cars, it will inevitably come down as more people and companies adopt them.




The world's greatest minds have been terrified of AI becoming smarter than humans for 60 years


The number one idea that futurists are obsessed with is the Singularity, or the moment when technological intelligence outpaces human intelligence. 

The threat of artificial intelligence is everywhere in science fiction. There's Eva in "Ex Machina," HAL in "2001: A Space Odyssey," and the robot overlords in "The Matrix." In each of these cases, sentient robots turn on or totally dominate their human creators.

What we now call the Singularity has been pondered since the dawn of computing.

While artificial intelligence feels very new in some ways ("Ex Machina" was maybe the best movie of 2015), discussion of thinking machines and their implications reaches back to the very birth of computer science.

The most telling example is Alan Turing, the visionary British mathematician played by Benedict Cumberbatch in "The Imitation Game."

Turing, whose codebreaking saved millions of lives during World War II and who is now largely credited with inventing the modern computer, described his vision of what we now call the Singularity in a 1951 lecture, "Intelligent Machinery: A Heretical Theory."

Turing said that the creation of thinking machines would probably be blocked by religion, as was the case with Galileo, who was convicted for heresy for saying the Earth moves around the sun, and by intellectuals, who would fear they'd be out of a job. 

Those intellectuals would still have plenty to do, Turing reasoned, since they'd need to both figure out what the machines were trying to say and keep their intelligence up to the standard set by machines. 

That's where things get unnerving. 

"It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers," Turing wrote, noting that the machines wouldn't die and that, like humans, they'd be able to converse with each other to grow more intelligent.

"At some stage," he said, "we should have to expect the machines to take control."

The outstripping of feeble human powers by machines got a special name from mathematician I. J. Good, who called it an "intelligence explosion" in a 1965 essay.

The logic is simple enough: if intelligent machines become more skilled than humans at designing intelligent machines, we hit the Singularity. 

"An ultraintelligent machine could design even better machines," he said. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."

Both Stephen Hawking and Elon Musk are alarmed by the idea of the intelligence explosion. It's why Musk has said that artificial intelligence is more dangerous than nukes.

It certainly looks like we're getting closer to the Singularity. 

As a recent New Yorker profile of the philosopher Nick Bostrom notes, computers have been beating humans in games of strategy since 1981. And thanks to the vast computing power now available in the cloud, robots and computers can do things that, until recently, they couldn't. There are now robots that stabilize themselves after being pushed, for example, and computer vision programs that identify faces better than humans can.

If you ask Google futurist Ray Kurzweil, from the 2020s on we'll have nanobots entering our bodies to destroy diseases while also connecting our minds directly to the Internet.

There are still compelling criticisms of the whole idea of computational "intelligence." For one, neither hard science nor social science can agree on what intelligence is, or on whether it's "computational" in the same way that binary computer code is. Perhaps, as the burgeoning field of embodied cognition suggests, our bodies are involved in our thinking in crucial ways, which would pose real obstacles to creating a computer with general intelligence.

That's the crazy thing about artificial intelligence and the Singularity. Right now it's a thought experiment. But if it ever comes to fruition, life on Earth will be changed forever.


NOW WATCH: Why Korean parents are having their kids get plastic surgery before college

Cambridge University is opening a £10 million centre to study the impact of AI on humanity


Cambridge University announced on Thursday that it is opening a new £10 million research centre to study the impact of artificial intelligence on humanity.

The 806-year-old university said the centre, being funded with a grant from non-profit foundation The Leverhulme Trust, will explore the opportunities and challenges facing humanity as a result of further developments in artificial intelligence.

Over the next 10 years, the Leverhulme Centre for the Future of Intelligence will study the impacts, both short and long term, of this "potentially epoch-making technological development."

The centre will be led by Professor Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge.

Price said: "Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad."

Renowned scientists like Cambridge's Stephen Hawking and Oxford's Nick Bostrom have warned that machines could outsmart humans within the next century, possibly leading to widespread job losses and, in the very worst case, the end of humanity. Last December, Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race."

Cambridge said the facility will work in conjunction with the university’s Centre for the Study of Existential Risk (CSER), which is funded by Skype cofounder Jaan Tallinn and looks at emerging risks to humanity’s future including climate change, disease, warfare, and artificial intelligence.


The launch of the centre comes as US tech heavyweights such as Google, Facebook and Apple are starting to ramp up their research in artificial intelligence in a bid to make their platforms and devices more sophisticated.

Google, for example, bought London AI startup DeepMind for £400 million last January, while Facebook has set up a new AI research centre in Paris.

The founders of DeepMind have stated that the field they're working in needs to be treated with respect as it could end up being detrimental to humanity if sophisticated AI "agents" end up in the wrong hands.

Cambridge graduate and DeepMind cofounder Demis Hassabis revealed last month that some of the most prominent minds in AI are gathering in New York early next year to discuss the ethical implications of the field they work in.

Zoubin Ghahramani, deputy director of the new centre and professor of information engineering at Cambridge, added: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars.

"We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."


NOW WATCH: I visited Amazon's first retail store, and one thing was especially annoying

The best science fiction, as picked by 20 A.I. experts


Artificial intelligence (AI) experts may not always agree with how robots and AI are depicted in science fiction, especially when the majority of movies feature killer robots.

But even they can't help but enjoy a few choice science fiction movies and books.

Some movies show what our near future could look like, while others are far-fetched but enjoyable nonetheless.

Tech Insider spoke to 20 AI researchers, roboticists, and computer scientists about their favorite science fiction depictions of robots.

Scroll down to see their lightly edited responses. 

SEE ALSO: 18 AI researchers reveal the most impressive thing they've ever seen

Some researchers enjoyed philosophical discussions about AI in science fiction. Carlos Guestrin says 'Ex Machina' does it with more nuance than other movies.

"I like a variety of things, particularly things that challenge my thinking. More recently there have been a flurry of movies about how robots are going to take over the world and be bad guys.

But there are also some recent movies that have been interesting and more nuanced, like 'Ex Machina.' That's a bit apocalyptic but also kind of an interesting take."

Commentary from Carlos Guestrin, the CEO and cofounder of Dato, a company that builds artificially intelligent systems to analyze data.

Here's the trailer if you haven't seen the movie: http://www.youtube.com/embed/bggUmgeMCdc



The short story 'Non serviam' by Stanislaw Lem maps out what relationships between robots and their creators could look like, Ernest Davis says.

"The best sci-fi piece that I've seen on artificial intelligence is a story by Stanislaw Lem called 'Non serviam,' which is Latin for 'I will not serve.' It's in his collection of stories called 'A Perfect Vacuum.'

"It has to do with a programmer who creates a whole collection of artificial virtual personalities in a virtual world, but he doesn't let them know that they're virtual. So they argue among themselves as to whether there exists a creator, and if so whether they owe him any gratitude for their existence. That I think is an extremely fine story."

Commentary from Ernest Davis, a computer scientist at New York University.



Novelist Ann Leckie's first novel 'Ancillary Justice' blew Joanna Bryson away.

"I'm really excited by a new novelist that came out of the American Midwest named Ann Leckie. That doesn't mean that professionally, I think that's the way that AI is going to go. But she is on top of the relationship between AI and human intelligence and group collectives.

"She talks about what it's like to be a collective with perfect communication and how enhanced memory is going to impact humans. I think she's really on the ball about that. She won all kinds of awards last year for her first novel 'Ancillary Justice.' "

Commentary from Joanna Bryson, a computer scientist and visiting fellow at Princeton University.




We might be able to replicate the human brain long before we understand it


Most neuroscientists believe that our "self"— the core thoughts, memories, and emotions that make us who we are — resides in the intricate connections of the three-pound, soft, wrinkly, and squishy organ that we call our brain.

And some researchers, especially those who fall into the "futurist" category, believe that if we could replicate the exact structure of the brain, we might be able to create the sort of artificial intelligence that might eventually offer the power and nuance of the human mind.

"I actually think it's a possibility that we'll have a simulation of a brain before we understand it," neuroscientist David Eagleman told Tech Insider in a recent interview, though he stressed that this is in the realm of speculation, not fact.

As far out there as this is (and it is pretty far out), we might even be able to create some way for a consciousness to reside in that external structure, allowing for a sort of digital immortality. Theoretically, of course, and only if our concept of the brain as a biological computer holds true.

But we're still very far from understanding the brain. It's shocking, in many ways, to think of how little we actually know about how the brain works. We don't understand how memories are stored; we don't know how our brains create meaning from the chaotic stimuli of the world; we don't know how intelligence works; and we're still struggling to understand the roots of mental illnesses.

"When I walked into my first society of neuroscience meeting, I saw all these high I.Q. people and thought, 'by the time I'm done with grad school, this is all going to be solved,'" Eagleman said. "It turns out, 20 years later, I can point to all sorts of technological innovations, but the big questions that drew me into neuroscience are all still there."

Copying the brain

Eagleman explained why he thinks it might be possible to develop technology that allows us to replicate the brain before we've really filled in all the details of its functioning. As he put it: "You can Xerox a book in a language you don't understand."

Working toward that replication may even help unlock some of the many remaining mysteries of neuroscience.

The question is how fine a resolution we need. "Any given brain cell is connected to about 10,000 of its neighbors, and [the question of] which neuron is connected to which is very specific," he said.
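Those numbers make the resolution question concrete. A back-of-the-envelope sketch of what just the wiring diagram would take to store (the neuron count and the bytes-per-connection figure below are commonly cited assumptions of ours, not figures from Eagleman's interview):

```python
# Rough storage estimate for recording which neuron connects to which.
# Assumed figures: ~86 billion neurons, ~10,000 synapses per neuron,
# 8 bytes to record a single connection.
neurons = 86e9
synapses_per_neuron = 1e4
bytes_per_synapse = 8

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e15:.1f} petabytes")  # 6.9 petabytes
```

Even under these simplistic assumptions, the connectivity map alone runs to petabytes, before modeling any membrane proteins or chemistry.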

Even when we can eventually visualize and model a brain of that complexity, we don't know if that will be sufficient. Eagleman said that perhaps we won't really be able to model a brain until we understand the way the proteins in the membranes of these neural cells interact or until we understand exactly how hormones or other chemicals affect neural processes.

But if in the end, the only thing stopping us from building an exact replica of a human brain is computational power — the ability to build a system that can process and emulate this whole structure — Eagleman suspects "we'll get to the point where it's not the issue."


NOW WATCH: 9 facts about the brain that will blow your mind


6 ways artificial intelligence is going to make your life better


We're already starting to see big developments in artificial intelligence.

Whether it's robots learning to speak like humans or a system capable of identifying landmarks in photos, artificial intelligence will continue to become a bigger part of our daily lives.

Andrew Moore, the former vice president of engineering at Google, told Tech Insider that big developments in artificial intelligence are coming, but that change will be gradual. Still, he envisions a lot happening in the next 10 years.

It's a theme we explore in our latest episode of Codebreaker, the podcast by Marketplace and Tech Insider. 


Philosopher Nick Bostrom tells Codebreaker why it's hard to design moral artificial intelligence:

Are "decisive machines" evil? Listen to the whole episode to find out. Or, subscribe on iTunes. 


 

Here are six predictions Moore, who is now dean of Carnegie Mellon's School of Computer Science, has for artificial intelligence in the next 10 years. And be sure to tune into Codebreaker to find out more ways artificial intelligence is shaping our lives.

SEE ALSO: Google’s head of artificial intelligence says ‘computers are remarkably dumb’

3 to 5 years: AI will be much better at being your "personal concierge."

Moore said in the next three to five years, AI like Siri will be much better at being our personal assistants. In that time frame, we will be able to ask more of our AI, Moore predicts.

For example, AI may be able to help us decide whether we need to see a doctor for an ailment. Or help recommend somewhere to eat based on our preferences and previous restaurants we visited.



5 years: AI will be able to process massive amounts of information during a crisis.

During natural disasters, it's difficult to process all of the information coming in and devise a plan to provide the most immediate relief.

Moore said he thinks in the next five years, AI will become intelligent enough to do the thought processing for us. That means processing what is happening and making judgment calls, such as determining how many people need to be on hand for whatever is happening.



5 years: Similarly, robots will be able to communicate with each other to coordinate a plan.

We've already started seeing this in practice with robots playing soccer at the RoboCup World Championship. 

But eventually, artificial intelligence will become advanced enough such that robots can work in teams to help each other during bigger situational problems, like search and rescue missions, Moore said. 




This AI can draw alphabet characters as well as a human can


Artificial intelligence has advanced by leaps and bounds in recent years, but it still pales in comparison to some human abilities.

Today's most sophisticated AI systems rely on learning from tens to hundreds of examples, whereas humans can learn from a few or even one.

Not only that, but humans have a richer understanding of concepts, which we use for imagination, explanation, and acting.

But now, a team of researchers has developed an AI that they say can learn handwritten characters from various alphabets after "seeing" just a single example, according to a study published Thursday in the journal Science.

The research had two goals: To better understand how people learn, and to build machines that learn in more humanlike ways.

"For the first time, we think we have a machine system that can learn a large class of visual concepts in a humanlike way," study leader Joshua Tenenbaum, a cognitive scientist at MIT, said in a news briefing on Wednesday.

Learning like a human

People, especially children, are remarkably good at induction: taking a single example and generalizing it to learn a broader concept.

Think of the first time you saw a Segway or a smartphone, suggested Tenenbaum during the briefing. You just needed one example, and you could recognize others that you came across later. Not only that, but you can use examples like these to explain, predict, and imagine other things.

By contrast, today's AI algorithms — like those used by Facebook's face recognition or Google's translation service — often require huge datasets to learn even basic concepts, and while impressive, they still don't have the rich understanding that humans have. The best-known of these is an approach known as "deep learning."

An AI that can draw letters after seeing just one

Tenenbaum and his colleagues set out to build an AI that could do something most humans can do easily: See a handwritten alphabet character, recognize it, and draw it themselves.

To do this, they created a model that represents concepts as simple probabilistic programs that explain how examples are generated, an approach they call "Bayesian program learning." Their approach combines three important ideas: first, that rich concepts are composed of simpler parts; second, that these concepts are produced by cause-and-effect processes; and third, that programs can "learn to learn" by relying on past experience.
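The compositional idea can be illustrated with a toy sketch. This is not the authors' model (their programs describe actual pen strokes with learned probabilities); the primitives and probabilities below are made up purely to show the principle of scoring an example by how well a concept's "program" explains it:

```python
import math

# Toy illustration: each "concept" is a tiny program, here just an
# ordered list of hypothetical stroke primitives.
CONCEPTS = {
    "A": ["diag_up", "diag_down", "bar"],
    "T": ["bar", "vertical"],
    "L": ["vertical", "bar"],
}

def log_likelihood(observed, program, p_match=0.9):
    """Score observed strokes against a concept's program: each position
    matches its expected primitive with probability p_match, otherwise a
    replacement is drawn uniformly from 10 alternatives."""
    if len(observed) != len(program):
        return float("-inf")  # toy simplification: lengths must agree
    return sum(
        math.log(p_match if got == want else (1 - p_match) / 10)
        for got, want in zip(observed, program)
    )

def classify(observed):
    # One-shot-style classification: pick the concept whose generative
    # program best explains the observed strokes.
    return max(CONCEPTS, key=lambda c: log_likelihood(observed, CONCEPTS[c]))

print(classify(["bar", "vertical"]))              # T
print(classify(["diag_up", "diag_down", "bar"]))  # A
```

Because each concept is a generative program rather than a bag of pixels, the same representation can also be run "forwards" to draw new examples, which is what lets the real model redraw and invent characters.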

Human volunteers from Amazon's Mechanical Turk service were recruited to hand-draw thousands of characters from 50 different alphabets, including traditional languages like Latin, Greek, and Korean, as well as fictional ones like the alien language in "Futurama."

Then the researchers fed these characters one at a time into their AI program and asked it to identify the correct character, break it down into its component parts, and redraw it. It then had to draw a made-up character based on several related characters. They gave the same tasks to human volunteers.

Impressively, the new AI model performed as well as humans at this task, and even better than deep learning algorithms. In the task where they had to identify the correct character, people had an average error rate of 4.5%. The new AI averaged 3.3% errors, whereas competitor programs varied between 8% and 34.7%.

Can it pass a 'Turing test'?

Next, the researchers compared humans against their AI in a "visual Turing test," based on the classic test of AI developed by mathematician Alan Turing. In this version, a group of human judges were asked to compare the made-up figures drawn by combining similar characters, and determine whether a human or AI had made them.

The judges were barely better than chance at this task, suggesting the AI had successfully fooled them. Of course, this was a subjective test, and most AI researchers don't consider a single Turing test to be an accurate measure of a machine's intelligence.
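A quick binomial check shows why "barely better than chance" means the judges couldn't reliably tell. The judge scores below are made-up numbers for illustration, not figures from the paper:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that a judge who
    guesses at random gets at least k of n pairings right."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With made-up counts: 26 correct out of 49 (53%) is entirely
# consistent with random guessing...
print(p_at_least(26, 49))   # ~0.39

# ...whereas 35 of 49 (71%) would almost never happen by chance.
print(p_at_least(35, 49))   # < 0.01
```

Only scores well above 50% would let the judges claim they could actually distinguish human from machine.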

Can you tell which of the grids of figures below were drawn by a human and which by the AI?

ai drawn characters

(The grids drawn by a machine, from left to right, are: B, A; A, B; A, B.)

This work has a number of interesting applications. For example, it could be used to analyze national security imagery, and in fact, several defense agencies helped fund the research. But it's a pretty big leap to go from interpreting alphabet characters to human behavior.

AI is still a long way from matching human abilities. Humans can not only build up a rich understanding of concepts from just a few examples, but we can also use these concepts to plan, explain, and communicate with one another.

But our ability to quickly extrapolate from a small set of data comes with some notable drawbacks: we make snap judgments and form stereotypes that can do more harm than good.

"We're well aware that humans are remarkable at getting the world right as well as getting the world wrong," Tenenbaum said. "It's the inevitable flip side of learning so quickly."

NEXT UP: Google just released powerful new artificial intelligence software — and it's open source

SEE ALSO: Here's how well an AI scored on the math section of the SAT



Elon Musk just founded a new company to make sure artificial intelligence doesn't destroy the world


Tesla Motors CEO Elon Musk and Y Combinator President Sam Altman will be cochairs of OpenAI, "a non-profit artificial intelligence research company," which was announced on Friday.

"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," says the OpenAI blog entry.

The company has already secured commitments for $1 billion in funding, according to that blog post, from a who's who in tech, including Altman and Musk as well as Silicon Valley luminaries like Jessica Livingston and PayPal cofounder Peter Thiel.

The funding also comes from companies like Amazon Web Services and Infosys.

The company's research will be helmed by machine-learning expert Ilya Sutskever, with former Stripe CTO Greg Brockman joining OpenAI as its CTO. Seven top research scientists have also come aboard.

The idea is to build a body of research that's not kept locked away by one company. OpenAI researchers will be encouraged to share their work, the company promises, up to and including any patents the company generates.

"We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely," OpenAI writes.

Oddly enough, Musk and Altman have in the past expressed a deep mistrust of artificial intelligence, even while acknowledging its potential.

Musk, in particular, thinks that sci-fi visions of a world overrun by robots are actually within reason, while serial investor Altman once said that "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."

That's a view reflected in the announcement:

It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.

SEE ALSO: Elon Musk: Artificial intelligence is 'potentially more dangerous than nukes'


NOW WATCH: Meet 'Iceman' and 'Wolverine' — the 2 coolest robots in Tesla's factory

Elon Musk just announced a new artificial intelligence research company


Tesla CEO Elon Musk announced the formation of a new non-profit artificial intelligence research company via Twitter Friday.

Called OpenAI, the research group aims to "advance digital intelligence in the way that is most likely to benefit humanity as a whole," OpenAI wrote in a post introducing the company. A lack of financial obligation will allow the company to focus more on this mission, the post adds.

The company is co-chaired by Musk and Y Combinator's Sam Altman. Ilya Sutskever, a research scientist at Google who specializes in machine learning, will serve as research director.

Researchers at OpenAI will be encouraged to publish their work, and any patents the company receives will be shared with the world.

"We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies," the post adds.

A group of backers have committed $1 billion to the project, though OpenAI only plans to spend a "tiny fraction" in the next couple of years.

The backers include Reid Hoffman, co-founder and executive chairman of LinkedIn; Jessica Livingston, a founding partner of Y Combinator during its seed stage; Peter Thiel, co-founder of PayPal; Amazon Web Services; Infosys; and YC Research.

Greg Brockman, CTO of Stripe, along with Musk and Altman, also contributed funding.

"Because of AI's surprising history, it's hard to predict when human-level AI might come within reach," the OpenAI post reads. "When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest."

Musk has funded a number of research projects to ensure artificial intelligence does not turn evil.

 


