Channel: Artificial Intelligence

18 artificial intelligence researchers reveal the profound changes coming to our lives


Artificial intelligence (AI) has been changing our lives for decades, but never has AI felt more ubiquitous than now.

It seems as though not a week passes without yet another AI system overcoming an unprecedented hurdle or outperforming humans.

But how the future of AI will pan out for humans remains to be seen. AI could either make all our dreams come true or destroy society and the world as we know it.

To get a realistic handle on what that future might look like, Tech Insider spoke to 18 artificial intelligence researchers, roboticists, and computer scientists about the single most profound change artificial intelligence could bring.

Scroll down to see their lightly edited responses.

Pieter Abbeel says robots will keep us safer, especially from disasters.

AI for robotics will allow us to address the challenges in taking care of an aging population and allow much longer independence.

It'll enable us to drastically reduce, maybe even bring to zero, traffic accidents and deaths. And it'll enable disaster response in dangerous situations, such as the nuclear meltdown at the Fukushima power plant.

Commentary from Pieter Abbeel, a computer scientist at the University of California, Berkeley.



Shimon Whiteson says we will all become cyborgs.

I really think in the future we are all going to be cyborgs. I think this is something that people really underestimate about AI. They have a tendency to think, there's us and then there's computers. Maybe the computers will be our friends and maybe they'll be our enemies, but we'll be separate from them.

I think that's not true at all. I think the human and the computer are really, really quickly becoming one tightly coupled cognitive unit.

Imagine how much more productive we would be if we could augment our brains with infallible memories and infallible calculators.

Society is already wrestling with difficult questions about privacy and security that have been raised by the internet. Imagine when the internet is in your brain, if the NSA can see into your brain, if hackers can hack into your brain.

Imagine if skills could just be downloaded — what's going to happen when we have this kind of AI but only the rich can afford to become cyborgs, what's that going to do to society?

Commentary from Shimon Whiteson, an associate professor at the Informatics Institute at the University of Amsterdam.



Yoky Matsuoka says these implants will make humans better at everything.

I think the way I have been promoting AI, as well as the next big space for AI, is to become really an assistant for humans. So making humans better, making what humans want to do and what humans want to be easier to achieve, with help from AI.

What if I lost a limb and I can't swim as fast, what if an AI can actually know how to control this robotic limb that's now attached to me to quickly and efficiently let me swim?

Those are the ways: my brain is doing the control, but for the things I can't do anymore or the things I want to be, if that part can be intelligently handled, that's really great. It's almost like a partnership.

Commentary from Yoky Matsuoka, former Vice President of Technology at Nest.




Google’s head of artificial intelligence says ‘computers are remarkably dumb’


Google’s head of machine learning doesn’t seem too worried about computers becoming smarter than humans anytime soon. 

John Giannandrea, who leads the company’s machine learning efforts, said that he is not very impressed with the current intelligence of computer systems, according to a Fortune report.

“I think computers are remarkably dumb,” Giannandrea told Fortune. “A computer is like a 4-year-old child.”

Giannandrea, who has been working on making computers more human-like since the early '90s, currently works on Google's self-driving car project, using machine learning to help the vehicles detect pedestrians.

While AI is much broader than machine learning, machine learning plays a critical role in most AI systems today. Machine learning refers to algorithms that learn from data and are capable of improving over time.

For example, Google and Apple use machine learning in their smart assistants, Google Now and Siri. The virtual assistants learn your behavior to provide better results when you ask a question or make a request. 
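That "learns from data and improves over time" loop can be made concrete with a toy example. The sketch below is illustrative only, a one-parameter model fit by gradient descent; it is not how production systems like Google Now or Siri actually work:

```python
# Toy "machine learning": fit y = w*x to example data by gradient descent,
# watching the model improve (error shrink) as it repeatedly sees the data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of y = 2x

def loss(w):
    """Mean squared error of the model y = w*x on the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0                       # initial guess: the model knows nothing
history = [loss(w)]
for _ in range(100):          # "learning": nudge w to reduce the error
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad
    history.append(loss(w))

print(round(w, 3))            # converges toward the true slope, 2.0
```

After training, the error is far lower than at the start, which is all "improving over time" means here.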

Artificial intelligence and machine learning have always played a key role in Google’s products and services, but the company is looking to ramp up its use of the technology even more. 

Earlier this month, during Google's third-quarter earnings call, CEO Sundar Pichai said that the company was "re-thinking" all of its products to include more artificial intelligence and machine learning.

For example, the company has begun to lean more on artificial intelligence to help enhance its search results. On Monday, Google revealed that it created an artificial intelligence system called RankBrain that is used when someone searches for something that its search system has never seen before.

However, while AI and machine intelligence have come a long way, Giannandrea told Fortune that researchers are still searching for the "holy grail" of AI: a computer system with human-level intelligence, capable of understanding language and context.

Read more on what Google researchers consider to be the biggest obstacles to artificial intelligence on Fortune >>


Nissan built a self-driving car with a steering wheel that transforms into a tablet


Nissan unveiled its self-driving IDS concept car at the Tokyo Motor Show today, and it comes with an animated steering wheel that transforms into a touch screen.

While there are no current plans to bring the IDS into production, the car showcases some of the technology that may end up in future Nissan vehicles. 

For example, Nissan's autonomous driving technology is on track for a 2020 release, and the electric car shows how the interior can transform as a perk of the piloted experience.

With a click of a button, the steering wheel flips into the dashboard and is replaced by a smiling touch screen. The seats move back to provide more leg room and the driver can control aspects of the car by voice command or gesture.

Drivers can also load their schedule into the car so it will automatically start when they're ready to go. The car can display the schedule and traffic information on its dashboard, and even offer restaurant recommendations.

The car also communicates with the outside world when in autonomous mode, from welcoming the driver to letting pedestrians know it's OK to cross.

Nissan's concept car also has a different mode for traffic jams where it'll stay in the right lane and keep pace with the vehicle ahead of it.

The IDS Concept comes with a 60 kWh battery, double the capacity of the Nissan Leaf's, which has a range of 84 miles per charge. (The press release does not mention the IDS Concept's range, noting only that it "will be able to go great distances on a single charge.") It can also charge wirelessly.


Apple's culture of secrecy is reportedly hurting its artificial intelligence efforts (AAPL)


Apple's culture of secrecy is hurting its efforts to hire artificial intelligence experts, according to a report by Bloomberg. While other companies, such as Google, Microsoft, and IBM, allow researchers to publish research papers, Apple requires that its employees do not.

Artificial intelligence (AI) is arguably the next frontier of computing, and losing out on key talent could hurt Apple's chances of success. "Apple is off the scale in terms of secrecy," Richard Zemel, a computer science professor at the University of Toronto, told Bloomberg. "They're completely out of the loop."

All of the big technology companies, from Google to IBM, are working on AI software. For example, Facebook can tell blind users who is in certain photos, Google can create GIFs and collages out of photos, and Microsoft is working to bolster Cortana, its virtual assistant.

By preventing its researchers from publicly talking about their work, Apple is potentially damaging its chances of competing. According to Bloomberg, new hires are prevented from even announcing the job on LinkedIn or Twitter. "The really strong people don’t want to go into a closed environment where it’s all secret," Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg.

Apple recently acquired two AI startups, Perceptio and VocalIQ, to help its efforts. The company also hired away influential researchers from Microsoft and is advertising hundreds of AI-related jobs, focusing on all aspects of the company, from Siri to machine learning.


Americans' 10 biggest fears are mostly technology related


What are you most afraid of? If you’re like many Americans, you’re more scared of technology issues than dying, mass shootings, or ghosts. 

The Chapman University Survey on American Fears shows that three of the top five American fears are in the technology domain. We’re going to dive into the top eight technology fears. The overall top 10 American fears are shown in a chart at the end of the post. 


Cyberterrorism:

Cyberterrorism certainly sounds scary, combining the buzz words “cyber” and “terrorism,” but what is it exactly? There isn’t a single, agreed-upon definition because the boundaries between cyberterrorism, terrorism, cybercrime, and cyberwar are seemingly arbitrary.

The Oxford English Dictionary defines cyberterrorism as “the unlawful (and often politically motivated) use of computers or information technology to cause disruption, fear, or financial loss.” This could mean terrorists launching a cyberattack against power grids, manufacturing facilities, or Wall Street.

Fear of cyberterrorism likely ranks number two overall not because the risk of any single individual becoming a victim is great, but because of the uncertainty regarding the term and the wide range of potential ramifications.



Tracking of Personal Information:

Just as they dominate news headlines, privacy concerns dominate Americans’ fears. As of October 27, 2015, there have been 629 data breaches in 2015, exposing over 175,000,000 records.

On top of the threat of data theft, there are fears about what both corporations and the government will do with the data they collect. A 2014 Pew Research Center study found that “91% of adults in the survey ‘agree’ or ‘strongly agree’ that consumers have lost control over how personal information is collected and used by companies.”

In 2013, Edward Snowden’s release of top-secret government documents discussing surveillance programs brought the debate about privacy versus security to the minds of Americans. Though it’s arguably not a top issue anymore, it is still greatly affecting the global surveillance and cybersecurity environments.

 



Robots and Artificial Intelligence:

Long-time elements of science fiction, robots are often portrayed as villains. But aside from perhaps being a bit demeaning at first, letting one do your job might not be all that bad. Darrell West recently released a paper and hosted a panel discussion outlining how the workforce might change due to increased automation.

He gave several creative solutions to the ways society might adapt to a world with less work for humans, such as offering benefits outside of employment and having more time for creative and leisurely pursuits. There is still debate, however, about whether the increased presence of automation will ultimately create, maintain, or destroy jobs.

As far as artificial intelligence goes, those who fear it include Elon Musk, Steve Wozniak, and Stephen Hawking. This field is rapidly evolving, and has the potential to bring “unprecedented benefits to humanity.”

It will be increasingly important for policymakers to take an active role in ensuring that robots and artificial intelligence evolve in ways that benefit humanity.




Apple's artificial intelligence efforts are shrouded in secrecy (AAPL)


Don't even think about updating your LinkedIn profile if you get hired to work on one of Apple's teams focused on the company's artificial intelligence efforts.

According to Bloomberg Businessweek's Jack Clark, Apple's AI teams are so secretive that people who get hired to work on them are instructed not to announce their new jobs on networks like Twitter or LinkedIn. 

Bloomberg's story posits that Apple's secrecy around AI has hampered the company's efforts in the field — Apple researchers have never published a paper on AI, for example. Also, the report says the environment is so secretive that some AI teams don't even know what other AI teams are working on.

Employees working on AI at Apple are also supposed to keep their office doors locked when they're not in them, according to Clark's report.

Be sure to read the full story at Businessweek.


Humans still have a major advantage over robots, and it's not changing any time soon


While automation is becoming more common in some industries, humans still have a big leg up on the ever-increasing robotic competition, Manuela Veloso, an international expert in AI and robotics, told Tech Insider.

One of the biggest advantages humans have over robots is their tremendous breadth of capabilities, while robots tend to excel in only one specific area, said Veloso, who is also a professor of computer science and robotics at Carnegie Mellon University.

“I can scramble eggs, I can cook squash, I can speak five languages, I can teach AI. In no way are we getting close to having a single robot, a single computer, a single mind that can do all of these things,” she said.

“I do believe that we are very far from having machines that exhibit human-level intelligence,” she added.

Veloso has spent a good chunk of her career developing autonomous robots, so she knows a great deal about their limitations. However, she's also always trying to give them more capabilities.

She recently won a grant from the Future of Life Institute to develop AI systems that can explain why they made certain decisions, a skill set robots will need if we are going to increasingly rely on them for things like driving or caretaking.

However, Veloso said that just because humans have a wider range of abilities doesn't mean that robots won't surpass humans in certain areas. For example, we are already seeing robots outperform humans at manufacturing a specific product or doing a specific task, like stocking shelves.

“If you think about single capabilities, I do believe that robots will match humans at some point. Now whether they will surpass humans, I do not think that is the case,” she said.

Veloso acknowledges that we will increasingly compete with robots, but says that it won't be much different from how we already compete with other humans. Just as some people surpass others in certain abilities, robots will surpass humans in certain things, and vice versa.

“We, as humans, surpass each other. Einstein surpassed me tremendously in terms of scientific intelligence, we humans are all about the diversity of capabilities,” she said. “We just have to acknowledge that within this realm there will also eventually be robots. So robots will proceed in parallel with humanity.”


The world's leading futurist wants to live forever — here's why


Google's resident futurist and famed inventor Ray Kurzweil says humans are just a few scientific breakthroughs away from achieving eternal life. During a visit to our office, the Google director of engineering talked to us about the future of health and medicine and his case for why we should live forever.

Kurzweil is one of the world's leading minds on artificial intelligence, technology and futurism. He is the author of five national best-selling books, including "The Singularity is Near" and "How to Create a Mind." 

Produced by Christine Nguyen and Will Wei




This robot has a skill that was once reserved only for humans


Robots are becoming more capable of performing tasks like humans (we're even sending them to assist astronauts in space), but when it comes to speaking like humans, that's a major challenge.

We don't think much about it since it's such a native skill, but learning the nuances of human speech is no easy feat. Think of Siri: you may be able to ask her to check the weather, but having a casual conversation is impossible.

So researchers at Georgia Tech are working to develop software that would give robots the ability to hold a conversation, IEEE Spectrum first reported. The researchers are developing artificial intelligence to allow a robot named Simon to converse in a more fluid manner.

That means keeping up when people abruptly change a conversation topic or interrupt each other. It also just means sounding less stiff and talking with more cadence.

The Georgia Tech researchers, Crystal Chao and Andrea Thomaz, have developed a computational model called CADENCE that allows Simon to understand the concept of taking turns when speaking.

Simon was given two speech patterns: active and passive.

In the active speech pattern, Simon exhibits an extroverted personality, talking at length and at a louder volume. Simon is also more likely to talk over others.

When set to the passive speech pattern, Simon speaks less and allows humans to interject more often.
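The active/passive distinction can be pictured as two parameter settings in a turn-taking policy. The sketch below is purely illustrative; the names and numbers are invented for this example and are not the researchers' actual CADENCE model:

```python
from dataclasses import dataclass

@dataclass
class TurnTakingProfile:
    """Hypothetical knobs for a conversational turn-taking policy."""
    utterance_length_s: float   # how long the robot holds a turn
    volume_db: float            # speaking volume
    interrupt_threshold: float  # 0..1: willingness to talk over a partner

# Two example settings mirroring the article's active vs. passive modes.
ACTIVE = TurnTakingProfile(utterance_length_s=8.0, volume_db=70.0, interrupt_threshold=0.8)
PASSIVE = TurnTakingProfile(utterance_length_s=3.0, volume_db=60.0, interrupt_threshold=0.2)

def robot_seizes_turn(profile: TurnTakingProfile,
                      partner_speaking: bool,
                      urge_to_speak: float) -> bool:
    """Decide whether the robot starts speaking now.

    If the partner is silent, the robot may take the turn; if the
    partner is speaking, only a sufficiently 'active' profile interrupts.
    """
    if not partner_speaking:
        return True
    return urge_to_speak > (1.0 - profile.interrupt_threshold)
```

With the same moderate urge to speak, the active profile talks over a speaking partner while the passive one waits, which is the behavioral difference the researchers describe.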

“We expect that when the robot is more active and takes more turns, it will be perceived as more extroverted and socially engaging,” Chao told IEEE Spectrum. “When it’s extremely active, the robot actually acts very egocentric, like it doesn’t care at all that the speaking partner is there and is less engaging."

Finding an appropriate balance between active and passive, as well as making advancements in body language to truly mimic how people converse, is necessary for Simon to talk with the same cadence as C-3PO did with Luke.



How Facebook will use artificial intelligence to organize insane amounts of data into the perfect News Feed and a personal assistant with superpowers (FB)


Using some quick and dirty math, Facebook CTO Mike Schroepfer estimates that the amount of content that Facebook considers putting on your News Feed grows 40% to 50% year-over-year.  
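Taking Schroepfer's 40% to 50% figures at face value, a couple of lines of compound-growth arithmetic show how quickly that candidate pool snowballs. The doubling-time calculation below is ours, not his:

```python
import math

# At a steady year-over-year growth rate r, content doubles when
# (1 + r) ** t == 2, i.e. after t = ln(2) / ln(1 + r) years.
doubling_time = {rate: math.log(2) / math.log(1 + rate) for rate in (0.40, 0.50)}

for rate, years in doubling_time.items():
    print(f"{rate:.0%} per year -> candidate content doubles in about {years:.1f} years")
```

At those rates the pool of candidate News Feed content doubles roughly every two years, while the hours in a user's day stay fixed, which is exactly the filtering problem described next.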

But because people aren't gaining more time in the day, the company's algorithms have to be much more selective about what they actually show you. 

"We need systems that can help us understand the world and help us filter it better," Schroepfer said at a press event prior to his appearance at the Dublin Web Summit Tuesday morning. 

That's why the company's artificial intelligence team (called FAIR) has been hard at work training Facebook's systems to make them understand the world more like humans, through language, images, planning, and prediction. 

It already has trained its computer vision system to segment out individual objects from photos and then label them. The company plans to present a paper next month that shows how it can segment images 30 percent faster, using much less training data, than it previously could. 

Ultimately, Schroepfer explains, this could have practical applications like helping you search through all your photos to surface any that contain ocean scenes or dogs. Or, you could tell your News Feed that you like seeing pictures with babies, but hate seeing photos of latte art. 

It could also come in handy for photo editing. For example, you could tell the system to turn everything in a photo black-and-white except one object.

These improving visual skills pair well with Facebook's language recognition. 

Schroepfer says that the company is in the early stages of building a product for the 285 million people around the world with low vision and the 40 million who are blind, which will let them communicate with an artificial intelligence system to find out details about what is in any photo on their feed.

"We're getting closer to that magical experience that we’re all hoping for," he says. 

The team is also tackling predictive, unsupervised learning and planning. 

Making M into a superpower

Both of these research areas will be important to powering M, the virtual personal assistant that Facebook launched earlier this summer in its chat app, Messenger. Right now it's in a limited beta in the Bay Area, but the goal, Schroepfer says, is to make it feel like M is a superpower bestowed upon every Messenger user on earth.

Right now, everything M can do is supervised by real human beings. However, those people are backed up by artificial intelligence. Facebook has hooked up its memory networks to M's console to train on the data that it's gotten from its beta testers. 

It might sound obvious, but the memory networks have helped M realize what questions to ask first if someone tells M they want to order flowers: "What's your budget?" and "Where do you want them sent?"

The AI system discovered this by watching a handful of interactions between users and the people currently powering M. 

"There is already some percentage of responses that are coming straight from the AI, and we're going to increase that percentage over time, so that it allows us to train up these systems," Schroepfer says.

"The reason this is exciting is that it's scalable. We cannot afford to hire operators for the entire world, to be their virtual assistant, but with the right AI technology, we could deploy that for the entire planet, so that everyone in the world would have an automated assistant that helps them manage their own online world. And that ends up being a kind of superpower deployed to the whole world."

Schroepfer says that the team has made a lot of progress over the last year, and plans to accelerate that progress over time. 

"The promise I made to all the AI folks that joined us, is that we're going to be the best place to get your work to a billion people as fast as possible." 




Facebook already uses AI to recognize photos — the next step is video (FB)


While uploading photos to Facebook, you may notice that the social network will try to automatically tag faces and match them with their respective profiles.

Facebook's artificial intelligence chief, Yann LeCun, thinks the company's facial recognition is the best in the world, according to an interview with Popular Science. And the next step for Facebook's AI efforts is recognizing what's in the videos you watch.

While speaking at the Dublin Web Summit on Tuesday, Facebook CTO Mike Schroepfer said that while the amount of content Facebook considers showing in your News Feed grows every year, the company's algorithms have to be more selective to surface what you actually care about.

"We need systems that can help us understand the world and help us filter it better," Schroepfer said during the press event, according to Business Insider.

To get better at filtering video, which Facebook expects to account for the majority of content shared on its network within a few years, the company plans to use AI to scan the contents of videos as it already does for photos.

Popular Science recently talked to Rob Fergus, who leads the AI research team at Facebook, about the new frontier of using AI to scan video:

"Lots of video is “lost” in the noise because of a lack of metadata, or it’s not accompanied by any descriptive text. AI would “watch” the video, and be able to classify video arbitrarily.

This has major implications for stopping content Facebook doesn’t want from getting onto their servers—like pornography, copyrighted content, or anything else that violates their terms of service. It also could identify news events, and curate different types of video category. Facebook has traditionally farmed these tasks out to contracted companies, so this could potentially play a role in mitigating costs.

In current tests, the AI shows promise. When shown a video of sports being played, like hockey, basketball or table tennis, it can correctly identify the sport. It can tell baseball from softball, rafting from kayaking, and basketball from street ball."

Make sure to read the full story at Popular Science for more details.


Facebook wants to use artificial intelligence to crack the game no computer has ever mastered


There's one surefire way researchers like to test the capabilities of their artificial intelligence systems: games.

Earlier this year, Google DeepMind created a computer that can learn how to play video games with no instructions. In 30 minutes, the computer became the best Space Invaders player in the world.

And most are familiar with the time IBM's Deep Blue computer beat world chess champion Garry Kasparov in 1996.

But artificial intelligence has yet to tackle the game of Go, an Eastern two-player game that has more than 300 times the number of possible plays as chess.

Facebook, though, is now attempting to do what other computers have thus far failed at and beat a human at Go for the first time ever, Wired first reported. 

Go is a truly difficult game to learn that requires a lot of thoughtful planning (I attempted to master it junior year of college and have yet to revisit that quest). The game is played on a 19-by-19 grid. The players are given sets of black and white stones and must attempt to cover a larger area of the board than their opponent.

Where it gets tricky is that you can play in patterns that allow you to capture your opponent's stones and claim territory. To be successful, you have to envision what moves your opponent is trying to make, just like when playing chess.

This is why Facebook researchers are teaching their artificial intelligence system to recognize visual patterns.

Facebook notes on their research page that using games to train machines is a common approach, and that the difficulty of Go will help refine Facebook's artificial intelligence so it's capable of sophisticated pattern recognition.

Think of it this way: after the first two moves of a chess game, there are 400 possible next moves. In Go, there are close to 130,000.
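Those counts can be reproduced with simple arithmetic. The sketch below is one common reading of the numbers, counting board positions after each player has moved once and ignoring symmetries and special rules:

```python
# Rough branching-factor arithmetic behind the 400 vs. ~130,000 comparison.

# Chess: White has 20 legal first moves (16 pawn moves + 4 knight moves),
# and Black has 20 replies to each, so after one move apiece:
chess_positions = 20 * 20      # 400

# Go on a 19x19 board: 361 intersections for the first stone,
# 360 remaining choices for the second:
go_positions = 361 * 360       # 129,960, i.e. close to 130,000

print(chess_positions, go_positions)
```

The gap only widens with each additional move, which is why brute-force search alone cannot crack Go the way it helped crack chess.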

"We've been working on our Go player for only a few months, but it's already on par with the other AI-powered systems that have been published, and it's already as good as a very strong human player," Mike Schroepfer, CTO of Facebook, wrote on the research page.


To improve the system, researchers modeled out each possible move as the game progresses (a traditional search-based approach) and combined that with a pattern-matching system built by their computer vision team.

We've interacted with forms of Facebook's artificial intelligence, perhaps unknowingly. When Facebook recommends you tag a specific friend in a photo, it's using artificial intelligence for facial recognition.

So why develop a bot capable of playing Go? Well, Facebook's research page notes that it stems from the development of Facebook's personal assistant, M. Strong pattern recognition will be necessary for M to complete tasks like making purchases.

Apple is also working on different artificial intelligence projects, though the company is hush-hush when it comes to them. So hush-hush that people who are hired by Apple to work on artificial intelligence are told not to announce the job on social media.

But Apple acquired two artificial intelligence companies in October, indicating that something is brewing.

Google acquired two artificial intelligence companies in 2014 and has been working on a host of projects. Recently, the company announced that it has developed artificial intelligence capable of writing your emails for you, and is beginning to roll out the feature.


Why Facebook thinks its super-smart digital assistant won't cross the ‘creepy line' (FB)


Facebook exec David Marcus promises that its new, incredibly ambitious virtual assistant, M, won't be creepy. 

Right now M has only rolled out to select people in the San Francisco Bay Area, where they've started using the human-powered service to complete tasks like ordering flowers, booking flights, or sending parrots to someone's office.

Facebook imagines a world where M would remind you of a friend's birthday, suggest a local restaurant, and then book a table, without the user having to expend any effort whatsoever. Marcus wants M to be incredibly proactive, helping the user complete tasks before they even realize they needed them done. 

But to reach that level of pro-activeness, the virtual assistant would need to know a lot of information about you. Couldn't that start feeling a little freaky?

"How do you do that without breaching the creepy line?" Murad Ahmed from the Financial Times asked Marcus on stage today at the Dublin Web Summit. 

"While this is a very popular theme for news, I think there's zero creepiness when there's a lot of utility," Marcus responded. "The minute it gets creepy is when a company gets a lot of information and doesn't give anything back."

If M continues delivering experiences that feel magical and like a superpower, he doesn't think that "creepiness" will be something that users care about.

That's an argument that Google has made as well, when questioned about its personal assistant, Google Now or Now on Tap. But Facebook's M may be a slightly tougher pill to swallow there, because questions and tasks will be supervised by real humans, to some extent. 

Right now, everything M does is supervised by real people. Those people are backed up by an artificial intelligence system, though, that is learning from interactions with beta testers. Some percentage of responses are already coming straight from the AI, but Facebook needs to drastically increase that number to make the system scalable.  

"We cannot afford to hire operators for the entire world, to be their virtual assistant, but with the right AI technology, we could deploy that for the entire planet, so that everyone in the world would have an automated assistant that helps them manage their own online world," Facebook CTO Mike Schroepfer recently explained at a press briefing. "And that ends up being a kind of superpower deployed to the whole world."

Marcus says he's "cautiously optimistic" about M's progress so far in weaving together human and artificial intelligence.

 


Join the conversation about this story »

NOW WATCH: Mark Zuckerberg just got on stage and raved about the future of virtual reality

A new study reveals that humans may exhibit the same level of empathy towards robots as they do humans



Human empathy may extend further than we thought.

In fact, humans may even have the ability to empathize with manufactured machines.

Researchers at Toyohashi University of Technology, in collaboration with Kyoto University, released a study Tuesday revealing that when shown images of human and humanoid robotic hands in painful situations, humans responded with similar immediate levels of empathy, as evidenced by recorded electrical activity in the brain.

“I think a future society including humans and robots should be good if humans and robots are prosocial,” study co-author Michiteru Kitazaki told Inverse.

“Empathy with robots as well as other humans may facilitate prosocial behaviors. Robots that help us or interact with us should be empathized by humans.”

However, humans did still respond with stronger levels of extended empathy towards humans than robots.

This could be “caused by humans' inability in taking a robot's perspective,” the researchers say. “It is reasonable that we cannot take the perspective of robots because their body and mind (if it exists) are very different from ours.”

While these results represent the first neurophysiological evidence that humans are able to identify with the perceived pain of robots, studies of human-robot empathy are not entirely new.

In 2013, two studies were conducted by German researchers from the University of Duisburg-Essen measuring human empathy levels for robots by varying methods.

The first study measured skin conductance levels of volunteers while watching videos of a dinosaur robot being treated affectionately or abusively.

“When a person is experiencing strong emotions, they sweat more, increasing skin conductance,” explains Live Science. “The volunteers reported feeling more negative emotions while watching the robot being abused. Meanwhile, the volunteers' skin conductance levels increased, showing they were more distressed.”

In the second study, researchers visualized volunteers’ brain activity while volunteers watched subsequent videos of a human and then a robot being strangled by a plastic bag. Lead study author Astrid Rosenthal-von der Pütten concluded, "in general, the robot stimuli elicit the same emotional processing as the human stimuli."

As robots are further introduced into human life, it becomes increasingly important to understand human-robot interaction, a phenomenon that may be related to the humanoid nature of robots.


Over 40 years ago, Masahiro Mori developed the "uncanny valley" theory, which suggested a "person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance." This "descent into eeriness" is what he called the uncanny valley.

Mori's theory may be disproved in the near future as researchers continue to work towards the development of human-friendly robots that humans can identify with. But while this advance in artificial intelligence suggests humans have the potential to empathize with robots, it's unfair to assume that robots can empathize with humans just yet.

“True empathy assumes a significant overlap in experience between the subject of the empathy and the empathizer,” says Skye McDonald, writer for Phys.org. “We are still a long way from fully understanding the complexities of how human empathy operates, so are still far from being able to simulate it in the machines we live with.”



The robot apocalypse is going to look a lot different than what's in the movies



In the next two decades, somewhere in the United States, a legal secretary is going to be handed a pink slip when she gets to work. Her worst fears have come true — a computer program has made her work obsolete. It's faster, cheaper, more accurate, and doesn't take up any space.

Most science fiction movies portray apocalyptic scenarios of malevolent robots taking over, but the scenario described above is the more realistic one. It may be much more mundane, but it is no less terrifying.

Artificial intelligence (AI) and robots are more likely to take our livelihoods than our lives. And it may leave a big gap between the haves and have-nots in its wake.

A 2013 Oxford study estimates that up to 47% of US jobs are likely to be automated within the next two decades. The jobs being lost are unspecialized professions or jobs traditionally held by the middle class, driving a wedge into an already widening inequality gap.

AI has been around for a long time, but we're only now entering its boom era. And that boom is coming at a time when income and wealth inequality is at its widest since the Federal Reserve began collecting data, according to a 2014 Pew Research Center report.

The report found that the median wealth of the country's richest families was "nearly 70 times that of the country's lower-income families," and seven times that of middle-income families.

And it doesn't seem like those lower down will get the chance to catch up. Wages have stagnated, making it difficult to rise up the ranks. According to a 2015 Economic Policy Institute report, wages for 90% of the workforce grew just 15% between 1979 and 2013. Contrast that with the annual pay of the top 1%, which grew 138%.

Now combine that trend with the kind of jobs being lost to automation and it's clear why AI researchers, economists, and even physicists think AI and robots could hollow out jobs originally held by the middle classes in the coming years.

While it's no surprise that AI is taking jobs in sectors like manufacturing, white-collar jobs that pay middle-class incomes are also being automated. Credit analysts and legal secretaries both earn middle-class incomes and face a 98% risk of automation, according to the Oxford study. Meanwhile, jobs at the very top of the ladder are nearly robot-proof. CEOs and managers, for example, have just a 1% chance of automation.

In fact, Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," told Tech Insider that anyone who toils through many "repetitive and structured" tasks for a living is at risk of having their job automated.

Take the legal secretary above — her work required researching relevant materials before trials, filling out forms and contracts, and scheduling appointments with clients. But there are plenty of programs that can do those tasks on their own, like Fair Document and Judicata.

"Even for what you think of as highly-trained, highly-skilled, intuitive personable professions, it is still true that the vast majority of the work is routine," Kaplan told Tech Insider.

In other words, it may be harder for people with fewer means to catch up and retrain for career changes. Imagine the legal secretary's scenario again. She goes to a packed job fair, but discovers that much of the work experience on her resume can be automated.

Between her part-time job as a barista, caring for her family, applying and interviewing for other work, and managing her household, she can barely find the time and resources to retrain for the kind of skills that she noticed most of the job postings are asking for.

While new technologies will make humanity smarter, AI will likely do little to fix the middle-class gap. New technology tends to be expensive, and only those who can afford it can participate in its advantages. The secretary likely wouldn't be able to afford these advantages, giving those who have access to the technology an even larger head start on her.

"Imagine if skills could just be downloaded," AI researcher Shimon Whiteson said. "What's going to happen when we have this kind of AI but only the rich can afford to become cyborgs? What's that going to do to society? What's that going to do to exacerbate problems like income equality?"

Erik Brynjolfsson, co-author of "The Second Machine Age," told MIT Technology Review that AI and other digital technologies do benefit some people — the creative minds responsible for building the AI that could be putting others out of work. One example would be a person who writes an AI "to automate tax preparation that might earn millions or billions of dollars while eliminating the need for countless accountants," according to MIT.

But, at a certain point, even these software engineer jobs could be automated.

"Even there I can envision systems that become better and better at writing software," AI researcher Bart Selman told Tech Insider. "A person complemented with an intelligent system can write maybe ten times as much code, maybe a hundred times as much code. The problem then becomes you need a hundred times fewer human programmers."

The rapid pace at which technology can be adopted is also playing a part in broadening the inequality gap. Toby Walsh, a professor at National ICT Australia (NICTA), told Tech Insider that pace could make it difficult for people to retrain for jobs that are in demand.

"The great thing about computers is that you can reproduce the software almost at no cost," Walsh told Tech Insider. "The changes that we see precipitated by changes in computing are ones that tend to happen very very quickly. The challenge there is that society tends to change rather slowly."

According to a new Brookings study, public policy will have to change to take care of people who lose their jobs, especially as they retrain for new jobs. Social benefits like health care and retirement pensions are largely accessible through jobs, but as fewer people remain employable, it will fall on the government to figure out how to provide for them.

The study offers a few ideas on how the government could do this, including offering a basic income, expanding earned income tax credit, providing more opportunities for lifelong education, expanding companies' profit sharing, subsidizing volunteerism, and revamping education to cater to the job market. In other words, we need to completely change how people work and earn money.

Michael Littman, an AI researcher at Brown University, told Tech Insider doing that means fundamentally changing how we think of people and their worth.

"My biggest concern at the moment is that we as a society find a way of valuing people not just for the work they do," Littman said. "Otherwise these intelligent systems will get out there and people will become valueless, and society falls apart. We need to value each other first and foremost."



The CEO of Google's £400 million AI startup is going to meet with tech leaders to discuss ethics (GOOG)



Demis Hassabis, the CEO of artificial intelligence (AI) startup DeepMind, which Google acquired last year for £400 million, has revealed that some of the most prominent minds in AI are gathering in New York early next year to discuss the ethical implications of the field they work in.

The AI ethics meeting will be held at New York University in January and attended by the heads of big tech firms, according to Hassabis.

It's not clear at this stage exactly which individuals or companies will attend, but global technology giants such as Facebook and Apple are likely candidates given their well-documented interest in AI — think of Apple's virtual assistant Siri, for example.

Google is using DeepMind’s technology across its organisation to make many of its best-known products and services smarter than they were previously. For example, Google is starting to use DeepMind's algorithms to power its recommendation engines and improve image recognition on platforms like Google+. 

DeepMind's self-learning algorithms, or "agents," can already outperform humans on computer games like "Space Invaders" and "Breakout" but the company has no plans to stop there. It's now teaching its algorithms to play 3D racing games and understand other complex puzzles. Ultimately, DeepMind wants to "solve intelligence" and then use that to "solve everything else". No mean feat.

Last night, Hassabis, a Cambridge graduate with a double first in computer science and a chess master at the age of 13, acknowledged the impact AI systems like DeepMind's could have on the world.

"If we have something this powerful we need to think about the ethics,” he said during a public talk at the British Museum in London, before reassuring the audience that machines won't possess human-level intelligence for at least a few more decades.

Renowned scientists such as Stephen Hawking and Oxford University's Nick Bostrom have warned that machines could outsmart humans within the next hundred years. Hawking told the BBC earlier in the year that artificial intelligence could spell the end for humanity, while Bostrom agrees that the future of the human race is likely to be shaped by machines. Billionaires like PayPal founder Elon Musk and Microsoft cofounder Bill Gates have also expressed their concerns over the uncontrolled development of AI. However, many other scientists in the field, such as Microsoft Research chief Eric Horvitz, say AI fears have been greatly overblown.


During the Q&A with Hassabis, chaired by BBC Worldwide CEO Tim Davie, a concerned audience member addressed Hassabis, saying: "You've taken the Yankee dollar. I hope everything you do improves society rather than kills us off." He added: "Once you let the genie out of the bottle we’re all f-----."

Hassabis responded by saying that DeepMind spent a lot of time doing the due diligence on Google before agreeing to the deal. One of the conditions of the deal was that Google must create an internal ethics committee, which it has done. However, Google is yet to publicly state who sits on the committee and what they’re doing.

Google's AI ethics committee will be publicised

Defending why Google hasn’t revealed the members of its AI ethics committee yet, Hassabis said "it’s very early days" and "there’s lots of scrutiny on this". He said he’d like to get everyone "up to speed" on artificial intelligence first. "We wanted to have a calm, collected debate first,” he said. "At some point we will reveal who these people [on the ethics committee] are and what issues are being discussed."

Hassabis also assured the audience that he will not allow DeepMind technology to be used in military applications.

Hassabis also revealed that he spoke with Hawking on the topic of AI a few months ago. "I think [Hawking] was quite reassured about how we specifically were approaching AI," said Hassabis. "Most of the people worrying about this are not in the AI field," he continued, adding "it’s easy to get carried away with science fiction scenarios."

DeepMind now has over 150 scientists working in an office in London's King's Cross, making it the largest collection of machine learning experts anywhere in the world, according to Hassabis.

The company is due to release more research "in the next few months" outlining how its algorithms are advancing. 


An artificial intelligence researcher created a computer program to judge your selfies



Selfies are an amazing way to tell the world "here I am, rock you like a hurricane!" But according to a new artificial intelligence (AI) system built by a Stanford University researcher, not all selfies are considered equal.

Stanford PhD student Andrej Karpathy built a deep learning system that analyzed 2 million selfies and could tell which selfies would attract the most likes.

It turns out, if you want your selfie to take over Instagram, you might want to follow a few rules: be a woman, have long hair, take a close-up, and crop it so the forehead is cut off, among other things.

But before the AI got to these conclusions, it had to be trained.

Here's how it works

The AI system is based on a technology called convolutional nets, pioneered in the 1980s by Yann LeCun, now Facebook's head of AI research. If you've ever used image recognition or deposited a paycheck at an ATM, you've used a convolutional net.

The selfie-judging AI system works like an assembly line — the image goes in on the left, goes through levels of analysis, and comes out on the right. Each level breaks the image down pixel by pixel. The first few layers look at simple facets, like shapes and colors, while the layers toward the end look at "more complex visual patterns," Karpathy writes.
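As a loose illustration (not Karpathy's actual network), the core operation in each of those layers is a convolution: slide a small filter over the image and record how strongly each patch matches it. A minimal pure-Python sketch of one such layer, with a hand-picked edge-detecting kernel:

```python
def conv2d(image, kernel):
    """Slide `kernel` over a 2D grayscale image (valid padding) and
    return a feature map of ReLU-ed patch responses."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            # Dot product of the kernel with the image patch at (y, x)
            s = sum(image[y + dy][x + dx] * kernel[dy][dx]
                    for dy in range(kh) for dx in range(kw))
            row.append(max(0, s))  # ReLU: keep only positive responses
        out.append(row)
    return out

# A 3x3 vertical-edge filter responds strongly wherever a dark-to-light
# boundary appears in this tiny 4x4 image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
feature_map = conv2d(image, kernel)
```

Stacking dozens of such layers, with kernels learned from data rather than hand-picked, is what lets the real system progress from edges and colors to faces and haircuts.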

Karpathy found the selfies by looking for images tagged #selfie, then divided them into good and bad according to the number of likes (taking into account the number of followers that the person had). He also filtered out images that used too many tags, and people who had too few followers or too many followers.
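Karpathy's exact filtering code isn't reproduced in the article, but the labeling scheme he describes can be sketched roughly like this (the follower and tag cutoffs below are illustrative assumptions, not his actual values):

```python
def label_selfies(selfies, min_followers=100, max_followers=1_000_000, max_tags=10):
    """Split selfies into (good, bad) training labels.

    Each selfie is a dict with 'likes', 'followers', and 'tags' keys.
    The cutoff values here are made-up placeholders.
    """
    # Drop accounts with too few or too many followers, or too many tags
    kept = [s for s in selfies
            if min_followers <= s["followers"] <= max_followers
            and len(s["tags"]) <= max_tags]
    # Normalize likes by audience size so huge accounts don't dominate,
    # then call the top half "good" and the bottom half "bad"
    kept.sort(key=lambda s: s["likes"] / s["followers"])
    half = len(kept) // 2
    return kept[half:], kept[:half]

# Toy data: three usable selfies plus one filtered out for its follower count
selfies = [
    {"likes": 50, "followers": 500, "tags": ["selfie"]},    # ratio 0.10
    {"likes": 5, "followers": 500, "tags": ["selfie"]},     # ratio 0.01
    {"likes": 900, "followers": 3000, "tags": ["selfie"]},  # ratio 0.30
    {"likes": 10, "followers": 50, "tags": []},             # too few followers
]
good, bad = label_selfies(selfies)
```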

Then the magic began. According to Karpathy, the system "'looked' at every one of the 2 million selfies several tens of times," and found the components that make a selfie either good or bad.

The results

After the system was trained, he fed it 50,000 selfies that the AI had never seen before, and it was able to rank them from best to worst based on the images alone.

The AI found that the selfies most likely to get hearted had a few things in common. They often contained long-haired women on their own. The selfies were also very washed out, filtered, or bordered, cut off the forehead, and featured a face in the middle third of the photo.

Below are the cream of the crop. Notice that of the best selfies, not a single man is included, and there are very few people of color.

On the other hand, the worst images, or the selfies least likely to get any love, were group shots, badly lit, and often too close up.

Get your selfie judged by a robot

Karpathy even made a Twitter bot, which looks at people's submitted selfies and judges them automatically. I tried it out myself with my latest Instagram selfie from about two weeks ago, and got a slightly better than average score.

It's also pretty fast — it replied with my results in just a few seconds. Try it out by tweeting a square image or link to an image at @deepselfie.

My selfie follows a few of the rules — it was square, pretty washed out, filtered, cuts off my forehead, showed my long hair, and I'm more or less in the middle. I don't know if it got any more love for featuring a napping kitten.

But it might be a bad idea to follow the rules to a tee just for the likes. The AI system is more like an amalgamation of the things that a lot of people like, excluding any sort of creativity like funny faces, blue wigs, or pictures of you and your friends.

After all, selfies are supposed to be an expression of self-love. So if your favorite selfie doesn't score that high, who cares — you just do you.


An AI program analyzed two million selfies and found out what makes a great one



How do you look good in a selfie? Luckily for anyone wondering, a robot has figured it out.

A new artificial intelligence (AI) system built by Stanford University researcher Andrej Karpathy looked at two million selfies and learned what makes a great one.

In other words, it discovered what elements make up a selfie that is more likely to get hearted.

Here's a few tips the program came up with for women wanting to take a better selfie, according to Karpathy's blog post about the project:

1) Be female.

2) Show your long hair.

3) Take it alone.

4) Use a light background or a filter: Selfies that were very washed out, filtered black and white, or had a border got more likes. According to Karpathy, "over-saturated lighting ... often makes the face look much more uniform and faded out."

5) Crop the image so your forehead gets cut off and your face is prominently in the middle third of the photo. Some of the "best" selfies are also slightly tilted.
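Purely as a toy, the five tips above can be restated as a checklist scorer. The feature flags and equal weighting here are my own simplification; the real system learns its criteria directly from pixels:

```python
# Flags mirroring the article's five tips (the names are invented here)
RULES = [
    "female",             # 1) be female
    "long_hair",          # 2) show your long hair
    "alone",              # 3) take it alone
    "light_or_filtered",  # 4) light background, filter, or border
    "forehead_cropped",   # 5) forehead cut off, face in the middle third
]

def rule_score(features):
    """Count how many of the five tips a selfie satisfies.
    `features` is a set of string flags describing the photo."""
    return sum(1 for rule in RULES if rule in features)

score = rule_score({"female", "alone", "light_or_filtered"})
```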

Below are the cream of the crop — the top 100 of 50,000 images that the AI analyzed after being trained on more than 2 million selfies.

Notice that of the best 100 selfies, not a single man is included, and there are very few people of color.

And here are the male images that did best. You can see similar trends cropping up, especially the number of images with white borders, though the male selfies more frequently included the whole head and shoulders, Karpathy writes:

On the other hand, the worst images, or the selfies that probably wouldn't get as many likes, were group shots, badly lit, and often too close up.

So if you want your selfie to get a lot of love, make sure you follow the rules above. Read more about the program's creation.


Toyota will invest $1 billion in artificial intelligence and robotics (TM)



Toyota says it will spend $1 billion to build a new research institute dedicated to two things that will improve its cars: artificial intelligence, and robotics.

The new establishment, called the Toyota Research Institute, will begin work in January, according to Bloomberg. It will work on safety systems and autonomous cars, while also easing the transition for older drivers who don't want to relinquish their car keys.

Much of the $1 billion investment will go towards building locations for the new Research Institute near Stanford University and the Massachusetts Institute of Technology.

Gill Pratt, the 54-year-old CEO of Toyota’s Research Institute, says working towards autonomous cars will be less of a sprint and more of a marathon.

“It’s possible at the beginning of a car race that you may not be in the best position,” he said. “It may be that other drivers are saying a whole lot about what their position is, and everyone may expect that a particular car will win. But of course, if the race is very long, who knows who will win? We’re going to work extremely hard.”

Toyota says its goal is to introduce cars with semi-autonomous features — like lane switching and steering on and off public highways — by 2020, just in time for the Summer Olympics in Tokyo.

By that time, both Google and Tesla aim to have fully autonomous vehicles on the road. But Pratt, who ran the DARPA Robotics Challenge before joining Toyota, says his company will not rush to the finish line.

"The problem of adding safety and accessibility to cars is extremely difficult," Pratt said. "And the truth is, we are only at the beginning of this race."


A computer discovered something weird about the best selfies



There are a lot of unspoken rules about what makes a good selfie.

It turns out how selfies are cropped makes a huge difference, according to a new artificial intelligence (AI) system built by Stanford University researcher Andrej Karpathy.

He built a deep learning system that analyzed 2 million selfies and then could predict which would attract the most likes.

Strangely enough, Karpathy found that the AI gave higher scores to selfies that were slightly tilted and cropped so as to cut off the person's forehead, at least if it featured a woman.

After training the program, he used it to study what crops work best by having the program analyze and rate multiple different crops of the same image to see which came out the winner.
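That crop search can be sketched as a simple loop: enumerate candidate windows over the image, score each with the trained model, and keep the winner. Below, `score` is a stand-in for the real ConvNet, and the window size and grid density are invented parameters:

```python
def best_crop(height, width, score, window=0.7, steps=4):
    """Return the (top, left, crop_h, crop_w) rectangle, among a grid
    of candidate crops, that the scoring function likes best.

    `score` stands in for the trained ConvNet; `window` is the crop
    size as a fraction of the image, `steps` the grid density (>= 2).
    """
    ch, cw = int(height * window), int(width * window)
    candidates = []
    for i in range(steps):
        for j in range(steps):
            # Evenly space crop origins across the available slack
            top = (height - ch) * i // (steps - 1)
            left = (width - cw) * j // (steps - 1)
            candidates.append((top, left, ch, cw))
    return max(candidates, key=score)

# Toy scorer that prefers crops starting lower in the frame, mimicking
# the "chop off the forehead" preference the article describes.
pick = best_crop(100, 100, score=lambda c: c[0])
```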

The more prominently the person's face was featured, the higher the score (shown above each image). In each instance, the AI system also preferred crops where the person's forehead was completely chopped off.

"Amusingly, in the image on the bottom right the ConvNet decided to get rid of the 'self' part of selfie, entirely missing the point," Karpathy writes. Here are a few more examples he included in the post:

Interestingly, these same cropping rules don't necessarily apply to men. The best men's selfies were more widely cropped, so the whole head and shoulders are in frame, like the images below. So, men, take a step back.

Read more about how the AI works.

