Channel: Artificial Intelligence

This artificially intelligent Twitter bot will tell you how good your selfie is


A new artificial intelligence (AI) can predict which selfies are likely to get the most love, and now you can test it yourself.

Stanford PhD student Andrej Karpathy built a deep learning system that analyzed 2 million selfies and figured out the traits that make up good selfies.

Karpathy also made @deepselfie, a Twitter bot that looks at people's submitted selfies and judges them automatically. Give it a try by tweeting a square image or a link to an image.

I tried it myself with my last Instagram selfie.

It replied with my results in just a few seconds — 52.1%, just slightly better than average.

The AI behind @deepselfie established a few arbitrary rules about what makes a good selfie, after examining two million images. My selfie inadvertently follows a few of those rules — it's pretty washed out, filtered, cuts off my forehead, shows long hair, and I'm more or less in the middle. It also helps that I'm a woman, though I'm not sure if the napping kitten made much of a difference.

Here you can see which 100 selfies the AI determined were the best out of 50,000 selfies. What they have in common is pretty obvious — almost all of them include long-haired women on their own. They're also filtered, washed out, have borders, cut off foreheads, and feature faces in the middle third of the frame. There are no men, and very few people of color.

On the other hand, the worst images, or the selfies least likely to get any love, were group shots, badly lit, and often too close up.

Read more about how @deepselfie works.


NOW WATCH: Samsung has re-envisioned the most mundane piece of technology


This robot has a skill that was once reserved only for humans


Simon Georgia Tech talking robot

Robots are becoming more capable of performing tasks like humans — we're even sending them to assist astronauts in space — but when it comes to speaking like humans, that's a major challenge.

We don't think much about it since it's such a native skill, but learning the nuances of human speech is no easy feat. Think of Siri: you may be able to ask her to check the weather, but having a casual conversation is impossible.

So researchers at Georgia Tech are working to develop software that would give robots the ability to hold a conversation, IEEE Spectrum first reported. The researchers are developing artificial intelligence to allow a robot named Simon to converse in a more fluid manner.

That means keeping up when people abruptly change a conversation topic or interrupt each other. It also just means sounding less stiff and talking with more cadence.

The Georgia Tech researchers, Chrystal Chao and Andrea Thomaz, have developed a model using engineering software called CADENCE that allows Simon to understand the concept of taking turns when speaking.

Simon was given two speech patterns: active and passive.

For the active speech pattern, Simon exhibits an extroverted personality, talking at length and at a louder volume. Simon is also more likely to talk over others.

When set to the passive speech pattern, Simon speaks less and allows humans to interject more often.

“We expect that when the robot is more active and takes more turns, it will be perceived as more extroverted and socially engaging,” Chao told IEEE Spectrum. “When it’s extremely active, the robot actually acts very egocentric, like it doesn’t care at all that the speaking partner is there and is less engaging."

Finding an appropriate balance between active and passive, as well as making advancements in body language to truly mimic how people converse, is necessary for Simon to talk with the same cadence as C-3PO did with Luke.

Watch Simon talk in active and passive mode:

SEE ALSO: 24 life skills every functioning adult should master


A computer made up stories about these 13 photos and the results are hilarious


titanic movie 1997 ship bow rose wind arms outstretched

Storytelling may be a quintessentially human act, but computers are quickly catching up.

Developer Samim Winiger has merged image-recognition and language-processing software to birth an open-source program that can take an image and make up a story about it on the spot.

To give it human-like language, Winiger leaned on two AI models built by University of Toronto PhD student Ryan Kiros — one armed with 14 million passages from romance novels and the other with Taylor Swift lyrics.

Winiger's AI looks for recognizable objects in an image, finds passages in its lusty and lyrical repertoire that contain those objects, and then organizes them into a mini-story.
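As a rough sketch of that pipeline (not Winiger's actual code; the object detector, passages, and file name below are all made up for illustration), the flow looks something like this:

```python
import random

def detect_objects(image_path):
    # Hypothetical stand-in: a real system would run an image-recognition model here.
    return ["man", "snow", "boat"]

# A tiny stand-in for the millions of romance-novel passages.
PASSAGES = [
    "He was a shirtless man in the back of his mind.",
    "They were all bundled up in the snow.",
    "The boat drifted while she waited for an answer.",
]

def make_story(image_path, passages=PASSAGES, length=2):
    objects = detect_objects(image_path)
    # Keep passages that mention any detected object, then stitch a few together.
    matches = [p for p in passages if any(obj in p.lower() for obj in objects)]
    random.shuffle(matches)
    return " ".join(matches[:length])

print(make_story("titanic_still.jpg"))
```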

To test the software, he summoned 5,000 random images from the web. Scroll down to see 13 of Tech Insider's favorite hilariously nonsensical (and unedited) stories.

 

'He leaned over to kiss me on the shoulder.'

"He was a shirtless man in the back of his mind, and I let out a curse as he leaned over to kiss me on the shoulder.

He wanted to strangle me, considering the beautiful boy I'd become wearing his boxers."

Model: Romantic Novels



'The man released a woman 's blood , and that was the reason for all of them.'

"My man was within a woman, and she gave him a questioning look. I don't know what else to say to her . For the first time in years , I had never seen as beautiful as the rest of my man . 

In fact, he had his hands on her hips , holding her close to his chest and looking down at her as if she were the most stunning woman I d ever seen . The man released a woman 's blood , and that was the reason for all of them.

She hoped to make an appearance, convincing him and the woman who was still her first ."

Model: Romantic Novels



'There was only a few people in New York City , and New York City.'

"We had to act much like the leader of the group, and for the first time in months ,  I had no idea what she was going to do . In fact , I was pretty sure they were all bundled up in the snow . I couldn't help but close my eyes . 

As soon as he let go of my leg , he slipped his arm around my waist and pulled me toward him . There was only a few people in New York City , and New York City . She seemed to have no sense of conviction , that the fewer people I trusted and weapons started ."

Model: Romantic Novels



See the rest of the story at Business Insider

This is what 100 perfect selfies look like


Ever wonder how to snap a selfie that will break the internet? Luckily for you, a computer figured out how to do it.

A new artificial intelligence (AI) system built by Stanford University researcher Andrej Karpathy looked at two million selfies and figured out which ones are most likely to get a lot of love.

Below are the cream of the crop — the top 100 of 50,000 images. Notice that of the best 100 selfies, not a single man is included, and there are very few people of color.

Best 100 selfies

Here are a few tips the program came up with for women wanting to take selfies just like these, according to Karpathy's blog post about the project:

1) Be female.

2) Show your long hair.

3) Take it alone.

4) Use a light background or a filter: Selfies that were very washed out, filtered black and white, or had a border got more likes. According to Karpathy, "over-saturated lighting ... often makes the face look much more uniform and faded out."

5) Crop the image so your forehead gets cut off and your face is prominently in the middle third of the photo. Some of the "best" selfies are also slightly tilted.

And here are the male images that did the best. You can see similar trends cropping up, especially the number of images with white borders. But the rules do change slightly — the male images more frequently included the whole head and shoulders, Karpathy writes.

On the other hand, the worst images, or the selfies that probably wouldn't get as many likes, were group shots, badly lit, and often too close up.

So if you want your selfie to get a lot of love, make sure you follow the rules above. Read more about the program's creation.


NOW WATCH: Scientists just discovered what destroyed Mars’ atmosphere

Science reveals how not to take a selfie


A new artificial intelligence system has figured out how not to take a selfie.

Stanford University PhD student Andrej Karpathy built a deep learning system called a ConvNet that studied 2 million selfies and found that the worst selfies share a few things in common with bad photos in general — they're usually badly lit or too close up.

However, the AI also found that selfies containing more than one person tended to not get very many likes. It's easy to see why the photos below may not attract a lot of likes — it's difficult to even make out any faces in many of them. Here's what the worst of the worst look like:

Karpathy describes what not to do in his blog post about the project. He advises against the following:

  • Take selfies in low lighting. Very consistently, darker photos (which usually include much more noise as well) are ranked very low by the ConvNet.
  • Frame your head too large. Presumably no one wants to see such an up-close view.
  • Take group shots. It's fun to take selfies with your friends but this seems to not work very well. Keep it simple and take up all the space yourself. But not too much space.

In contrast, the top 100 selfies are very washed out and feature only one person. The best selfies also tended to follow a set of specific rules — a black-and-white filter, white borders, a face in the middle third of the photo, and a crop that cuts off the person's forehead.

The best selfies are also uniform in other ways — very few women of color and virtually no men appear in the top 100.

The rules for great selfies change slightly when it comes to men, though many of them are still very washed out and feature white borders. Karpathy writes that the best selfies taken by men also more frequently included the whole head and shoulders.


So if you want your selfie to get a lot of love, make sure to include a lot of light, be on your own, and back up a little. Read more about the program's creation.


NOW WATCH: Science says that parents of successful kids have these 7 things in common

Google just released powerful new artificial intelligence software — and it's open source


google translate

Google has long been ahead of the curve on artificial intelligence (AI), and things just got a lot more interesting.

Today, the tech giant released TensorFlow, a new AI system that's used in everything from recognizing speech on a noisy sidewalk to finding photos of your pet dog Fluffy.

And in a rare move for Google, it's making its software open source, meaning anyone can access and edit the code.

The software passes complex data structures, or tensors, through a neural network, or artificial brain, hence the name TensorFlow. This process is a core part of deep learning, a powerful AI tool that is used in many of Google's products. 
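As a minimal sketch of that idea, here is what a tiny graph looks like in the graph-and-session style Python API TensorFlow shipped with at launch (on TensorFlow 2.x the same calls live under the tf.compat.v1 shim); the layer size and input values are arbitrary:

```python
# 2015-era graph-and-session style; on TensorFlow 2.x use the compat.v1 shim.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=[None, 4])  # a batch of 4-feature inputs
w = tf.Variable(tf.random_normal([4, 2]))        # weights of one small layer
b = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.matmul(x, w) + b)           # the tensor flows through the graph of ops

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))
```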

Google says the new program is five times faster than its first-generation system, and can be run on thousands of computers or a single smartphone. Google uses it in everything from Search to Photos to Inbox. For example, it's what lets Google Translate detect foreign words on a street sign and translate them in real-time, as shown above.

The reason for making TensorFlow open source is to spur innovation and make it easier for researchers to share their ideas and code, Google spokesperson Jason Freidenfelds told Business Insider.

Currently, Google has only released a version of the new technology that runs on a single machine, but it plans to release a multiple-machine version in the future, Freidenfelds said.

But Google is not making everything open source. "It's sharing only some of the algorithms that run atop the engine. And it's not sharing access to the remarkably advanced hardware infrastructure that drives this engine," Wired's Cade Metz reported.

According to Google, TensorFlow can also be used for other types of machine learning, or for things you might use a supercomputer for — from protein folding to analyzing astronomy data.

"We've seen firsthand what TensorFlow can do, and we think it could make an even bigger impact outside Google," the company wrote on its blog. "We hope this will let the machine learning community — everyone from academic researchers, to engineers, to hobbyists — exchange ideas much more quickly, through working code rather than just research papers."

Google made this YouTube video describing what TensorFlow is:

NEXT UP: Google’s AI system created some disturbing images after ‘watching’ the film Fear and Loathing in Las Vegas

NOW READ: The CEO of Google's £400 million AI startup is going to meet with tech leaders to discuss ethics


NOW WATCH: Google's translate app will redefine the way we travel

Facebook is using 'Lord of the Rings' to teach its programs how to think


lord of the rings return of the king

Lift the curtain on almost any tool on Facebook, and you're likely to see a robot at the controls. That's because artificial intelligence (AI) is responsible for powering things like automatic tagging and newsfeed.

To make their AI even more intelligent, Facebook researchers are harnessing the power of fantasy fiction — they're teaching it the "Lord of the Rings."

According to Popular Science, the social media behemoth is working on an AI, called Memory Network, that can understand and remember a story, and even answer questions about it. Any story could be used, but researchers taught Memory Network a short summary of J.R.R. Tolkien's fantasy saga "Lord of the Rings." Memory Network is powered by deep learning, a statistical approach that allows the AI to improve over time.

Facebook CTO Mike Schroepfer presented Memory Network at a developer's conference in San Francisco in March. He said the AI's ability to answer questions about Frodo and the ring shows it understands how people, objects, and time in the narrative are related.

Though Memory Network's knowledge of the "Lord of the Rings" is very stripped down, it's a first step toward an AI that has a common-sense understanding of the relationships between objects and topics, something that has so far been very difficult to encode in computers.

Eventually the AI could be used to improve the news feed and search, Schroepfer said, because its grasp of relationships would let it know what you're interested in before you even ask. If the AI can deduce from the pictures you share that you're a dog person and not a cat person, for example, a smarter news feed could show you more puppy videos and fewer cat videos.

"By building systems that understand the context of the world, understand what it is you want — we can help you there," Schroepfer said at the conference. "We can build systems that make sure all of us spend time on things we care about."

You can see how the AI works in the short video below. Asked "where was the ring before Mount Doom?" the AI was able to deduce that the ring was in the Shire. (Spoilers: Frodo and Sam make it.) The AI references the passages and sentences that give it the information it needs.

Some of the leading minds in AI research are working at Facebook to build intelligent machines. One of the group's more recent advances is a technology called Memory Networks, which enables a machine to perform relatively sophisticated question answering, as in this example of a machine answering questions about a Lord of the Rings synopsis.

Posted by Facebook Engineering on Thursday, March 26, 2015
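Facebook has not released the code behind this demo, but the retrieval half of the idea can be illustrated with a toy script that scores each sentence of a synopsis by word overlap with the question and answers from the best match. (A real Memory Network goes much further, tracking how entities move through the story over time, which simple word overlap cannot do.)

```python
STORY = [
    "Bilbo gave the ring to Frodo in the Shire.",
    "Frodo and Sam carried the ring to Mount Doom.",
    "The ring was destroyed in the fires of Mount Doom.",
]

def answer(question, story=STORY):
    q_words = set(question.lower().replace("?", "").split())
    # Score each stored sentence by how many question words it shares.
    scored = [(len(q_words & set(s.lower().rstrip(".").split())), s) for s in story]
    return max(scored)[1]  # the best-supported sentence

print(answer("Who carried the ring to Mount Doom?"))
```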

Memory Network is also part of a larger project called "Embed the World," which aims to teach machines to better understand reality by representing relationships between things such as "images, posts, comments, photos, and video," according to Popular Science.

Yann LeCun, Facebook's AI research director, told Popular Science that the project can tag photos taken in the same place based on the image and the caption alone.

Memory Network is just one example of Facebook's increased investment in AI. Facebook's digital assistant M uses some AI, and the company is also working on image recognition for video.


NOW WATCH: What processed meat really is — and why it could give you cancer

Coming Home: This West Point grad is using AI and Big Data for national security


Dr. Paulo Shakarian

As a computer science professor at Arizona State University, Paulo Shakarian applies artificial intelligence methods to pressing national security problems, using programs that try to map out what groups like ISIS will do next.

This work isn't abstract or theoretical for him.

As a West Point graduate and then an army intelligence officer during some of the most difficult years of the Iraq war, Shakarian got a ground-level view of the same kinds of groups whose behavior his work now helps predict.

"Growing up in the 80s and 90s in the US, war always seemed like a remote possibility," Shakarian told Business Insider.

But the September 11th attacks took place during Shakarian's senior year at West Point. The next generation of US military leaders realized that their futures had changed almost instantly. 

Within a year of graduating in 2002, Shakarian was deployed to Iraq as a Tactical Intelligence Officer in the Army's 1st Armored Division. He spent 14 months there between 2003 and 2004.

After that deployment, he served as the Platoon Leader of the 501st Military Intelligence Battalion based in Wackernheim, Germany, where he continued to process, collect, and analyze intelligence for the Army through 2005.

In 2006, Shakarian returned to Iraq with the 1st Infantry Division, this time as a Military Advisor for Intelligence with the National Police Transition Team.

In 2003, Iraq was still in a post-invasion daze. In 2006, it was in an incipient state of civil war. "There was a lot of combat, a lot of IEDs," recounts Shakarian. As it turned out, Shakarian would be deployed during the Iraq war's most violent year before the 2007 troop "surge."

During this time he earned a Bronze Star for combat service while continuing to work as an intelligence analyst.


Because of Shakarian's position as an Army Captain and an intelligence officer advising the Iraqi national police, he would often touch base with local-level US and Iraqi intelligence officials throughout the country.

It occurred to him that there were numerous US intelligence techniques that never saw much application in the field. Intelligence workers are advised to analyze all the data they have available and hypothesize possible causes or courses of action when their data no longer makes sense, or when the usual analysis methods fail. But few in the field actually have time for tiresome guess-and-check work, especially in the midst of a war.

In Iraq, Shakarian began to see ways to merge his knowledge of computer science, which he had studied at West Point, with his work as an intelligence analyst. He began to envision ways to use artificial intelligence to model the behaviors of often-unpredictable insurgent groups. 

"In an operating environment, there is no time to work on a single project of this size," said Shakarian of his early ideas for transforming intelligence analysis. After his deployment, he was back in garrison with Task Force Iron Sentinel in Wiesbaden, Germany, analyzing intelligence and managing the brigade's anti-terrorism efforts.

Paulo Shakarian

As part of an Army student detachment, Shakarian then went to the University of Maryland where he obtained his Masters Degree in computer science and worked as a research assistant.

He developed a focused yet highly ambitious new goal: To revolutionize intelligence analysis with the help of machine learning, or the science behind getting machines to operate without the painstaking and explicit programming they usually require. Among other things, machine learning could help cars become self-driving. Shakarian wanted to harness this ability to make a similar leap in intelligence analysis. 

Shakarian's work started to earn him attention. After he authored a paper called "Shaping Operations to Attack Robust Terror Networks," the House Permanent Select Committee on Intelligence invited him to brief them on his findings.

Shakarian eventually finished a Ph.D. at the University of Maryland and fulfilled a teaching commitment at West Point. When he exited the military in 2014, he continued his artificial intelligence work at ASU.

Shakarian's research has spawned several potentially game-changing programs, like the SCARE software, which Task Force Paladin used to detect IEDs in Afghanistan, or the GANG and SNAKE social media analysis packages that help the Chicago police fight gang activity.

Recently, his work at ASU made headlines when he presented a paper at the Knowledge Discovery and Data Mining conference on a mathematical model of ISIS' behavior. Currently he is working for the Department of Defense on a Minerva grant award, developing more cutting-edge technologies that use computer science to save lives. And this year, he won the Air Force's Young Investigator award.

Shakarian Lab asu

Immersed in engaging and cutting-edge work, Shakarian transitioned smoothly out of the military.

Shakarian says that his military experience taught him leadership skills he wouldn't have gotten in the civilian world. When running his team of 15 to 20 researchers at ASU, Shakarian is guided by the leadership skills he learned during his decade of service.

Overall, Shakarian says the military gave him the flexibility and the temperament needed to thrive in diverse environments — whether it's on the battlefields in Iraq or in a civilian research lab. "My time in the army helped me to adapt to other cultures," says Shakarian. "I find myself falling back on that a lot."

 

SEE ALSO: The Obama administration is making a major shift in its Syria strategy



Google's new artificial intelligence tool has massive potential


google photos california mountains

Google made waves Monday when it made its new artificial intelligence system TensorFlow open source.

Google has used TensorFlow for the past year for a variety of applications. For example, Google Photos is scary good at search because it uses TensorFlow to recognize places based on popular landmarks or characteristics, like the Yosemite National Park mountain range. 

Other Google products that use TensorFlow include Google search, Google's voice recognition app, and Google Translate. 

By making TensorFlow open source and letting any developer use it, Google can improve the system and see other ways it can be used beyond its current applications. That means it has the potential to make Google smarter at doing everything from delivering you better search results to recommending what YouTube videos to watch. It'll also make third-party apps a lot more useful.

Google declined to comment for this story.

But what is TensorFlow? To fully understand it, you need to understand artificial intelligence and deep learning at a basic level.

How does it work?

The easiest way to understand TensorFlow and Google's approach to AI is with image recognition. In 2011, Google created DistBelief, which used machine learning to identify what's in a photo by recognizing certain patterns. For example, it'll look for whiskers of a certain length to help determine if the image is of a cat.

The system was built using positive reinforcement — essentially, the machine was given an image and asked if the image was a cat. If the machine was correct, it was told so. But if the machine was wrong, the system was adjusted to recognize different patterns in an image so that it was more likely to get it right next time. 
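As a stripped-down illustration of that guess-and-correct loop (a one-parameter "cat detector" with made-up numbers, nothing like DistBelief's real architecture), consider:

```python
# Each example: (whisker_length, is_cat). The model guesses, is told whether it was
# right, and nudges its threshold whenever it was wrong.
examples = [(2.0, 1), (0.1, 0), (1.8, 1), (0.3, 0)]
threshold, step = 2.5, 0.1  # start with a deliberately bad threshold

for _ in range(20):                        # several passes over the data
    for whisker_length, is_cat in examples:
        guess = 1 if whisker_length > threshold else 0
        if guess != is_cat:                # wrong: adjust to be more likely right next time
            threshold += step if guess == 1 else -step

print("learned threshold:", round(threshold, 2))
```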

TensorFlow takes the concept a step further by using deep learning, or an artificial neural network composed of many layers. 

Basically, TensorFlow pushes data through layers of nodes to learn that the image it's viewing is of a cat. The first layer might look for something as basic as the general shape in the picture. The data then moves, or flows, to the next layer — which might look for paws in the photo.

Here's a demo of TensorFlow in action:

The data moves from layer to layer until the system has compiled enough information to say that the image is, in fact, of a cat. The multidimensional arrays that flow through the network are called tensors, hence the name TensorFlow.
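In code, that flow is just an array being pushed through successive layers of simple operations. A bare NumPy sketch (random weights, two layers, nothing trained) shows the shape of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((1, 64))             # a flattened toy "image": 1 sample, 64 pixels

# Two layers of weights; in a real network these are learned, not random.
layer1 = rng.standard_normal((64, 16))  # early layer: broad patterns (overall shape)
layer2 = rng.standard_normal((16, 2))   # later layer: finer evidence (paws vs. no paws)

hidden = np.maximum(0, image @ layer1)  # the tensor flows through layer one (ReLU)
scores = hidden @ layer2                # ...and then through layer two
print("cat vs. not-cat scores:", scores)
```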

What's its potential?

As Google writes on its blog: "TensorFlow is faster, smarter, and more flexible than our old system (DistBelief), so it can be adapted much more easily to new products and research."

Stanford University researcher Andrej Karpathy recently built an AI system capable of looking at 2 million selfies and figuring out which ones are most likely to get a lot of love. Karpathy noted that the only tools available to developers for deep learning projects, such as Theano, grew out of graduate students' side projects.

"TensorFlow is the first serious implementation of a framework for Deep Learning, backed by both very experienced and very capable team at Google," Karpathy wrote in an email to Tech Insider.

Jon Van Oast is a senior engineer for nonprofit WildMe, which compiles photos of different species of animals for research purposes.

The project began by collecting photos of whale sharks. Each whale shark has a unique configuration of spots, which allows you to tell one from another. WildMe uses a machine learning system called AdaBoost that allows researchers to upload a photo of a whale shark and see if there is a match for it already in the system.

If the system finds a match, the researcher now has more information about that whale's life, such as where it was previously located. WildMe has extended from that original purpose to sort through other species of wildlife as well.
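WildMe's actual pipeline isn't described in detail here, but the general pattern of fitting an AdaBoost classifier to labeled feature vectors looks something like this scikit-learn sketch, with random placeholder data standing in for real spot-pattern features:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(42)
X = rng.random((200, 8))          # placeholder feature vectors (real ones would encode spot patterns)
y = (X[:, 0] > 0.5).astype(int)   # placeholder labels: match / no match

model = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(model.predict(X[:5]))       # candidate matches a researcher could then confirm
```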

Van Oast told Tech Insider that he thinks TensorFlow could help sort through more images at a faster rate, providing researchers with more information. 

"It's a way to look at a big set of data where humans can't discern what's important in it," he said.

Delip Rao, who consults with startups on machine learning and is a former engineer for Twitter, told Tech Insider that you could use TensorFlow to build natural language processing systems, such as an intelligent messenger similar to Facebook's virtual assistant M. He said he's still exploring how he would use TensorFlow, but thinks it looks promising.



Virtual assistants are going to replace your mouse and touchscreen


iPhone 4s Siri

The idea of everyone having a virtual assistant sounds a bit pompous. After all, an "assistant" has traditionally been a luxury reserved for the elite.

Yet, many companies are working on bringing virtual assistants to the masses — including Facebook.

On-stage at the O'Reilly Next:Economy summit, Facebook's AI guru Alexandre Lebrun and Siri's original creator Adam Cheyer took issue with the notion that these tools are for the elite. 

"Really it’s just another way to interact," Cheyer said. 

Both argue that these "virtual assistants" are just new ways of interfacing with a computer. First came the keyboard and the mouse, and next it will be using natural language, Lebrun explained. 

Cheyer used the example of needing to pick up a bottle of wine that pairs well with a lasagna en route to his brother's house. If you wanted to do that yourself, you'd need to determine what wine pairs well with lasagna (search #1) then find a wine store that carries it (search #2) that is on the way to your brother's house (search #3). Once you have that figured out, you have to calculate what time you need to leave to stop at the wine store on the way (search #4). 

Instead, Cheyer's new company, Viv, is designing an "assistant" which can string all those answers together using artificial intelligence and give you the right answer. These assistants are designed to save time like a traditional assistant does, but they're also fundamentally changing how you would interact with your phone, your watch or your computer, Cheyer said. 

The future of human-computer interaction will be more like asking complex questions in natural language, rather than searching for the answer piece by piece by piece across a bunch of apps and websites.
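Viv hasn't said how its planner is implemented. Purely as an illustration of stringing those four lookups together, the chain could be expressed as ordinary function composition; every function below is a hypothetical placeholder for a real service:

```python
# All of these functions are hypothetical placeholders for real services.
def wine_for(dish):                  return "Chianti"        # search #1: the pairing
def store_carrying(wine, route):     return "Vino on 5th"    # searches #2 and #3: a store along the route
def departure_time(stop, arrival):   return "5:40 pm"        # search #4: when to leave

def plan_errand(dish, route, arrival):
    wine = wine_for(dish)
    store = store_carrying(wine, route)
    leave_at = departure_time(store, arrival)
    return f"Pick up {wine} at {store}; leave by {leave_at}."

print(plan_errand("lasagna", "route to my brother's house", "7:00 pm"))
```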

Facebook is beginning to address this with its M service, which handles tasks such as calling Comcast to get your cable bill lowered or booking a reservation. Of course, M is currently powered by real humans toiling away on these jobs and "training" the artificial intelligence technology to eventually shoulder more of the load. 

The goal of M is that virtual assistant technology will gradually learn how to do more transactional operations, rather than just rote searching of the web. It's a change in how users and companies interface as much as it is a change in how people interact with computers.

"The better we get, the less time you spend talking to customer service," Lebrun said. "It’s a gain for companies, but it’s also a gain for personal life."

SEE ALSO: My month as a human version of Siri — and how Facebook can avoid repeating history


NOW WATCH: 11 Easter-egg comebacks Siri has when you ask her about 'Back to the Future' day

These are the most mind-blowing robots, according to 18 artificial intelligence researchers


pepper

Hardly a day goes by when a robot doesn't beat a human at something originally thought to be impossible to automate.

This year especially, artificial intelligence (AI) has had a renaissance — Tesla pushed their self-driving autopilot out to all eligible cars, and Google and Facebook have both announced large investments in AI research.

The latest human jobs to be taken by robots include video game playing and trading stocks. In the near future, robots might even become your best friend.

Where will these technologies take us next? To know that, we should first determine what the best of the best looks like now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

Scroll down to see their lightly edited responses.

Subbarao Kambhapati is impressed by how quickly we've developed self-driving cars.

I think autonomous driving is most impressive to me. Autonomous driving first started in the Nevada deserts. It's harder to drive in the urban streets than in rough, almost nonexistent roads in the Nevada desert. Again, because the hardest thing is reasoning the intentions, to some extent, of other drivers on the road.

That has been quite impressive, that we went that far that quickly. I'm pretty much sure that some years down the line, none of us actually have to drive.

Commentary from Subbarao Kambhapati, a computer scientist at Arizona State University.



At this rate, cars will be driving themselves in no time, and Carlos Guestrin can't wait.

It took me a long time to really understand what the implications or impact of the self driving cars would be on our society. I don't like to drive now, so this is kind of a commodity for me.

The recent results that we're seeing with things such as self-driving cars, like an ability to significantly decrease traffic accidents — I think that's really exciting to think about.

I think about a world with no cars would be exciting to me but think about a world with automation of vehicles and the impact it will have on society. That's really exciting.

Commentary from Carlos Guestrin, the CEO and cofounder of Dato, a company that builds artificially intelligent systems to analyze data.



A program that learned to fly a model helicopter like a world-champion blew Peter Norvig away.

One of my favorite systems is Andrew Ng's system that learned to pilot a model helicopter from a few hours of observation, and was able to perform tricks at the level of world-champion pilots.

This was before the introduction of super-stable quadcopters — the copter used in this experiment was extremely challenging to control.

Commentary from Peter Norvig, director of research at Google.



See the rest of the story at Business Insider

Facebook already uses AI to recognize photos — the next step is video (FB)



While uploading photos to Facebook, you may notice that the social network will try to automatically tag faces and match them with their respective profiles.

Facebook's artificial intelligence chief, Yann LeCun, thinks the company's facial recognition is the best in the world, according to an interview with Popular Science. And the next step for Facebook's AI efforts is recognizing what's in the videos you watch.

While speaking at the 2015 Dublin Web Summit, Facebook CTO Mike Schroepfer said that while the amount of content Facebook considers showing in your News Feed grows every year, the company's algorithms have to be more selective to surface what you actually care about.

"We need systems that can help us understand the world and help us filter it better," Schroepfer said during the press event, according to Business Insider.

To get better at filtering video — which Facebook said it expects to account for the majority of content shared on its network in a few years — the company plans to use AI to scan the contents of videos like it already does for photos.

Popular Science recently talked to Rob Fergus, who leads the AI research team at Facebook, about the new frontier of using AI to scan video:

"Lots of video is “lost” in the noise because of a lack of metadata, or it’s not accompanied by any descriptive text. AI would “watch” the video, and be able to classify video arbitrarily.

This has major implications for stopping content Facebook doesn’t want from getting onto their servers—like pornography, copyrighted content, or anything else that violates their terms of service. It also could identify news events, and curate different types of video category. Facebook has traditionally farmed these tasks out to contracted companies, so this could potentially play a role in mitigating costs.

In current tests, the AI shows promise. When shown a video of sports being played, like hockey, basketball or table tennis, it can correctly identify the sport. It can tell baseball from softball, rafting from kayaking, and basketball from street ball."

Make sure to read the full story at Popular Science for more details.


NOW WATCH: We asked a bunch of kids what they think about Facebook

Microsoft CEO Satya Nadella: Smart agents like Cortana will replace the web browser


Satya Nadella Dreamforce

First, we had the PC. Then we had the browser. 

Next, Microsoft CEO Satya Nadella thinks it will be the "agent," the sort of virtual assistant, like his company's Cortana, that controls the apps on our phone or computer for us.

"To me, AI is going to happen," Nadella said, on-stage at the O'Reilly Next:Economy summit. "It's technology that's inevitable."

To Nadella, the new wave of "agents," or AI-assisted services like Cortana or Siri, is going to change how we browse the web. 

It's still browsing, but it's different because you're not invoking every app, Nadella said. Instead, if someone asks "do I need to bring an umbrella today?" it will be the agent who knows your location and can look up the weather to see if it's raining and say yes or no. 

To Nadella, the future will be a lot of people walking around talking to their agents naturally. "'Hey Cortana' is in my vocabulary. Having that become more pervasive is my pursuit," Nadella said. 

Microsoft isn't the first to consider this a marked change in user interface and how people will interact with computers.

Earlier at the event, Facebook's AI guru Alexandre Lebrun and Siri's original creator Adam Cheyer took issue with the notion that these tools are for the elite. 

Both argue that these "virtual assistants" are just new ways of interfacing with a computer. First came the keyboard and the mouse, and next it will be using natural language, Lebrun explained. 

Cheyer used the example of needing to pick up a bottle of wine that pairs well with a lasagna en route to his brother's house. If you wanted to do that yourself, you'd need to determine what wine pairs well with lasagna (search #1) then find a wine store that carries it (search #2) that is on the way to your brother's house (search #3). Once you have that figured out, you have to calculate what time you need to leave to stop at the wine store on the way (search #4). 

Instead, Cheyer's new company, Viv, is designing an "assistant" which can string all those answers together using artificial intelligence and give you the right answer. These assistants are designed to save time like a traditional assistant does, but they're also fundamentally changing how you would interact with your phone, your watch or your computer, Cheyer said. 

SEE ALSO: My month as a human version of Siri — and how Facebook can avoid repeating history


NOW WATCH: 11 Easter-egg comebacks Siri has when you ask her about 'Back to the Future' day

Researchers say this is the most impressive act of artificial intelligence they've ever seen


deep mind dqn atari

It's probably been a while since you picked up an Atari controller, but even if you played for decades, you probably couldn't beat Google's DeepMind artificial intelligence program.

While a program that plays video games might seem gimmicky, artificial intelligence (AI) researchers have told Tech Insider over and over again that it's one of the most impressive technology demonstrations they've ever seen.

DeepMind made big news back in February 2015, when researchers announced it could learn to play and win games on the Atari 2600 — a simple console that was popular in the 1980s — without any instructions or prior knowledge of how to play video games.

The program watched its own gameplay and learned how to win all by itself. It also had a "reward" system so it knew when it was improving at gameplay.
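DeepMind's actual DQN is far more involved, but the interaction loop it learns from (see the raw pixels, pick an action, receive the change in score as a reward) can be sketched with OpenAI Gym's Atari environments, assuming an older gym release and the Atari ROMs are installed:

```python
import gym

env = gym.make("Breakout-v0")           # raw-pixel Atari environment (older gym API)
obs = env.reset()                        # the screen the agent "sees"

total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()   # a trained agent would pick actions from a learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward               # the score change is the only feedback signal

print("episode score:", total_reward)
```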

The computer beat all human players in 29 Atari games, and performed better than any other known computer algorithm in 43 games.

Its one weakness, according to the MIT Technology Review, was in games like Ms. Pac Man, where it had to plan ahead to clear the last dots from the maze.

Still, the program came up over and over again in the last few months, while Tech Insider reporter Guia Marie Del Prado interviewed dozens of AI researchers. Del Prado asked them what AI they've seen that really blew them away, and the Atari-playing DeepMind program was the most-mentioned.

"The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games," Stuart Russell, a computer scientist at the University of California at Berkeley, told Tech Insider.

You can see how bad it is during the first ten minutes of learning the game Breakout here:

"The DeepMind results on learning to play Atari games while only having access to raw pixels and the game score have been very inspiring," Pieter Abbeel, another computer scientist at UC Berkeley, told Tech Insider. "I have been very excited about our own recent results on the same benchmark, as well as learning to walk in simulation — with a single algorithm able to learn those two very different types of tasks."

It takes about 120 minutes for the program to reach expert level at Atari's Breakout:

Michael Littman, a computer scientist at Brown University, told Tech Insider: "I think that's really neat because it starts to point the way towards systems that aren't just really really clever pieces of programming but are actually taking their experience and turning it into intelligent behavior."

But Russell notes the program is so good it's almost frightening. "That's both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you'd be terrified," he said.

Check out DeepMind in action playing Breakout in this video from Karoly Zsolnai-Feher on YouTube. After a few hours of existing, the AI learns the easiest way to beat the game is to tunnel through the bricks (bet your newborn can't do that):


NOW WATCH: We asked an astronaut if aliens exist and his answer was spot on


Here's how Mark Zuckerberg chooses who sits next to him at the Facebook office (FB)


During Fast Company’s recent in-depth look into Facebook’s plans for the future – which includes artificial intelligence, virtual reality, and drones – one employee offered some interesting insight into the way CEO Mark Zuckerberg decides who sits next to him.

It turns out that Zuckerberg has a "signature move" to help him "absorb new material": he places teams that are in charge of new initiatives, or of areas he'd like to learn more about, near his desk.

Yann LeCun, a New York University faculty member and head of Facebook’s AI Research headquarters in Manhattan, told Fast Company that Mark Zuckerberg actually shuffled the seating order of the office so that artificial intelligence researchers who were located at its main campus would be in his direct vicinity. 

"When we moved to the new building, we ended up being separated from Zuck by about 10 yards," LeCun explained. "He said, ‘No, this is too far, move closer.’ "

This is not the first time Zuckerberg has pulled hired talent closer to him. When Facebook was preparing to launch its Timeline feature in 2011, he required that Instagram cofounder Kevin Systrom and other key design talent sit within arm's length of the boss’ desk.

You can read the whole Fast Company interview here.

SEE ALSO: Mark Zuckerberg explains that the key difference between Facebook and Alphabet is that it 'can't fail'


NOW WATCH: Facebook and Instagram won’t let you mention or post links from this competitor

Mark Zuckerberg wants Facebook to become better than you at hearing and seeing things (FB)


In Fast Company’s expansive look into Mark Zuckerberg’s vision for the future of Facebook – which includes virtual reality and drones – the founder revealed that researchers at the company’s artificial intelligence division have one clear focus: make machines better than man.

And Facebook thinks it's not far-fetched for its AI engineers to create a Facebook of the future that is even more powerful than a human's primary senses.

"One of our goals for the next five to 10 years is to basically get better than human level at all of the primary human senses: vision, hearing, language, general cognition," Zuckerberg told Fast Company. "Taste and smell, we’re not that worried about ... For now."

Facebook's 50-person AI team, headed by Yann LeCun in Manhattan, is tasked with preparing Facebook for an era where all devices are connected, researching ways to harness the vast amounts of data that will soon be flowing through the social media site.

"If there's 10x or 20x or 50x more things happening around you in the world, then you're going to need these really, really intelligent systems like what Yann and his team are building," Jay Parikh, Facebook's VP of engineering, told Fast Company.

You can read the whole Fast Company interview here.

SEE ALSO: Mark Zuckerberg explains that the key difference between Facebook and Alphabet is that it 'can't fail'


NOW WATCH: Facebook and Instagram won’t let you mention or post links from this competitor

Pinterest is about to launch an AI tool that will make online shopping a lot easier


Pinterest Visual Search Tool

Ever looked at an image and wanted to buy a specific item in it?

Well, Pinterest is currently rolling out an artificial intelligence tool that uses deep learning to let you search for items within an image, find them, and potentially buy them, according to Pinterest's blog.

To use the function, Pinterest users can tap a search tool that will appear in the corner of pins. From there, users will be able to draw a box around the specific item they would like to search. So, if you're looking at a photo of a living room and are only interested in the lamp, you can single out that specific item.

Pinterest will then draw from its index to produce the exact same item or visually similar items. Pinterest also rolled out a buy button feature over the summer for Pinterest posts, meaning that users could theoretically search for an item in an image and buy it on the spot.

“Image representation coming from deep learning is much, much more accurate,” Kevin Jing, head of visual search at Pinterest, told MIT Technology Review. “Even this year there has been so much improvement.”

Pinterest is not the first to use deep learning for image searching purposes.

Google's TensorFlow, an artificial intelligence system made open source earlier this month, is scarily good at searching through photos because it can recognize places based on popular landmarks.

Deep learning works by searching through layers of data to recognize images. In the case of Google's TensorFlow, the system will look through data to determine what the picture is. It'll first look for basic characteristics, like the shape of the image, and then get more specific, such as whether it has paws, indicating it's an animal. 

Pinterest's deep learning system will work similarly to determine what the image is and then scan an index to see if similar ones are available.
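Pinterest hasn't published this pipeline, but the retrieval step it describes (embed the cropped region, then look up the nearest items in an index) can be sketched with plain NumPy and cosine similarity; the embed function below is a made-up stand-in for a real deep network:

```python
import numpy as np

def embed(image_crop):
    # Hypothetical stand-in: a real system would run the crop through a deep network.
    rng = np.random.default_rng(abs(hash(image_crop)) % (2**32))
    return rng.random(128)

# A tiny "index" of previously embedded catalog items.
catalog = {name: embed(name) for name in ["brass lamp", "floor lamp", "leather sofa"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_crop):
    q = embed(query_crop)
    return max(catalog, key=lambda name: cosine(q, catalog[name]))

print(most_similar("lamp in the corner of a living-room photo"))
```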

Footwear retailer Shoes.com is currently testing a deep learning tool in its Canadian store, according to MIT Technology Review. The system will allow you to continuously filter through boot styles to find the particular type you are looking for.


Neil deGrasse Tyson explains why killer robots don't scare him



Movies would have you believe that killer robots are the inevitable future of technology gone awry — but Neil deGrasse Tyson isn’t afraid. Here’s why.

Produced by Kevin Reilly, Darren Weaver, and Kamelia Angelova. Animations by Rob Ludacer.



StarTalk Radio is a podcast and radio program hosted by astrophysicist Neil deGrasse Tyson, where comic co-hosts, guest celebrities, and scientists discuss astronomy, physics, and everything else about life in the universe. Follow StarTalk Radio on Twitter, and watch StarTalk Radio "Behind the Scenes" on YouTube.


An artificial intelligence researcher created a computer program to judge your selfies


kim kardashian selfie

Selfies are an amazing way to tell the world "here I am, rock you like a hurricane!" But according to a new artificial intelligence (AI) system built by a Stanford University researcher, not all selfies are considered equal.

Stanford PhD student Andrej Karpathy built a deep learning system that analyzed 2 million selfies and could tell which selfies would attract the most likes.

It turns out, if you want your selfie to take over Instagram, you might want to follow a few rules — be a woman, have long hair, take a close up, and crop it close enough that the forehead is cut off, among other things.

But before the AI got to these conclusions, it had to be trained.

Here's how it works

The AI system is based on a technology called convolutional nets, which were first developed by Facebook's head of AI research Yann LeCun in the 1980s. If you've ever used image recognition or deposited a paycheck at an ATM, you've used a convolutional net.

The selfie-judging AI system works like an assembly line — the image goes in on the left, goes through levels of analysis, and comes out on the right. Each level breaks the image down pixel by pixel. The first few layers look at simple facets, like shapes and colors, while the layers toward the end look at "more complex visual patterns," Karpathy writes.
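Karpathy hasn't released the exact network, but a toy convolutional stack with that simple-to-complex layer structure can be written in a few lines of Keras; the layer sizes and input resolution here are arbitrary, not his:

```python
import tensorflow as tf

# Early layers respond to simple facets (edges, colors); later ones respond to
# more complex visual patterns; the final unit scores the selfie.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(140, 140, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a "good" selfie
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```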

Karpathy found the selfies by looking for images tagged #selfie, then divided them into good and bad according to the number of likes (taking into account the number of followers that the person had). He also filtered out images that used too many tags, and people who had too few followers or too many followers.
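The article doesn't give Karpathy's exact thresholds; a simplified version of that labeling step, with made-up cutoffs and a hypothetical post schema, might look like:

```python
def label_selfies(posts, min_followers=100, max_followers=100_000, max_tags=10):
    """posts: list of dicts with 'likes', 'followers', and 'tags' keys (hypothetical schema)."""
    kept = [p for p in posts
            if min_followers <= p["followers"] <= max_followers and len(p["tags"]) <= max_tags]
    # Score likes relative to audience size, then split at the median: top half is "good".
    kept.sort(key=lambda p: p["likes"] / p["followers"])
    half = len(kept) // 2
    return [(p, i >= half) for i, p in enumerate(kept)]  # (post, is_good)
```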

Then the magic began. According to Karpathy, the system "'looked' at every one of the 2 million selfies several tens of times," and found the components that make a selfie either good or bad.

The results

After the system was trained, he fed it 50,000 selfies that the AI had never seen before, and it was able to "rank" them based on the images alone. In the image below, the AI system ranks the selfies from best to worst.

The AI found that the selfies most likely to get hearts had a few things in common. They often contained long-haired women on their own. The selfies were also very washed out, filtered, bordered, cut off at the forehead, and featured a face in the middle third of the photo.

Below are the cream of the crop. Notice that of the best selfies, not a single man is included, and there are very few people of color.

On the other hand, the worst images, or the selfies least likely to get any love, were group shots, badly lit, and often too close up.

Get your selfie judged by a robot

Karpathy even made a Twitter bot, which looks at people's submitted selfies and judges them automatically. I tried it out myself with my latest Instagram selfie from about two weeks ago, and got a slightly better than average score.

It's also pretty fast — it replied with my results in just a few seconds. Try it out by tweeting a square image or link to an image at @deepselfie.

My selfie follows a few of the rules — it's square, pretty washed out, and filtered; it cuts off my forehead, shows my long hair, and I'm more or less in the middle. I don't know if it got any more love for featuring a napping kitten.

But it might be a bad idea to follow the rules to a tee just for the likes. The AI system is more like an amalgamation of the things that a lot of people like, excluding any sort of creativity like funny faces, blue wigs, or pictures of you and your friends.

After all, selfies are supposed to be an expression of self-love. So if your favorite selfie doesn't score that high, who cares — you just do you.


NOW WATCH: Scientists have figured out the best place to hide in a zombie apocalypse
