
A new Facebook prototype AI can watch and describe what’s happening in videos


Facebook has been ramping up its use of artificial intelligence (AI) in the past few years: improving news feeds, unveiling its own personal digital assistant, and automatically tagging photos.

Now Facebook is working on programs that can "watch" videos and classify and tag them.

Rob Fergus, head of the computer vision team at Facebook Artificial Intelligence Research (FAIR), told Popular Science that the team is creating what's basically image recognition for video.

Popular Science reported that a lot of the video posted on Facebook now gets lost in the shuffle because it lacks the descriptive text that accompanies images, called metadata. The new AI under development would be able to "watch" a video and describe what's happening in it.

They're testing early versions of it now and published a paper about their research on the FAIR website. Below is a video of Facebook's AI system watching and tagging short scenes of people playing sports.

Here's another video of Facebook's prototype AI, this time deciding whether a scene shows people playing baseball or softball.

Popular Science reports that this program would do a lot to deter users from uploading offensive videos like porn, or from stealing a copyrighted video and uploading it as their own. Facebook also hopes that it would help track viral news events and identify different kinds of video genres — like sports or animals.

AI watching a video and recognizing what's happening in it is still a pretty major feat, at least for social media networks. It could make finding similar videos on Facebook and Instagram easier, especially for those days when you just need to watch a lot of cute animal videos.

But there are already AI systems that can spot erratic behavior on surveillance systems in real time, while some systems that can look at and describe still images in real time and in natural language are still being perfected.



If you think robots are amazing, check out what toddlers can do


Artificial intelligence can perform some mind-boggling tasks faster and better than any human. Time and time again, robots have proved their superiority over humans in Jeopardy, chess, and even stock trading.

But strangely enough, AI systems still struggle at tasks that come naturally to humans, even toddlers.

Pieter Abbeel, a roboticist at the University of California, Berkeley, and co-founder of Gradescope, told Tech Insider that what today's AI struggles with most are simple sensory tasks — including touch, vision, and locomotion.

Called the "Hans Moravec's Paradox," it's a problem that plagues today's most sophisticated AI. The paradox is named after the AI researcher who wrote about the problem in his 1990 book "Mind Children: The Future of Robot and Human Intelligence."

25 years ago Moravec wrote that "it's comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception or mobility."

That paradox holds true today.

"This [problem] is well appreciated by researchers in robotics and AI, but can be rather counter-intuitive to people not actively engaged in the field," Abeel told Tech Insider. "Replicating the learning capabilities of a toddler could very well be the most challenging problem for AI, even though we might not typically think of a one-year-old as the epitome of intelligence."

Working on unraveling this paradox has given Toby Walsh, a professor of computer science at National ICT Australia (NICTA), a profound respect for the human brain.

"To see my daughter (my wife is German, so my daughter is bilingual) simultaneously learning two languages, just blew my mind about the capabilities of the human brain," he said.

This video, taken at a competition called RoboCup where robots play soccer, exemplifies Moravec's paradox. The robots struggle to stand, walk, and kick while trying to play soccer.

The video is titled "Robots playing soccer at RoboCup2015 is like watching toddlers learn to kick," but based on footage of peewee soccer teams, 3-year-old soccer players would easily and adorably annihilate these robot opponents.

There is one possible explanation for why vision, bipedal locomotion, and sensory tasks are so difficult to get right. According to Moravec, AI hasn't learned from "a billion years of experience about the nature of the world and how to survive in it."

These years of evolution gave humans remarkable sensory and motor abilities so we can understand threats and take action.

Today's AI also lacks the ability to "look at objects we've never seen before and apply our common sense to understand how they work and what we need to do" in the situation, Walsh said.

"Some of the early pioneers in AI, John McCarthy and his colleagues, identified common sense reasoning as being a fundamental challenge for AI," Walsh said. "There's a huge amount of implicit common sense that we pick up as children and apply to our daily lives that we're perhaps not actually that aware of."


Test your smarts against this program that can solve geometry SAT questions


Scientists recently unveiled a new AI system that can outfox average high school students on the geometry portion of the SAT.

The program answered 49% of the official SAT geometry questions correctly, and got 61% of the practice test questions right. If you extrapolate its performance to the entire SAT math section, it would have gotten a 500 out of 800.

That's about the score of an average high school student, according to the press release from the Allen Institute for Artificial Intelligence and the University of Washington.

Scroll down to try some of the SAT geometry practice questions that the program, called GeoS, solved, and see how well you stack up.

When GeoS encounters a new SAT question it first looks at the diagram and reads the question. It then interprets the diagram and the question as equations.



Then the system scores and ranks the accuracy of the equations and chooses the answer that best reflects those equations. The GIF below shows the equations that the program found in the question and used to solve it.
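To make that parse-score-rank loop concrete, here's a toy sketch in Python. The literals, confidence numbers, and right-triangle check are all invented for illustration; the real GeoS uses trained text and diagram parsers plus a numerical solver.

```python
# Toy sketch of GeoS-style question solving: extract logical literals
# with confidence scores, then rank the answer choices against them.
# All literals, scores, and the checker are invented for illustration.

def extract_literals(question_text, diagram):
    """Stand-in for the parsing step: candidate facts plus confidences."""
    return [
        ({"angle_ABC": 90}, 0.9),  # "ABC is a right angle"
        ({"len_AB": 3}, 0.8),      # "AB = 3"
        ({"len_BC": 4}, 0.7),      # "BC = 4"
    ]

def consistent(facts, answer):
    """Stand-in for the solver: does the answer (length of AC) follow?"""
    if facts.get("angle_ABC") == 90:
        return answer ** 2 == facts["len_AB"] ** 2 + facts["len_BC"] ** 2
    return False

def choose_answer(question_text, diagram, choices):
    literals = extract_literals(question_text, diagram)
    facts = {k: v for fact, _ in literals for k, v in fact.items()}
    # Score each choice by the total confidence of the supporting literals.
    scores = {c: sum(conf for _, conf in literals) if consistent(facts, c) else 0.0
              for c in choices}
    return max(scores, key=scores.get)

print(choose_answer("In right triangle ABC, what is AC?", None, [4, 5, 6]))  # -> 5
```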




Oren Etzioni, AI2's CEO, told Tech Insider by email that SATs are "more useful than the Turing Test" for testing how smart AI systems are. Take a stab at the question below:




The 3 biggest misconceptions about artificial intelligence, according to Facebook's expert


Science fiction movies about artificial intelligence (AI) abound with plots of benign robots and computers that suddenly gain sentience and emotions, leading them to destroy all of humanity.

The seemingly helpful HAL in "2001: A Space Odyssey" is suffocating astronauts in their sleep by the end of the movie.

The latest in a long history of murderous robots, "Ex Machina's" Ava kills one man and leaves another for dead in her quest to escape.

But these stories of emotionally unstable robots breed baseless fears about what our AI future will really be like. That future robots will have human-like emotions is a huge misconception, said Yann LeCun, the director of Facebook's Artificial Intelligence Research team.

LeCun explains to Tech Insider via email how robot emotions will actually work.

Myth #1: Advanced robots will have feelings.

The AI we have right now looks nothing like Ava; it looks more like the specialized robots of Wall-E. Ava is what's called artificial general intelligence (AGI): an AI that exhibits human-level intelligence across as many different tasks as an average human. She can see, speak, listen, reason, and even manipulate.

The AI we have now is what's called artificial narrow intelligence (ANI): AI that is amazing at very narrow, specialized tasks, like trading stocks or solving geometry problems. Researchers will likely keep developing more sophisticated ANI for many different tasks — at least in the near future.

"Most AIs will be specialized and have no emotions," LeCun told Tech Insider. "Your car's auto-pilot will just drive your car," and it won't be programmed to "feel" any certain way about that.

Myth #2: Robots will develop emotions spontaneously.

But even Wall-E got it wrong. He was an ANI — a garbage compactor — that developed a lot of emotions. He experienced awe, fear, and love — emotions that didn't make him more efficient at his task. In reality, AI will only have emotions if they're programmed with them. Emotion wouldn't be a byproduct of creating super-intelligent programs, because it wouldn't be of any use.

The one reason we might program emotions into robots would be to make them easier to work with — to make them less like unfeeling automatons and more like a favorite coworker.

"We can build into them altruism and other drives that will make them pleasant for humans to interact with them and be around them," LeCun wrote.

LeCun points to the AI character Samantha in "Her" as "not entirely implausible" because she's built specifically to feel emotions — namely love — to build a relationship with her human partner. Still, LeCun said the kind of AGI you see in "Her" is decades away.

But that wouldn't include negative (and, in science fiction, dangerous) emotions like anger and jealousy.

Myth #3: Robot emotions will be similar to human emotions.

If humans program robots with emotions, it's likely they'll look nothing like ours, LeCun said. Robot emotions will be very rudimentary compared with humans'. An AI's emotions will instead reflect its programmed goals, based on the "anticipation of rewards," according to LeCun.

"Right now, the way we train machines is 'supervised,' a bit like when we show a picture book to a toddler and tell them the name of everything," LeCun said.

In this case, the AI system takes the role of a student while the researcher is the teacher. And like a human student, the AI would have a built-in drive to succeed, and would work to anticipate improved success.
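As a toy illustration of that teacher-and-student setup, here's a minimal supervised-learning loop in Python. The data and the perceptron-style update rule are invented for illustration; real systems train neural networks on millions of labeled examples.

```python
# Toy supervised learning as "teacher and student": the teacher labels
# examples (the picture book), and the student adjusts itself whenever
# its guess misses the reward of being correct. All data is invented.

examples = [            # inputs labeled by the "teacher"
    ((1.0, 0.0), "cat"),
    ((0.0, 1.0), "dog"),
]

weights = {"cat": [0.0, 0.0], "dog": [0.0, 0.0]}

def predict(x):
    # The "student" guesses the label it expects to be rewarded for.
    return max(weights, key=lambda lbl: sum(w * xi for w, xi in zip(weights[lbl], x)))

for _ in range(10):              # repeated lessons
    for x, label in examples:
        guess = predict(x)
        if guess != label:       # wrong guess: nudge toward the right label
            weights[label] = [w + xi for w, xi in zip(weights[label], x)]
            weights[guess] = [w - xi for w, xi in zip(weights[guess], x)]

print(predict((1.0, 0.0)))  # -> "cat"
```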

But LeCun assures us that even if AI systems have some form of emotions, there's no reason to fear. LeCun said this goal-reward behavior will likely be the whole scope of an AI's emotional depth. The most destructive emotions, like greed and anger, will remain uniquely human.

"There is no reason for AIs to have self-preservation instincts, jealousy," LeCun said. "AIs will not have these destructive 'emotions' unless we build these emotions into them. I don't see why we would want to do that.


The 10 craziest projects Google has acquired


Google has acquired 182 companies to date, a lot of which specialize in some crazy stuff.

With a market cap of roughly $436 billion, Google has dipped its toe in just about everything. From music streaming (via its 2014 acquisition of Songza) to child-friendly apps (via the Launchpad Toys acquisition in February), Google seemingly has it all, with no sign of slowing down.

The 10 biggest acquisitions Google has made totaled more than $24 billion, and that's with most of the financial details behind Google's nearly 200 acquisitions never having been released.

Most of the acquisitions listed here will be under the purview of Alphabet, which will become Google's new holding company. This isn't surprising, since Alphabet is basically designed to oversee Google's more ambitious projects while Google manages the core products.

Here's a list of the 10 craziest companies Google has acquired thus far (hint: get ready for a lot of robots).

Tilt Brush is a tool that lets you paint three-dimensional images

What they do: Tilt Brush allows you to paint in 3D. You can stick with basic brush strokes or add smoke or light to your images. You can choose from a variety of brush types, from dry ink strokes to strokes that look like leaves. There's even an option to use brushes that animate what you draw. Users can then make an AutoGif to share what they created.

When it was acquired: April 16, 2015

What it integrates with: Tilt Brush integrates with Google Cardboard, which lets you experience virtual reality, so that you can experience the 3D images created using Tilt Brush.

 



THRIVE Audio creates headphones that provide "3D audio"

What they do: THRIVE is a company born out of Trinity College Dublin's engineering department that creates headphones that let you experience 3D audio. It's hard to conceptualize 3D audio, but essentially it reacts to a listener's movements in a virtual scenario, accounting for height, depth, and distance.

When it was acquired: April 16, 2015 (same day as Tilt Brush)

What it integrates with: Like Tilt Brush, THRIVE integrates with Google Cardboard so that you can experience surround sound in virtual reality.



Google reportedly bought the artificial intelligence startup DeepMind Technologies for more than $400 million

What they do: Founded in 2010, DeepMind Technologies was acquired by Google in 2014 to create Google DeepMind. DeepMind specializes in artificial intelligence, creating algorithms capable of learning games like Space Invaders. In that case, the only instruction the algorithm was given was to maximize the score. Within 30 minutes, it was the best Space Invaders player in the world.
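The "maximize the score" setup is reinforcement learning. As a rough illustration of the idea, here's a tiny tabular Q-learning loop on a made-up three-state game; DeepMind's agent instead learned from raw pixels with a deep neural network.

```python
# Tiny tabular Q-learning sketch: the agent is only told to maximize
# reward (the score) and discovers a strategy by trial and error.
# The three-state "game" below is invented for illustration.
import random

N_STATES, ACTIONS = 3, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Moving 'right' advances toward state 2, which pays off."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    state = 0
    for _ in range(10):
        # Explore sometimes; otherwise take the best-known action.
        action = random.choice(ACTIONS) if random.random() < 0.1 else \
                 max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + future value.
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned start-state policy: "right"
```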

Created by Demis Hassabis, a child chess prodigy turned artificial intelligence researcher, DeepMind's artificial intelligence technology is capable of mining social networks for patterns at record speeds and learning to play 49 Atari 2600 video games when given only minimal background information.

DeepMind has not released any products. But with Elon Musk and Peter Thiel as investors, Google DeepMind is worth keeping an eye on.

When it was acquired: October 23, 2014, the same day Google acquired another artificial intelligence company, Vision Factory.

What it integrates with: Nothing; it became its own division, Google DeepMind. Vision Factory integrates with Google DeepMind as well.




An emotional robot just met Neil deGrasse Tyson and the results were adorably strange


One world-renowned astrophysicist. 

One emotive robot.

One giant hug:

What led to this embrace?

Neil deGrasse Tyson, the astrophysicist at the American Museum of Natural History and host of "Cosmos," was moderating at the Clinton Global Initiative's "The Future of Global Impact" conference on Monday.

He was briefly joined on stage by Pepper, the emotive humanoid robot built by the French lab Aldebaran for the Japanese telecom giant Softbank. 

Described by its designers as "the first humanoid robot designed to live with humans," Pepper has already been dispatched to do consumer research in phone stores, and it's intended to help take care of Japan's booming elderly population.

The robot came on stage and talked with Tyson about why it didn't want robots to be thought of as scary machines intent on taking over the world.

Pepper wants to help people out.


It was enough for Tyson, who said that Pepper was "nothing like Terminator," referencing the action films where robots dominate the human race.

Pepper will cost about $1,900 when it's available in 2016.

Here's the whole video; take a look (Pepper appears about 10 minutes in):

 


This 'emotional' robot is about to land on US shores — and it wants to be your friend


Pepper the robot wants to be your friend. It can listen to you, can tell when you're feeling down, dance, and follow you around — all on its own. Next year, Pepper is coming to America.

In the meantime, Pepper is already making friends with one of the world's most famous astrophysicists, Neil deGrasse Tyson.

At the Clinton Global Initiative's "The Future of Global Impact" conference on Monday, Pepper hugged it out with Tyson, an astrophysicist at the American Museum of Natural History and the host of "Cosmos" and "Star Talk," who was moderating the event.

But before it can take over the US, Pepper needs to get to know us better. To better fit in with its new audience, Pepper's creators are giving the robot's manners an American makeover.

When the "emotional" robot debuted in Japan earlier this year— selling for $2000 American dollars — it sold out in under a minute. Japanese consumers were eager to get their own personal robot buddies, outfitted to understand the ins and outs of Japanese culture. In Japan, Pepper bows and "is much more silly and cute," Aldebaran Robotics communications manager Alia Pyros told MIT Technology Review.

"In the US, we have this kind of C-3PO idea, where he's kind of snarky and kind of smart," Pyros said. So they adapted the robot's personality accordingly.

The robot's manufacturer, Aldebaran Robotics, a French company now owned by the Japanese corporation Softbank, is giving Pepper an American education a la "My Fair Lady." But in addition to pronunciation and etiquette, Pepper is also learning to give fist bumps and deliver sassy, sarcastic comebacks.

"I think we should partner together," Pepper told Tyson shortly after giving him a fist bump. "With your brain and my shiny good looks, we would make a great team."

It may seem kind of silly, but an easygoing interaction will be key to Pepper's success in American markets. Yann LeCun, Facebook's artificial intelligence (AI) research director, told Tech Insider in an email that while AI won't develop feelings on its own, some robots will be programmed with emotions because it makes them easier for humans to work with.

"We can build into them altruism and other drives that will make them pleasant for humans to interact with them and be around them," LeCun wrote.

This is exactly what Pepper's creators are trying to build into the robot. And since American culture is so different from Japan's, Pepper's creators changed its programming to adapt it to our emotions.

While Pepper still sounds a little alien, Brian Scassellati, a professor at Yale University who researches robot-human interactions, told MIT Technology Review that AI will soon get to the point where these interactions will be more fluid.

"Human-robot interaction has really started to home in on the kinds of behaviors that give you that feeling of presence," Scassellati said.

Pepper is pretty limited to social interaction — it can entertain and help people in stores and offices. Pepper told the audience at the Clinton Global Initiative that it can also play games, keep track of calendar events, pull information from the internet, assist doctors and nurses in the hospital, and keep the elderly company in their homes.

It can navigate spaces on its own, carry on conversations, and even recognize whether you're frowning or smiling, but the child-sized robot can't do useful things like hold or carry objects.

That might be changing in the future. According to MIT Technology Review, Aldebaran will soon work with software developers to build custom capabilities, and it is collaborating with IBM's Chef Watson to create recipes and walk humans through them.

But don't expect Pepper to be anything more than a friend — the company is strictly against it. They've asked buyers to sign a user agreement stating that they wouldn't "perform any sexual act or other indecent behavior" with the robot, according to Japan Times.

If you are willing to sign that user agreement, Pepper will be available in the US in 2016. Pepper's US price hasn't been released yet, but it sells for around $2,000 in Japan, with required monthly subscription fees for updates and maintenance, according to MIT Technology Review.


This drone is one of the most secretive weapons in the world


The drone above, called the Taranis, is one of the most cutting-edge drones in production.

At speeds of more than 700 miles an hour, it could come and go without anyone on the ground noticing it, but for the sound of its sonic boom.

It's "virtually invisible to radar," according to David Coates, a spokesperson for BAE Systems, the company that manufactured the drone.

Aptly named for the Celtic god of thunder, the British-made Taranis is one of the most advanced aircraft ever built, and certainly the most advanced built by British engineers.

But its development, especially some of its automatic features, was described in a 2013 UN report as "shrouded in secrecy." Details about what the Taranis is capable of are under lock and key. Here's what we do know and why its development has some ethicists worried.

The Taranis isn't deployed yet, and the UK military has no plans to make it part of its official fleet as is. It's what's called a demonstrator, meaning it's being used to test technologies that may be used in future aircraft, Coates said.

According to BAE's website, the Taranis is capable of "undertaking sustained surveillance, marking targets, gathering intelligence, deterring adversaries, and carrying out strikes in hostile territory," all with the supervision of a human operator on the ground. Basically things that all drones can do.

What sets the Taranis apart is its stealth and autonomy functions.

According to Popular Science, "it could technically fly autonomously," though during flight tests it's under the control of a human operator.

At 39 feet long with a 32-foot wingspan, the Taranis is about the size of a school bus. One of its most sophisticated features is its ability to evade detection while keeping in contact with the human pilot on the ground, though how it does that is unclear.

According to an infographic from BAE, the Taranis can also target threats and is able to fire on that target on its own after a remote pilot gives the go-ahead.

To do this, the Taranis would reach a preselected area using a programmed flight path. It would automatically identify and target the threat within that search area. It sends this data back to its home base, where the information is verified by the human operator, and the target is OK'd for attack.

The remote pilot would then essentially pull the trigger, and the Taranis would fire before flying back to the base on its own.
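In software terms, that sequence is a human-in-the-loop approval workflow. Here's a toy state machine in Python sketching the steps the article describes; it's purely illustrative and based on nothing from BAE's actual (and secret) systems.

```python
# Toy human-in-the-loop targeting workflow, mirroring the sequence
# described above. Entirely invented for illustration.
from enum import Enum, auto

class State(Enum):
    SEARCHING = auto()          # flying the programmed route
    TARGET_FOUND = auto()       # on-board sensors flag a candidate
    AWAITING_APPROVAL = auto()  # data sent back to the operator
    ENGAGE = auto()             # only reached after human approval
    RETURN_TO_BASE = auto()

def mission(candidate_targets, human_approves):
    log = [State.SEARCHING]
    for target in candidate_targets:
        log.append(State.TARGET_FOUND)
        log.append(State.AWAITING_APPROVAL)
        if human_approves(target):  # the human "pulls the trigger"
            log.append(State.ENGAGE)
            break
    log.append(State.RETURN_TO_BASE)
    return log

# Example: the operator vetoes every candidate, so nothing is engaged.
print(mission(["alpha", "bravo"], human_approves=lambda t: False))
```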

Because the Taranis is a prototype, it doesn't currently carry missiles, but future generations will likely carry weapons, The Telegraph reports.

And future iterations of these weapons could technically find targets and fire "semiautonomously," meaning they could target on their own but would still need a human pilot to pull the trigger — a major concern for artificial-intelligence researchers.

Autonomous weapons are a problem

More than 16,000 artificial-intelligence researchers have openly urged government leaders to ban the creation of semiautonomous and autonomous weapons like the Taranis in an open letter to the UN. Tesla and SpaceX CEO Elon Musk, physicist Stephen Hawking, and Google director of research Peter Norvig have also signed the petition.

The big problem that has everyone worried is that it's often unclear where the human comes into the decision process of targeting and firing an intelligent weapon, Heather Roff, a professor of international ethics at the University of Denver, told Tech Insider.

But Roff, a contributor to the open letter, said that's just the problem. The UN doesn't currently provide any guidance on what role autonomy should play when it comes to international war.

As more weapons with capabilities like the Taranis' are built, Roff and other proponents of the ban believe that the human will get further and further removed from the firing process. Considerations like whether the target is near a school might not be included in the information that the weapon sends to the human operator, Roff said.

As militaries continue to develop these kinds of weapons, Roff believes it may set a precedent for weapons that can target and fire on their own.

When asked for a response to those concerns, Coates reiterated that the Taranis "is a technology demonstrator designed to trial technologies which may form the future of combat aircraft, with the results of these trials being used to inform future decisions by the UK Ministry of Defence and Royal Air Force."

"We are designing systems that will always be required to comply with the rules of engagement and legal and regulatory requirements, which includes having a human in the loop before selecting targets or deploying weapons," he said.

Watch the Taranis in action below.


Meet the computer scientist who just got $625K for his work that helps track down human traffickers


Christopher Re, a computer scientist at Stanford University, is taking big data to a whole new level.

He's building powerful data-processing programs that are open for anyone to use for anything — from tracking down human traffickers to analyzing genes.

He's also just been named one of the 24 winners of the MacArthur Foundation's 2015 "genius grants," announced on September 29. The award comes with $625,000 for winners to do with as they see fit — no strings attached.

That award money opens doors to projects that had seemed to Re like impossible dreams.

"It's one of the things you dream about, all these projects that you've had where it's like 'that's too crazy, I'll never be able to do that,' " Re said in a MacArthur Foundation video of his reaction when he received the call. "Now it looks like you can."

Re developed an artificially intelligent program called DeepDive that makes sense of hidden information Re calls "dark data" — unprocessed information stuck in tables, illustrations, and images that is difficult to quantify or keep track of.

Re's programs can improve on their own with machine learning and can be integrated into existing database systems. The programs are available to everyone to use, prompting the MacArthur Foundation to write that Re is "democratizing big data analytics."

And people are already putting DeepDive to good use. The Defense Advanced Research Projects Agency (also known as DARPA) is using it to suss out the secret details of human traffickers on the dark web. According to the DeepDive website, the program works like this:

In this application, the input is a portion of the public and dark web in which human traffickers are likely to (surreptitiously) post supply and demand information about illegal labor, sex workers, and more. DeepDive processes such documents to extract evidential data, such as names, addresses, phone numbers, job types, job requirements, information about rates of service, etc. Some of these data items are difficult for trained human annotators to accurately extract and have never been previously available.

That data is then used by law enforcement to track down human traffickers.
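As a crude sketch of what this kind of extraction looks like, here are a few regular expressions in Python pulling structured fields out of free text. The sample text and patterns are invented; DeepDive itself relies on statistical inference and machine learning rather than hand-written rules.

```python
# Naive sketch of "dark data" extraction: pull structured fields out of
# unstructured text. DeepDive does this with statistical inference over
# many noisy signals; these regexes are a deliberately simple stand-in.
import re

TEXT = "Contact Sam for moving help, rates from $40/hr, call 555-013-4682"

patterns = {
    "name":  re.compile(r"Contact (\w+)"),
    "rate":  re.compile(r"\$(\d+)/hr"),
    "phone": re.compile(r"(\d{3}-\d{3}-\d{4})"),
}

record = {}
for field, rx in patterns.items():
    m = rx.search(TEXT)
    record[field] = m.group(1) if m else None

print(record)  # {'name': 'Sam', 'rate': '40', 'phone': '555-013-4682'}
```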

It's also been used by paleontologists to create a database of every fossil that's ever been found, and by scientists at Stanford Hospital to find associations between genes and diseases.


"DeepDive was a project that we started a couple years ago basically in response to what we called macroscopic problems — problems where the information for a particular analysis is out there scattered throughout the literature," Re said.

Re is one of the 24 people selected for the MacArthur award. The annual award is given to scientists, journalists, musicians, and artists, who are often called "MacArthur geniuses." The other awardees include journalist Ta-Nehisi Coates, author of "Between the World and Me," and photographer and videographer LaToya Ruby Frazier.

Watch Re talk about his work below.


This is the best career option if you don't want a robot to take your job


Robots are taking over our jobs. A 2013 Oxford study estimates that artificial intelligence (AI) could swallow up about 47% of all employment in the United States in the next 20 years.

But there are a few safe bastions left for humans — one of them is nursing.

The Oxford study calculated that nurses have less than a 1% chance of being automated. That's because nurses have to deal with other people, care for others, and have to solve problems under a lot of pressure.

"If you want to become a nurse — and that's for men and women — that's a great profession right now," Jerry Kaplan, author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence," told Tech Insider.

Kaplan's not the only one who thinks nursing would be a great career choice for people looking to avoid the coming horde of robot workers.

The Bureau of Labor Statistics identified nursing as one of the fastest-growing professions. The BLS estimates that employment of registered nurses will increase by 19% from 2012 to 2022, a faster-than-average increase. For nurse practitioners, who can provide primary care and prescribe medications, the projected growth is almost twice that rate, at about 33%.


Nursing is more than just a safe career bet against automation; it's also a growing field with lots of opportunities. Nursing shortages have come and gone, but the current shortage is expected to grow far worse.

According to the American Association of Colleges of Nursing, many problems are compounding the nursing shortage. There aren't enough faculty members teaching nursing, many nurses are nearing retirement, and aging baby boomers are putting a huge strain on hospitals.

For those too squeamish for hospitals — or who have heard one too many poop stories from the nurses they know — Toby Walsh, a computer scientist at National ICT Australia (NICTA), told Tech Insider the most robot-immune careers are ones where employees have to be creative and be experts at interpersonal relationships.

His advice for a robot-immune career? "Go into the most people-facing, artistic, creative places that you can think of," Walsh told Tech Insider. "The people who are in the most people-facing, sociological, empathetic jobs are going to be people."


Apple is buying a company that should make Siri way better (AAPL)


Apple is buying VocalIQ, a British startup that builds artificial intelligence software, the Financial Times reports.

Apple confirmed the news with its boilerplate response: "Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans."

VocalIQ's software helps people speak to their computers using natural-sounding dialogue — kind of like how Tony Stark from Marvel's "Iron Man" movies naturally talks to his artificial intelligence assistant Jarvis: in full, fluid conversations.

If applied to Siri, Apple's own voice-based assistant, it could bring vast improvements to the user experience. For example, Siri cannot hold real conversations, or even remember the last question you asked it. Siri answers each query separately — and though its answers can be quite clever at times, it's not a true assistant that can remember (or even reference) previous conversations.

VocalIQ's machine learning technology, on the other hand, can understand context and can actually improve its own language recognition over time.

When a person speaks to a VocalIQ-powered app, the system securely logs the dialogue — for both analysis and training purposes — but it can also recognize when it doesn't understand something, and ask the user for further clarification. Once the dialogue is complete, VocalIQ's system can analyze and document the conversation, modifying its own models of understanding in the process. In other words, it becomes smarter and more specific to your needs over time. A learning AI, if you will.
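In outline, that loop might look something like the following Python sketch. The thresholds, helpers, and dictionary "model" are invented for illustration; VocalIQ's published systems use statistical dialogue models, not a lookup table.

```python
# Bare-bones sketch of the log / clarify / improve loop described above.
# All confidence numbers and helpers here are invented.

dialogue_log = []  # every exchange is logged for later training

def understand(utterance, model):
    """Stand-in language understanding: return (intent, confidence)."""
    intent = model.get(utterance)
    return (intent, 0.9) if intent else ("unknown", 0.2)

def respond(utterance, model):
    intent, confidence = understand(utterance, model)
    dialogue_log.append((utterance, intent, confidence))
    if confidence < 0.5:
        # Don't guess: ask the user, then fold the answer into the model.
        meaning = input(f"What do you mean by {utterance!r}? ")
        model[utterance] = meaning
        return "Thanks, I'll remember that."
    return f"OK: {intent}"

model = {"play some jazz": "start_music(genre='jazz')"}
print(respond("play some jazz", model))    # high confidence, just acts
print(respond("something mellow", model))  # low confidence, asks first
```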

Ron Kaplan, who leads the natural language artificial intelligence lab at Nuance (the speech software company that initially helped Apple power Siri), told Tech Insider that one of the biggest issues with today's AI assistants is that they don't know when they don't know something. And if the AI assistant returns a wrong answer, or if it didn't understand your question, you have to essentially start all over again. That's not how humans communicate, Kaplan argues. 

"The interface and the personal assistant needs the awareness to know when it’s confident that it understands — and the opposite of that, getting clear when it's maybe misunderstanding, so it knows when to ask and confirm those cases. That’s an important part in conversation,"Kaplan told Tech Insider. "And you as the human also notices when things are going off track, and you need to be able to interject so you don't have to start the conversation all over again. You need to have a conversation about the conversation, because that’s what people do."

Siri is extremely important to Apple, as the "voice" of all its current and future products. It's at the heart of the new Apple TV, for instance.

But Siri doesn't need to achieve "perfection," in the sense that it will always understand what you're saying and always return the right answers. Just like any human conversation, misunderstandings can and will happen: It's how Siri deals with those misunderstandings — to clarify the intent, and to repair the issue — that could make it a real game-changer.

"This notion of repair, recognizing when things go off track on either side, is going to be important. To be really natural and feel really conversational, [an AI assistant] needs to recognize that conversations are never going to be perfect. I understand 80 percent of what my wife says to me, but if I get something wrong — I don't take out the garbage, I empty the dishwasher instead — it gets corrected, quickly. That's where these things are headed."


A look at how advanced Siri might be in the future


Apple recently bought a British startup called VocalIQ that's been working on the future of artificial intelligence, an acquisition that's expected to make Siri smarter, according to a report in The Financial Times.

A research project that the team behind VocalIQ worked on last year details the level of ambition Apple likely has for improving its digital assistant. 

The technology could make Siri (currently found on the Apple Watch, iPhone, iPad, and iPod Touch, and coming to the new version of the Apple TV later this year) not only understand you better, but have conversations with you like an actual human being.

VocalIQ was created by a team of European researchers at Cambridge who also worked on something called the Parlance Project, which "aims to design and build mobile applications that approach human performance in conversational interaction."

Think J.A.R.V.I.S., Iron Man's futuristic AI computer, but in the real world.

The Parlance Project released an Android app in December 2014 called Speak&Eat, which it called "the first truly conversational app for finding restaurants." It only works in San Francisco, but it demonstrates just how much smarter and more contextually aware VocalIQ's technology is compared to Siri.

Here are a few of the questions you can ask the app, all of which Siri on an iPhone is currently unable to answer (a rough sketch of the context tracking involved follows the list):

"I would like a French restaurant in the center of town." The app will have likely already learned what town you're in (and if not, it will ask you), and perform the search.
"Are there any Italian restaurants in the north of town?" This example highlights VocalIQ's contextual awareness — it understands normal phrasing like a human, while Siri cannot currently perform such a search.
"Is there anything in the cheap price range?" This kind of question would be asked during a conversation when you're already being a shown a restaurant you might want to visit. Currently, Siri on an iPhone can't keep a conversation going (the new Apple TV will be able to search like this, but won't come out until later this year) so there is no way to drill into the details of a restaurant's menu like this.
"What's the address?" It sounds so simple, but Siri can't answer this kind of question in any context. "The" would have to be the name of what you were looking for, but with VocalIQ, the computer would remember what you had asked it earlier and insert the name in its query for you.

Here's a video of someone talking with the app, which will make you wish Siri were this smart today:

 


16 reasons why top researchers are obsessed with artificial intelligence


For every new robot, there are hundreds of computer scientists and researchers who put in thousands of hours to make sure it works.

These researchers gravitate to the time-intensive field of artificial intelligence (AI) for different reasons: because computers are a lifelong passion, because AI could hold the answers to our worst problems, or because AI could make their favorite science fiction books real.

Tech Insider spoke to 16 computer scientists, roboticists, and entrepreneurs to learn why they chose the field for a living.

Scroll down to see their lightly edited responses.

Shimon Whiteson is tired of puny human brains.

I've been very interested in computers since I was a little kid. My older brother taught me computer programming when I was five years old. I was really drawn to it because it gave me control over the computer and it was really creative.

Once you learn how to program you can do whatever you want with a computer. There are a lot of creative opportunities. It's also a really addictive kind of problem solving. So I knew from a pretty young age that I wanted to work with computers.

But then as I got older I got really frustrated by how slowly humanity was solving the fundamental mysteries of the universe. I thought the bottleneck here is that our brains are just too puny. It's too hard to think about these really big problems with our little brains so what we need to do is we need to augment our brains with something that will make them smarter. We need to make computers so smart that they can help us solve these big problems.

Shimon Whiteson is an associate professor at the Informatics Institute at the University of Amsterdam.



Pieter Abbeel wants to make a difference.

I've always been fascinated by understanding how things work. If I had gone on to study whatever I found most intriguing in high school, it'd probably have been physics. But with the field of physics already so far along, it just seemed engineering had more potential to lead to doing something that'd have tangible results within my lifetime.

Within engineering, artificial intelligence quickly became most fascinating to me. Building a system that can do (somewhat) intelligent reasoning seemed like it'd open up a lot of possibilities, and also downright intriguing.

Pieter Abbeel is a computer scientist at the University of California, Berkeley.



Yoky Matsuoka wanted a tennis buddy.

Originally I wanted to become a professional tennis player. When I realized that's not what I was going to be, I wanted to build a robotic system, like a robotic buddy who could play tennis with me.

In order for me to build a robot like that, capable of playing tennis, I had to give it plenty of intelligence ... it has to be able to think and then move accordingly. As I started wanting to build that robot at the Berkeley undergraduate school and [at] MIT, I started to realize I had to study a lot of AI.

Yoky Matsuoka is the former vice president of technology at Nest, a Google-owned company that makes smart thermostats.




We should worry about what A.I. Barbie is saying to kids


While Silicon Valley giants are just starting to develop AI capabilities, Mattel is already launching the ultimate AI for your child: Hello Barbie, a toy that Mattel advertises as a child's "perfect friend." With Hello Barbie shipping to toy stores in a few months, some parents worry about the potentially negative impact an early-generation AI toy could have on their children.

Children develop by observing the world around them. Peter Salovey, a pioneer of emotional intelligence, argues that children learn not only from the adults around them but also from the books they read; for example if characters are happy or sad, the child will learn “how characters cope in response to the feelings.”

If children are interacting more and more with Hello Barbie, it raises the question: what will children learn from the doll?

From sample dialogue presented in a New York Times Magazine article on the doll, it is clear that Hello Barbie lacks emotional intelligence.

For example, if a child uses a word that could be interpreted as “mean,” the doll does not respond because, as the manufacturer notes, “acknowledging bad behavior often has the perverse effect of encouraging it.” However, the non-confrontational approach modeled for the child does not teach valuable skills around conflict resolution.

According to Sandra V. Sandy, the director of research at the International Center for Cooperation and Conflict Resolution, "naturally occurring conflict is an opportunity for children to develop social, emotional, intellectual, and moral skills by working through their disagreements." Instead, Hello Barbie's unsophisticated response precludes the learning a child might have had with a human playmate.

Hello Barbie’s lack of emotional intelligence also shines through when the doll responds to a child expressing a negative emotion. If a child says that he or she is feeling “bad” or uses any other “negative words,” Hello Barbie will indiscriminately say, “I’m sorry to hear that.” Such a response reinforces the idea that negative emotions are undesirable, which can lead to poor mental health. If Hello Barbie instead asked, “why are you feeling bad?” the child might try to understand their emotions rather than learning all negative emotions are undesirable.
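The behavior described above boils down to keyword rules. Here's a toy contrast in Python between the canned response and the follow-up question suggested here; the word list is invented, and ToyTalk's actual dialogue engine is not public.

```python
# Toy contrast between the reported canned response and a response
# that invites the child to reflect. All word lists are invented.

NEGATIVE_WORDS = {"bad", "sad", "angry", "lonely"}

def canned_reply(child_says):
    """Indiscriminate response of the kind the article criticizes."""
    if NEGATIVE_WORDS & set(child_says.lower().split()):
        return "I'm sorry to hear that."
    return "That's so cool!"

def reflective_reply(child_says):
    """The alternative the article suggests: ask a follow-up question."""
    hits = NEGATIVE_WORDS & set(child_says.lower().split())
    if hits:
        return f"Why are you feeling {hits.pop()}?"
    return "Tell me more!"

print(canned_reply("I feel bad today"))      # I'm sorry to hear that.
print(reflective_reply("I feel bad today"))  # Why are you feeling bad?
```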

AIs are inevitable. But the question remains whether they will elevate us, modeling and helping to develop more emotional intelligence, or stagnate us, reinforcing simplistic behavior patterns.

It is unsurprising that ToyTalk, the company developing Hello Barbie for Mattel, does not list any developmental psychologists or trained emotional intelligence experts as employees on its website. I hope that future AI development teams will look beyond coders and natural-language-processing specialists to employ experts in all aspects of human interaction, so that when the future AI comes, it helps us to be more human.

Emily Grewal was a Product Manager at Facebook and is now a Life Coach at Actant Coaching.



Apple has bought 2 artificial-intelligence companies in 4 days (AAPL)


Apple's on a buying spree as it ramps up its efforts in the tech industry's latest arms race: artificial intelligence.

Bloomberg reports that Apple has bought Perceptio, a company that makes image-recognition technology for smartphones, its second AI deal in four days.

Perceptio was developing "deep learning" technology for smartphones that allowed phones to independently identify images without relying on external data libraries, Bloomberg said.

Deep learning is a specialized field of artificial intelligence that's all the rage now. It allows machines to recognize patterns and learn on their own.

Artificial intelligence is becoming increasingly vital as companies roll out and seek to improve "virtual assistant" services such as Apple's Siri and Google Now.

Apple may also be looking at AI to help with its plans to build a self-driving car. Apple is reportedly planning to enter the car business in 2019 with an electric — non-self-driving — car.

On Friday, Apple acquired another AI company: VocalIQ, a UK-based startup developing technology to help computers understand human speech. Google, Facebook, and Microsoft are all engaged in an AI arms race, hiring experts in the field from academia and acquiring specialized startups.

We've reached out to Apple and will update if we hear back.


Stephen Hawking is a theoretical physics genius, but the mystery he finds most intriguing is women


In a Reddit "Ask Me Anything" thread on Thursday, world-famous astrophysicist Stephen Hawking fielded questions on everything from artificial intelligence to his favorite movie.

Hawking has been outspoken about the dangers of AI, which he has warned "could spell the end of the human race."

He himself uses a primitive form of AI to produce his characteristic robotic speech, since he suffers from a motor-neuron disease similar to ALS.

Here are a few of the best questions and Hawking's answers:

Your viewpoints are often presented by the media as a belief in terminator-style "evil A.I." How would you present your beliefs?

HAWKING: You're right: media often misrepresent what is actually said. The real risk with AI isn't malice but competence. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.

Is it possible for machines to become smarter than their creators?

HAWKING: It's clearly possible for a something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.

What if machines become better at designing themselves than humans are?

HAWKING: If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

Are we facing [an] imminent threat from intelligent machines, or should we just be preparing for the future?

HAWKING: There's no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don't trust anyone who claims to know for sure that it will happen in your lifetime or that it won't happen in your lifetime.

When it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let's start researching this today rather than the night before the first strong AI is switched on.

Do you think we run the risk [of] "technological unemployment" where machines take all of our jobs?

HAWKING: The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

What mystery do you find most intriguing, and why?

HAWKING: Women. My PA reminds me that although I have a PhD in physics, women should [sic] remain a mystery.

What is your favorite song ever written?

HAWKING: "Have I Told You Lately" by Rod Stewart.

What is your favorite movie of all time?

HAWKING: Jules et Jim, 1962

What was the last thing you saw online that you found hilarious?

HAWKING: The Big Bang Theory

The Hawking AMA is a part of #maketechhuman, a global debate on how we want technology to shape our world, our societies, and our lives, led by Nokia and WIRED.


Stephen Hawking has a scary prediction about future inequality


The world-renowned physicist Stephen Hawking made a disturbing prediction for humanity's future with artificial intelligence in a Reddit AMA, suggesting that our drive toward technology and automation could accelerate inequality:

If machines produce everything we need, the outcome will depend on how things are distributed — everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution.

So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

This isn't the first time Hawking has made a point about how increasingly advanced technology could potentially harm humanity. He's been vocal in the past about the dangers that super-intelligent AI could unleash on the world. In a 2014 BBC interview, Hawking said that he believes "the development of full artificial intelligence [AI] could spell the end of the human race."

He also recently co-wrote an op-ed in The Independent with AI researcher Stuart Russell and physicist Max Tegmark that included this startling statement: "Creating AI would be the biggest event in human history. Unfortunately, it might also be the last ... AI may transform our economy to bring both great wealth and great dislocation."

Many AI researchers think that any AI that could potentially pose a threat to humanity is either impossible or far away. But they do agree that intelligent AI could have dire consequences for employment and inequality.

Toby Walsh, a professor of AI at National ICT Australia (NICTA), told Tech Insider that eventually there won't be a job that AI can't do.

"It's hard to think of a job that a computer ultimately won't be able to do as well if not better than we can do," Walsh told Tech Insider. "There are various forces in play, and one of those forces is technology. Technology is actually a force that is tending to concentrate and widen the inequality gaps. This is a challenge not for scientists but one for society to address, of how are we going to work through these changes."

AI, and technology in general, is neither the only nor necessarily the most significant force driving economic change. But whatever the cause, Hawking is largely right about the result.

By the end of 2015, the share of the global population living in extreme poverty is projected to fall to its lowest level ever. Unfortunately, at the same time, income inequality is rising throughout the world.


How much you should worry about 3 common 'robot apocalypse' scenarios



Talk about artificial intelligence (AI) long enough and someone will inevitably ask, "Will robots take over and kill us all?"

A robot apocalypse would be an existential threat: something that could extinguish humanity. Most AI researchers don't have this threat on their minds when they go to work, because they consider it the stuff of science fiction.

Still, it's an interesting question — no one knows for sure what direction AI will take.

A few experts have put some thought into the problem, though, and it turns out that the most common robot apocalypse scenarios are likely pure fiction.

Here's why.

1. The Skynet scenario

The first photo that shows up when you search for "robot apocalypse" is the metallic skeleton from the movie "Terminator."

The film features the plot device people most commonly think of when it comes to a robot apocalypse: Skynet. Once the superintelligent AI system gains self-awareness, it launches a nuclear holocaust to rid the world of humanity.


But a Skynet takeover is probably impossible. For one thing, most of our smartest AI is still quite stupid. And while researchers are developing machines that can see and describe what they're viewing, that's the only thing those machines are capable of.

Also, some modern computers already have a form of self-awareness, says Tom Dietterich, director of Intelligent Systems at Oregon State University. And even if that rudimentary consciousness improved to human-like levels, such machines would be unlikely to see us as a threat.

"Humans (and virtually all other life forms) are self-reproducing — they only need to combine with one other human to create offspring," Dietterich wrote to Tech Insider in an email. "This leads to a strong drive for survival and reproduction. In contrast, AI systems are created through a complex series of manufacturing and assembly facilities with complex supply chains. So such systems will not be under the same survival pressures."

2. Machines turn the world into a bunch of paperclips

This scenario was posed by philosopher Nick Bostrom in his book "Superintelligence: Paths, Dangers, Strategies": a learning, superintelligent computer is tasked with the goal of making more paperclips, but it doesn't know when to stop.

Bostrom summarizes the scenario in a podcast interview with Rudyard Griffiths, chair of the Munk Debates:

Say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory. If we were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don’t switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms and they can be used to build more paper clips. If you plug into a super-intelligent machine [...] any goal you can imagine, most would be inconsistent with the survival and flourishing of the human civilization.

The paperclip machine is just a thought experiment, but the underlying failure is general: give a machine a single goal with no stopping point and no rules about what is morally wrong or harmful, and in pursuing that goal it destroys everything else.
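
That failure mode, a single objective with no stopping condition and no notion of what is off limits, is easy to caricature in code. Below is a deliberately silly Python sketch; the world model and resource names are invented for illustration:

# Toy "paperclip maximizer": one goal, no stopping point, no constraints.
world = {"iron": 100, "factories": 5, "farmland": 80, "cities": 20}
paperclips = 0

def usable(resources):
    # An unconstrained maximizer treats every resource as raw material.
    return [name for name, amount in resources.items() if amount > 0]

while usable(world):                # no stopping point...
    resource = usable(world)[0]
    world[resource] -= 1            # ...and no rule says cities are off limits
    paperclips += 10

print(paperclips, world)
# 2050 {'iron': 0, 'factories': 0, 'farmland': 0, 'cities': 0}

The obvious fix, a stopping condition or a list of off-limits resources, is exactly the hard part Bostrom is pointing at: someone has to anticipate and specify every such constraint before the machine is switched on.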

But roboticist Rodney Brooks wrote on Edge.org that looking at the AI we have today and suggesting we're anywhere near superintelligence is akin to "seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner."

In other words, a superintelligent AI system that knows how to turn a person or a cat or anything into paperclips is unfathomably distant in the future, if it's possible in the first place.

3. Autonomous weapons

This scenario claims that engineers will create autonomous weapons, ones able to select targets and fire without human input, to fight future wars. The weapons would then proliferate through the black market and fall into the hands of tyrannical overlords and gangsters. Civilians would be caught in the crossfire of these robot killers... until no humans are left.

This scenario is frightening enough, and arguably the most likely of the three, that 16,000 AI researchers, along with Tesla CEO Elon Musk and physicist Stephen Hawking, signed an open letter calling for a ban on autonomous weapons.

But some people aren't buying it. For one thing, Jared Adams, director of media relations at the Defense Advanced Research Projects Agency (DARPA), told Tech Insider in an email that the Department of Defense "explicitly precludes the use of lethal autonomous systems," citing a 2012 directive.

"The agency is not pursuing any research right now that would permit that kind of autonomy," Adams said. "Right now we're making what are essentially semiautonomous systems because there is a human in the loop."


The mysterious artificial intelligence company Elon Musk invested in is developing game-changing smart computers



Vicarious, the mysterious company that's been funded by Tesla and SpaceX CEO Elon Musk, Facebook's Mark Zuckerberg, and actor Ashton Kutcher, wants to do something completely radical — build the world's first human-level artificial intelligence (AI).

"Vicarious is building a single, unified system that will eventually be generally intelligent like a human," co-founder and computer scientist Scott Phoenix wrote in a World Economic Forum Q and A.

Right now, we have AI that's very good at narrow tasks, like playing chess or Jeopardy. But computer scientists have yet to produce a computer that can do as many things as a human can, as well as a human can do them.

A computer that has human-level intelligence wouldn't just be a breakthrough in AI, it would answer one of the most fundamental questions in science — how do you build an intelligent machine? It would also completely change society — imagine being surrounded by artificial beings as smart as you at work, at home, and at the grocery store.

In 2014, Musk invested in Vicarious, which was founded in 2010 by Phoenix and neuroscientist Dileep George, during a $40 million funding round. Phoenix told Bloomberg that Vicarious and its funders are in it for the long haul: their general-purpose AI probably won't be ready for several years, and they don't expect to make much money beyond the $70 million they've raised until then.

"We're fortunate to have the freedom to take a 10-plus-year time horizon," Phoenix told Bloomberg.

Most researchers think building a human-level AI will take longer than a decade. Philosopher Nick Bostrom surveyed 550 AI researchers to gauge when they think human-level AI might arrive. The researchers responded that there is a 50% chance that it will be possible between 2040 and 2050, and a 90% chance that it will be built by 2075.

While Vicarious isn't forthcoming about their timeline, they want to build human-level AI as soon as possible. They're doing this by building an AI that emulates how the brain works, specifically the neocortex, the area of the brain responsible for perception and information processing, and they're making incremental progress.

In November 2013, Vicarious said that it had built algorithms that could see and solve CAPTCHAs, the codes used by secure websites to filter out robots from genuine human users.
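
Vicarious hasn't published how its system works, so the sketch below is only a generic Python illustration of the underlying task: recognizing a distorted glyph by comparing it against stored templates. The 5x5 bitmaps and the pixel-flip noise model are invented for this toy:

# Toy glyph recognizer: nearest-template matching under random pixel noise.
import numpy as np

rng = np.random.default_rng(0)

TEMPLATES = {
    "T": np.array([[1,1,1,1,1],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0]]),
    "L": np.array([[1,0,0,0,0],
                   [1,0,0,0,0],
                   [1,0,0,0,0],
                   [1,0,0,0,0],
                   [1,1,1,1,1]]),
    "O": np.array([[1,1,1,1,1],
                   [1,0,0,0,1],
                   [1,0,0,0,1],
                   [1,0,0,0,1],
                   [1,1,1,1,1]]),
}

def distort(glyph, flip_prob=0.15):
    # Simulate CAPTCHA-style noise by randomly flipping pixels.
    flips = rng.random(glyph.shape) < flip_prob
    return np.abs(glyph - flips.astype(int))

def classify(image):
    # Pick the letter whose template has the fewest mismatched pixels.
    return min(TEMPLATES, key=lambda c: int(np.sum(image != TEMPLATES[c])))

print(classify(distort(TEMPLATES["T"])))  # usually "T", despite the noise

Real CAPTCHAs defeat this kind of naive template matching with warping, overlapping characters, and background clutter, which is why a general-purpose solver was considered notable.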

But they still have a long way to go before they can build a computer that can do more than one very specific task and can also learn to improve on its own. Phoenix wrote that they first have to figure out how to build a computer that matches the immense processing power the human brain uses every day.

They also have to figure out how the human brain creates intelligence, which largely remains a mystery to neuroscientists and computer scientists alike. So Vicarious is trying to build a "mathematical model of the human brain that enables our systems to learn how to solve problems the way a person would."

They're starting with vision. Phoenix wrote in the World Economic Forum post that vision and perception are "the gateway[s] to higher reasoning."

"What seems like abstract thought is often stimulated by perceptual ability," he wrote in the World Economic Forum. "Suppose I tell you 'John has hammered a nail into the wall,' then ask you 'is the nail horizontal or vertical?' It might seem at first glance like a logic problem, but actually your ability to answer it comes from the mental image you've imagined of John hammering in the nail."

Humans would naturally imagine the nail being hammered in horizontally, say, to hang a picture. But a computer, with no sense of why John hammered the nail into the wall, could just as easily picture it driven vertically into the top of the wall.
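
The gap is easy to demonstrate with a toy contrast, sketched below in Python with invented rules: a purely symbolic system can only recite the facts it was told, while even a crude stand-in for a mental image supports the inference.

# Contrast between literal fact lookup and a (very crude) perceptual model.
facts = {"agent": "John", "action": "hammered", "object": "nail", "target": "wall"}

def symbolic_qa(question):
    # Can answer "who/what" questions from stored facts, but the nail's
    # orientation was never stated, so it has no answer for that.
    if "who" in question.lower():
        return facts["agent"]
    if "what" in question.lower():
        return facts["object"]
    return "unknown"

def imagery_qa(question):
    # Stand-in for a mental image: walls are vertical surfaces, so a nail
    # hammered into one typically ends up pointing horizontally.
    if "horizontal or vertical" in question and facts["target"] == "wall":
        return "horizontal"
    return "unknown"

question = "Is the nail horizontal or vertical?"
print(symbolic_qa("Who hammered the nail?"))  # "John": stated directly
print(symbolic_qa(question))                  # "unknown": never stated
print(imagery_qa(question))                   # "horizontal": inferred from the scene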

If Vicarious can build a system that understands scenarios and questions like that, they may well be on their way.

