
'Machine learning' is a revolution as big as the internet or personal computers


We're in the middle of a historic moment. 

It used to be the case that you had to program a computer so that it knew how to do things. Now computers can learn from experience.

The breakthrough is called "machine learning." It's unimaginably important for understanding where technology is going, and where society is going with it. 

Netflix's movie recommendations, Amazon's product recommendations, Facebook's ability to spot your friends' faces, dating apps matching you with potential dates — these are all early examples of machine learning.

And Google's self-driving car is becoming the classic case study. 

"A self-driving car is not programmed to drive itself," says University of Washington computer scientist Pedro Domingos, author of "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World."

"Nobody actually knows how to program a car to drive," he says. "We know how to drive, but we can’t even explain it to ourselves. The Google car learned by driving millions of miles, and observing people driving." 

That's the key: machine learning allows algorithms to learn through experience, and to do things we don't know how to write programs for.
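To make that concrete, here is a minimal sketch in Python using the scikit-learn library (the tiny spam-filter dataset is invented for illustration): instead of writing if/else rules, you hand the algorithm labeled examples and let it infer the rules itself.

```python
# A toy illustration of machine learning: no hand-written rules.
# The model infers how to separate the two classes from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: messages labeled 1 (spam) or 0 (not spam).
messages = ["win money now", "cheap pills online", "lunch tomorrow?", "meeting at noon"]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

model = MultinomialNB()
model.fit(features, labels)  # the "experience" goes in here, not if/else rules

print(model.predict(vectorizer.transform(["win cheap money"])))  # -> [1] (spam)
```

The same pattern (examples in, behavior out) is what scales up to recommendations, face recognition, and driving.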

Machine learning had a major public breakthrough in March, when Google made artificial intelligence history by creating an algorithm that mastered Go, the ancient Chinese game with more possible board configurations than there are atoms in the universe. Google's AlphaGo program beat Lee Sedol, perhaps the greatest human Go player alive.  

But Google couldn't program an algorithm to conquer Go. It had to create a sophisticated algorithm that could process 80 years' worth of publicly available Go games, and learn what good moves look like from studying them. 

To Domingos, machine learning is as big a breakthrough as personal computers, the internet, or electricity itself.

"There were two stages to the information age," Domingos says. "One stage is where we had to program computers, and the second stage, which is now beginning, is where computers can program themselves by looking at data." 

Perhaps that's why Google's Eric Schmidt says that every big startup over the next five years will have one thing in common: machine learning.



Microsoft's incredible new app helps blind people see the world around them — take a look (MSFT)


Artificial intelligence is helping people do extraordinary things. At Microsoft, it's helping blind people see the world around them like never before.

Last Thursday, Microsoft showed off its Seeing AI app for the first time. It's still under development, but it looks extremely promising.

Using a smartphone camera or a pair of camera-equipped smart glasses, the Seeing AI app can identify things in your environment — people, objects, and even emotions — to provide important context for what's going on around you.

Take a look.


Meet Saqib Shaikh.



Shaikh lost the use of his eyes when he was just seven years old.



Shortly after, Shaikh was introduced to talking computers at a school for the blind. This inspired him to become a programmer.




Amazon quietly acquired a Californian AI startup that can tell what's in your photos (AMZN)


Back in Autumn 2015, Amazon quietly acquired a Californian artificial intelligence startup that specialises in photo-recognition technology, according to a new report from Bloomberg.

The publication bases its story on an unidentified source "familiar with the matter" — as well as the fact that the startup's website is now owned by an Amazon subsidiary.

If you visit the website of Orbeus — the company Amazon has apparently acquired — you're now greeted by a short message saying it "is no longer taking new customers. Thank you very much for your interest and support. But we're up to new/exciting things."

But last year, its website boasted that its "revolutionary image recognition technology helps computers to see like human beings."

Using neural-network AI, its software worked out what was in photos, and was implemented in a consumer-facing app called PhotoTime, as well as an API called ReKognition.

This kind of photo-recognition tech is increasingly finding its way into consumer apps. Google's Photos app automatically detects what's in your photos and categorises them appropriately, and just this week Facebook announced an update to its app that would detect the contents of photos using AI to help blind people use its social network.

Amazon, like Facebook and Google, is putting increasing focus on AI. In March, it held an invite-only conference for the machine learning and robotics community.

Amazon did not immediately respond to a request for comment.


The 6 craziest robots Google has acquired


Google went on a robot shopping spree in 2013.

That year, Google acquired seven robotics companies. There were so many to keep track of that Google spun all of its robot projects into a department called Replicant, which is run under Google X — the branch of Alphabet responsible for "moonshot projects."

We got to see a new Alphabet-owned robot this week that we think is the craziest we've seen yet. So we decided to pause and look at the tech giant's other robots.

Here they are:

We'd be remiss not to start off with SCHAFT, which just showed off its new bipedal robot at the New Economy Summit in Japan.


That long-legged, bipedal robot doesn't have a name yet. But we know it's capable of carrying up to 132 pounds and can tackle uneven terrain. You can read more about it here.



But SCHAFT has created some other wild robots as well, like this one that can drive a car on its own.


This robot is what earned SCHAFT first place in the trials of DARPA's Robotics Challenge in 2013. It was this robot's outstanding performance that encouraged Google to acquire the Tokyo-based company.



Google bought Meka, a company that makes humanoid robots like the Meka M1 Mobile Manipulator that's capable of performing everyday tasks.


Check out those dexterous arms and fingers! 

 




Google and Microsoft are making gigantic artificial brains


Computers have long been good at carrying out assigned tasks but terrible at learning things on their own. 

Thus all the excitement around "neural networks," a breakthrough artificial intelligence technique that mimics the structure of the human brain and allows machines to learn things independently.

Tech giants are using neural networks to do some pretty impressive things. 

Microsoft is using them to make instant translation real for Skype. Google's artificial intelligence learned to play Atari video games and then mastered the ancient game of Go, with its AlphaGo program beating the human champion Lee Sedol 4 to 1.

The first artificial neuron was created in 1943, but it's only been in the last few years that neural networks have taken off.

Neural networks are a part of an artificial intelligence revolution that's as important as the invention of the Internet itself, says University of Washington computer scientist Pedro Domingos, author of "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World." 

With neural networks, AI can learn from experience; a programmer doesn't have to write out prescriptions for how to behave in the code.

"One of these artificial neurons is to a real neuron a little bit like an airplane is to a bird. At a certain level of detail, they’re very different, but the important point is that the do the same job, they both fly," Domingos says. "In the same way a neural network and a brain, they’re very different. One is made of silicon, one if made of cells, but they do the same job, which is to learn from experience." 

Like the human brain, neural networks can learn by association — the Skype translator gets better at translating German to English after it has done German to Chinese.

Beyond driving cars, Google is using neural networks to create surreal electronic paintings. The pattern recognition is so advanced that Google's trippy algorithms can see the silhouette of a tree and turn it into a building or find a leaf and make it look like a bird. Meanwhile, Microsoft's neural networks are better at recognizing images than humans.

These artificial brains are only getting more important: Google chief Eric Schmidt says that they'll be behind every meaningful tech IPO over the next five years.


Facebook is still 'a long way' from broadly releasing M, its own supersmart chatbot (FB)


Prepare for the invasion of the chatbots.

Facebook just released new tools that will let any business build "smart" chatbots that users can interact with and even buy things from while using the Messenger app.

But what about Facebook's own supersmart virtual assistant, M?

The bot, which Facebook first introduced to the world in the fall, got few on-stage mentions during the company's F8 developer conference on Tuesday.

And the third-party chatbots that Facebook now wants outside businesses to create might seem to obviate the need for Facebook to maintain its own bot.

But Facebook told Business Insider that the company has not given up on M, even if the bot is still "a long way" from being broadly released.

Stan Chudnovsky, head of product for messaging at Facebook, assures Business Insider that the company does still foresee M as being its own product.

"The capabilities of M are way beyond the capabilities that you can build with a simple bot," he says. "On M, we're working on automating as many capabilities as possible."

A super-networking alpha bot

Facebook launched a test version of M last fall as a Messenger-based, human-assisted artificial-intelligence bot that could help users do everything — from booking a date with their spouse to buying flights for their vacation.

Facebook is now letting other businesses use some of the same tools that it has created to develop M. That will help companies make their bots smarter. But each of those chatbots will still focus only on functions specific to their business.

"On M, we're trying to focus on everything," he says.

While Facebook is experimenting with "a gazillion different things," Chudnovsky said that M could eventually integrate with other bots, acting as a kind of super networker between them. For example, if M were helping you plan a vacation, it could talk you through dates and destinations, but then shoot you over to an airline's bot to actually close the loop on a purchase.

'We're automating more and more'

Right now, M is still available only to a "few thousand" users in Northern California, and there are a "few dozen" people working behind the scenes. That's the same vague number of so-called trainers that Facebook cited when only a few hundred people were using M.

"We keep increasing the size of the dataset — the rollout is getting broader every day, and all of that's happening while the number of trainers stays the same," Chudnovsky says. "We're not expanding the human presence. From there, you can derive that we're automating more and more."

Despite all the progress, though, don't expect to get your invite to try M anytime soon.

"We have a long way to go to automate as much as we'd like to automate in order to open it up to everyone," he says. "So that's not happening anytime soon."


Here's how Facebook is going to make your photos and videos so much better (FB)


At Facebook's F8 developer conference on Wednesday, the social network's Applied Machine Learning (AML) lab gave an update on how its artificial intelligence is giving people "superpowers."

Going forward, Facebook's going to be applying that same technology to pictures and video, the service's lifeblood. And it has the potential to make Facebook so much better — and give it a leg up on the wildly popular Google Photos, which uses artificial intelligence to help you sort your pictures. 

This is clearly a little ways off. But Facebook points to its translation service as a clear example of how artificial intelligence and machine learning are already getting put to work helping people communicate. And Facebook says that its developers are running 50 times more AI experiments every day than they were a year ago.

Still, Facebook is thinking on a ten-year roadmap. And today's AI experiments are tomorrow's Facebook app improvements.

Here's how Facebook is showing off artificial intelligence.


Currently, Facebook's AI is helping translate status updates, comments, and other posts — learning on the fly from the slang and vernacular that people actually use in conversation, versus the more formal speech that traditional systems can learn, the company says.

Given that Facebook claims to have 800 million non-English-speaking users, that's an important tool.

 




Looking forward, Facebook highlights the forthcoming ability to search images by what's in them. Search for "pictures of hamburgers" and there they are. It's similar to Google Photos. 

To use Facebook's own example, you could simply say "search for a picture of five skis in the snow, with a lake in the background and trees on both sides" and boom, there it is. 

Similarly, Facebook is working on systems that can analyze video in real-time and automatically tell you what's in it. Given its ongoing foray into live streaming video with Facebook Live, that could be an important technology. 




Facebook also says that it's working on something called "talking images," where the AI can understand everything that's in a picture, and how the things in it relate to each other, at a pixel-by-pixel level.

These "talking photos" have potentially huge implications for the blind and visually impaired. Just put your fingers over a photo, and Facebook could, in the future, literally speak out loud to tell you what's in that part of the photo. 




Facebook explains why it cares so much about artificial intelligence (FB)


Why does Facebook care so much about artificial intelligence?

The company gets that question a lot, CTO Mike Schroepfer said on stage at Facebook's F8 conference on Wednesday. 

The key is your News Feed. 

When you scroll through your feed, past photos, article links, or videos, you might not realize how much effort Facebook puts into deciding what kind of content you want to see.

Any time you log on, you only see maybe 15 or 20 stories of the thousand or more that Facebook could have offered you. 

It uses artificial intelligence to figure out the content of each post, which then influences what its algorithm will serve you (for example, figuring out that a photo someone shares is of a baby). 

"We do it for billions of people a day, billions of times a day, with billions of pieces of content a day," he said.  

In total, there are 6 million predictions per second across Facebook's AI platform.
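Facebook hasn't published its ranking code, but the shape of the problem is standard: score every candidate story with a learned model, then keep the top handful. Here is a hypothetical sketch in Python; the feature names and the predict_score method are invented for illustration, not Facebook's system.

```python
# Hypothetical feed ranking: score ~1,000 candidate stories, keep ~15.
def rank_feed(candidate_stories, model, top_n=15):
    scored = []
    for story in candidate_stories:
        # Features a learned model might consume; names are invented.
        features = [
            story["predicted_is_baby_photo"],  # content understanding, as in the example above
            story["poster_affinity"],          # how often you interact with the poster
            story["freshness_hours"],          # how recent the post is
        ]
        scored.append((model.predict_score(features), story))  # invented model API
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [story for _, story in scored[:top_n]]
```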

Facebook needs you to keep coming back so it can sell ads, and to keep you coming back, it needs to serve you content that you want to see.

 "You spend your time seeing the content you want to see with the people you want to see it with," he said. "That's AI."



'Neural bypass' has given a paralyzed patient the use of his arm — using only his thoughts


Six years ago, Ian Burkhart was an Ohio State University student on a beach vacation with his friends when a diving accident injured his spine, leaving him paralyzed in his arms and legs.

With a new "neural bypass" technology, Burkhart is able to use his hand for functional movements like pouring liquid out of a bottle, swiping a credit card, or playing a guitar video game in rhythm.

As reported in the journal Nature, it's the first time a neural-computer interface has given precise movement to a human limb using only the patient's thoughts. 

The interface, developed by the research organization Battelle, consists of a chip implanted in Burkhart's brain, which is connected via cable to a computer, then to a stimulator box that sends electrical signals to a sleeve on his wrist, which in turn stimulates his muscles.

Previously, researchers have used neural implants to have patients control a robotic arm, but in this case the neural prosthetic lets Burkhart regain partial use of his paralyzed limb — no bulky robotic attachment required.

Burkhart first had the chip — which reads the electrical signals in his brain — implanted in April 2013. He's been coming into the lab at The Ohio State University Wexner Medical Center two to three times a week since then.


"These new findings are the first demonstrations where it's now possible for the study participant to move individual fingers," says Chad Bouton, the lead technologist in the study. 

"The problem that we started with is the fact that the brain can still generate all the neural activity around movement, but those signals try to descend the spinal chord, to the point where they encounter the injury, and they’re blocked for the most part," Bouton tells Tech Insider. 

Bouton and his team developed software that could learn the brain patterns associated with specific movements, using an artificial intelligence technique called machine learning — similar to how Amazon learns what products to suggest, Netflix learns what movies to recommend, and how Google's self-driving car learns. 

"That machine learning element actually improves itself every couple minutes," Bouton says. "Then the patient or user sees the improvement in his movement, and can then learn and improve those movements over time. The machine and the patient are learning together. After 10 to 15 minutes the performance goes up significantly." 

Burkhart, the patient in the study, says that movement with the neural bypass feels surprisingly natural. He concentrates on making the movement and the machines take care of the rest. As of now, the technology doesn't provide any sensory feedback, though Bouton says he sees that coming in the future. 

It would require FDA approval to take the device outside of the lab, but Burkhart says he gladly would if he could.

"If I could use that in my everyday life it would des crease the amount of assistance I need from other people," Burkhart says. "With the movements I can do today, I would take the system home in a heartbeat if they offered it."


This robot startup is trying to win the $5 trillion race to automate corporate jobs


In 2008, Max Yankelevich was in India, visiting the cubicle farms where big banks and insurance companies outsource business processes — the invoices, memos, and other papers pushed to keep organizations humming.

Globally, it's a $27 billion industry.

The employees were smart, says Yankelevich, who was running a cloud computing startup at the time. There was good money in doing this sort of back office work — but it was mind-numbing. You'd fall asleep at your desk in the middle of a loan document.

Companies were trying to figure out the back office work with "the brute force of human power," says Yankelevich, who studied artificial intelligence while getting his MIT computer science degree in the 1990s.

"I started thinking ... there’s gotta be a way where artificial intelligence can be used generically enough to learn some of these things that these people are doing," he says.

That wish became WorkFusion, the startup Yankelevich cofounded in 2010.  

His thinking: If the world's most powerful corporations outsourced their repetitive knowledge work — much like what Yankelevich saw in India — to his team's algorithms rather than overseas, WorkFusion could land a chunk of what McKinsey has described as a $5 to $7 trillion opportunity for the automation of knowledge work.

WorkFusion does this by combining crowdsourcing with artificial intelligence. The company has made deals with freelance labor markets around the world (think gig economy platforms like Amazon Mechanical Turk or Craigslist) to take care of the business processes that corporations want to outsource.

The 35 million people who have worked on the WorkFusion platform generate tons of data about how to do business-related tasks. WorkFusion's algorithms study those tasks using machine learning, the artificial intelligence technique that Google heavyweight Eric Schmidt says will be behind every significant tech IPO over the next five years.

With machine learning, algorithms learn from experience, rather than having to be programmed to execute prescribed actions. It's how self-driving cars recognize pedestrians and how algorithms can learn how to play video games.

WorkFusion's algorithms look over the shoulders of workers, gathering data on what they do, and selecting the best work to add to the data set. Then, over time, a given task becomes less human-executed and more computer-executed. If the algorithm runs into a problem it doesn't understand, it brings in the human worker, much as the "driver" in a driverless car can take the steering wheel and pedals if anything goes awry.
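That handoff (let the model act when it is confident, escalate to a person when it is not, and fold the person's answer back into the training data) is the core mechanic. Here is a simplified sketch in Python; the confidence threshold and the classify/ask APIs are invented for illustration, not WorkFusion's actual interface.

```python
CONFIDENCE_THRESHOLD = 0.9  # invented cutoff

def process_task(task, model, human_queue, training_examples):
    label, confidence = model.classify(task)      # hypothetical model API
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                              # machine handles the task
    # Model is unsure: hand the task to a human worker, like a driver
    # grabbing the wheel, then keep their answer as new training data.
    label = human_queue.ask(task)                 # hypothetical queue API
    training_examples.append((task, label))       # the model improves over time
    return label
```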

An ecommerce company like Amazon might come to WorkFusion for product catalog cleansing, Yankelevich says. With hundreds of millions of products listed, some of those items might have faulty data that prevents the appropriate result from coming up in search (like a misspelling of i-Pad instead of iPad). That job would be split up between WorkFusion's human freelance labor force and algorithms. 

"Over time, robots take over more and more and automate more and more of that work," he says, with the hope that the people mired in these tasks would be able to tackle more creative work.

WorkFusion isn't the only company automating this kind of work. IBM has its Watson "cognitive computing" initiative, IPsoft is creating a "virtual employee" that can interact with customers in 20 languages, and Nuance is on its way to automating call centers.

In the same way that factory machines took over repetitive physical labor during the Industrial Revolution, algorithmic machines are on their way to taking over repetitive cognitive labor. 

For WorkFusion, it could be the start of something big.

"Because of all the inputs being submitted from different customers, the underlying AI brain is starting to be able to get smarter and smarter and developing new neurons and being able to connect the dots much faster," Yankelevich says. "That happens just because if you service a lot of customers in a lot of different areas, AI can find parallels and eventually be able to be proactive about things."

As WorkFusion's AI gets more and more experience across a range of tasks, it will get better at figuring out how to do various jobs. While Yankelevich is careful to say that it won't be like Skynet from "Terminator" gaining consciousness and taking over the world, there's a real possibility that WorkFusion's AI will gain "a level of self-awareness" — and take over lots of business.


Microsoft's latest AI experiment is refusing to look at photos of Adolf Hitler (MSFT)


Microsoft is taking no chances with its latest artificial intelligence (AI) experiment.

After its last AI chatbot turned into a genocide-advocating, misogynistic, Holocaust-denying racist, the company's latest project — a bot that tells you what's in photos — refuses to even look at photos of Adolf Hitler.

CaptionBot is the latest in a series of periodic releases from Microsoft's AI division to show off its technical prowess in novel ways.

You can upload photos to it, and it will tell you what it thinks is in them using natural language. "I think it's a baseball player holding a bat on a field," it says in response to one example photo.


But the bot appears to have a block on photos of Adolf Hitler. If you upload a photo of the Nazi dictator to the bot, it displays the error message: "I'm not feeling the best right now. Try again soon?"

This error message popped up multiple times when we tried uploading photos of Hitler — and at no point did it appear when we tested other "normal" photos — suggesting there's a deliberate block in place. (Interestingly, it's not the same error message that appears when you upload pornographic content. Then it just says: "I think this may be inappropriate content so I won't show it.")
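Microsoft hasn't explained how the block works, but the observed behavior is consistent with a simple pre-filter that short-circuits captioning when an image trips a check. Here is a purely speculative sketch in Python, reusing the two error messages quoted above; every other name is invented, and this is in no way Microsoft's code.

```python
def caption_image(image, recognizer, denylist_matcher):
    # Speculative pre-filter: all method names here are invented.
    if denylist_matcher.matches(image):          # e.g. known-sensitive figures
        return "I'm not feeling the best right now. Try again soon?"
    if recognizer.is_adult_content(image):       # the separate pornography check
        return "I think this may be inappropriate content so I won't show it."
    return "I think it's " + recognizer.describe(image)
```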

(If you're curious, you can try it for yourself with the photo at the top of this page.)


This caution is likely a response to Microsoft's last AI bot, which was a catastrophic PR fail. In March, it launched "Tay" — a chatbot that responded to users' queries and emulated the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But the experiment went monumentally off the rails when Tay proved a smash hit with racists, trolls, and online troublemakers — who persuaded Tay to use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

For example, here was Tay denying the existence of the Holocaust.


And here's the bot advocating for genocide.


In some — but by no means all — cases, users were able to "trick" Tay into tweeting incredibly racist messages by asking it to repeat them. Here's an example of that.


It would also edit photos users uploaded — but unlike CaptionBot, Tay didn't seem to have any filters in place on what it would edit. It once labelled a photo of Hitler as "swagger since before internet was even a thing."


Microsoft ultimately shut Tay down and deleted some of its most inflammatory tweets after just 24 hours. Research head Peter Lee issued an apology, saying "we are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay."

With CaptionBot, the block appears to affect most of the most "iconic" and recognisable photos of Adolf Hitler. But some other less clear or wider-focused shots still yield results.


Microsoft did not immediately respond to a request for comment.


Someday soon, software will learn your habits and be able to look out for you


If you think chatbots are hot right now — with how they're being used in psychotherapy, turning into racist trolls, and presenting an existential threat to Apple — just wait until they turn into full-fledged personal assistants.

In five years' time, digital personal assistants will be even more important than your smartphone, says University of Washington computer scientist Pedro Domingos, author of "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World."

"What you have right now on your smartphone is dozens of apps," Domingos tells Tech Insider, "with each app doing it's own thing." 

On any given Friday night, you use one app to find a restaurant, another to buy a movie ticket, another to figure out how to get to where you're going, and another to find a date to take out with you. 

"It's incredibly annoying," he says, since the apps "don't talk to each other and you have to learn all these different interfaces." 

The personal assistants of the future, whether they're based on Siri, Cortana, Alexa, or a yet-to-be-named upstart, will take care of that for you. According to Domingos, the assistant will work behind the scenes, juggling these apps so you don't have to, plus the zillion other apps available.

Based on your eating history, for example, it could find you a restaurant that suits your taste that's next to the movie theater with the film you've been talking about seeing, then give the quickest route from your office or home.  

Underlying the personal assistant revolution is machine learning, the artificial intelligence technique that powers Netflix's movie recommendations, Amazon's product recommendations, and Facebook's ability to spot your friends' faces.

Machine learning is a radical departure from traditional computing because it relies on algorithms learning how to behave from data rather than following the prescriptions of a programmer. Domingos says the Google self-driving car is a prime example: nobody knows how to program a car to drive; it learns from human drivers.

Similarly, the next wave of personal assistants will learn your habits — and look out for you. 

 "Part of what your assistant is going to do for you," Domingos says "is if you’ve just gone to a bar and you’re starting to get pretty drunk and you’re probably going to stagger out in 15 minutes, it calls an Uber car so it will be there when you stagger out." 

What a time to be alive.


We have entered the age of the computer chip brain implant

Ian Burkhart can play Guitar Hero with his thoughts.

Though he was left paralyzed from the neck down after a freak accident six years ago, a computer chip in his brain has enabled him to play along with the rhythm of songs.

The process was detailed in a paper published in the journal Nature on April 13.

"Once we decipher the signals in the brain, we can read those thought patterns, and we translate those signals into a language muscles can understand," Chad Bouton, the lead technologist on the project, tells Tech Insider. "We send electrical impulses to the forearm through the skin, so no second surgery is needed. His muscles contract, and then the movement begins." 

Burkhart can also pour liquid from a bottle into a glass, swipe a credit card, and pick up a cell phone. It's the first time that a paralyzed person has been able to do such precise motor control movements with their own hand. 

It's a sign of how brain implants are moving out of science fiction and into reality. 


A 2009 study found that 5.6 million Americans live with paralysis, or about 1 in 50 people. While attempts to repair spinal cord damage have come up short, Burkhart's "neural bypass" — which uses electronics to sidestep the barrier presented by a spinal injury — gives hope that these patients could recover movement.

Bouton, a vice president at The Feinstein Institute for Medical Research, says the success of Burkhart's implant shows how "the sky is the limit" for this direction of inquiry. 

The chip in Burkhart's brain has learned the way that his brain activates when trying to move his hand, and has tuned into those electrical patterns, meaning that his brain can interact directly with a computer. Since so much of human life is dependent on the electrical signaling within our brains, the potential applications of brain implants are huge.

In five or ten years, an improved version of the implant that Burkhart has could provide sensory feedback: a sense of touch and a sense of where the limb is in space. Prosthetics are already getting there.

The chip could also be used in more brain-specific cases. 

Bouton says that brain chips could be used in cases of stroke, helping patients to re-learn the use of their hands. For now, systems like this are restricted to a lab setting: there's a long road of FDA approvals before domestic use is a possibility. 

But as the technology matures, there's the potential for restoring memory loss and enabling the formation of new memories for people with brain damage.

"It's taking what we've learned in deciphering brain signals and learning how to decode those signals and being able to bypass or reroute signals in the nervous system," Bouton says. 

It's a prospect that the Pentagon is investing a reported $80 million into. At a conference last September, the US Defense Advanced Research Projects Agency (DARPA) revealed that implants were able to detect brain activity associated with forming and recalling memories.

The DARPA Restoring Active Memory initiative will build computational models of how memories are formed and understand "how targeted stimulation might be applied to help the brain reestablish an ability to encode new memories following brain injury," says DARPA biotech head Dr. Justin Sanchez.

In other words, a brain implant could help people form new memories, just as Burkhart's chip is helping him play Guitar Hero.


A $30 billion hedge fund had staff program a robot to play air hockey to help make them better at making money


Two Sigma, the $30 billion hedge fund that uses advanced technologies to find investment opportunities, just hosted its annual artificial intelligence competition.

The fund asks its staff to program AI systems and then have them compete with each other for the TS Cup.

This year the game was air hockey. That's right. A hedge fund gave staff time off to build an AI system to play air hockey. 

The competition has a serious side, of course. In hosting it, Two Sigma hopes that staff get to experiment with emerging technologies, which in turn may make them better at their actual job: making money.

"When you give people some time away from their desks to be creative, a lot of innovation can happen," Mark Roth, head of architecture at Two Sigma, explains in a video of the event. 

"When they are working on these activities, they are using programming languages they've never used before, they are using libraries they've never experimented with before."

The competition had a human bracket and a machine bracket, with the two respective champions meeting in the final. You can see what happened next here:

 


Facebook's grand plan to simplify your life is off to a rough start


One week ago, I asked Facebook's Messenger to send me the weather forecast every morning. It has yet to do so.

Messenger is supposed to deliver me the weather through a free service called Poncho, one of the first "chat bots" to live inside Facebook’s behemoth messaging app. Instead of checking the weather through an app like Dark Sky or even Poncho’s own iPhone app, Poncho’s playful Messenger bot is designed to chat with me about the weather like a human being.

Except it's a really undependable, fake human.

Poncho’s bot has not only failed to send me the weather like I asked, but it has so far proven to be the most complicated method of getting the weather imaginable. And Messenger’s other chat bots aren’t any better.

Browsing for clothes to buy in Spring’s shopping bot is a bizarre, convoluted experience involving a multitude of finger taps that leaves me scratching my head, not wanting to pay $150 for a shirt. By the time I’m able to pay for a flower delivery through the 1-800 Flowers bot, I could have placed the same delivery over the phone twice.

Facebook and an increasing number of major tech companies think that these kinds of chat bots are the future of apps and how people will talk to businesses.

"We think that you should just be able to message a business in the same way you message a friend,” Mark Zuckerberg said onstage during Facebook’s developer conference last week. "You should get a quick response. And it shouldn’t take your full attention like a phone call would. And you shouldn’t have to install a new app.”

That’s a fine vision, but the reality is that Messenger’s initial batch of chat bot partners feel half-baked at best and completely pointless at worst. They could make customer support better, help you quickly buy things, and complete menial tasks for you. But that’s going to take a mix of artificial intelligence and thoughtfulness that Messenger’s bots don’t have yet.

The promise of bots


The novelty of chat bots, or talking to companies like actual people, is the latest Big Idea to take Silicon Valley by storm. Companies like Microsoft and Slack, prominent investors, artificial intelligence experts, and countless startups are investing millions of dollars into creating bots and virtual assistants that imitate human interactions.

The gold rush mentality that exists around chat bots is because of an increasingly prevalent theory in the tech industry that normal people are experiencing “app fatigue.” So the theory goes: Our phones have become inundated with countless apps we don’t need and don’t have time for. Research has shown that people download zero new apps per month on average and regularly use only a small handful of apps, most of which are social networks like Facebook and Snapchat.

The promise of bots is that one app like Messenger can not only keep you in contact with friends, but also check the weather in the morning, manage your calendar, get you an Uber ride to work, order a burrito for lunch, and make a reservation for dinner. 


There’s a precedent for this kind of behavior that Facebook clearly wants to tap into. WeChat, a messaging app with hundreds of millions of primarily Chinese users, has already pioneered this one-app-to-rule-them-all approach. People in China use WeChat to communicate with friends, send money to each other, pay rent, hail taxis, and much, much more.

Facebook knows that it can't experience WeChat's success overnight. Even if Messenger bots worked perfectly right now, it'll take time for people to realize why they’re useful.

"Some people are [not going to] be excited about a conversational UI at first," Peter Martinazzi, Messenger's director of product management at Facebook, told Tech Insider. "They’re going to be used to the way they do things. And I think for some of the use cases it’s not going to be something that someone does right away. It’s going to be an over-time kind of behavior."

Martinazzi thinks that travel and shopping services will make the most compelling kinds of bots at first. One of Messenger’s first bot partners is Expedia, although its bot didn’t appear to be functional in Messenger at press time. The app's first airline partner, KLM, lets passengers view their boarding passes, contact customer support, and receive updates to their itinerary in Messenger. Online retailer Everlane has offered customer support in Messenger for over a year, including updates on deliveries and the ability to message human support reps about orders.

AI could unlock the true potential of chat bots


The main problem with Messenger bots and the bot landscape in general right now is a lack of artificial intelligence. It's the same reason Siri still isn't as smart as you'd probably like her to be.

Everlane, for example, offers great, responsive customer support in Messenger. I've used it several times to track or make changes to an order, and I always get a response within seconds. But Everlane relies on humans to offer its customer support, which is costly to scale for even a small fraction of Messenger’s 900 million user base.

AI could give a bot like Everlane's the intelligence of a human with the speed and cost efficiency of a computer. If a bot can learn about you and respond to your questions on the spot without needing pre-programmed answers that are written by humans, it becomes something closer to Iron Man’s J.A.R.V.I.S. AI assistant.

Facebook has been quietly building its own J.A.R.V.I.S. with M, a digital assistant in Messenger that’s currently available to a closed group of beta testers in California. M will eventually be able to complete all kinds of tasks on its own, but right now it relies on a human team to help it field requests and fill in the cracks where AI falls short.

When M is eventually available to Messenger users around the world, it could facilitate interactions with other bots like an AI overlord. Messenger product manager Seth Rosenberg suggested as much during a session on building bots at Facebook's developer conference last week, although he declined to go into specifics.


Facebook is letting Messenger bots tap into the same AI tech that powers M so that other bots can understand natural language on the fly, remember things about you, and hold a conversation with you like a human can. Its initial bot partners didn't have access to that AI before everyone else, which could have to do with why they're so underwhelming.

It’s these potential AI advancements, especially the possibility of M facilitating how bots interact, that could usher Messenger’s bots into the mainstream. But right now the few bots already out are difficult to find and significantly dumber than the apps we already use. 

Messenger bots are too young to call a fad at this point, but until they grow up I’ll continue to get the weather elsewhere.



This 'virtual employee' is proof that the robot takeover is upon us


When you first meet Amelia, you get the impression that she's all business: white Oxford shirt underneath a blazer, blonde hair seemingly pulled back in a ponytail.

"She," in this case, is an avatar created by IPsoft, the global information technology services company. In a demo shown to Tech Insider, she shifts her weight from side to side when waiting for someone to speak, and smiles in between questions. If you tell her you're upset about something, she'll frown in empathy.

If all goes according to plan, Amelia will be the customer service agent of the future, an "employee" who can field customer support questions without needing to bring a human in.

IPsoft solutions manager Benjamin Case tells Tech Insider that these low-level tasks are remarkably consistent across industries. Whether it's banking or insurance or cable, much of customer support is helping people get back into locked accounts, resetting passwords, and checking on the status of documents, like whether the payment on a bill has gone through.

"There are people around the world in care centers doing rote, routine, and mundane tasks," Case says. "They require a small set of subject matter expertise to really get that job done. work reasonably well documented, able to be understood in short order. It’s the low-hanging fruit."

The bot is already being piloted at banking, insurance, and telecom companies in the US and in Europe. IPsoft tells Tech Insider that by the end of 2016, Amelia will have 100 large-scale deployments in client companies, either supporting employees internally or facing customers or suppliers externally.

With Amelia, IPsoft is hoping to get in on what McKinsey called the $5 to $7 trillion opportunity for the automation of rote "knowledge work" tasks like taking customer support calls. Deloitte has estimated that in the US alone, the automation of knowledge work will expand from $1 billion to $50 billion in the next few years. 

Customer service "tasks are fairly narrow and easy to codify, which makes them good candidates for automation," says University of Washington computer scientist Pedro Domingos, author of "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World."

In one of IPsoft's pilot programs, Amelia is being deployed with a large multimedia company that "may or may not" provide Internet services in New York. The company gets over 65,000 requests a month. After six months, Amelia was able to handle 64% of requests successfully. The impact on speed was huge: the average length of a call dropped from 18.2 minutes to 4.5 minutes, and the average time to answer a call dropped from 55 seconds to 2 seconds.

Amelia works through "machine learning," an artificial intelligence technique that Google chief Eric Schmidt says will be behind every meaningful tech IPO over the next five years. With machine learning, developers don't have to program every single task that an algorithm carries out. Instead, the algorithm can study a data set and make its own inferences about what the best decisions would be in a given situation.

With Amelia, companies can feed in the transcripts of their highest quality calls if they have a database, or Amelia can listen in on ongoing calls. This allows Amelia to build up an "episodic memory" of conversations that she can reference in future conversations.

That knowledge of language is paired with her knowledge of business processes. Companies will input a task — like checking for whether you can get home insurance for the kind of home you have — and Amelia can carry it out. 


If Amelia doesn't know what you're talking about — in our demo, Case said that he lived in a cubicle — then the bot will bring in a person to complete the call. The same goes if she detects that you're angry. 

Case says that Amelia's sentiment analysis (the way the bot reads people's emotions from the words they use) is one of the bot's selling points: if customer satisfaction appears to be going well, then Amelia will pitch the customer on a promotion or deal.
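IPsoft hasn't published Amelia's internals, but the routing policy Case describes boils down to a few rules: escalate on confusion or anger, upsell on satisfaction. Here is a hypothetical sketch in Python (the interpret and sentiment methods and the thresholds are all invented for illustration):

```python
def escalate_to_human(utterance):
    # Stub: a real deployment would transfer the session to a live agent.
    return "Let me connect you with one of my colleagues."

def handle_turn(utterance, bot):
    sentiment = bot.sentiment(utterance)          # invented API: -1.0 (angry) to 1.0 (happy)
    understood, reply = bot.interpret(utterance)  # invented API

    if not understood or sentiment < -0.5:        # confusion or anger: a person takes over
        return escalate_to_human(utterance)
    if sentiment > 0.7:                           # satisfaction is high: pitch a promotion
        reply += " By the way, you may qualify for our current promotion."
    return reply
```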

He's also careful to note that Amelia is (and will be) supervised by managers. You don't want a “Tay-like experience happening inside of your call center,” Case says, referencing the Microsoft chatbot that quickly went from bubbly teenager to racist troll thanks to Twitter users' dark impulses and an apparent lack of supervision.

Like other companies in the automation space, Case says that Amelia will be a way to automate as much of these low-level tasks as possible — which lead to high turnover since they're so routine — so that employees can move onto more creative, strategic tasks. 

Nuance, one of IPsoft's competitors, told TI that 10 million people around the world work in call and care centers. In a real way, Amelia and her peers are coming after them. 

"Many [call center employees] will lose their jobs," Domingos, the University of Washington computer scientist, says. "But others will see their jobs transformed for the better, in the sense that the more routine parts of the job will be done by machine learning, and the human workers will be able to focus on the parts that really require a human." 


We just got our first glimpse of what Elon Musk's AI company is working on


Tesla CEO Elon Musk announced OpenAI, a non-profit artificial intelligence research company, way back in December.

We're just now seeing the results of it.

The company released the beta of OpenAI Gym, a toolkit for comparing reinforcement learning algorithms, on Wednesday.

The toolkit allows researchers to test their algorithms by having them play games, control a robot simulation, or complete tasks, OpenAI said in its blog post.

Researchers have been using games to test the strength of their AI for years. We saw that in 1997, when IBM's Deep Blue computer beat world chess champion Garry Kasparov. More recently, Google's AlphaGo beat a world champion at the ancient game of Go for the very first time — something many AI experts, as well as Musk, thought was a decade away from being possible.

Researchers can use the OpenAI Gym to have their AI play games like Atari or Go, control two-dimensional and three-dimensional robot simulations, complete small tasks, and perform computations. The platform will allow researchers to compare their results and work with others.
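For a feel of the toolkit, here is the canonical usage pattern from the Gym beta: a random agent on the bundled CartPole environment. A real reinforcement learning algorithm would replace the random action.

```python
import gym  # pip install gym

env = gym.make("CartPole-v0")  # one of the bundled control tasks

for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random policy; swap in a learned one
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode", episode, "total reward", total_reward)
```

Because every environment exposes the same reset/step interface, results from different algorithms can be compared on equal footing, which is the point of the toolkit.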

OpenAI's co-founder and CTO Greg Brockman told Tech Insider earlier this month that the company is working on two types of AI: reinforcement learning (when machines learn to conquer tasks through repeated trial and error) and unsupervised learning (teaching machines to think like humans).

The OpenAI Gym was initially designed so the company could test its own algorithms, but it was made open source to help speed up reinforcement learning research.

Since it was formed, OpenAI has remained committed to making its projects open source to democratize AI research so everyone can have a slice of the pie, not just big names like Google and Facebook. Open sourcing the projects also allows OpenAI to stick to its mission of advancing "digital intelligence in the way that is most likely to benefit humanity as a whole."

Brockman previously told Tech Insider that the company hopes doing so will prevent the "global warming type of effects or outcome that no human really wants."

"It’s about first saying these systems are becoming capable and a part of our life and we should be thinking about this," he said. "And there’s not much that can be done to think of safety standards for tech doesn't exist yet, but acknowledging that this is going to become important."

Musk has not been shy about his concerns over artificial intelligence turning evil. He's even compared AI to "summoning the demon."

"Our goal is to maximize the probability of things turning out well," Brockman previously told Tech Insider. "So obviously the flipside is making sure whatever issues are minimized and avoided."


Once this breakthrough happens, artificial intelligence will be smarter than humans


There's no way of knowing when the machines will take over, but scientists have a prediction about the breakthrough that would have to occur in order for that to happen: the development of an artificial intelligence (AI) system that rivals our own brains.

There's an interesting reason why such a system would almost certainly overtake human intelligence and precipitate the rise of machines that are smarter than us — not just equally smart.

As director of the SETI (Search for Extraterrestrial Intelligence) Institute, Seth Shostak ends up thinking a lot about AI.

He predicts we'll find AI in the universe before we will be able to find the biological beings that might have created it, since computers and various devices can travel great distances much more easily than living beings (just think of the rovers we've sent to Mars).

Here on Earth, as well as on other planets, Shostak thinks the exponential rise of computers will eventually allow them to outsmart us.

"We're inventing our successors," he said at the Smithsonian's Future Is Here Festival on April 24.

Today, Shostak said, we can build computers that can beat humans at specific tasks (like winning the game Go). The machines can't beat us at everything we do — yet.

"But the assumption is that that will happen in this century. And if it does happen, the first thing you ask that computer is: Design something smarter than you are," he said at the conference. "Very quickly, you have a machine that's smarter than a human. And within 20 years, thanks to this improvement in technology, you have one computer that's smarter than all humans put together."

His idea is that we'll eventually design AI that is as complex and intelligent as a human brain.

Companies like Google and IBM are already developing machines with neural networks that function like our brains do, jumping from thought to thought in a web rather than in a straight line like traditional computers do.

Once we make AI as smart as humans are, we can tell it to make a smarter AI. Then that machine will be smarter than us, and so on.

What we have to decide once we reach this breakthrough, however, is: Should we make a machine that can outsmart us?


A Google exec explained why the company's AI lab is handling millions of British patient records (GOOG)


A Google executive defended the NHS's decision to give Google access to patient records on Thursday.

Last week, New Scientist reported that the NHS, the British national healthcare system, has given Google access to approximately 1.6 million patient records in order to help the internet giant develop an app to monitor possible kidney failure.

Thomas Davies, head of Google Enterprise in Northern and Central Europe, said at the AI Summit conference in London that Google was given access to the data because of "trust."

A member of the audience asked Davies: "Why is Google a good fit to be determining what happens to patients and whether they might be susceptible to certain diseases?" They added: "How did that come about and what exactly is DeepMind doing with those patient records?"

To the surprise of several people in the audience, Davies replied. He said:

That relationship was formed for us to try and help frontline staff. Go and have a read, go and have a look at the DeepMind website. It explains what they’re trying to do.

A lot of the work that they’re doing, and we’re doing as a company, is to try and give people the infrastructure, the computational power, and the intelligence to go and do this sort of data analysis. The key thing is how do you expose that. Most importantly for me, is to expose it with a mobile device. It’s very much around putting data into an application that allows frontline staff to do their job better.

Why Google? I honestly don’t think it’s any different to the discussion I’ve had thousands of times in the past decade. It’s to do with trust. Who do you trust with your data?

Our core business is security. You may not know this … but you know who our head of security is? It’s Sergey Brin. Right. We take this pretty seriously. We have seven services with over a billion users. If we get that wrong it is going to be damaging to us.

Believe me, trust, security, and privacy actually go hand in hand. Privacy, especially in Europe, is a fluid environment. It changes almost daily. All I can say is security is our core business. We’ve been doing this a long time. We’re trying to do things to make things better. That is the underlying principle.

Through the data-sharing agreement with the NHS, first obtained by New Scientist, Google will be able to see, for example, information about people who are HIV-positive as well as details of drug overdoses and abortions, according to New Scientist.

Critics have questioned why Google is sucking up vast amounts of medical data in secret just to build a kidney monitoring app.

The NHS has approximately 1,500 data-sharing agreements in place with non-NHS organisations. Patients are not told about each of these agreements before they are signed "because it's not practical," MailOnline reported.

Daniel Nesbitt, research director of privacy and civil liberties pressure group Big Brother Watch, said in a statement earlier this week: "With more and more information being shared about us it's becoming clear that in many cases members of the public simply don't know who has access to their information.

"All too often we see data being shared without the informed consent or proper understanding of those it will actually affect.

"It's vital that patients are properly informed about any plans to share their personal information."

Dominic King, a senior scientist at Google DeepMind, told the BBC: "Access to timely and relevant clinical data is essential for doctors and nurses looking for signs of patient deterioration. This work focuses on acute kidney injuries that contribute to 40,000 deaths a year in the UK, many of which are preventable.

"The kidney specialists who have led this work are confident that the alerts our system generates will transform outcomes for their patients. For us to generate these alerts it is necessary for us to look at a range of tests taken at different time intervals."


