
A DeepMind investor hinted that they were seriously worried about the potential future of AI when making a joke about the CEO (GOOG)

Artificial intelligence (AI) is being developed at a staggering pace by companies like DeepMind and a number of high-profile scientists have raised concerns about the future of the technology.

But scientists aren't the only ones that fear the worst, according to a long read on stopping the AI apocalypse in Vanity Fair on Sunday.

One of DeepMind's own investors allegedly joked that they should have shot AI guru Demis Hassabis in order to save the human race. It's important to stress that this was a *joke* and the investor obviously didn't mean it. But the joke highlights an interesting point. Some people are genuinely concerned that machines could end up outsmarting humans and doing away with them altogether.

Together with an army of neuroscientists and computer programmers, Hassabis is looking to create forms of superintelligence that can learn and think for themselves.

Some, including Elon Musk and Stephen Hawking, believe that one day these superintelligences could pose a threat to humanity if they decide that humans are no longer necessary. This sci-fi scenario could, however, be avoided if tech companies and governments take the right steps when developing AI. It's also worth noting that superintelligences could find cures for cancer and reduce the world's energy consumption. No one really knows.

Still, one unnamed investor is alleged to have joked that they should have "shot Hassabis on the spot" after a meeting with him. At least, that's what Peter Thiel reportedly told Vanity Fair's Maureen Dowd.

Thiel "told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race," Dowd wrote in her piece.

Hassabis is confident that scientists will develop superintelligence at some point, but he's less clear on the time frame: it could be within the next few decades, or it could take more than 100 years. He's also made it clear that he wants it to happen.

Shane Legg, who cofounded DeepMind (now owned by Google parent company Alphabet), has also admitted he has concerns about advanced forms of technology. He said in an interview in 2014: "I think human extinction will probably occur, and technology will likely play a part in this."

Last October, Oxford philosopher Nick Bostrom said that DeepMind was winning the race to develop human-level AI. The company, which employs approximately 400 people in King's Cross, is perhaps best known for developing an AI agent that defeated the world champion of the ancient Chinese board game Go. However, it's also applying its AI to other areas, including healthcare and energy management.

Once human-level AI is developed, many in the field believe that machines will quickly go on to develop forms of superintelligence.

"I think it partly depends on the architecture that end up delivering human-level AI," said Hassabis earlier this year. "So the kind of neuroscience inspired AI that we seem to be building at the moment, that needs to be trained and have experience and other things to gain knowledge. It may be in the order of a few years, possibly even a decade."

DeepMind did not immediately respond to Business Insider's request for comment.

Samsung's gorgeous new phone repeats some of the same mistakes

Aside from its exploding Note 7 last year, Samsung has been on a roll with unique phone designs and innovative hardware features.

The new Galaxy S8 continues that tradition.

Like its last few predecessors, the S8 beats the iPhone when it comes to design and must-have extras like wireless charging. It's a drop-dead gorgeous device.

But based on my short time spent with the phone last week, the Galaxy S8 also appears to have the same drawbacks as previous Galaxy phones. The software is loaded with Samsung-made extras built on top of Android, resulting in a needlessly bogged-down user interface, even though the stock version of Android is amazing on its own. And now Samsung is adding its own digital assistant, Bixby, on top of the excellent Google Assistant that ships with all the latest Android phones.

It's the same story from Samsung we've seen since 2015: beautiful, powerful hardware running on iffy software. It's not horrible, but it shows the benefits of Apple's control over the iPhone's software and of Google's decision to make its Pixel Android phones.

Bixby

One way around the confusion is supposed to be Bixby, the digital assistant built by Samsung that'll debut on the Galaxy S8. Samsung promises that Bixby will let you use your voice to control everything you normally do on the phone.

But in a controlled demo of an early version of Bixby that Samsung showed me last week, I didn't see much promise. Bixby was slow to respond to commands to adjust brightness and flubbed a few times when asked to beam a video from the phone to a nearby connected Samsung TV, for example. I'm also not convinced that talking to your phone is always better than using the controls on the screen, and I'm definitely not convinced this is the solution to Samsung's confusing user-interface problems.

Samsung says it's working on Bixby behind the scenes so that its servers are ready to go by the time the Galaxy S8 launches on April 21. It's possible the bugs I saw will be ironed out in time for launch.

Still, Bixby will be extremely limited at first and work only with Samsung apps at launch, with more functionality added over time thanks to Samsung's recent acquisition of the artificial-intelligence company Viv. I'm also doubtful that Samsung can rally a significant number of third-party developers to adopt Bixby controls for their apps.

Samsung's hardware and design are ahead, but Apple is about to catch up

Samsung has enjoyed a nice couple of years staying ahead of Apple's iPhone hardware thanks to its bigger screens and svelte designs that improve with each generation. And the Galaxy S8 is the best-looking phone the company has ever made without compromising on key features like water resistance, expandable memory, and wireless charging.

But Samsung's position on top may also be short-lived. Just about every leak or rumor about the next iPhone points to major changes in design and features as Apple gears up to celebrate the device's 10th anniversary, and a lot of the ideas, such as an OLED screen and no home button, sound like what we've seen in Galaxy phones recently.

Apple sounds like it's about to catch up with hardware, and the strength of its iOS ecosystem will give it an opportunity to leapfrog Samsung.

There are a lot of red flags with Samsung's AI assistant in the new Galaxy S8

There's Siri. And Alexa. And Google Assistant. And Cortana.

Now add another one of those digital assistants to the mix: Bixby, the new helper that lives inside Samsung's latest phone, the Galaxy S8.

But out of all the assistants that have launched so far, Bixby is the most curious and the most limited.

Before we dive in though, here's a quick recap of what Bixby is and how it works.

Samsung's goal with Bixby was to create an assistant that can mimic all the functions you're used to performing by tapping on your screen through voice commands. The theory is that phones are too hard to manage, so simply letting users tell their phone what they want to happen will make things a lot easier.

When the Galaxy S8 launches, Bixby will be able to control system functions like brightness, WiFi connections, and so on. It'll also let you control a handful of Samsung's preinstalled apps for basic stuff like reminders and messaging.

Outside of the voice controls, there will be an intelligent camera feature that can identify real-world objects and point you to relevant information like links to purchase stuff on Amazon or cool places to visit near the landmark you just snapped. Finally, there's a new Bixby home screen that provides cards of information that the assistant thinks will be relevant to you, like weather, news updates, and suggested contacts.

But there are also a lot of limitations to Bixby, and it risks confusing users because it ships on a device that already comes with Google Assistant, which is included on all newer Android phones.

Third-party support

The biggest challenge for Bixby will be convincing third-party app developers to add Bixby voice controls to their apps. Based on how developers historically adopt such features, this doesn't seem likely to happen at scale.

Bixby only works on one phone for now, and most developers don't have the bandwidth to add Bixby support for a phone that only a negligible percentage of the entire Android user base will have. We've seen this over and over again with smartphone features unique to just one model. For example, Apple is still having trouble getting apps to add support for 3D Touch on the iPhone and Samsung was never able to convince developers to make widgets for the curved portion of the screen on its Galaxy Edge phones.

And without enough third-party support, Bixby will fail to fulfill its core promise: full control of your phone with just your voice. If most of your apps aren't Bixby-compatible, you're going to find yourself using your phone the old-fashioned way more often than not. That alone would be enough to kill Bixby.

Bugs

I've only had a short time with Bixby so far, and it was a controlled demo given by a Samsung employee. Most of its features won't light up until the phone goes on sale April 21.

That said, Bixby didn't work very well from what I saw. A voice command to raise the phone's brightness took several seconds to register. And it had difficulty when asked to beam a video from the phone to a nearby Samsung TV. The image recognition worked pretty well, but there's no way to gauge how robust and accurate it is until I use it in the real world.

Nothing I saw convinced me that talking to your phone was better or easier than using the screen.

More Android fragmentation

Digital assistants are one of the hottest categories right now, so it's not surprising Samsung is giving its own take a whirl. But it's also falling into the same trap it has before: Android fragmentation.

Last fall, Google introduced the excellent Google Assistant on its own Pixel phones. Now, Google Assistant ships with all Android devices running the newer versions of the operating system. That means your new Galaxy S8 will have two digital helpers battling it out for your attention. It's bad for Google, which is pinning its future on AI and voice controls, and it's bad for you since it causes unnecessary confusion.

Google Assistant will continue to gain third-party support since it'll be on just about every new Android phone moving forward, along with others released in the last few months. That's likely where a lot of the developer attention will go. Bixby will be an afterthought at best.

Promises to improve

Samsung says it's still early days for Bixby. More functionality is coming over time, and it eventually plans to add support from Viv, the AI startup Samsung bought last year from the same people who built Siri. By most accounts, Viv is some very impressive tech. Maybe one day that will fill in a lot of the holes Bixby has.

But based on what Samsung has shown so far, Bixby feels incomplete. The Galaxy S8 will almost certainly be a great phone on its own without another assistant mucking things up.

Google and Mark Zuckerberg's investment fund are backing a $150 million AI institute in Toronto (GOOG)

Google is one of several multinationals backing a new $150 million (£120 million) artificial intelligence (AI) research facility in Canada as it looks to make further breakthroughs in the field.

Launched on Thursday, the Vector Institute is based in the city of Toronto, which already has a reputation as one of the strongest AI hubs in the world, largely as a result of the research efforts at the University of Toronto.

The institute wrote in a press release that it intends to produce more "masters, applied masters, PhDs and post-doctoral graduates in deep learning and machine learning AI than any other institution in the world."

Working with academic institutions, incubators, accelerators, and startups, the institute hopes that its efforts will result in new breakthroughs, jobs and economic growth.

The Province of Ontario has committed $50 million (£40 million) to support the institute but the majority of the funding — $80 million (£64 million) of the $150 million (£120 million) — is coming from the private sector.

Google is one of several "platinum" partners that have pledged to give $5 million (£4 million) to the institute. Other platinum partners include IT consultancy giant Accenture and chip maker Nvidia.

The Chan Zuckerberg Initiative, the investment fund of Facebook CEO Mark Zuckerberg and his wife Priscilla Chan, has also pledged to support the institute with $20,000 (£16,000) a year.

"The opportunities for new discoveries in the field of deep learning are very exciting, and the applications are endless," said Google engineering fellow Geoffrey Hinton, who will serve as Vector's chief scientific advisor, in a statement.

Hinton, a British-born Cambridge University graduate, added: "Now is the time for us to lead the research and shape the future of this field, putting neural network technologies to work in ways that will improve health care, strengthen our economy and unlock new fields of scientific advancement. And with the Vector Institute collaborating with institutes in Montreal and Edmonton we can do that here in Canada."

Ed Clark, chair of the Vector Institute board of directors, added: "The Vector Institute will confirm Canada’s world-leading position in the field of deep learning artificial intelligence.

"Consequently, it will spur economic growth in Canada by attracting talent and investment, supporting scale-up firms and enabling established firms to be best-in-class adopters of artificial intelligence."

Facebook built an internal database of 'revenge porn' pictures to prevent repeat sharing (FB)

Facebook is adding tools on Wednesday to make it easier for users to report so-called "revenge porn" and to automatically prevent the images from being shared again once they have been banned, the company said.

"Revenge porn" refers to the sharing of sexually explicit images on the internet, without the consent of the people depicted in the pictures, in order to extort or humiliate them. The practice disproportionately affects women, who are sometimes targeted by former partners.

Facebook has been sued in the United States and elsewhere by people who said it should have done more to prevent the practice. The company in 2015 made clear that images "shared in revenge" were forbidden, and users have long had the ability to report posts as violating the terms of service.

Beginning on Wednesday, users of the world's largest social network should see an option to report a picture as inappropriate specifically because it is a "nude photo of me," Facebook said in a statement.

The company also said it was launching an automated process to prevent the repeat sharing of banned images. Photo-matching software will keep the pictures off the core Facebook network as well as off its Instagram and Messenger services, it said.

Users who share "revenge porn" may see their accounts disabled, the company said.

Facing criticism, the company last year met representatives from more than 150 women's safety organizations and decided it needed to do more, Antigone Davis, global head of safety at Facebook, said in a phone interview.

A specially trained group of Facebook employees will provide human review of each reported image, Davis said.

The process to prevent repeat sharing requires Facebook to retain the banned pictures in a database, although the images are blurred and only a small number of employees have access to the database, the company said.
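
As a simplified sketch of the general idea behind photo-matching (Facebook's actual system is not public, so the details below are assumptions; real services use perceptual fingerprints that survive resizing and re-encoding rather than exact hashes), a reported image can be blocked from re-upload by storing only a fingerprint of it:

    # Toy illustration of hash-based re-upload blocking; not Facebook's
    # actual photo-matching system. A real system would use a perceptual
    # hash so that resized or re-encoded copies still match.
    import hashlib

    banned_hashes = set()

    def fingerprint(image_bytes: bytes) -> str:
        """Derive a fixed-size fingerprint from raw image bytes."""
        return hashlib.sha256(image_bytes).hexdigest()

    def ban(image_bytes: bytes) -> None:
        """Record a reported image so identical copies can be blocked later."""
        banned_hashes.add(fingerprint(image_bytes))

    def allowed_to_post(image_bytes: bytes) -> bool:
        """Check an upload against the database of banned fingerprints."""
        return fingerprint(image_bytes) not in banned_hashes

    ban(b"reported image bytes")
    print(allowed_to_post(b"reported image bytes"))  # False: exact copy is blocked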

Prosecutors and lawmakers have also sought ways to prevent the spread of "revenge porn," seeking additional penalties for a practice that they said has ruined careers and families and even led to suicide.

REPORT: 1 in 4 people have fantasised about Alexa, Siri, and other AI assistants

The blockbuster sci-fi movie "Her" might not be as farfetched as some people thought.

A study of more than a thousand voice technology users found that 26% of them have had a sexual fantasy about a voice assistant such as Alexa, Siri, or Cortana. What those sexual fantasies were isn't clear, but it's an alarming number of people all the same.

The "Speak Easy" study — based on the responses of over 1,000 UK smartphone owners aged 18+ and 100 Amazon Echo owners — was published on Wednesday by advertising agencies JWT and Mindshare. It's not clear what percentage of respondents were men and what percentage were women.

The study also found that 37% of voice technology users "love their voice assistant so much that they wish it were a real person." Clearly some humans are finding themselves very attached to their voice assistants.

The relationship between humans and voice assistants that are powered by artificial intelligence (AI) is poised to become even stronger in the coming years as Silicon Valley tech giants like Amazon, Google, Apple, and Microsoft spend hundreds of millions making their voice assistants as human-like as possible.

Tech companies have a tendency to focus on female voices and names in their personal assistants, possibly in a bid to make them more appealing to young, geeky men.

Voice assistants are expected to start taking on a more prominent role in people's lives. From managing their diaries to filling their fridges, they're quickly going to turn into "digital butlers," according to the JWT and Mindshare study. Almost a third of the survey's respondents said they are excited by a future where their voice assistants anticipate what they need and take action or make suggestions.

Elizabeth Cherian, UK director at JWT's Innovation Group, said in a statement: "We are on the cusp of a new era in technology where voice is set to become mainstream. Our research shows that 88% of UK smartphone users have used voice technology or would consider doing so in the future.

"To successfully integrate voice into their offerings, brands need to understand how the technology can simplify everyday tasks by adding value and removing friction from their experience. This is not about tech for tech’s sake. Thoughtful and helpful interactions which genuinely enhance the experience will drive engagement and deeper relationships between consumers and brands."

Slack's AI boss explains its secret weapons in the coming chat wars with Microsoft, Facebook, and Google (MSFT, FB, GOOG, GOOGL)

Three years ago, a startup named Slack came out of nowhere to become what's believed to be the fastest-growing business software company ever, hitting a $1 billion valuation within the first 8 months of its existence.

Today, Slack is a $3.8 billion business with over 800 employees, and a growing roster of paying customers that include IBM, Walmart's Jet.com, and, yes, Business Insider. For many companies, chat is the way people are getting things done. Slack wasn't the first company to do chat at work, but it's certainly the most visible. 

And so, tech titans like Microsoft, Google, and Facebook have all launched their own work chat apps, trying to crush Slack before it's too late. Under that pressure, Slack is doubling down on its biggest advantage: Generally speaking, people actually enjoy using it, which isn't often the case with software meant for use in the office.

Enter Noah Weiss, former Googler and Senior VP of Product Management at Foursquare, who now leads Slack's Search, Learning and Intelligence team out of the company's spiffy new New York City offices. It's his team that's going to be making much of the "magic" that keeps users glued to Slack, even as the alternatives proliferate. 

"Slack is a very bizarre enterprise software company," says Weiss. "And I mean 'bizarre' in the best possible way."

'Connective tissue'

Because people are using Slack all workday, every workday, there's an astonishing amount of information that's already flowing through the app. Not just text, but the photos and links and documents that your coworkers are sharing with each other.

"All of that data exhaust is something that we think we can tap into to make Slack itself smarter," says Weiss.

Slack already integrates some of that, with smarter search suggestions. And users can opt-in to receive regular suggestions for chat rooms within their company — whether that room is for a special sales project populated by the people they talk to every day, or for "Game of Thrones" fans.

The data that's feeding the beast is largely coming from outside apps, says Weiss. For example, he says, teams at Slack itself use Google Docs, Microsoft Word, Dropbox Paper, and Quip just for the creation and sharing of documents. Regardless of which app employees use to create a file, they use Slack to share it with coworkers.

"Slack ends up being pretty good connective tissue for this increasingly diverse set of tools," says Weiss. 

Over time, Weiss says, Slack can really start to "learn" things about users based on the things they send around. If you share a lot of spreadsheets with "expenses" in the title, you probably work in accounting. If you post a lot of links about Elon Musk, you're probably into space. And so on.

All of which comes back to helping people love Slack more, explains Weiss. All of this data will help its users sift through the noise that comes with everyday conversation, and find the answers they're looking for, more quickly. Once Slack "understands" the relationships between people and files, it can subtly point you towards the right answers, without requiring you to change the way you work.

"There is tremendous collective knowledge building up in Slack," says Weiss.

Versus Microsoft and Google

Microsoft, in particular, is hard at work on similar principles. In 2016, the company introduced the Microsoft Graph, which, as you may guess from the name, traces the relationships between documents and coworkers the same way that Facebook's famed social graph does for your friends. 

Still, Weiss believes that Slack has a few advantages. First and foremost, it's been doing it longer, and claims a leading 5 million daily active users. That means lots more data flowing in, which means it can release new, smarter, time-saving features faster.

"We kind of created this category; we have the most usage in this category," says Weiss.

Furthermore, Weiss says, Slack is way more focused than Facebook, Google or Microsoft, all of whom support a plethora of messaging apps beyond whatever it is they offer to office workers. For Slack, messaging is literally all they do. Weiss' team has one mission, and it's not powering a search engine or whatnot — it's building a chat app that helps people be more productive.

"When you think about how Slack itself is wired, we are here on this Earth to do this one thing," Weiss says.

Google just launched a new website that can turn your doodles into surprisingly good artwork (GOOG)

It's time for terrible artists everywhere to rejoice — Google wants to make your indecipherable scrawls a thing of the past. 

On Tuesday, the tech giant announced AutoDraw, a web-based tool that will turn your sloppy drawings into recognizable images. AutoDraw uses artificial intelligence to determine what you are trying to draw, and then offers professionally-made images to replace your rough sketch. 

The site uses the same technology found in Quick Draw, a web game that Google released late last year to help an AI program learn to recognize doodles.

AutoDraw is free to use and can be accessed from computers, phones, and tablets. Currently, the system can recognize hundreds of drawings, though Google says it is looking to add more over time.

If you are artistically inclined, there is a submissions page where you can upload your own designs and drawings for use in AutoDraw.

There's a dark secret at the heart of artificial intelligence: no one really understands how it works

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time. 

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
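
To make that layered picture concrete, here is a minimal illustrative sketch in Python: a tiny feed-forward network whose weights are random rather than learned. The four-pixel "image," the layer sizes, and the weights are all invented for illustration; a real deep network has millions of weights tuned by back-propagation.

    # A toy feed-forward network: each layer applies a weighted sum and a
    # nonlinearity, and higher layers combine lower-level signals into more
    # abstract features. Weights here are random, purely for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, weights, biases):
        """One layer of simulated neurons: weighted sum followed by ReLU."""
        return np.maximum(0.0, x @ weights + biases)

    x = np.array([0.2, 0.9, 0.1, 0.7])                    # pretend pixel intensities

    h1 = layer(x, rng.normal(size=(4, 8)), np.zeros(8))   # simple patterns (edges, colors)
    h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))  # more abstract features
    score = h2 @ rng.normal(size=(8, 1)) + np.zeros(1)    # overall output

    print(score)  # e.g. a single "how dog-like is this?" number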

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

How well can we get along with machines that are unpredictable and inscrutable?

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
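
As an illustration of what surfacing "certain keywords" might look like (this is a toy linear model with made-up data, not Guestrin's method or any deployed system), the sketch below trains a small text classifier and reports which words in a new message pushed its score hardest toward the flagged label:

    # Toy example: explain a text classifier's decision by listing the words
    # that contributed most weight toward the "flagged" label. The training
    # data, labels, and message are invented for illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    train_texts = ["transfer the funds tonight", "meeting moved to friday",
                   "wire money to the account", "lunch on friday?"]
    train_labels = [1, 0, 1, 0]  # 1 = flagged, 0 = benign

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)

    def explain(message, top_k=3):
        """Return the words in `message` that push the score toward 'flagged'."""
        tokens = [t for t in vec.build_analyzer()(message) if t in vec.vocabulary_]
        weight = {t: clf.coef_[0][vec.vocabulary_[t]] for t in tokens}
        return sorted(weight, key=weight.get, reverse=True)[:top_k]

    print(explain("please wire the funds to this account"))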

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

How London startup Thread uses artificial intelligence and machine learning to help men buy clothes

Kieran O'Neill, CEO of London fashion startup Thread, is building a new way of shopping for clothes.

His site uses online stylists, as well as artificial intelligence (AI) and machine learning, to create a personalised way to shop.

But that can cause problems when he visits standard clothes shops.

"We went to Liberty for a bit and I went to the men's section," O'Neill said in an interview at Thread's London office. "[I] went to the rack and was browsing and I looked at it and it wasn't my size. I felt this rage, this offence that why would you bother showing it to me if it's not actually in my size?"

That's the problem that Thread is trying to solve. Walk around a clothes shop and chances are only a small proportion of the clothes on sale will both fit you and look good on you (not to mention fit your budget). But if you had a personal shopper with you, they could learn your style and size, taking you straight to the clothes that work best.

Thread is using artificial intelligence to help people buy clothes

Thread, which was founded in London in 2012 by O'Neill, CTO Ben Phillips, and creative director Ben Kucsan, pairs users with stylists who can provide shopping advice through the site.

Sign up to Thread and you'll be asked to provide some photographs of yourself, along with your measurements, what's currently in your wardrobe, and your budget. The company then uses that information and assigns you a virtual stylist who will start suggesting clothes for you to buy.

Obviously, Thread can't afford to hire personal shoppers to learn all they can about every single one of the site's users. So it uses a mixture of artificial intelligence and machine learning to help its stylists out.

"We have a pre-screen step where stylists go through and remove everything from our partners which they don't want to personally endorse," O'Neill explained to Business Insider. 

Thread's stylists then look up what you want to buy, and they suggest individual items as well as full outfits. But after that initial human involvement, Thread uses its algorithms, known as "Thimble" internally, to do some of the heavy lifting.

Once a stylist has decided on an olive green T-shirt, for example, the algorithm looks to find the best olive green T-shirt for the customer. O'Neill said that would be "a really hard problem for a human to do" because of the volume of clothes to sort through.

"So that's a really good place for machine learning where you pull in lots of data from all the different partners you have. We have about 200,000 items from our partners. It's the combination of curation plus AI which has worked really well for us."

Another reason Thread uses AI is that it doesn't forget anything, unlike humans. "If you have a relationship with a stylist here for four years and you mentioned something four years ago that you liked or disliked, it’s likely the stylist would forget," said CTO Ben Phillips. "Whereas a computer never forgets."

A fashion site that just showed everyone exactly what they wanted all the time might not actually be the best idea. It's important to vary suggestions so that new trends develop and people discover new styles. Spotify, for example, proactively suggests new music that you haven't heard before.

Thread does something similar, said Phillips. "Every so often [we] send you something that's a bit not accurate or in an area of your data that we don't really have, so 'here's some skinny jeans.' We wouldn't blindly give those skinny jeans to everyone, it would be people that it would make sense, but maybe not an explicit thing that we think you would like and then we get feedback on that and that improves your personal data."

It wants to add new types of data

Why stop at knowing your budget or size? The more data that Thread has, the better. Phillips said that "weather is super interesting." Thread could learn upcoming weather conditions and use that in the text of the messages it sends to users, Phillips said.

O'Neill has another idea for data to bring into Thread: Your social media accounts. He mentioned Facebook, Pinterest, Instagram, and Twitter by name, speculating that if the data was properly sorted then it could help his site to send better recommendations. "You can maybe learn that someone is really into rock music," he suggested, "and you kind of bring in darker aesthetics into the recommendations."

Thread is also keen to add deep learning to its site so that its algorithms can understand what's actually in photos. "Say you wear baggy trousers," Phillips said. "Maybe you should think about slimming them down or something."

Thread wants to bring back women's clothing but 'there's no set date yet'

Thread used to sell women's clothes as well as men's clothing, but it eventually decided to focus on men's clothing only. "It became very clear after three or four months that it would be impossible to make something that was a breakthrough experience for both genders at once," O'Neill said.

"So we took the really tough decision to do just UK men for a while until we nailed the experience. We picked men because we started the business because we have this pain that we want to solve in our own lives and it would be inauthentic to do women because it was a bigger market, for example."

O'Neill explained that men and women shop in different ways. Women, for example, often shop for clothes for specific events, but men are more likely to make do with what's in their wardrobe. "You would design the experience slightly differently," he said.

"We'll definitely do womenswear, there's no set date yet. It's not something that we're actively working on right now, but I'm super keen to do it. We had at least 400, 500 women who were on the platform when we took them all out. And a lot of them are friends of mine who have bugged me."

Thread would 'love' to have physical stores

O'Neill said that the idea of Thread opening physical stores is "super interesting." "I think there's a really powerful thing about expressing the brand in a physical way that is just not possible online," he said. "I can see us having a few stores in London or in a major city. But I don't see us having like 500 stores in the UK or anything."

"I would love to work on it right now, it's just we're 35 people, there's only so much you can do at once. And our approach is to do a really small number of things, try and do it really well, rather than do 25 things all a bit average."

Google Home is getting a significant update (GOOG)

Google's connected speaker is getting an anticipated update Thursday.

Google Home will now support multiple Google accounts and tell the difference between users' voices. That means you'll be able to get customized answers for every Google user in your house.

It's a small but significant update. One of the big weaknesses of connected speakers like Google Home and Amazon Echo is that it's difficult to switch accounts, so a device the entire household is supposed to share ends up tied to just one person's calendar, email, messages, music playlists, and more.

Google's new update solves the problem.

Here's how it works

After setting up individual accounts using the Google Home companion app for iPhone or Android (update coming soon), the Home speaker will link each account to a user with voice recognition. Then, when you use the wake phrase "OK, Google" and ask a question or give a command, Home will deliver personalized responses for that account.

Google says the voice recognition happens on the device, which should help alleviate concerns that Google is keeping tabs on your voice patterns.

Why this matters

Google Home is an important piece of Google's overall strategy to leverage its expertise in voice control, machine learning, and artificial intelligence. Home is powered by Google Assistant, the new digital helper that can be found on Google's Pixel phones and new Android phones like the Samsung Galaxy S8.

Google CEO Sundar Pichai is especially bullish on the prospects of voice and AI as keys to the future of computing, but hasn't detailed how the company plans to monetize those ambitions as more people move to screen-free computing where they can't see the digital ads the company makes most of its money from.

We sat down with Microsoft's CEO to discuss the past, present and future of the company (MSFT)

Satya Nadella took over as CEO of Microsoft in February of 2014, and in the three years since he has succeeded in turning around what was then a stumbling, aimless company. He recently stopped by Business Insider's Poland office and spoke with us about the past, present and future of Microsoft. 

Krzysztof Majdan, Michał Wasowski, Business Insider Poland: You have redefined the original mission established by Bill Gates: a computer on every desk. How does that translate to the modern digital economy?

Satya Nadella, CEO of Microsoft: One of the major things I think about is: What is the sense of purpose and identity at Microsoft? In fact, I take inspiration from the very foundation and forming of the company, when Bill and Paul created Microsoft and the very first product they built was a BASIC interpreter for the Altair. Of course, a lot of technology has come and gone since the Altair, but what was true then and what is true now is that we create technology so others can create more technology.

What was true then and what is true now, is that we create technology so that others can create more technology

And that’s who we are: a toolmaker, and a platform provider. Our mission of empowering every person and every organization on the planet to achieve more is really a look back to the very creation of Microsoft.

BI: Going back to the roots of Microsoft’s business, how is it relevant to the modern world? It’s been over 40 years — the world has changed a lot.

Nadella: When you ask why it is relevant today, I would say it is even more relevant now than 42 years ago, when Microsoft was formed, because then it was for a bunch of hobbyists working in technology. Today, take Poland for example, every industry, every walk of life, the entire society, is digital — whether you’re in retail, manufacturing, healthcare, education or even the public sector. So even more important for the business of creating technology is that every other organization right here, locally in Poland can create technology.

BI: How did employees react to the mission shift and culture change you’ve made?

Nadella: One of the things I’ve come to realize, and I think all of us at Microsoft have come to realize, is that two things matter most for long-term success. The first is a sense of purpose and mission that is enduring. Technologies will come and go, so you need to be able to both ask and answer the question: What do you do as a company, why do you exist? That’s exactly what is captured in our mission.

The other one is culture. These are the two bookends to me. In fact, I went looking for the right metaphor for the cultural dialog. Putting up a poster in a conference room with some attributes of a new culture never works. You read it once and never remember it again. My inspiration came from a book I had read a couple of years before becoming CEO — “Mindset” by Stanford professor Carol Dweck.

BI: One book affected your business approach that much?

Nadella: I was reading it not in the context of business or work culture, but in the context of my children’s education. The author describes the simple metaphor of kids at school. One of them is a "know-it-all" and the other is a "learn-it-all", and the "learn-it-all" will always do better than the other one, even if the "know-it-all" kid starts with much more innate capability.

Going back to business: If that applies to boys and girls at school, I think it also applies to CEOs, like me, and entire organizations, like Microsoft. We want to be not a "know-it-all" but a "learn-it-all" organization.

BI: What, besides recommending books, can you do as a CEO to empower employees to be part of that culture?

Nadella: This is an interesting question and one of the fundamental issues: What can a leader do to empower people, and at the same time what can you do to empower yourself? I think it is to ascribe more power to others than to ourselves. This is how I have approached my own work at Microsoft from the day I joined in 1992. It’s not as if, from the day I joined Microsoft until the day I became CEO in 2014, I was sitting around thinking that someday I would have to become CEO to change the culture.

I felt that every job I did was the most important job and that I was in control of my own destiny based on what I did. To me, that’s an answer to your question: Create an environment where everyone in the company can feel that they can bring their A-game and be respected for who they are.

For example, diversity and inclusion is a major agenda for us, because first of all, we don’t want to just be successful, we want to really empower every person and organization on the planet. And if that is the goal, you have to look like the planet: have genders represented, have ethnic minorities represented, so we can create products that are really built by people for themselves. That’s a lot of what I think of as "driving culture".

BI: Is it hard to shape that culture? There is a story about how you asked the Skype and Microsoft Research teams to work together on implementing real-time translation in Skype — and they had to do it in three weeks because you wanted to show it to the world at a conference. Quite a deadline.

Nadella: Most people think that culture is something that happens to them, but what is important is that you take responsibility for culture; after all, it’s just a reflection of all of us, of our every behavior and action. It’s shaping Microsoft, it’s an organic thing, not static. So it’s our own ability to recognize a fixed mindset that is most important. Honestly, that’s what really helped us as a company to have a richer dialog around culture, and to invoke the personal philosophy and passion of every Microsoft employee to make this change. Not for Microsoft’s sake, but for their own sake.

So when you ask me about the Skype Translate project, it’s not about me coming in and forcing teams to work together. It’s about being able to inspire the AI team, which is doing amazing work, to build a deep neural net that brings together speech synthesis, machine translation and speech recognition — which is magic. They have done a fantastic job of bringing these three branches of computer science together in one deep neural net.

But this magic could be realized only by bringing it together with Skype data. It’s really their organic understanding of the opportunity to have impact, to solve one of the human challenges present since the beginning of time: How do I transcend language boundaries?

BI: When we’re talking about the company culture, what’s your most important principle when it comes to managing business with huge ambitions?

Nadella: One of the key things in the tech business in particular is that you need to be able to push boundaries. In other words, when you’re a successful company, you have this amazing lock between the idea or concept, your capability and your culture. If you are successful, that means you’ve gotten the idea right, you’ve built the right capability to go after that idea, and your culture reinforces it.

Now, the challenge is that at some point, the concept needs to be replaced by a new idea. Otherwise, how do you renew yourself? And in order to build a new concept, you need to build a new capability and that’s where your culture can get in the way. So one of the key challenges of leadership is to be able to recognize: When is the time to push for new ideas, and how do you build new capability long before you even have a new idea?

BI: How can you build such capability with no idea of how you’re going to use it?

Nadella: Taking our own example, we have amazing silicon capability. Take the FPGA (field-programmable gate array) work, or the holographic processing unit in HoloLens. We hired silicon engineers many, many years ago, so that now we can even dream of creating the assets needed for holographic computing.

That is capability building: having that foresight, forcing yourself to do things that are not easy to do. It’s not about the next day, the next quarter or the next year. It’s one of the fundamental challenges of leadership, and you’ve got to get it right: you can’t be too far ahead, and you can’t be too far behind. Being able to see around those corners is what it’s all about.

BI: How do you approach failure in this context?

Nadella: You embrace it. If you are going to have a risk-taking culture, you can’t really look at every failure as a failure, you’ve got to be able to look at the failure as a learning opportunity.

Some people call it rapid experimentation, but more importantly, we call it "hypothesis testing." Instead of saying "I have an idea," what if you said "I have a new hypothesis, let’s go test it, see if it’s valid, and ask how quickly we can validate it." And if it’s not valid, move on to the next one.

There’s no harm in claiming failure if the hypothesis doesn’t work. To me, being able to come up with new ways of doing things, new ways of framing what is a failure and what is a success, how one achieves success — it’s through a series of failures, a series of hypothesis tests. That’s, in some sense, the real pursuit.

BI: What hypothesis then are you testing for individual consumers right now? What’s the next big thing?

Nadella: What we are excited about is this new category of personal computing. Today, the form factor we use most is the mobile device. That’s the case today, just as the PC was for a long time. The question is: What happens next? What are the new categories?

We were excited to create the 2-in-1 category, which is the fastest-growing among PCs. We are very excited about Surface Studio and what it means to reimagine the desktop computer. We are also very excited about Surface Hub as a computer for meeting rooms, and of course about HoloLens and the whole mixed-reality world. So for me, new forms of computing are what we want to build for consumers. But it is important that, instead of thinking of each one of these as an independent computer, we think of them as forming a fabric of devices for you.

It’s about your mobility, your ability to get work done as an individual or as a team, when you have lots and lots of screens and computers around you. So when we talk about Windows 10, it’s not about a device operating system anymore, it’s an operating system for all of your devices. That’s how we’re trying to not only tackle the innovative challenge of bringing new things to life, but also deal with the social complexity of a lot of devices in your life.

BI: During your keynote, you talked about the “world’s fastest AI supercomputer” at Microsoft. What is that exactly?

Nadella: One of the most fascinating technological breakthroughs in Azure was that we equipped pretty much every node of Azure, which is a million-plus machines, with an FPGA (field-programmable gate array).

The characteristic of FPGAs is that it’s now possible for us to take essentially any AI algorithm and run it at silicon speed rather than at the speed of software. Of course, we already had CPUs and GPUs, and now we also have FPGAs as a way for you to create intelligence.

Sometimes people make a kind of mainframe error when it comes to AI, saying "here are the companies with AI." It’s not about celebrating a few who have AI, it is about democratizing it: providing tools so every developer and every organization can create its own AI solutions.

BI: As this democratization of AI evolves and organizations are empowered through technology, is there still a place for the next big innovation and the next big tech player?

Nadella: Every time you think all the technology that could be created has already been created, all you have to do is look around, and there’s someone new with a new idea.

I can say one thing for sure: that there’s going to be more innovation in our lives than what has happened in the past. And to try to hazard a guess as to who that is and where that is… maybe it’s the next Polish startup?

BI: We hope so. And if it happens, will Cortana congratulate this startup in Polish?

Nadella: She doesn’t know Polish yet, but I’m sure she will in time. If she can translate in real time, it won’t be a challenge to teach her Polish. Which, by the way, is an idea you just gave me — maybe that’s what we should do next?


A British tech unicorn is trying to cure Alzheimer's and ALS with artificial intelligence



There is no cure for Alzheimer's, even though it's one of the most common diseases in older people.

It takes a lot of time and money to produce even the most everyday medicines, let alone a cure for something like Alzheimer's.

According to the Association of the British Pharmaceutical Industry, it costs £1.12 billion and up to 12 years to develop a single drug.

Imagine if artificial intelligence could speed that process up. We might start seeing cures for Alzheimer's and even rarer diseases being developed much faster.

That's what British startup BenevolentAI wants to do. The company uses artificial intelligence (AI) to help researchers identify possible cures more quickly.

Eventually, the company wants to sell AI-developed drugs directly, founder Ken Mulvany told Business Insider in an interview. He predicted BenevolentAI would be selling its own drugs "in the next four years."

BenevolentAI uses artificial intelligence to make connections that humans can't

BenevolentAI CEO Ken Mulvany

Drug discovery is a complicated process, but it begins when researchers try to understand what factors (or "mechanisms") cause a disease.

They then use existing research to hypothesise how a particular compound might affect that mechanism and, in turn, cure the disease. Once they've tested their hypothesis, those compounds may be taken into development.

BenevolentAI's tech comes in at that educated guess stage.

Its AI processes academic literature, studies, and other data about particular diseases, and uses this to help researchers come up with a hypothesis. 

"Today, we have so much information," Mulvany told Business Insider. "90% of the world's information has been produced in the last two years, it's a tremendous amount of data. As humans we cannot assimilate that. There's so much out there, you need a system that can relate all of these things together."

BenevolentAI's technology can identify correlations that a human researcher would never think to look at, Mulvany said. Researchers who aren't necessarily specialists in a particular area can also use Benevolent AI's systems to interrogate a disease, he added.

The company claims its processes mean researchers can come up with new hypotheses that were "previously impossible" because of the sheer amount of data its AI can process.
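
BenevolentAI has not published how its system works, so any illustration is necessarily guesswork. As a rough, hypothetical sketch of the general idea behind literature-based hypothesis generation (not BenevolentAI's actual pipeline), the Python snippet below scans a toy set of abstracts, looks for mentions of compounds and diseases in the same paper, and ranks the pairs for a researcher to review; the abstracts, entity lists, and naive substring matching are all invented for illustration.

from collections import Counter
from itertools import product

# Toy stand-in for a corpus of millions of abstracts (contents invented).
abstracts = [
    "compound_x reduced tau aggregation in models of alzheimer's disease",
    "compound_y showed no effect on motor neuron loss in als models",
    "compound_x modulated microglial inflammation linked to alzheimer's disease",
]

# Hypothetical entity dictionaries; a real system would use trained
# named-entity recognition rather than fixed lists and substring matching.
compounds = {"compound_x", "compound_y"}
diseases = {"alzheimer's disease", "als"}

def cooccurrence_scores(docs):
    """Count how often each (compound, disease) pair is mentioned together."""
    counts = Counter()
    for doc in docs:
        found_compounds = {c for c in compounds if c in doc}
        found_diseases = {d for d in diseases if d in doc}
        for pair in product(found_compounds, found_diseases):
            counts[pair] += 1
    return counts

# Rank candidate links for a human researcher to review.
for (compound, disease), n in cooccurrence_scores(abstracts).most_common():
    print(f"{compound} <-> {disease}: {n} supporting abstracts")

A production system would replace the substring matching with trained entity recognition and relation extraction over millions of papers, but the output, ranked candidate links for humans to vet, has the same shape.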

BenevolentAI has drugs in development

Unlike other AI companies like DeepMind, BenevolentAI doesn't publish academic papers about its technology. But there are signs it's serious.

Mulvany was previously CEO of another drug development company, Proximagen, which he sold to US drugmaker Upsher-Smith for around £223 million.

At Proximagen, he said, it took 10 years to get 15 drug candidates into development. BenevolentAI has produced 24 drug candidates in just four years.

And the startup signed an $800 million (£624 million) deal in 2014 to hand over two Alzheimer's drug targets to an unnamed US company for development, Mulvany said. This means BenevolentAI isn't working on Alzheimer's drugs itself any longer, but it will take a cut of the profits if the drugs are developed and sold.

Mulvany said BenevolentAI would develop and sell drugs for rare diseases itself, but would partner with larger firms on more common illnesses like Alzheimer's. It's already working on drugs for ALS, the neurological disease that became well known after the Ice Bucket Challenge.


Along with SAP, BenevolentAI became one of the first companies to use Nvidia's DGX-1 supercomputer, which helps companies train AI software.

The company told Business Insider it has raised $87.7 million (£68 million) in total from investors including Woodford Investment Management, Lundbeck, Lansdowne Partners, and Upsher Smith, at a valuation of $1.78 billion (£1.4 billion).

Recent filings with Companies House show that Horizons Ventures advisor Bart Swanson sits on the company's board, and list a long roster of shareholders including Bruce Castle, founder of Canadian investor StoneCastle, and Richard Farleigh, a high-profile angel investor. The filings also show a £12 million loss for the 13 months to December 2015, on turnover of £1.3 million.

BenevolentAI's CEO said DeepMind didn't have a business model

Google DeepMind CEO Demis Hassabis

Mulvany sits on government advisory boards about AI ("It's mostly about jobs," he said) along with Demis Hassabis, the cofounder of DeepMind.

Mulvany has strong opinions about DeepMind, its acquisition by Google, and the London tech scene more widely.

"I like them," he said."I like the group. The acquisition has done a lot for London and the UK in general to bring the academic piece into focus. It's easier for us to attract [talent] to London than it would have been if Google hadn't made the acquisition."

In short, DeepMind made AI sexy.

But he added: "With no disrespect to DeepMind, they didn't have a business model. There are few places you can be acquired with no business model, and just [by saying] 'We do cool stuff.'"

He said DeepMind faced a big challenge by trying to "solve general intelligence," whereas BenevolentAI is "a very narrow AI company."

"This is what we're good at, and we're able to apply [our tech] today," he said. "We look forward to what they publish, when they publish it. But I don't know, and they don't know either, how long that's going to take. It could be five, 10, or 50 years."

Mulvany added that DeepMind isn't a rival. "DeepMind does healthcare, we do drug development," he said.

When it comes to talent, BenevolentAI more often competes against Calico, Alphabet's life sciences company focused on aging, and Human Longevity Inc., the genomics company cofounded by biotechnologist Craig Venter.

BenevolentAI isn't going to sell like DeepMind

Mulvany said he doesn't want to sell BenevolentAI, and is looking to expand the company into material sciences.

He said British startups should stop selling out early, and said it was "necessary" to build sustainable companies in the UK.

"It's necessary to do that," he said. "I was very outspoken [at Proximagen] about UK technology ending up in the hands of US companies. And I remember saying, 'I'm not going to sell this company, because I'll end up selling it for far less than it's worth.' I was outspoken publicly, to the government, and then I ended up selling it. I had malaise of the infrastructure I needed to grow the business in this country."

He added: "I'm not going to do it again! There's a real opportunity to build a business in this country, and there's an opportunity for the country to lead this sector, and all the pieces are in place. We've got incredible scientists, incredible mathematicians, we're able to draw all these people into London."


Investors backed an AI startup that puts a doctor on your smartphone with $60 million



UK artificial intelligence (AI) startup Babylon has raised $60 million (£47 million) for its smartphone app which aims to put a doctor in your pocket.

The latest funding round, which comes just over a year after the startup's last fundraise, means that the three-year-old London startup now has a valuation in excess of $200 million (£156 million), according to The Financial Times.

Babylon's app has been downloaded more than a million times, and it allows people in the UK, Ireland, and Rwanda to ask a chatbot a series of questions about their condition without having to visit a GP.

The medical chatbot provides feedback on the patient's symptoms and recommends a paid-for video call with a human doctor when the occasion calls for it. In the UK, one-off calls with a doctor start at £25, while calls with a specialist cost more. Alternatively, Babylon users can pay £5 a month for a subscription to the service. In Rwanda, where Babylon has 450,000 users, people pay 50p for a consultation.

"The new funding will be used to accelerate the development of our technology and expanding geographically," Ali Parsa, founder and CEO of Babylon, told Business Insider. "We are looking at 11 to 12 countries," he added, saying that South East Asia and Africa are regions of focus for Babylon. The company is also in talks with a Middle Eastern government, Parsa said.


Babylon currently employs around 170 people but Parsa said that this number is expected to grow "significantly" by the end of 2017. "There are 60 vacancies [at Babylon] right now," he said.

Babylon's aim is to build the world's most advanced AI platform in healthcare, support medical diagnosis, and predict personalised health outcomes globally.

"Cutting edge artificial intelligence together with ever increasing advances in medicine means that the promise of global good health is nearer than most people realise," said Parsa in a statement.

"Babylon scientists predict that we will shortly be able to diagnose and foresee personal health issues better than doctors, but this is about machines and medics co-operating not competing. Doctors do a lot more than diagnosis: artificial intelligence will be a tool that will allow doctors and health care professionals to become more accessible and affordable for everyone on earth. It will allow them to focus on the things that humans will be best at for a long time to come."


In the UK, Babylon employs around 100 doctors and pays them roughly the same as they'd get paid if they were working for the NHS. Many of them are busy mums and dads who don't want to work full time at a surgery or in a hospital, he said. 

Last January, Babylon raised $25 million (£19.5 million) from a range of investors, including DeepMind cofounders Demis Hassabis and Mustafa Suleyman. That round valued the company at over $100 million (£78 million), according to The Financial Times.

The latest funding round reportedly includes the Sawiris, an Egyptian billionaire business family, as well as several other new investors.

The company teamed up with the NHS in January on a trial project that saw its AI doctor used to power the NHS 111 app, which is available to over a million north London residents. The partnership means that Londoners will be able to type their symptoms into an app instead of calling a human to describe their health problems over a phone. The app will then provide advice to the person on what to do next.


Amazon's new Echo camera is weird and a little creepy — but it hints at Amazon's master plan to rule tech (AMZN)



Today, Amazon unveiled the Echo Look — a $199 voice-controlled camera designed for the fashion-forward. The sleek device listens for your command and quickly takes photos and videos of the outfit you're wearing. It'll even use AI to judge your outfit.

In a lot of ways, it's at least a little creepy. You're basically paying Amazon for a microphone and camera to put in your bedroom. And Amazon confirms that the photos are stored on its computers indefinitely, until you manually delete them.

And of course, with Amazon's fashion algorithms still largely unknown, I'm not sure how much you'll trust Alexa, the name of the Echo's built-in virtual assistant, to dress you in the morning.

Still, it's clear that Amazon knows its niche for this new product: In the era of the professional Instagram influencer, the Echo Look offers buyers a personal fashion photographer and, perhaps, the opportunity to up their fashion game. 

Maybe it'll work, maybe it won't. Maybe it'll be a surprise hit like the original Amazon Echo, maybe it will end up more like the Amazon Fire Phone, a notorious flop. In the end, it doesn't really matter: It's a sign of how Amazon is willing to take any risk and try anything to conquer the next wave of computing, as the smartphone starts its slow, decade-long march to the grave.

The camera is the composer

The new hotness (or at least, a new hotness) in Silicon Valley is the idea that, as Mark Zuckerberg puts it, "the camera is the composer."

Increasingly, we're using our cameras and our voices to work with our devices, relying on them to do the same stuff that you do now with a keyboard, mouse, and/or touch on a PC or phone. If you've ever taken a picture of a product label at the grocery store or of your parking spot to help you remember it later, you're already on the journey to the next big phase of computing.

Now, with the rise of artificial intelligence, cameras are getting way smarter. The Echo Look is a hyper-specialized version of so-called "computer vision" technology: where Snapchat uses it for its famed selfie filters and FaceApp uses it to flip your gender, the Echo Look enlists computer vision to be a fashion coach.

But it's important to also remember that when the Amazon Echo first burst onto the scene, it was sold as a smart speaker, with some basic voice commands — and only later grew into a smart-home-controlling, voice-shopping powerhouse. Echo Look, similarly, is a focused device with a very specific sales pitch, belying grander ambitions. 

Amazon Echo Look

Because once you hook a camera up to the internet, there are all sorts of things you could do. The same vision systems that drive its fashion advice could be used to track your pet through your home, to tell you where in the house you left your keys, or even to act as a security system like Alphabet's Nest Cam.

Just like the Echo opened up the world of voice assistants, the Echo Look could open up a market for in-home cameras all its own. Again, that's so long as you can get past the creepy factor. But the technology is here.

We don't know if Amazon is working on any of this, to be totally clear. What is clear is that Amazon sees some kind of future in an Alexa gadget with a camera, and that it's willing to try weird stuff until it works. 

"The size of your mistakes needs to grow along with" the company, Amazon CEO Jeff Bezos said in 2016, following the flop of the Fire Phone. "If it doesn't, you're not going to be inventing at scale that can actually move the needle."



Google's CEO isn't worried about making money on its most futuristic products yet (GOOG)



Some of Google's biggest efforts in computing are developing better artificial intelligence and voice control. In fact, the company is implementing those efforts into every product it can, from search to YouTube.

The biggest example of this is Google Assistant, the new digital helper that lives inside the Google Home speaker and newer Android phones like the Google Pixel and Samsung Galaxy S8.

But if we start living in a world where we do more and more of our computing through voice, how can Google make money if there aren't any eyeballs on screens to look at the ads it serves?

This has been a budding theme at Google, and one an analyst asked about again during the company's earnings call on Thursday. In short, Google CEO Sundar Pichai doesn't have a clear answer yet. Instead, he's focused on perfecting voice control and the Google Assistant before figuring out a good way to monetize those products.

"We are very focused on the consumer experience now ...I think if you go and create these experiences that work at scale for users, the monetization will follow," Pichai said on the earnings call.

He also brought up successful products from Google's history, saying that at first the company didn't have the answers to monetization for products like YouTube and search.

We did get one hint at how Google may monetize voice, though. In March, some Google Home users heard a promotion for the new "Beauty and the Beast" movie when they asked the speaker for an update. Google removed the promotion after the internet lit up with complaints, and later claimed it wasn't a real ad in the first place.


5 ways to get the latest AI news



“The world's first trillionaires,” Mark Cuban told a SXSW crowd in March, “are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of.”

The famously brash billionaire and Shark Tank star may or may not be right about that, but he’s hardly alone in addressing the artificial intelligence renaissance going on right now (according to IDC, the AI market will grow from $8 billion in 2016 to $47 billion in 2020), and the flood of news and information about it all.

How can you even begin to keep up? Our suggestion: Rely on the natural intelligence of the editors, writers, and AI experts whose newsletters we’re spotlighting here. They’ll help you cut through all the noise — the repetitive headlines and hype-laden press releases that the average Google alert or news app spits out ad nauseam — to focus on what is real, what is cool, and what matters.

1. AI Weekly

The AI Weekly email newsletter bills itself as a “collection of the best news and resources on Artificial Intelligence and Machine Learning.” It’s curated by Plume Labs founder David Lissmyr and features most of the big news in the AI sector every week. A recent summary of a Bloomberg report, for instance, read “Baidu spent $2.9M on AI over 2 years, has 1,300 AI researchers.” The same newsletter linked to an MIT Technology Review story titled “Apple’s AI Director: Here’s how to supercharge Deep Learning.”

Who it’s for: Readers keen on keeping track of how the major tech players are deploying AI.

2. The Visionary

Putting the ART and the INTEL in Artificial Intelligence, The Visionary newsletter skips the overly insider and straightforward takes of many a tech site to focus on a curated mix of the novel and the newsworthy, with a particular emphasis on the computer vision side of AI — you know, the cool stuff like self-driving cars, robots, and augmented reality — and a small dose of pop culture. Expect pithy takes on everything from self-driving race cars and movies made with AI to exclusive features on image recognition and original infographics explaining the connections between videogames and the GPUs that power deep learning.

Who it’s for: Anyone scared off by all the ones and zeroes and math that inevitably come into any discussion of AI, as well as those interested in the more visual aspects of AI (computer vision, augmented reality) and how they connect to our daily lives.

3. Inside AI

Formerly known as Technically Sentient, a cool name we hope sees the light of day in another form down the road, this weekly newsletter is published by Inside, a newsletter publisher known for its eclectic range of deep dives (e.g., Inside Automotive, Inside Streaming). Per its curator Rob May, CEO of Talla, a company that applies AI to chatbots in workplace messaging platforms such as Slack and Microsoft Teams: “We cover interesting AI links around the web, the latest research, the startups that matter, and we mix it in with original commentary on these topics and interviews with industry experts.” May also offers useful “In Plain English” explainers (“Sometimes you will hear neural network engineers discuss ‘Hyperparameters.’ What are they?”), exclusive interviews with AI experts and business leaders, as well as the occasional invite to exclusive events on hot AI topics such as neuromorphic chips.

Who it’s for: Readers who want to know where the heat in AI is from the point of view of an industry insider. As May wrote recently, “I've made 23 angel investments now, 17 of which are in the AI space, and much of my deal flow comes from this newsletter.”

4. Machine Learnings

Machine Learnings has its own mascot — a cute little line drawing of a smiley robot typing at a keyboard — and an overall friendly, accessible vibe, thanks to the sensibility of its writer-curator Sam DeBrule. The San Francisco newsletter-preneur puts a refreshingly accessible, conversational spin on the news he serves up, sticking, for instance, labels like “#Awesome” and “#Not Awesome” on news picks as well as curating more evergreen collections of links to help readers get up to speed on A.I. and machine learning in general. Another nice touch: He serves up “Links from the community” — posts written by members of the 11,000-plus Machine Learnings subscriber base — and in April he started inviting guest experts (e.g., Dennis Mortensen, x.ai CEO) to “share his/her thoughts on how AI will shape the way we work and live.”

Who it’s for: Newbies, experts, and everyone in between.

5. WildML

Curated by computer science engineer Denny Britz, WildML, also known as The Wild Week in AI, kicks off with a “TLDR” (too long didn’t read) summary of its contents. Befitting his position as a Google Brain resident, Britz has a take that’s headier than that of the other newsletters on our list, given its “Code, Project & Data” and “Highlighted Research Papers” sections. That said, if you want to find out what AI developments a Google insider is paying very close attention to — from “DeepMind open sources Sonnet library for Tensorflow” (from the DeepMind blog) to “Mythic raises $8.8 million to put AI on a chip” (via VentureBeat) — WildML is for you.

Who it’s for: Techies, engineers, and academics.

This post is sponsored by GumGum.


Apple reportedly acquired an AI startup focused on 'dark data' for $200 million (AAPL)


Tim Cook

Apple has acquired artificial intelligence startup Lattice Data for $200 million, according to a report in TechCrunch.

The deal closed a few weeks ago and roughly 20 engineers from Lattice have joined Apple, according to the report, which cited an anonymous source. 

Lattice's technology focuses on "dark data," the mass of unorganized information stored in computer networks that is not in a proper format for companies to analyze or tap into. Lattice uses AI to make sense of all that data. 

Apple and Lattice did not immediately return a request for comment. 

AI has become a key technology for future products envisioned by tech companies like Apple, Google, Facebook and Microsoft, which are all racing to build up big artificial intelligence teams.

You can read the full TechCrunch story here.


What to expect from Google's biggest event of the year (GOOG)


Sundar Pichai

Get ready for Google's AI show.

Google I/O, the company's annual developers conference and biggest event of the year, is usually an Android event where we get a look at the latest features for the smartphone operating system. But when I/O kicks off this Wednesday, expect Google's latest artificial intelligence efforts to be what everyone ends up talking about.

If you've been paying attention over the last year or so, you've noticed that Google seems especially passionate about AI, injecting it into everything from search results to chat apps to the new Google Assistant for Android phones and the Google Home speaker. CEO Sundar Pichai has sounded especially bullish on the prospects for AI on recent company earnings calls and public interviews.

That was just the beginning. Internally, Google sees AI as its next major platform after search and Android — and it wants to give developers a way to get in early.

Here's a quick preview of what to expect from Google during the opening keynote at I/O. The event kicks off Wednesday at 10 a.m. Pacific.

Building for AI

Google Home

At last year's I/O, we got our first look at Google Assistant in the Allo messaging app and Google Home speaker. Now that Assistant has had time to mature and grow into other devices like the new Galaxy S8, expect Google to talk about how developers can build for the platform.

This is very similar to the third-party "skills" on Amazon Alexa, which let you do everything from playing games to ordering a pizza from Domino's. Google's ambition is to build an entire platform based on AI and voice control, and Wednesday's keynote will be all about getting developers excited about the prospect.
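
For a sense of what "building for the platform" means in practice: a third-party voice skill or action usually boils down to a web service that receives the assistant's parsed request and returns the text it should speak. The Python sketch below is a generic, hypothetical fulfillment endpoint written with Flask; the route, intent name, and JSON fields are invented for illustration and do not follow Google's or Amazon's actual schemas.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    # The assistant platform POSTs its parsed interpretation of what the user
    # said; the field names here are invented, not any vendor's real schema.
    payload = request.get_json(force=True)
    intent = payload.get("intent", "")
    params = payload.get("parameters", {})

    if intent == "order_pizza":
        size = params.get("size", "medium")
        speech = f"Okay, ordering a {size} pizza. Anything else?"
    else:
        speech = "Sorry, I can't help with that yet."

    # The platform reads this text back to the user on the speaker or phone.
    return jsonify({"speech": speech})

if __name__ == "__main__":
    app.run(port=8080)

The vendor SDKs wrap this exchange in their own request and response formats, but the developer's job, mapping an intent plus parameters to a spoken reply, looks much the same.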

Google is also expected to announce that its Assistant will be coming to the iPhone and other third-party appliances like GE home appliances, Bloomberg reported Tuesday.

The stakes are pretty big. Amazon already has a big head start with Alexa, and the digital assistant is already showing up in a bunch of third-party devices like thermostats and even cars. It also has thousands of third-party skills. Microsoft is also working on expanding its Cortana assistant, and announced connected speakers from Harman Kardon and HP coming later this year.

But to borrow Pichai's favorite phrase, it's still "early days" for the category, and everyone is still trying to figure out the best way to grow their respective AI platforms and eventually turn them into real businesses. For Amazon, it's a bit easier since Alexa encourages you to buy more stuff from the company's massive online store. Google and the others will have to figure out how to stuff ads into screenless devices in a non-intrusive way. Expect to see a clearer look at that vision during the keynote.

Android


Don't think Google forgot about Android, though. Although the company gave an early look at the next version (called Android O for now), it has mostly focused on developer-friendly features so far. We'll probably get a better look at the user-facing features on Wednesday. Android O will likely debut this fall alongside the sequel to Google's Pixel phone.

We might also see an update on Google's mission to let Chromebooks run Android apps, turning them into full PCs. But the early models of these Chromebooks are littered with bugs, and developers haven't modified their apps for a laptop-sized screen yet. There's still a lot of work to be done.

Cars

No, we're not talking self-driving cars, but cars running Android for their infotainment systems and other controls.

On Monday, Google announced that Audi and Volvo will make cars that run on Android so you can run apps like Maps and Google Play Music without plugging your phone into the car like you have to do with Android Auto or Apple's CarPlay. It also features Google Assistant so you can ask the car for directions or give commands while driving. Finally, it can control basic functions like the air conditioning and seat positioning. Expect to see some demos at I/O this week.

Virtual reality and augmented reality


Google's VR platform Daydream debuted at last year's I/O, and has since launched on the Google Pixel and a few other devices. The company also bought VR gaming studio Owlchemy Labs, a sign that it wants to start building more VR content in house. Expect an update on how Daydream is going.

Other stuff

Since Google I/O is a developers conference, there will be a lot of wonky developer talk on top of all the flashy consumer stuff. We won't bore you with that.

You can also expect some updates on Google's chat apps Allo and Duo, Instant Apps (which let you "stream" a snippet of an app when you click a related link in search), and maybe a demo from one of Google's sister companies like Nest or Verily.

Flops

Google has a long history of announcing things at I/O that either fail or that the company never follows through on. Google TV. Google Glass. The Nexus Q. The list goes on and on. If you see something from Google on Wednesday that looks too good to be true, it just might be.


Elon Musk's $1 billion AI startup has developed a system that trains robots in VR


Elon Musk

OpenAI, the artificial intelligence research company set up by Elon Musk, has come up with a new method for teaching robots — giving them a demo in virtual reality.

The non-profit, which is funded to the tune of $1 billion, trained a self-learning algorithm to complete a task after a human demonstrated it once in virtual reality.

In this case, the task was stacking coloured blocks.

The team got a programmed robot to reproduce the behaviour shown during the demonstration in the virtual environment.

"We've developed and deployed a new algorithm, one-shot imitation learning, allowing a human to communicate how to do a new task by performing it in VR," OpenAI wrote in a blog post on Tuesday.

OpenAI VR

The company explained that the algorithm is powered by two neural networks, computer systems loosely modeled on the human brain. One is a vision network and the other is an imitation network.

The vision network is trained with hundreds of thousands of simulated images with different combinations of lighting, textures, and objects, while the imitation network "observes a demonstration, processes it to infer the intent of the task, and then accomplishes the intent starting from another starting configuration."

While the algorithm was successfully taught how to stack blocks on this occasion, OpenAI said the same technique could be applied to other tasks.
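
OpenAI's post describes the system only at a high level, so the following is a toy, hypothetical rendering of the two-network idea rather than the actual one-shot imitation learning model. Written in Python with PyTorch, with invented layer sizes and a simplified conditioning scheme, it shows the shape of the approach: a vision network maps camera images to a state estimate, and an imitation network conditions on a single recorded demonstration to produce the next action.

import torch
import torch.nn as nn

class VisionNet(nn.Module):
    """Maps a camera image to a block-state estimate (toy dimensions)."""
    def __init__(self, state_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, state_dim)

    def forward(self, image):
        features = self.conv(image).flatten(1)
        return self.head(features)

class ImitationNet(nn.Module):
    """Conditions on one demonstration (a state sequence) to emit an action."""
    def __init__(self, state_dim=16, action_dim=4):
        super().__init__()
        self.demo_encoder = nn.GRU(state_dim, 64, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(64 + state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, demo_states, current_state):
        _, demo_embedding = self.demo_encoder(demo_states)  # (1, batch, 64)
        context = demo_embedding.squeeze(0)
        return self.policy(torch.cat([context, current_state], dim=-1))

# One forward pass: a single VR demonstration plus the robot's current camera view.
vision, imitation = VisionNet(), ImitationNet()
demo = torch.randn(1, 50, 16)      # 50 recorded states from one (fake) VR demo
image = torch.randn(1, 3, 64, 64)  # current (fake) camera frame
action = imitation(demo, vision(image))
print(action.shape)                # torch.Size([1, 4])

In the real system both networks are trained on large amounts of simulated data, and the demonstration comes from the human's VR session rather than random tensors.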

