The weird way video games are paving the road to the future of technology (GOOG, NVDA, GOOGL)

Here's something interesting you might not know:

The graphics processors, or GPUs, that make possible the eye-poppingly realistic graphics of games like "Quantum Break" are also really well-suited to powering artificial intelligence and other high-intensity tasks.

The world of high-performance computing measures power in "FLOPS," or "floating point operations per second."

It turns out that as video game graphics have gotten better, the hardware used to produce them is increasingly well-suited to powering the AI future envisioned by companies like Google and Facebook.

"[After] 2007, all the big advances in FLOPS came from gaming video cards designed for high speed real time 3D rendering, and as an incredibly beneficial side effect, they also turn out to be crazily fast at machine learning tasks," wrote Stack Overflow founder Jeff Atwood in a March 2016 blog entry.

In fact, when the Google DeepMind AI won its history-making Go series against Lee Sedol, it was sporting 1,202 CPUs, or traditional processors, and 176 Nvidia GPUs under the hood.

Nvidia and Google are actually partners on artificial intelligence, dating back to the Google Brain image recognition system, as detailed in an Nvidia blog entry. Long story short, Google Brain needed 2,000 CPUs, plus all of the server infrastructure to support them. That's a tall order. But they found that 12 Nvidia GPUs could deliver "the deep-learning performance of 2,000 CPUs."

Which is to say, depending on how the DeepMind team set all of those chips up in the real world, the 176 Nvidia GPUs in DeepMind could well have been as powerful for this specific task as 29,333 regular old computer processors — insanely efficient. 

Video games are the future

What this means for you, the non-AI, non-high-performance-computing expert, is that whenever you buy a new video game console, or upgrade your PC with a new graphics card, you're subsidizing the research and development of Nvidia and the other graphics card companies.

With those advancements, those companies and their customers can apply the technology in all kinds of interesting ways. For Google, that's a great way to boost its AI ambitions. Nvidia itself is pitching an on-board computer for self-driving cars that it claims is as powerful as 150 MacBook Pros.

I recently spoke to Todd Mostak, CEO of MapD, a startup backed by Google Ventures and Nvidia. MapD uses the ludicrous performance of these GPUs to analyze immense amounts of data, like political campaign contributions in a geographic area, and display it on maps or charts in real time, interactively.

Mostak told me that the technology would simply not be possible at the scales it's dealing with if it weren't for the fact that gamers, ever-hungrier for better graphics, were driving massive demand for cheaper, more powerful GPUs. That translates into better performance and better margins for MapD.

"They wouldn't have gotten there if gamers hadn't wanted to game on bigger screens at higher resolutions," Mostak says. "We definitely have people playing Quake to thank for most of the technology."

There's an 'AI backbone' that over 25% of Facebook's engineers are using to develop new products (FB)

Over 25% of Facebook engineers are using a piece of software to help them leverage artificial intelligence (AI) and machine learning (ML), according to a blog post by one of the company's engineers.

FBLearner Flow, as the software is known, is filled with algorithms developed by Facebook's AI/ML experts that can be accessed by more general engineers across the company to build different products.

"FBLearner Flow [is] capable of easily reusing algorithms in different products, scaling to run thousands of simultaneous custom experiments, and managing experiments with ease," wrote Facebook software engineer Jeffrey Dunn, in a blog post on Monday titled "Introducing FBLearner Flow: Facebook's AI backbone."

AI involves creating computers and computer software that are capable of intelligent behaviour, while machine learning can be defined as a field of study that gives computers the potential to learn without being explicitly programmed.

"FBLearner Flow is used by more than 25% of Facebook's engineering team," wrote Dunn. "Since its inception, more than a million models have been trained, and our prediction service has grown to make more than 6 million predictions per second."

It's not clear exactly how many engineers Facebook employs but the social media giant had 12,691 people in full-time employment across the company last December, according to statistics portal Statista.

The FBLearner Flow platform is similar to Microsoft's Azure Machine Learning service and Airbnb's open source Airflow platform, according to VentureBeat, which spoke to Hussein Mehanna, director of Facebook's Core Machine Learning Group.

Facebook has been working on FBLearner Flow since late 2014 and has spoken to LinkedIn, Twitter, and Uber about the system, according to Mehanna, suggesting it could one day be open-sourced.

AI is the most important technology anyone in the world is working on today, according to Dave Coplin, Microsoft's chief envisioning officer, so it's not all that surprising Facebook wants to put the technology into the hands of developers.

Dunn described in his post exactly where machine learning is being applied across Facebook's platform. "When you log in to Facebook, we use the power of machine learning to provide you with unique, personalised experiences," he said. "Machine learning models are part of ranking and personalising News Feed stories, filtering out offensive content, highlighting trending topics, ranking search results, and much more.

"There are numerous other experiences on Facebook that could benefit from machine learning models, but until recently it's been challenging for engineers without a strong machine learning background to take advantage of our ML infrastructure. In late 2014, we set out to redefine machine learning platforms at Facebook from the ground up, and to put state-of-the-art algorithms in AI and ML at the fingertips of every Facebook engineer."

Dunn's post can be read in full here.

Google's newest software is named 'Parsey McParseface' — no, seriously (GOOG, GOOGL)

The spirit of the "Boaty McBoatface" phenomenon lives on at Google. 

Today, Google introduces Parsey McParseface — a free new tool, born from Google's research division to help computers better parse and understand English sentences.

"We were having trouble thinking of a good name, and then someone said, 'We could just call it Parsey McParseface!' So... yup," says a Google spokesperson. 

Parsey McParseface is a piece of a larger framework released today called SyntaxNet, itself a big part of Google's popular home-built TensorFlow software for building artificial intelligence, as explained in a blog entry. With this release, any developer anywhere can download, use, and even start to improve Google's tools in their own software.

One of the biggest problems in artificial intelligence today is that while computer speech recognition may be better than ever, computers still have trouble understanding exactly what we mean. After all, language is complicated: Consider that "Buffalo buffalo Buffalo buffalo buffalo buffalo" is a 100% grammatically correct sentence in American English.

It's an issue that titans like Google, Facebook and Microsoft have thrown themselves into, as artificial intelligence and the ability to talk to a computer like a human continues to become an important part of the future of tech.

Back to school

To understand how Parsey McParseface and SyntaxNet tackle this problem, it may be helpful to flash back to your grade school English classes, where you were taught how to diagram a sentence, identifying verbs, nouns, and subjects. 

Parsey McParseface does those diagrams automatically. Like so:

[Image: Parsey McParseface's diagram of "Alice saw Bob"]

Alice is the subject, Bob is the direct object, "saw" is the verb. Boom.

That's simple enough. But to use Google's own example, things can get messy. Consider a longer but still straightforward sentence like "Alice drove down the street in her car." To us humans, there's no possible way to misinterpret that, because we know how cars work and where they drive.

But if you're an average computer, just following instructions, and you're doing sentence diagrams, it is totally grammatically correct to parse that sentence as saying the street was located in Alice's car. Obviously, that's not right or really even physically possible, but it is correct by the laws of grammar.

"Humans do a remarkable job of dealing with ambiguity, almost to the point where the problem is unnoticeable; the challenge is for computers to do the same," writes Google in a blog entry.

Parsey McParseface uses neural networks, kind of like the one that let Google DeepMind outsmart Go champion Lee Sedol, to scan each sentence and vet each possible reading for "plausibility," as in how likely it is to be the one a human intended. That means a lot of saved time and a mega-boost to efficiency, since the system doesn't have to chase implausible sentence constructions.

It means Parsey McParseface can correctly diagram out and understand longer, more complex sentences, like so:

[Image: a longer, more complex sentence diagrammed by Parsey McParseface]

In Google's own tests, running SyntaxNet and Parsey McParseface against random data drawn from the web, it was about 90% accurate in understanding sentences — a good start, but with lots of room to grow, Google says. For starters, that means going beyond English. But it also needs to teach SyntaxNet to learn more about the real world.

"The major source of errors at this point are examples such as the prepositional phrase attachment ambiguity described above, which require real world knowledge (e.g. that a street is not likely to be located in a car) and deep contextual reasoning," Google writes.

Accenture and IPsoft partner on AI virtual agent business

Consulting firm Accenture has announced a partnership with artificial intelligence software company IPsoft, aimed at spurring the use of AI in the enterprise, ZDNet reports.

The deal will see Accenture integrate IPsoft's AI program, Amelia, into a new business unit called Accenture Amelia. Amelia is similar to other virtual assistants such as Apple’s Siri or Amazon’s Alexa; however, IPsoft’s program is reportedly more expressive and better at showing empathy than other programs.

Accenture will use the platform to offer a suite of new strategy and consulting services. Amelia will act as a virtual agent to businesses within the insurance, banking, and travel industries.  Companies can enlist its help to automate a wide variety of service desk roles, such as helping customers open new bank accounts or process insurance claims. 

The partnership comes as Accenture seeks to ensure its position in the burgeoning AI enterprise market. The company recently built a new R&D lab in Dublin, Ireland, where Accenture’s head office is. The lab will focus on the development and integration of AI systems into Accenture Operations, including customer support, procurement, supply chain, and warranty services.  

The global market for content analytics, discovery, and cognitive systems software is projected to reach $9.2 billion by 2019, according to IDC. Other research firms believe that AI will be the catalyst behind a $5 trillion to $7 trillion economic impact by 2025, Forbes notes.

Google is out of original ideas (GOOG, GOOGL)

Google kicked off I/O, its biggest event of the year, on Wednesday.

And yet, not a single product announced was new.

At I/O, we saw Google chasing the trends of the moment instead of leaving us with the feeling that it has anything original left in the works that isn't just a Google-fied version of what we've seen before from its rivals.

(To be clear, I'm just talking about Google, not its parent company Alphabet, which is working on crazy projects ranging from internet-spewing drones to ways to cheat death.)

First there's Google Home, a smart WiFi speaker launching later this year that has Google's new virtual assistant living inside. It responds to voice commands and can do everything from telling you your flight is delayed to playing your favorite music.

Sound familiar? That's because it's exactly the same as Amazon's Echo, the smart speaker that has turned into an unexpected hit for the company.

Then there's Allo, a messaging app infused with Google's intelligence that can guess what you want to type next and suggest things to do when you're chatting. 

It's the same stuff we're seeing in Facebook Messenger, WhatsApp, Telegram, and the slew of other messaging apps out there clinging to the idea that people want to send texts to a virtual helper to buy flowers and book tickets. (They don't.) Allo looks fine, but good luck getting hundreds of millions of people already locked into other messaging platforms to switch over.

Next, there's Duo, a video chat app, which inexplicably competes with Google's other video chat app, Hangouts, on top of Microsoft's Skype and Apple's FaceTime.

Finally, we saw Daydream, the new virtual reality platform built into Android, with the hopes that people will build headsets that you can slot your smartphone into. Its interface and overall concept are almost a direct copy of Samsung and Oculus' Gear VR.

Here's Gear VR's main menu:

[Image: Gear VR's home screen]

And here's Daydream's:

[Image: Daydream's home screen]

Everything Google showed us this week is an iteration on something one of its competitors has already done, but with the promise it can do it better. And that's assuming the products even make it past the initial "oooo and ahhhhh" phase that has the tech press whipped into a frenzy this week.

After all, Google has a history of announcing flashy projects at I/O that either never launched or totally bombed, like Google Glass, Google TV, and the Nexus Q media streaming orb. All of those appeared to be pretty cool when they were first unveiled... until people actually tried to use them.

Two CNET writers got to talk to Google CEO Sundar Pichai ahead of I/O and asked why Google's new products appear to be following the competition, not leading it.

His answer was the one we often hear from tech executives: From the outside, it may appear like Google is chasing its competitors' ideas, but it's all fueled by the belief that Google has a superior product. Pichai rightfully pointed to web search, web browsing, and email as all areas that existed before Google got involved and perfected them.

But the only project Pichai's optimism could apply to is Google Home. Assuming the speaker sounds good and the microphones are just as accurate as the ones on the Echo, Google Home has an immediate advantage over Amazon thanks to Google Assistant, the next iteration of the already excellent Google Now.

Google Now has bested Siri since it first debuted four years ago, and no one has beaten it simply because it's so good at mining all the data Google already has on you from your search history to Amazon package deliveries. Putting Google Assistant in a can inside your home sounds like a fantastic idea, and it's going to be tough for Amazon to match it.

As for everything else? 

It's just Google chasing stuff the rest of the industry already invented.

Science fiction can tell us a lot about our problems with artificial intelligence

Given that the reality of AI may be fast approaching, it’s of the utmost importance that we work out what a future with artificial intelligence might look like.

Last year, an open letter with signatories including Stephen Hawking and Nick Bostrom called for AI to be of demonstrable benefit to humanity, or risk something that exceeds our ability to control it.

AI, as conceived of in popular culture, does not yet exist, even if autonomous and expert systems do.

Smartphones might not be supercomputers, but they are called “smartphones” for good reason, given how their operating systems function. Equally, we are happy to talk about a computer game’s “AI”, but gamers quickly learn to take advantage of its limitations and inability to “think” creatively.

There is an important difference between these systems and what is termed Artificial General Intelligence (AGI) or “strong AI”, an AI with the general intelligence and aptitudes of a human.

Both the US and British governments' exploration of the significance and implications of AI research has focused on potential economic and social impacts. But politicians would do well to consider what science fiction can tell them about public attitudes – arguably, one of the biggest issues concerning AI.

Culturally, our understanding is informed by the ways in which it is represented in science fiction, and there is an assumption that AI always means AGI, which it does not. Fictional representations of AI reveal far more about our attitudes to the technology than they do about its reality (even if we sometimes seem to forget this). Science fiction can therefore be a valuable resource from which the public view of AI can be assessed – and therefore corrected, if need be.

I, Robot: the greater good

In Alex Proyas’s adaptation of Isaac Asimov’s stories, I, Robot (2004), there is a heart-to-heart scene in which we learn of the reason for a detective’s mistrust of robots. He recounts a car crash in which two cars end up in a river, and that a robot determined that it was better to save the detective than it was to save a child, because the detective had a higher percentage chance of survival. The scene serves to demonstrate the inhumanity of AI and the humanity of the detective, who would have opted to save the child. This scene, for all its Hollywood gloss, is indicative of the core ethical issues concerned with AI research: it denigrates AI as not being “moral” but merely a pattern of encoded behaviours.

But is the robot in this situation actually wrong? Isn’t it better to save one life than lose two? Here, emergency triage is not seen as “inhuman” but necessary. “Greater good” arguments have been going on for centuries and, in this situation, the “greater” good, saving the detective or the child, is debatable, especially as the detective later saves humanity from the ravages of VIKI, an AI gone rogue.

The context in which this decision is made, the parameters through which the robot reached its percentage conclusion, could also factor in any number of concerns, albeit limited by those programmed into it. Is the emotional response, if saving the child is a fundamentally emotional approach, the correct one?

One of the problems we face as a society engaging with an AI-future is that machine intelligences might actually demonstrate the contingency of our own moral codes, when we want to believe them to be universally applicable. Is the problem not that the robot was wrong, but that in fact it might be right?

Interacting with AI

The ways in which AI has been represented lead to pretty much the same conclusion: any AI is inhuman(e) and therefore dangerous. Just as VIKI in I, Robot turns against humanity, as she finds another “logical” interpretation of Asimov’s three laws (designed to protect humans), there is a plethora of stories and films in which AIs take over the world (Daniel H Wilson’s Robopocalypse and Robogenesis, the Matrix and Terminator franchises). There are many more about how they are insidious and will directly control humanity, or enable factions to take more complete control of society (Daniel Suarez’s Kill Decision, Neal Asher’s Polity stories, the TV series Person of Interest).

But there are relatively few about how they might cooperate with humanity (Asimov got here early and remains one of the few, although Ann Leckie’s Ancillary trilogy is also of interest).

The hypocrisy is that this trend suggests that it’s fine for governments to monitor their citizens and for corporations to analyse social media feeds (even using software bots), but an AI shouldn’t. It’s like saying that you’re happy being screwed over, but only by a political system or another mammal, not a computer.

One solution, therefore, is to consider how to limit AIs and teach them human ethics. But if we “train” AIs to have ethical behaviours, who do we trust to train them? To whose ethical standards?

Given the recent issues Microsoft had with Tay (members of the public tried to “trick” an AI into making potentially offensive statements), it is clear that if an AI learns from humanity, what it learns might be precisely that we’re not worth the time it takes to tweet back to us.

We don’t trust robots to think for themselves, we don’t trust ourselves to program them or use them ethically, and we can’t trust ourselves to teach them. What’s an AI to do?

Public perceptions of AI will be governed by just this sort of mistrust and suspicion, fostered by such public debacles and by the broadly negative view evident in much science fiction. But what such examples perhaps reveal is that the problem with AI is not that it is “artificial”, nor that it is immoral, nor even in its economic or social impact. Perhaps the problem is us.

Here's what everyone got wrong about the latest Apple doomsday scenario

The tech world was in a frenzy this weekend, arguing over whether Apple would collapse if it failed to catch on to the budding trend of artificial intelligence, which some think could be the next major computing platform after the smartphone.

Marco Arment, a well-respected developer and observer of the tech industry, wrote a post on his blog Friday highlighting how Apple appears to be behind on AI compared to its biggest competitors like Facebook, Amazon, and especially Google, which showed off some interesting AI-powered gadgets and apps at the I/O developers conference last week. (Arment left out Microsoft, but I'd say it's wrong to exclude it. Microsoft is doing a lot of cool stuff with machine learning and AI too, racist chatbots excluded, of course.)

The argument goes that if AI turns out to be a major platform and we end up communicating with our gadgets through voice more than tapping and touching, then Apple could be in big trouble and suffer a similar fate as BlackBerry did when it failed to adapt to modern touchscreen smartphones.

Here's Arment:

Where Apple suffers is big-data services and AI, such as search, relevance, classification, and complex natural-language queries. Apple can do rudimentary versions of all of those, but their competitors — again, especially Google — are far ahead of them, and the gap is only widening.

That's totally true today.

Apple may have been the first to popularize the concept of a digital assistant when it brought Siri to the iPhone 4s in 2011, but Google was able to surpass Siri less than a year later with its own digital assistant, Google Now. At I/O last week, we learned Google Now is evolving into Google Assistant, and will become even more useful, potentially leaving Apple in the dust.

Things may look bad for Apple's AI efforts today, but it's naive to think the company is sitting idly by while Amazon and Google put their digital assistants in home speakers and give us a taste of a world where a lot of our computing is done through voice.

In recent months, Apple has snapped up two impressive companies working on artificial intelligence. Even though Apple hasn't said how they'll use those new technologies, it doesn't take a genius to see how they'll fit in.

Let's take a look:

VocalIQ

In October of last year, Apple bought a company called VocalIQ, which has a technology that helps digital assistants learn every time they interact with a person's voice commands.

Here's how VocalIQ described its product on its website:

Every time your application is used it gets a little bit smarter. Previous conversations are central to [its] learning process — allowing the system to better understand future requests and in turn, react more intelligently. As a developer, you have the ability to change your system’s interpretation or behavior directly in your app.

Now imagine a Siri with the power to learn the more people use it. Over time, it could provide better answers for what you're looking for and understand context a lot better.

Perceptio

Around the same time Apple bought VocalIQ, it also bought Perceptio, a startup that made it possible to power AI assistants without having to mine a user's personal data.

This has a huge benefit for Apple, which, unlike Google and Facebook, doesn't collect data on its users. Part of the reason Google Now is so good is that it mines data from your search history, Gmail, calendar, and more to figure out what you want to do.

Apple's history of keeping that kind of data private may be great for your own peace of mind, but it also means Apple's AI services suffer. If Perceptio works as well as it sounds on paper, this could be a way for Apple to protect a user's privacy while still providing an excellent AI assistant.

Too late?

As many have said already, it could be too late for Apple to essentially buy its way into AI, assuming the platform ever takes off.

I don't think so. There's massive potential in voice-powered assistants, but so far none of them have become useful and ubiquitous enough to pull people away from their smartphones. Apple doesn't have to be better than Google, Microsoft, or Amazon at AI. It's so far ahead of them in everything else that it just has to match them by baking great AI into the iPhone and other future products.

If AI does take off, it appears Apple is building up its technology to keep up.

Apple is planning its own Amazon Echo killer (AAPL, GOOG, AMZN)

Apple is working on its own smart speaker, similar to Amazon's Echo device, The Information's Amir Efrati reports. 

Sources tell Efrati that the company is building a voice-activated device powered through its virtual assistant, Siri, that could let people ask questions or complete activities like setting a timer. 

The market for such home appliances, powered by artificial-intelligence-enabled virtual assistants, is heating up.

Amazon's Echo has been a surprise hit for the company and Google just announced its own plans last week to make a similar gadget called Home. 

At the same time, Apple is also reportedly preparing to start letting outside developers make their services available through Siri. For example, imagine asking Siri to find you a restaurant, which it can already suggest via Yelp, but then also asking it to book a reservation, which might happen through OpenTable.

Sounds pretty similar to Viv, a new digital assistant service made by Siri's original creators, which shines because of its ability to seamlessly integrate with a bunch of different services like Venmo, Hotels.com, and Uber.

This "conversation as a platform" format is all the rage in Silicon Valley right now, with Google, Facebook, Microsoft, and a slew of startups all rushing to find new ways to make it easier for users to connect to all the services they need through a single interface or chat. It would make sense that Apple would want to make sure Siri kept up with the competition. 

The company declined to comment. 

Here's how artificial intelligence could solve the biggest problem in education

Ashok Goel wants to expand high-quality education to "millions" more people over the internet.

That idea isn't new. It's the same goal that's pushed universities to make more and more courses and degree programs available over the internet, making it possible for students living on the far side of the world to get degrees from American universities — and vice versa.

But online education has a problem: Of the hordes of students that sign up for massive open online courses (MOOCs), fewer than 7%, on average, finish.

Goel thinks artificial intelligence can change that.

"There are many reasons" students don't finish, he told Tech Insider.

"But one reason is that these MOOCs do not provide any teaching assistants. So you can sign up for a course, say in mathematics, or computer science, or web design, or whatever. But you cannot ask anyone a question like 'So how do I download this material?' or 'How do I find this material?' or 'How do i find this video?' You know, just basic simple things. And people get frustrated and they drop the course."

He thinks AIs could pick up that slack:

"If automated, artificially intelligent teaching assistants could just address the basics it could raise the retention rate from say 7% to 15%," he said.

On the scale at which MOOCs operate, even a retention bump of a few percentage points would impact thousands of students.

An AI researcher at Georgia Tech, Goel has taken a big first step down this path. In January 2016 he secretly introduced "Jill Watson," an artificially intelligent teaching assistant, into an online course for masters students in computer science. The AI answered questions in the class discussion forum, with her inhuman status going largely unnoticed.

The 300 grad students in the class made up a smaller sandbox for Jill to learn in, because the eight other human TAs and Goel could moderate her responses. But Goel expects he can scale up the technology — and find other applications for AI in online classrooms.

"In the same online class we have developed tutors, intelligent tutors, that give you a [learning] exercise," he said. "There about 100 such 'nano-tutors' that we have developed. The student does an exercise, the tutor immediately provides feedback on that exercise. And if the student doesn't do it right, then the tutor keeps on encouraging you to do it again and again."

That said, Goel acknowledged there are limitations to what AI educators can accomplish compared to human beings.

"Teachers and teaching assistants play a lot of roles. We not only answer questions, we provide examples, we provide models, we act as mentors, we act as coaches. And Jill so far simply answers some questions. She doesn't do any of the other things."

Goel built this version of Jill Watson with internal funding from Georgia Tech. Now that he has a definitive result showing her success, he plans to apply for National Science Foundation grants.

And of course, as he said, "We are trying to spawn a startup."

This is the second article in a three-part series on Jill Watson, her training, and her implications for the future of education. Read part one, about how Jill functioned in Goel's classroom, here.

Apple is working on an AI system that wipes the floor with Google and everyone else

Siri is due for a big upgrade.

Apple now has the tech in place to give its digital assistant a big boost thanks to a UK-based company called VocalIQ that it bought last year.

According to a source familiar with VocalIQ’s product, it’s much more robust and capable than Siri’s biggest competitors like Google Now, Amazon’s Alexa, and Microsoft’s Cortana. In fact, it was so impressive that Apple bought VocalIQ before the company could finish and release its smartphone app. After the acquisition, Apple kept most of the VocalIQ team and let them work out of their Cambridge office and integrate the product into Siri.

Before Apple bought the company, VocalIQ tested its product against Siri, Google Now, and Cortana, and the results were impressive. Users asked each AI questions using normal language, not the robotic commands you’re used to using with digital assistants. Those commands can be long and complicated, and the other assistants had trouble catching everything.

For example, imagine asking a computer to “Find a nearby Chinese restaurant with open parking and WiFi that’s kid-friendly.” That’d trip up most assistants, but VocalIQ could handle it. The result? VocalIQ’s success rate was over 90%, while Google Now, Siri, and Cortana were only successful about 20% of the time, according to one source.

How VocalIQ works

After writing the program, VocalIQ hired contractors through Amazon’s Mechanical Turk to feed the program queries normal humans would ask and help it learn how people talk. These contractors would ask VocalIQ questions from a list of prompts to train the system. After about 3,000 dialogues, VocalIQ already started to get much more accurate. Once the process was finished, VocalIQ had recorded about 10,000 dialogues from Mechanical Turk contractors.

To put that in context, Siri brings in 1 billion queries per week from users to help it get better. But VocalIQ was able to learn with just a few thousand queries and still beat Siri.

VocalIQ may sound similar to Hound, a new digital assistant app that launched on iPhone and Android recently, but Hound only works one session at a time. VocalIQ remembers context forever, just like a human can. That’s a massive breakthrough.

Let’s go back to the Chinese restaurant example. What if you change your mind an hour later? Simply saying something like “Find me a Mexican restaurant instead” will bring you new results, while still taking into account the other parameters like parking and WiFi you mentioned before. Hound, Siri, and any other assistant would make you start the search session over again. But VocalIQ remembers. That’s more human-like than anything available today.

Because VocalIQ understands context so well, it essentially eliminates the need to look at a screen for confirmation that it’s doing what you want it to do. That’s useful on the phone, but could be even better for other ambitious projects like the car or smart speaker system Apple is reportedly building. (VocalIQ was being pitched as a voice-controlled AI platform for cars before Apple bought the company.) In fact, VocalIQ only considers itself a success when the user is able to complete a task without looking at a screen. Siri, Google Now, and Cortana often ask you to confirm tasks by tapping on the screen.

It acts like a real assistant, not just voice search

VocalIQ’s platform is also malleable enough to be programmed for anything you want to do. One example a source gave was teaching it to successfully manage email while a user’s phone was in their pocket. (Just like Joaquin Phoenix's character controls his phone in the movie "Her.") In theory, Apple would be able to train Siri to do everything much better using VocalIQ.

VocalIQ can also filter out extraneous noise to figure out exactly what you’re saying, thus making it more accurate than Siri is today. It’s able to take in all the noise in an environment — the TV, kids shouting, whatever — and determine with a high probability which sound is actually the user’s query. It can even learn to adapt to different accents over time to improve accuracy. If you’ve ever had trouble getting Siri to understand you, then you know how important this is.

It’s still unclear when Apple plans to implement more of VocalIQ’s capabilities into Siri. One source speculated that it may happen slowly over time, so as not to throw off users with a radical change. But it sounds like Apple is arming itself for a significant shift in how Siri works.

Apple declined to comment.

Meanwhile, Siri is about to get some other improvements this year. According to Amir Efrati of The Information, Apple will open up Siri to developers, similar to the way Amazon has opened up its Alexa assistant. That means third-party apps will let you start using your voice for some tasks. (“Siri, call me an Uber,” for example.)

Recently, there have been doubts about Apple’s artificial intelligence efforts. At its big annual conference in May, Google showed off some intriguing new uses for its AI, including Google Home, a smart speaker with its digital assistant built inside. Marco Arment, a well-known developer and big voice in the tech community, wrote on his blog that if Apple fails to keep up with AI and voice-powered platforms take off, the company risks suffering the same fate as BlackBerry.

But it sounds like Apple isn’t sitting still while its competitors go all in.

Facebook will start scanning 10,000 posts a second to make comments less terrible (FB)

While Facebook has made a lot of noise around using artificial intelligence to better sort photos and videos, text is still a huge part of the Facebook experience.

Today, we got a look behind the scenes at how AI is helping Facebook sift all that text and improve the Facebook experience with "Deep Text" — a system developed by Facebook's AI labs that scans 10,000 posts every second in 20 languages.

People post more than 1 billion items — statuses, links, photos, whatever — to Facebook every day, says a company spokesperson.

Deep Text is "deep learning-based text understanding engine," as Facebook puts it. Its ability to understand text like a human would is already being put to work in Facebook Messenger. And Facebook plans to take it a lot further.

It's the technology behind the scenes that can distinguish the words "I need a ride," in which case Messenger might prompt you to order an Uber, versus "I was going to get a ride," in which case the moment has already passed. Not to mention "I like to ride a donkey," which is a whole different thing entirely.

More intriguingly, Hussein Mehanna, director of Facebook's Core Machine Learning Group, says Deep Text can be used in busy Facebook comment threads.

Take, for example, when Mark Zuckerberg does a live Facebook Q&A and the top few comments are in another language, off-topic, or both:

[Image: the top comments on one of Mark Zuckerberg's live Facebook Q&As]

"It becomes really hard to find interesting comments," says Mehanna.

Mehanna says that there's a lot of potential for Deep Text to scan those comments and rank them for you, personally. If Facebook's AI has figured out that you speak English and Farsi, it'll prioritize relevant comments in those languages so you see them first — while pushing annoying things down.

It's an elegant solution for Facebook: It maintains the platform's very public commitment to global free speech, since Deep Text isn't actually deleting the comments.

But it also means that you'll see relevant comments in your own language first, making for a better Facebook experience. And if the comments do contain spam or straight-up hate speech, Deep Text could one day be empowered to step in automatically.

Another cool thing is that Deep Text will be able to understand when your Facebook status is offering an item for sale, and offer to cross-list it automatically to your regional networks.

Going forward, Mehanna says the team wants to train Deep Text to understand more of Facebook's 40 officially supported languages. But he says that expansion is almost inevitable: one Facebook engineer took Deep Text's "brains" and trained it to understand Indonesian in a weekend.

That ability to learn will also benefit Facebook's future artificial intelligence efforts, especially around chatbots. Unlike Google's users, Facebook's users are speaking the way that real humans actually speak. It'll make for more humanoid bots.

"I think with Facebook data, we can make AI far more human, and social, and conversational," says Mehanna. 

Apple has a chance to finally make Siri good (AAPL)

Siri may be the first mainstream digital assistant, but it's still the butt of many jokes now that it's been bested by just about all of Apple's biggest competitors.

It's fair criticism, too. Siri's intelligence and responsiveness often lag behind Amazon's Alexa or Google Now. And the competition shows no sign of giving up its lead. Google made a bunch of splashy AI-related announcements at its I/O conference last month. And Amazon's CEO Jeff Bezos said at the Recode conference this week that there are more than 1,000 employees working on Alexa and the Echo speaker.

So, how can Apple take Siri to the next level?

One way is to open it up to third-party developers, the way Amazon did with the Echo/Alexa. ("Siri, what's my Bank of America balance?" or "Siri, call an Uber." And so on.) It sounds like that'll likely happen this year, as Amir Efrati of The Information reported last week. We'll know for sure on June 13 during Apple's WWDC event.

The other way is boosting Siri's intelligence and making it better able to understand context. Right now, interacting with Siri happens one session at a time, and you often have to look at your iPhone to confirm it's doing what you want it to.

For example, this is what it looks like when you want to send a text message using Siri:

[Image: sending a text message with Siri]

Even though I just told Siri I want to send my colleague Cadie a text message that says "Hi, how are you," it still makes me confirm the action by looking at the screen and either tapping "Send" or telling Siri "yes" to confirm. That's not much easier than texting the old-fashioned way. It's also an indication from Apple that Siri isn't reliable enough to consistently get it right the first time.

It's the same case for a lot of things you ask Siri to do. Even though Apple added the ability to activate Siri with just your voice, you still need to look at your phone a lot for confirmation that it's doing what you want. On the other hand, assistants baked into speakers like the Echo or the upcoming Google Home don't have a screen, so they have to be smart enough to complete a task without asking for confirmation. That's the biggest area Siri needs to improve.

So, how does Apple fix it?

Luckily, Apple acquired an AI company last year called VocalIQ that's really good at understanding context and completing tasks without requiring you to look at a screen. A source familiar with VocalIQ's technology told me that the product is able to give you complete control without having to look at your phone. In one test, VocalIQ's team was able to get the AI to manage someone's email while the phone stayed in the user's pocket the whole time. Yes, just like you see in the movie "Her."

VocalIQ is also able to remember context "just like a human" according to the source, which means you never have to remind it what you asked for when you start a new session. No other digital assistant can do that. That alone should give Siri the boost it needs to surpass its AI competition.

Now the question is when and how Apple will implement VocalIQ's technology into Siri. (Apple declined to comment.) But there's no doubt the company is working with some impressive AI technology.

This 75-year-old NASA legend has been working in secret for 10 years building a startup that wants to outdo Intel and Google

From 1992 to 2001, Dan Goldin served as the longest-tenured administrator of NASA, overseeing projects like the launch of the Space Shuttle Endeavour and the redesign of the International Space Station.

After leaving NASA, Goldin spent some time bouncing around and studying robotics before accepting a position as the president of Boston University in 2003. He never officially held the position, however, because the school terminated his contract a day before he was slated to start (though he still got a $1.8 million payout).

And then Goldin mostly vanished from the public eye for over 10 years.

Today, the 75-year-old Goldin has reemerged to reveal what he has been working on for the past decade: KnuEdge, a top-secret startup based in San Diego, with a mission to one-up Google, AMD, and Intel with the "fundamental invention" of the next-generation computer processor.

"I'm not an incrementalist — I wanted to wait for the grand slam," Goldin tells Business Insider.

KnuEdge is also releasing its first product to the broader business technology market: KnuVerse, an artificial-intelligence-assisted tool that helps identify and clarify voices, even in the noisiest of situations. With that foothold established in the market, Goldin hopes that KnuEdge will come to be the foremost provider of technology for the neural-network-powered artificial brains of the future.

"We don't want to be on the football field," Goldin says. "I want to define where the football field is."

Companies like Google, Intel, and AMD are racing to optimize existing processors, especially graphics processors, to better run the neural networks that underpin artificial intelligence. But Goldin and KnuEdge say they are working to leapfrog them entirely.

"I'd like to be, as an American, on top of the pile," Goldin jokes. "I've never done anything easy — I love to suffer."

Over that 10-year quiet period, Goldin says, KnuEdge racked up $100 million from investors who would prefer to stay unnamed, while also bringing in $20 million in lifetime revenues from unnamed customers, many of whom come from the worlds of military, defense, and aerospace.

Science trek

With both NASA and Boston University in the rear-view mirror, Goldin says, he went on kind of a trek around the country, trying to figure out what to do next.

Goldin decided that the way forward was to go back to an early fascination he had with building computers that could learn the way humans do. His fundamental insight, or "shazam moment," from this time of pondering: Humans don't learn by having things explained to them; "we learn by making mistakes and we have to adapt."

To follow that notion, Goldin knew he would have to expand his scientific expertise, which already included the physical sciences, to include neuroscience. But already in his 60s, Goldin shied away from going back to school.

"I didn't want to do a Ph.D. program at my very ripe old age," Goldin says.

And so Goldin tapped into his post-NASA network of scientists and persuaded Nobel Prize-winning biologist Gerald Edelman, who died in 2014, to take him on as a senior fellow for three years.

With that knowledge, and new contacts in the field of neuroscience research, Goldin knew it was time to start his company. But he very purposely didn't want to come to Silicon Valley, even though that's where much of the talent is.

San Diego's 'patient money'

"I needed patient money and patient coworkers," Goldin says.

Knowing that KnuEdge would take at least a decade to come to any kind of fruition, Goldin says he resisted the idea of going to Silicon Valley. As much as he respects that Silicon Valley "has magic," he was afraid of taking on investors and new hires who were looking for quick payouts.

It turned into a boon for KnuEdge in another way, too: Goldin says he was able to hire top-shelf researchers, scientists, and engineers by promising them that they would have all the years they needed to explore their fields, without short-term pressure to make something salable.

"I wanted people to have time to dream, and you can't dream to schedule," Goldin said.

'We live in a world of noise'

KnuVerse, the voice-recognition software, is KnuEdge's first real commercial product, and it has been tested in "battlefield conditions," Goldin says.

It uses artificial intelligence to sift out the noise so computers can recognize your voice. The potential is to use the KnuVerse tech to build the best-sounding voice chat app of all time, or even to help a police department's forensics division clear up recordings from crime scenes.

"We live in a world of noise," Goldin says.

Going forward, though, KnuEdge's real focus is the Hermosa processor and Knuboard motherboard, optimized for artificial intelligence. Banks and insurance companies have already been experimenting with the first versions, using them to sift through massive stores of data more efficiently than existing processors.

The Knuboard system can integrate with Intel- and AMD-based systems, Goldin says, which is good considering they're still the standard. But KnuEdge is super-focused on building that next big step in processors.

Next for KnuEdge, Goldin says, is to keep developing the technology. And he says that later this year KnuEdge will finally come to Silicon Valley for more funding from traditional sources. Regardless, now that he can talk about KnuEdge, Goldin says this is only the beginning.

"The world needs us," Goldin says.

Siri doesn't have to stink — here's what Apple could do to fix it (AAPL)

You'd have a hard time finding anyone who thinks Siri is the best of the digital assistants out there. Google Now, Microsoft Cortana, and even newcomers like Amazon's Alexa have all bested Siri in various ways.

But now the stakes are higher than they ever were before. The common theory in the tech world today is that voice control and artificial intelligence have the potential to dramatically change how we use our gadgets.

If that prediction comes true — just use the Amazon Echo for a few minutes and you'll see the potential there — Apple will need to give Siri a massive upgrade.

Siri turns five this year, and while it has seen a number of iterative improvements since its debut, it still feels like a stagnant product, especially compared to the amazing tricks the competition has pulled off.

So, what does Siri need to catch up? Let's dive in.

Faster response times

One of the things that blew me away when I tested the Amazon Echo was how fast it was. I'd ask it a question or tell it to play a song, and — boom! — it'd answer almost immediately. That gap between when you ask a voice assistant to do something and when that something actually happens is called latency, and it's something Amazon worked really hard to reduce when it was building the Echo.

Siri, on the other hand, feels like it's working at a snail's pace by comparison. There's a noticeable lag with every Siri request, whether you're using it on your iPhone, iPad, or Apple TV. (For some reason, it seems especially slow on Apple TV.) Siri would feel much more efficient if Apple figured out how to reduce latency the way Amazon did.

Integration with apps

Siri can do a lot of things, but most of it is related to the built-in features already on the iPhone. What if I want to make a dinner reservation? What if I want to call an Uber? What if I want to play a song from Spotify?

Siri has learned a lot of new tricks over the years, but Apple will have to open Siri up to third-party apps if it's going to be truly useful.

Amazon has already proven how useful this can be. The Echo now has 1,000 "skills" from third parties, with more being added every week.

Luckily, it sounds like this is coming soon to Siri. According to Amir Efrati of The Information, Apple will give developers tools to integrate with Siri as early as next week at WWDC.

Eliminate the need to look at the screen

The end-game for Siri is to create a tool that removes the need to tap and swipe at a screen to do what you want. In theory, Siri should be smart enough to handle everything simply through voice commands.

But that's not always the case. For many tasks, Siri asks you to confirm a request by looking down at your iPhone to make sure it understood you correctly. 

For example, here's what it looks like when you try to send a text message through Siri:

[Image: sending a text message with Siri]

If voice really is going to change how we interact with our devices, then Siri will need to get a lot smarter.

Luckily, Apple acquired a company called VocalIQ last year that could change all that. VocalIQ built artificial intelligence technology that is much better at letting users complete tasks using just their voice. Now Apple has to figure out how to integrate that into Siri. You can read more about how VocalIQ works here.

Search

Search is the top feature people use Siri for, according to a new survey by tech analyst Ben Bajarin of Creative Strategies. Although Siri has gotten a lot better at search over the years, it often kicks you to a regular Google or Bing search results webpage instead of giving you the answer you're looking for.

In my experience, Google Now is much better at giving you the one answer you're looking for, and it's been that way since it first launched about four years ago.

This is one of the most promising robot butlers we've ever seen

Robots are often feared as entities that will become evil and destructive in the future. But the reality is that robots have the potential to play a critical role in helping and caring for people.

Japan, with its rapidly aging population, is actively building robots designed to care for the elderly, known as carebots.

Now Carnegie Mellon University in Pittsburgh is building a robot it thinks could help the elderly or people living with disabilities live independently for longer.

Called HERB, for "Home Exploring Robot Butler," the robot can already put books away on a bookshelf, grab drinks from the fridge, load a dishwasher, clear a table, and sort items all on its own. Those are some pretty advanced skills in an environment that is difficult for robots to navigate.

"Robots have been used for years on factory floors. These are nice places for robots to work because the environment is very structured, clean and repeatable," Jennifer King, a doctorate student in robotics working on HERB, told Tech Insider. "As we move robots into the home, the robot must be able to operate in much less structured human environments."

For example, for HERB to successfully grab a beer from the fridge, it must move other items out of the way without breaking or knocking them over. The software Carnegie Mellon is developing allows the robot to successfully navigate tricky environments like a crowded fridge to accomplish its task.

The unique software King's team is developing has given HERB creative abilities as well. In one instance, the robot moved an object by cradling it in its arm, something the researchers never taught it to do. The cradling shows HERB is learning to move multiple objects simultaneously and use its whole arm, King said.

HERB isn't ready to enter your home just yet, but it shows the major strides being made in building robots that can care for people.

"We would like to continue to improve our robot's capability to work in clutter," King said. "In particular, we want to improve the ability for the robot to work in uncertain environments."


NOW WATCH: Pizza Hut is using this robot to wait tables in Singapore


Apple is planning for the next 1,000 years (AAPL)



Last month, Apple CEO Tim Cook said in a televised interview that Apple was going to be around for the next "thousand years."

"We are not here for a quarter or two quarters or the next quarter or the next year to next year, we are here for [a] thousand years, and so we're not about making the most, we're about making the best," Cook said.

Nowhere was that more evident than at Apple's annual developers conference, which kicked off in San Francisco on Monday.

The assortment of product updates and new features unveiled at the event will be available to consumers in a few months. But a close look at some of the things Apple introduced reveals a strategy that's much more far-sighted than the next iPhone release.

So while Wall Street worries about whether iPhone sales will drop in 2016, Apple leadership is laying the groundwork that it hopes will let it continue to dominate the tech industry for decades to come.

Hook them while they're young

Cook didn't spend much time onstage on Monday, instead turning over many of the demos and announcements to his growing stable of lieutenants.

But one of the announcements he personally gave was a curious one: Swift Playgrounds, an extremely fun-looking iPad app that teaches children basic programming. In Swift Playgrounds, kids learn programming concepts by helping a tiny alien collect gems.

Children aren't really Apple's core market — few 10-year-olds have $650 for a new iPhone, after all. But if there's one certainty about children, it's that they'll eventually grow up — and when they do, Apple wants them to be fluent in its programming language.

Now that's long-term planning: spending to educate young people so that you'll be able to replenish your workforce for years to come.

Platforms on platforms

Apple made a big deal at the event about how it now has four main computing platforms: one each for wearables, smartphones, desktop computers, and big screens like televisions.

But Apple actually sneakily introduced even more new platforms on Monday. In addition to the big new ones for Maps, the Siri voice assistant, and Messages, the keynote heralded the official arrival of HomeKit for smart-home notifications, one-touch checkout for Apple devices on the web, and the Phone app as an interface for other apps.

[Image: Tim Cook and Apple TV]

All of these new platforms have one thing in common: They are designed to attract outside software developers who will create the next generation of apps and services in which Apple is the center of gravity.

Another example: gesture-recognizing APIs for the Apple Watch, so that one day you might be able to control computers by waving your arms. You probably won't be doing that in the next year, but maybe by 2018 it will seem like second nature.

"We recently asked if Apple gets platforms. Apparently it does based on the litany of technologies being opened up to developers," wrote UBS analyst Steven Milunovich.

You can check out all of Apple's platforms at its new API reference page.

Getting it right

Every big tech company, from Google to Facebook, is building up an arsenal of artificial-intelligence technology right now, bulking up for what many believe will be the next big paradigm in computing.

To some observers, Apple looked like it was falling behind.

The message on Monday was that Apple was simply taking its time to define its own approach to AI and get it right.

Unmentioned during the main event, Apple also sneaked in a set of tools for developers to make apps that use cutting-edge artificial-intelligence techniques.

And the company mentioned once again that it handles data in a way that goes far beyond the anonymization used by rivals like Google and Facebook. Apple calls it "differential privacy," and although the company hasn't said exactly what that entails, it is doing cutting-edge research so it can take advantage of AI without the pitfalls of massive data collection.
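
Apple hasn't spelled out its exact technique, but the classic building block of differential privacy is randomized response: noise is added to each person's answer before it leaves the device, so any individual record is deniable while aggregate statistics remain estimable. Here's a minimal Python sketch of that textbook mechanism, not Apple's implementation:

    import random

    def randomized_response(truth):
        """Report the true bit half the time; otherwise report a coin flip."""
        if random.random() < 0.5:
            return truth
        return random.random() < 0.5

    # Estimate the true rate from 100,000 noisy reports of a 30% property.
    reports = [randomized_response(random.random() < 0.3) for _ in range(100000)]
    observed = sum(reports) / len(reports)
    estimate = 2 * observed - 0.5    # invert the noise: E[observed] = 0.5p + 0.25
    print(round(estimate, 3))        # close to 0.3, yet no single answer is trustworthy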

Analyst Jan Dawson wrote shortly after the keynote:

Though Google is arguably the leader in machine learning and artificial intelligence, Apple is showing that it's perfectly capable of innovating in these areas too. But it's doing it in a way that's in keeping with its privacy stance, by keeping personal information on devices and not sharing it with third parties.

Apple's not worried about falling a few months behind Google or Facebook in the AI race. It's playing a long game. And in the worst-case scenario, with its massive cash pile, Apple can spend money buying AI technology and talent to catch up.

Nothing revealed during Apple's keynote is immediately available for consumers. But that's not the point of a high-level programmers' conference like WWDC.

By 3016, or even 2026, people might look back and see that Apple planted the seeds for an important long-term technology — but whether that's voice, messaging, maps, or one of the myriad lower-key additions Apple made on Monday remains to be seen.

SEE ALSO: Apple just renamed one of its oldest and most important products


NOW WATCH: Taylor Swift rapped and then fell off a treadmill in a new Apple Music ad

Apple quietly goes big on AI as it looks to keep up with Google and Facebook


[Image: Siri intelligence features at Apple's WWDC]

Google and Facebook are widely regarded as being more innovative than Apple when it comes to developments in the coveted field of artificial intelligence (AI) — hailed as the next major area for computing.

While Google and Facebook have built self-learning AI agents that can master complex games like Go, Apple has remained relatively quiet, offering little more than its Siri personal assistant.

Facebook has also shown off chatbots, while Microsoft made a regrettable foray of its own into the world of bots with Tay, a Twitter bot that humans quickly taught to spout highly offensive messages.

But Apple made a series of practical product updates at its Worldwide Developers Conference (WWDC) on Monday that show it is not ready to be left behind just yet. Among other things, the Cupertino-headquartered company opened Siri to third-party developers, gave the QuickType keyboard smarter contextual suggestions, and added face and object recognition to the Photos app.

Each of these features relies heavily on AI. They employ software that analyzes what's going on (be it in a message thread, or a photo library) and then makes or suggests an action based on the information it is presented with.
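
As a concrete, and entirely invented, illustration of that pattern, here are a few lines of Python that scan a message for cues and suggest actions. Apple's real features use trained models over much richer signals, not keyword lists:

    # Toy "analyze context, suggest an action" logic; cues and actions are made up.
    SUGGESTIONS = {
        "lunch": "Create a calendar event?",
        "address": "Share your current location?",
        "photos": "Send recent pictures?",
    }

    def suggest(message):
        text = message.lower()
        return [action for cue, action in SUGGESTIONS.items() if cue in text]

    print(suggest("Want to grab lunch? Text me the address."))
    # -> ['Create a calendar event?', 'Share your current location?']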

Although none of the features are unique to Apple, the company will be betting that they are slicker, better designed, and friendlier on its devices than they are on other platforms. After all, these are the areas where Apple has excelled in the past.

Unlike Google and Facebook in their recent keynotes, Apple didn't dwell on the fact that many of its new features are supported by AI. But just because Apple's not shouting about something doesn't mean it's not interested in it. Look at the self-driving car it's reportedly developing, for example.

Behind closed doors, Apple is likely doing a lot of AI research and development work in many of the same areas that Google and Facebook have been shouting about. You just won't know about it until Apple figures out how to use the technology in its products.


NOW WATCH: Mark Cuban explains why downloading Snapchat is a huge mistake

Google has created a new AI research group in Europe to focus on machine learning (GOOG)


[Image: Google's Zurich office]

Google announced in a blog post on Thursday that it has set up a new AI research group in Europe to focus on machine learning (ML).

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.

Google Research, Europe — as the group is known — is based in Zurich, Switzerland, home to Google's largest engineering operation outside the US.

Google said the group, which is expected to grow to over 100 people in the coming years, will focus on three key areas: machine intelligence, natural language processing and understanding, and machine perception.

Companies like Amazon, Facebook, and Microsoft are all investing heavily in these areas as they look to make their platforms and services more intelligent.

Google Research, Europe will be part of the wider Google Research operation, which includes thousands of people worldwide. However, unlike many of the other divisions, its remit will be to focus purely on machine learning, which can, for example, help a machine to understand whether a photo shows a cat or a dog.
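
That cat-or-dog example is a good way to see what "learning without being explicitly programmed" means. Here's a tiny Python sketch using scikit-learn, with made-up features standing in for the pixel-level work a real vision system does:

    # Instead of hand-writing rules to tell cats from dogs, let a model infer
    # them from labelled examples. Features are invented: [weight kg, ear cm].
    from sklearn.tree import DecisionTreeClassifier

    X = [[4, 7], [5, 8], [30, 10], [25, 12]]
    y = ["cat", "cat", "dog", "dog"]

    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[6, 7]]))   # -> ['cat'], learned rather than programmed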

Researchers at Google Research, Europe, are likely to collaborate with scientists at Google DeepMind — a London-based AI research lab that's building general-purpose self-learning algorithms.

Google Research, Europe, will be led by Emmanuel Mogenet. In the blog post published Thursday, Mogenet said:

"Google Research, Europe, will foster an environment where software engineers and researchers specialising in ML will have the opportunity to develop products and conduct research right here in Europe, as part of the wider efforts at Google."

Elsewhere in his post, he said Google is using machine intelligence to power products that are used by hundreds of millions of people a day, including Translate, Photo Search, and Smart Reply for Inbox.

[Image: Google machine learning]

"One of the things that enables these advances is the extensive collaboration between the Google researchers in our offices across the world, all contributing their unique knowledge and disseminating ideas in state-of-the-art Machine Learning (ML) technologies and techniques in order to develop useful tools and products."

Mogenet added that Europe is home to some of the world’s "best technical universities, making it an ideal place to build a top-notch research team."

Google engineers in Zurich have previously developed the engine that powers Knowledge Graph, as well as the conversation engine that powers the Google Assistant in Allo.

Elsewhere, Facebook is also taking advantage of the academic expertise on offer in Europe with a dedicated AI research lab in Paris.


NOW WATCH: How to find Netflix’s secret categories

Elon Musk's $1 billion nonprofit wants to build a robot to do housework


[Image: Elon Musk]

Elon Musk has built cars and rockets. Next up: domestic robots. 

OpenAI — the artificial intelligence research nonprofit co-chaired by Tesla Motors CEO Musk and Y Combinator President Sam Altman — wants to build a robot for your home.

 

Building a robot, OpenAI's leadership explains in a blog entry on Monday, is a good way to test and refine a machine's ability to learn how to perform common tasks. By "build," OpenAI means taking a current off-the-shelf robot and customizing it to do housework.

"More generally, robotics is a good testbed for many challenges in AI," says the blog entry.

OpenAI's mission is to research AI and machine-learning technologies with an eye toward making sure robots don't one day go rogue and destroy humanity.

When OpenAI launched in December 2015, it secured $1 billion in funding from a who's who in tech, including Altman and Musk as well as Silicon Valley luminaries like Jessica Livingston and PayPal cofounder Peter Thiel.

Apart from robotics, OpenAI says that its other big ambitions are around chatbots, or "intelligent agents" that can talk to you in plain, natural speech. You know, kind of like on Facebook Messenger.

One of OpenAI's goals is to build a chatbot agent that can go beyond simple tasks like looking up movie times or doing quick translations, and move up to holding a conversation, truly understanding a document, and even asking clarifying questions when it doesn't understand something.
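
A bare-bones sketch of that "ask when unsure" behavior might look like the Python below. The intents, cue words, and confidence rule are invented for illustration; real systems score intents with learned models, not keyword counts:

    # Toy intent matching that asks a clarifying question at zero confidence.
    INTENTS = {
        "movie_times": ["movie", "showtime", "cinema"],
        "translate": ["translate", "french", "spanish"],
    }

    def respond(utterance):
        text = utterance.lower()
        scores = {name: sum(word in text for word in words)
                  for name, words in INTENTS.items()}
        best, score = max(scores.items(), key=lambda kv: kv[1])
        if score == 0:                    # unsure: ask instead of guessing
            return "Did you want movie times, or a translation?"
        return "OK, handling your '%s' request." % best

    print(respond("what's playing at the cinema tonight?"))
    print(respond("hmm, the thing from before"))   # triggers a clarification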

[Image: Sam Altman]

OpenAI also says that it's been inspired by the work of Google's DeepMind, which beat world champions at the game of Go. To that end, OpenAI wants to build intelligent agents that can conquer games.

"Games are virtual mini-worlds that are very diverse, and learning to play games quickly and well will require significant advances in generative models and reinforcement learning," OpenAI writes.

And while this isn't as flashy as the others, OpenAI says that its first priority is to measure its success with a common set of criteria that can be applied to all of these robots and chatbot agents. OpenAI says that it's working on a "living metric" to measure intelligence, no matter what kind of test the AI is undergoing.

Both Musk and Altman are wary of the power of artificial intelligence. Musk, in particular, thinks that sci-fi visions of a world overrun by robots are actually within reason, while serial investor Altman once said that "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."

As a nonprofit, OpenAI isn't committed to releasing commercial products soon, if ever. Still, with this many entrepreneurial types behind it, it would be more surprising if, say, Tesla didn't end up taking some OpenAI robot designs and building them for real. 

SEE ALSO: HBO's 'Silicon Valley' nailed something that lots of real-life tech startups get wrong


NOW WATCH: This machine fixes the worst thing about doing laundry

5 ways to get more likes on your selfies according to a robot


[Image: Kim Kardashian taking a selfie]

How do you look your best in a selfie on National Selfie Day? Luckily for anyone wondering, a robot has figured it out.

An artificial intelligence (AI) system built by Stanford University researcher Andrej Karpathy looked at 2 million selfies and learned what makes a great selfie.

In other words, it discovered what elements make up a selfie that gets more likes, favorites, and hearts.
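
Under the hood this is a prediction problem: train a model on photos labeled by how many likes they received, then score and rank new ones. Karpathy used a convolutional network over raw pixels; the Python sketch below fakes that with a couple of invented hand-built features, just to show the ranking step:

    # Rank candidate selfies by a model's predicted chance of getting likes.
    # Features are stand-ins: [face fraction of frame, brightness, people count].
    from sklearn.linear_model import LogisticRegression

    X = [[0.4, 0.8, 1], [0.3, 0.9, 1], [0.1, 0.3, 4], [0.2, 0.4, 3]]
    y = [1, 1, 0, 0]                             # 1 = got lots of likes

    model = LogisticRegression().fit(X, y)
    candidates = {"solo_bright": [0.35, 0.85, 1], "dim_group": [0.15, 0.35, 5]}
    ranked = sorted(candidates,
                    key=lambda k: model.predict_proba([candidates[k]])[0][1],
                    reverse=True)
    print(ranked)                                # best-scoring selfie first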

Here are a few tips the program came up with for women wanting to take a better selfie, according to Karpathy's blog post about the project:

1. Be female. Sorry, guys.

2. Show your long hair. Wear a wig if you don't have any?

3. Take it alone.

4. Use a light background or a filter. Selfies that were very washed out, filtered black and white, or had a border got more likes. According to Karpathy, "over-saturated lighting ... often makes the face look much more uniform and faded out."

5. Crop the image. Make it so your forehead gets cut off and your face is prominently in the middle third of the photo. Some of the "best" selfies are also slightly tilted.

Below are the cream of the crop — the top 100 of 50,000 images that the AI analyzed after being trained on more than 2 million selfies.

Notice that of the best 100 selfies, not a single man is included, and there are very few people of color.

[Image: the top-scoring selfies]

And here are the male images that did the best. You can see similar trends cropping up, especially the number of images with white borders, though the male images more frequently included the whole head and shoulders, Karpathy writes:

[Image: the top-scoring male selfies]

On the other hand, the worst images, or the selfies that probably wouldn't get as many likes, were group shots, badly lit, and often too close up.

So if you want your selfie to get a lot of love, make sure you follow the rules above.

Guia Marie Del Prado wrote a previous version of this post.


NOW WATCH: This guy took a selfie every day for 8 years
