
Here's what happened during Google's biggest event of the year (GOOG)


MOUNTAIN VIEW, California — Google CEO Sundar Pichai said there were now 2 billion active devices based on the company's Android software and touted the company's new artificial-intelligence efforts as he took the stage at I/O, Google's annual developers conference, on Wednesday.

He also announced a new product called Google Lens, which will be part of the Google Assistant for Android phones. Lens can identify objects in the real world for a variety of uses.

"It's been a very busy year since last year," Pichai said. "We've been focused on our core mission of organizing the world's information."

He also said Google's efforts in AI were solving the world's problems at scale.

For example, Pichai spoke about how machine learning and AI were being used in the medical and scientific industries to analyze molecules and other big-data problems. The company launched Google.ai to document these efforts.

Google Assistant

Later, Scott Huffman, vice president of Google Assistant, detailed its advances, such as the ability to type in queries.

But Lens has some of the most impressive new features. You can scan just about anything with your phone's camera and have Assistant analyze its contents. For example, if you take a photo of a concert venue, you can listen to an artist's music, buy tickets, and more.

And the big headline: Google Assistant is coming to the iPhone. It's no longer stuck on Android.


Google is also expanding its third-party support for Assistant. Before, third parties could build "actions" for the Assistant in the Google Home speaker. Now they'll work wherever Assistant is, including Android phones and the iPhone.

It also works with transactions. In a demo, Google showed off a Panera Bread integration where the user could order with her voice. Google says it's just like ordering from a human in a store.

Google Home

Google Home, the company's connected speaker, also got some new features. The Assistant inside Home now has proactive alerts, so when you ask it "what's up?" it can send you notifications based on your schedule, like telling you when to leave for your next meeting.

But the big news is that Google Home will soon let you call any number in the US or Canada from the speaker for free. (Amazon's Echo, on the other hand, can call only other Echos or devices running the Alexa app.) Google says it can even recognize individual voices, so the person you're calling sees the appropriate caller ID.

Calling will roll out to the Home speaker later this year.


Google also gave updates on Google Photos, its online storage service. It can now use facial recognition and other signals to pick out your best photos and suggest ones you might want to share with your friends. It also has a new feature that lets you order prints and books of your photos.

As for YouTube, Google updated its Super Chat feature for comments that adds a new layer of interactivity to live videos. Users can pay to have things happen in live broadcasts, and it's up to the broadcaster to get creative with it. (For a demo, YouTube brought out its stars The Slow Mo Guys, who got pelted with balloons for $500.)

Android

Google didn't forget about Android. It showed off some features in Android O, the new version coming this fall. It's not a major update, but there are some new things worth highlighting — for example, you can now use picture-in-picture video to watch video or video chat while doing other stuff on your phone.

Android is getting a new auto-fill feature that knows your passwords and logins for apps so you can sign in with one click. There's also a new copy/paste feature that uses machine learning to automatically predict text you'd likely want to select, such as an address.

But overall, Android O's improvements help with stuff under the hood, like battery life and processing power. Most people probably won't notice it. You can download a beta version of Android O starting Wednesday if you have a Google Pixel phone.


Virtual reality and augmented reality

Google's Daydream virtual-reality platform now supports standalone headsets, not just ones that need to be powered by smartphones. HTC and Lenovo will be two of the first companies to make standalone Daydream headsets that launch later this year.

As for augmented reality, Google is bringing its Tango feature, which maps indoor environments, to more Android phones this year. Tango could be used to help people find things in crowded stores, for example. Google calls this a visual positioning service, or VPS, since Tango uses visual cues in the environment to create maps.


That's it! Overall, we didn't see any massive or unexpected news, but we did get a peek at how Google views AI and machine learning as its next major platforms.



Google and Amazon are at war to control your home, and the effects will be felt for years (AMZN, GOOG)


Amazon and Google are in the early stages of an epic battle to control your home, the effects of which will be felt for years to come.

Artificial intelligence (AI) and virtual assistants that promise to organise your life are just about the hottest topic in tech right now. Everyone has one, whether it’s Amazon’s Alexa, the Google Assistant, Apple's Siri, or Microsoft’s Cortana.

They tell you the news, read you your calendar, play music, control your heating — and they're no longer limited to just your smartphone. Both Amazon and Google are racing to bake their virtual assistants into as many devices as possible, as they struggle to gain the upper hand in an epic new frontier for tech companies: The home.

On Wednesday, at its annual I/O developer conference, Google announced the Google Assistant SDK. This will let developers and product makers build the Google Assistant into just about anything. Want to stick it in a new refrigerator? Sure thing. How about an alarm clock? Not a problem. A toaster? Go right ahead.

Both Google and Amazon have already had their AI assistants integrated into some home appliances, including fridges. (Samsung has also signalled its intention to include its Bixby assistant in appliances.) But Google's SDK promises to radically accelerate the deployment of AI assistants into countless other products.

It's crucial for the warring companies that they get the upper hand in these early stages — because once customers are locked into an ecosystem, they're far less likely to change down the line.

People tend to replace their smartphone every one to two years. Every time they get a new one, there's the option to switch platforms — whether that's from Android to iOS, or Windows Phone to Android. Sure, most people don't, but it's not too difficult.

In contrast, people keep home appliances for far longer, and they certainly don't replace all of them at once.

So once you’ve got an Alexa-powered fridge, you’ll be using Alexa for years. And if all your appliances are running Google Assistant, and one breaks, you’re not going to buy an Alexa-powered one to replace it. (Because the AI assistants run in the cloud, you also won't need to buy new devices to upgrade them — they'll get smarter over time automatically.)

The promise of virtual assistants is that they work seamlessly across devices to help organise and streamline your life. On a practical level, that’s great for consumers — but it ties them in more tightly to a single tech company’s ecosystem than ever before.

Right now, it’s early days. For Google and Amazon, it's all still to play for. But it won’t be that way for long.


How Salesforce CEO Marc Benioff uses artificial intelligence to end internal politics at meetings (CRM)


Salesforce CEO Marc Benioff isn't just predicting that artificial intelligence will one day help run everyone's companies; he's already using it at Salesforce today.

He's got a special, not-yet-released version of Einstein, the artificial-intelligence tech baked into the company's products, helping him run Salesforce, he told Wall Street analysts on Thursday during the company's quarterly conference call.

He invites this version of Einstein, called Einstein Guidance, to his Monday morning staff meeting, where up to 30 top executives update him on their progress for the quarter. Einstein Guidance is designed to do forecasting and modeling. 

It's especially useful to make sure that managers aren't trying to snow him.

He said (emphasis ours):

"Like in a lot of our technologies, we really become the first and fastest-moving user. We even have a piece of Einstein that we've not yet rolled out to our customers called Einstein Guidance.

"This is a capability that I use with my staff meeting, when I do my forecast and I do my analysis of the quarter, which happens every Monday at my staff meeting ...

"We have our top 20 or 30 executives around the table. We talk about different regions, different products, different opportunities. And then I ask one other executive their opinion ... and that executive is Einstein.

"And I will literally turn to Einstein and say, 'You've heard all of this, what do you think?' And Einstein will give me the over and under on the quarter and show me where we are strong and where we are weak and sometimes it will point out a specific executive, which it's done in the last three quarters, and said, this executive is someone who needs specific attention during the quarter ...

"I have the ability to talk to Einstein and ask everything from product areas I should be focusing on, geographies I should be focusing on, linearity of bookings during the quarter. Every question I could possibly have, I'm able to ask Einstein.

"And for a CEO, typically the way it works, is you have various people, mostly politicians and bureaucrats at your staff meeting who are telling you what they want to tell you to get you to believe what they want you to believe. Einstein comes with no bias. It's just based on the data.

"To have Einstein's guidance has transformed me as a CEO."

Now, while it sounds like Einstein is listening to and processing a verbal discussion, that's unclear and somewhat unlikely. Einstein is designed to build its models and suggestions from the data stored in Salesforce apps. However, in March, Salesforce and IBM announced a deal that will integrate Einstein with IBM's Watson, a form of AI geared to work with unstructured information, and specifically with language.

One day, most CEOs will likely have a so-called AI "executive" listening in on meetings and keeping everyone honest with the data.

"AI is the next platform. All future apps for all companies will be built on AI," Benioff predicts.


Facebook, Microsoft, and Google are raising the bar higher than ever for Apple (FB, MSFT, GOOG, GOOGL, AAPL, AMZN)


We're coming up on the end of the busiest season in tech — the month-and-a-half stretch where the biggest companies in the market reveal their grand visions for the next 12 months at their annual mega-conference events.

Facebook kicked it off in late April with its F8 conference, followed in early May by Microsoft Build and then the just-completed Google I/O conference. This peculiarly Silicon Valley marathon will conclude early next month, when Apple hosts its Worldwide Developers Conference (WWDC).

And, to be honest, this stretch has been kind of a snooze so far. Facebook, Google, and Microsoft all used their time in the spotlight to reiterate their commitments to artificial intelligence and augmented reality. It's super interesting in a philosophical sense, but it'll be a while before these big ideas congeal into finished products.

But if you take a step back to look at the bigger picture, something important is happening: In aggregate, Facebook, Microsoft, and Google are setting an ever-higher bar that Apple will have to clear if it wants to continue its winning streak into the next decade. And even the newest, shiniest iPhone may not be able to help Apple if it can't clear that bar.

The phantom menace

At Facebook's F8 conference, Mark Zuckerberg made the provocative declaration that augmented reality, the technology for projecting digital images over the real world, could render TVs and every other gadget with a screen obsolete. Why carry a phone when your games, videos, and conversations are projected right into your eyes?


That idea alone should give Apple, which derives the vast majority of its revenue from the iPhone, cause for concern. While Apple is said to be working on augmented reality features for the next iPhone, Facebook is envisioning an end to the phone itself, possibly as soon as the next decade.

Microsoft and Google both gave lip service to virtual- and augmented-reality tech at their respective events. But their events had broader themes that signal an equally important but far more subtle threat to the future of Apple.

All about the data

At its event, Microsoft showed off the Microsoft Graph, a system for tracking the relationships between your documents and files across all your devices. Later this year, you'll be able to start working on a Word document on your iPhone, switch to a Windows 10 PC and pick up where you left off, and then have the Cortana voice assistant send it to your boss.

Google, at its own event, highlighted Google Lens, a new "computer vision" system coming to Google Photos and the Google Assistant. Using Lens, you'll be able to get more information on a band by simply taking a picture of the marquee outside the venue where it will play or automatically connect your phone to a WiFi network by just taking a picture of the nearby router. 


The common thread here is each of these companies is collecting lots of data on its users and is attempting to make sense of it. Facebook knows all about your social life. Microsoft knows all about your professional life. Google has insane insight into your life and hobbies. All of them are using that data to offer intensely personalized experiences that help users make sense of the increasingly complex digital world.

Google, for example, has the self-given mission to "organize the world's information and make it universally accessible and useful." CEO Sundar Pichai now says that the only way to accomplish that is to give every user their own "personal Google." As part of that vision, each Google user would see information tailored to them, delivered just when they're most likely to want or need it.

By focusing on data rather than devices, Microsoft, Google and Facebook lessen the risk that they'll become overly reliant on any one product or class of gadgets. Pichai's "personal Google," exemplified today by the Google Assistant intelligent assistant app, already works on the iPhone, Android devices and the Google Home smart speaker. You'll soon be able to use it to interact with home appliances. If Zuck is right about smartphones vanishing, the Assistant is a vital hedge for Google, which today relies heavily on the near-ubiquity of Android.

The Apple factor

The impending arrival of this artificial intelligence-filled, data-driven future could pose some very tough challenges for Apple. 

First and foremost, Siri, Apple's own intelligent assistant, is still frustrating to use. Alexa, Amazon's Siri counterpart, has won accolades for its excellent speech recognition and for how easily it lets users control their smart-home devices via voice commands. Siri, by contrast, can be inconsistent and understands you poorly. Meanwhile, HomeKit, Apple's technology for letting its devices control smart-home products, has far less support from home-automation gadget makers than its rivals' platforms do.


It's not that Apple doesn't have access to any data. Siri is probably the most-used voice assistant out there, which gives the company a view into both users' voice interactions with their devices and the web and app searches they conduct via voice. And lots of data flows through the Mail, Maps and other Apple apps that come pre-installed on iPhones, iPads and Macs.

What Apple lacks

What Apple is lacking is a coherent strategy for tapping into all that information. And because of its public commitment to privacy, the company has been cautious about the ways it collects and uses customers' data.

Consequently, Apple hasn't really shown off a super-compelling way it's using artificial intelligence and data. Apple has certainly integrated some intelligence into iOS, with Siri suggesting apps you might want to use, Apple Maps warning you when to leave to make it to your next meeting, and Photos tagging faces in your pictures. Still, as any Google Photos user on an iPhone would tell you, Apple’s default photo app is nowhere near as intelligent as what Google’s cooked up, and that extends to the rest of the operating system’s features, too.

Apple has staffed up with artificial intelligence experts. But we've heard through the grapevine that they're more focused on a self-driving car project than they are on improving Siri or making sense of the data collected from iPhone or Mac users.


Apple is probably more aware of this than anybody, and we have to imagine that the company is working behind the scenes to up its artificial intelligence game. The company is, after all, famous for preferring to be the best rather than the first.

And despite its problems, it would be foolish to predict Apple's demise. This wouldn't be the first time the company has trailed behind rivals. It didn't make the first MP3 player, the first smartphone or the first tablet, yet it eventually found huge success in all three markets. 

Still, with Google, Amazon, Microsoft, and Facebook presenting such compelling examples of futuristic, data-driven, intelligent systems, Apple faces a bigger challenge than ever before. So when it's Apple's turn on the big stage, it'll have to show off more than just slick new hardware — it'll have to show a real vision for the future.


Google is getting ahead of itself in its quest to make the future happen now (GOOG)


One of the most unusual demos at Google's annual I/O conference last week was a custom-built automatic cocktail maker powered by the company's artificial-intelligence system, Google Assistant.

Event attendees could order up a mango mixer, say, by just talking to the drink dispenser. The machine, which some developers hacked together to show how Google's AI can be added to just about any gadget, would quickly serve up a cocktail with just the right ingredients drawn from tubes on its top.

You're already familiar with using Google's services on your phone or computer. Google wants to be in many more places than those, and it's planning on using AI to get there.

As Google officials laid out at last week's conference, the company envisions a future where its AI is inside everything from dishwashers to cars. It helps manage your digital photos. And yes, it could even help mix drinks.

The search giant argued this development will be great for consumers. By injecting a little of its smarts into the stuff you use every day, the company will be improving lives.

But the company has other reasons for pursuing its vision. Google and other companies see AI as the next major computing breakthrough after smartphones. The payoff for the company that dominates AI could be huge. (Google doesn't know how to make money off AI yet, but that's a problem for another day.)

But the aggressive AI push by Google and its rivals raises the question: Do we really need Google (or Alexa or Siri or whatever else) inside of everything?

I don't think so.

Nothing we've seen from Google or its competitors to date has shown that voice commands and AI are easier or faster to use than smartphone apps, computer programs or web apps. AI may help extend Google and others' reach beyond phones, but it'll be a very, very long time before anything comes along that's capable enough to replace an app-empowered smartphone as your primary computing device.

Unfortunately, a lot of what we're seeing today with AI and voice control is trying to do just that.

During Google's keynote, one of the demos showed how Panera Bread built an app on top of Google Assistant that allows a customer to order by just talking to it. The demonstrator claimed it was just like ordering at the counter with another human at the store.

It was an impressive feat for a digital assistant. But you could place an order much more easily by just tapping on Panera's smartphone app or visiting its website. You shouldn't have to go through a lengthy verbal back-and-forth with a faceless virtual assistant just to get the Panera sandwich you want.

As tech analyst Ben Thompson put it:

I've experienced similar frustrations using voice assistants like Alexa to order an Uber or control smart lights. While they technically work, they're not easier or faster than just using a smartphone app.

The dubiousness of Google's vision seems even clearer when it comes to AI being embedded into everyday devices like thermostats, as we saw a few weeks ago when Ecobee announced a thermostat with Alexa inside. I can't think of a single scenario where I'd rather talk to a virtual assistant in my thermostat than just use an app on my smartphone.

Voice-powered AI can be useful for simple web searches; straightforward queries, like "What's the weather?"; and basic commands, like "Play the new Katy Perry song." But it's poorly suited for just about everything else you'd want to do. Apple's marketing boss Phil Schiller put it pretty well a few weeks ago in an interview with NDTV when he was asked about the rise of digital assistants in devices like the Amazon Echo.

"Voice assistants are incredibly powerful, their intelligence is going to grow, they’re gonna do more for us, but the role of the screen is gonna remain very important to all of this," he said.

In other words, if your goal is to kill the screen, you've blown it.

All of the major tech companies are investing in AI in the belief that it will replace the smartphone as the dominant platform in tech. What none of them seem to realize is that AI won't replace the smartphone but improve it. It won't kill the category; it'll just make it more useful.

Google is the company perhaps least in touch with this reality. At its event last week, it went so far as to claim it's shifting from a "mobile first" company to an "AI first" one. As exciting as Google's (and Amazon's and Microsoft's and Apple's) advancements in AI have been, they still don't come close to matching that ambition.

The truth is we're going to be stuck with smartphones for a very long time — think decades, not years. And while a Google Assistant-powered cocktail mixer makes for a cool demonstration, it goes to show that voice-powered AI these days is more entertaining than practical.


Google DeepMind is edging towards a 3-0 victory against world Go champion Ke Jie (GOOG)


Google DeepMind has won its second game against world Go champion Ke Jie in China, winning the series and putting it one step closer to a 3-0 victory.

The company's self-learning AlphaGo AI agent is playing 19-year-old Ke Jie at the "Future of Go Summit" near Shanghai this week in a three-game match.

The win against Ke Jie — who has been playing Go since the age of 5 — puts AlphaGo just one victory away from a 3-0 win.

"#AlphaGo wins game 2," wrote Google DeepMind cofounder and CEO Demis Hassabis on Twitter. "What an amazing and complex game! Ke Jie pushed AlphaGo right to the limit."

AlphaGo won the first game by half a point on Tuesday, which is the closest margin possible in Go, a two-player board game that originated in China around 3,000 years ago. The simple yet complex game has been incredibly difficult for computers to crack because of the sheer number of moves possible.
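A rough back-of-the-envelope calculation (an illustration, not a figure from the article) conveys the scale: each of the 361 points on a 19x19 board can be empty, black, or white, so there are at most 3^361 board configurations, a number with roughly 172 digits.

```python
import math

# Upper bound on Go board configurations: each of the 361 points
# on a 19x19 board is empty, black, or white. (Not every such
# configuration is a legal position, so this is only a ceiling.)
points = 19 * 19
upper_bound_digits = points * math.log10(3)
print(f"3^{points} has about {upper_bound_digits:.0f} digits")
```

Even this loose ceiling dwarfs the number of configurations of a chessboard, which is why brute-force search, which worked for chess, was never viable for Go.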

Dave Silver, lead researcher for AlphaGo at DeepMind, explained how DeepMind had assessed Ke Jie's impressive play. "We can always ask AlphaGo how well it thinks it's doing during the game," he said in a statement. "And when we asked today, AlphaGo thought it was perfectly balanced. If anything, AlphaGo thought Ke Jie had come out better in the opening. It was only towards the end of the game that AlphaGo thought it would win."


The games are being streamed live on YouTube, but the millions of Go fans in China are unable to watch them without a VPN [virtual private network] because the Google-owned service is banned in China. The Chinese government also issued a censorship notice to broadcasters and online publishers, warning them not to livestream the first game, according to China Digital Times.

The Financial Times reported on Monday that DeepMind's trip to China is part of a wider Google "charm offensive" in the communist-run country.

DeepMind writes on its website that it hopes to uncover more secrets of the ancient game at the "Future of Go Summit," where it'll also be playing different versions of Go. The company is also visiting a number of Chinese companies and research institutes to talk about AI research.

AlphaGo beat its first world champion last March, when it defeated South Korea's Lee Sedol in a five-game tournament.



Mark Zuckerberg calls for exploring basic income in Harvard commencement speech


In his Harvard commencement speech on Thursday, Facebook CEO Mark Zuckerberg advocated exploring a system in which all people receive a standard salary just for being alive, no questions asked.

The system, known as universal basic income, is one of the trendiest economic theories of the past few years. Experiments in basic income have popped up in Kenya, the Netherlands, Finland, Canada, and San Francisco, California, among other places.

Basic-income advocates say the changing nature of work — from human labor to artificially intelligent robots — combined with rising wealth inequality signal the need for an overhaul of how money is distributed.

"We should have a society that measures progress not just by economic metrics like GDP, but by how many of us have a role we find meaningful," Zuckerberg told the crowd. "We should explore ideas like universal basic income to make sure everyone has a cushion to try new ideas."

The statement was Zuckerberg's first public endorsement of the idea, which makes him somewhat late to the party, as far as Silicon Valley goes. Tech executives like Tesla CEO Elon Musk, Y Combinator President Sam Altman, and Facebook cofounder Chris Hughes — who runs a basic-income fund called the Economic Security Project — have endorsed basic income.

Many point to economic forecasts that say robots will displace much of the human workforce in the coming decades. A report from Oxford University in 2013, for instance, found that about 50% of jobs could be taken over within the next 10 to 20 years. A McKinsey report released in 2015 backed up that prediction, suggesting that today's technology could feasibly replace 45% of jobs right now.

"As our technology keeps on evolving, we need a society that is more focused on providing continuous education through our lives," Zuckerberg said. "And yes, giving everyone the freedom to pursue purpose isn't going to be free. People like me should pay for it, and a lot of you are going to do really well, and you should, too."



Apple is working on a chip to power artificial intelligence in future gadgets, including the iPhone


Apple is working on chips to power artificial-intelligence capabilities in its gadgets, Bloomberg's Mark Gurman reported Friday.

The chips would handle more advanced AI tasks, such as facial recognition, and help better manage battery life and power, the report says. The chips could also be used in future products, like self-driving cars or digital glasses, in addition to iPhones and iPads.

The news comes as Apple's competitors like Google, Amazon, and Microsoft have made significant advancements in AI. At its developers conference last week, Google showed how it was adding AI to a variety of products, including phones, connected speakers, and cars. Apple is seen largely as behind the competition when it comes to AI, which could power the next wave of connected gadgets.

Last year, Apple made some improvements to its Siri AI assistant, giving access to third-party developers in limited categories like messaging and payments. Apple's developers conference starts June 5, and many will be paying attention to more advancements in Apple's AI.



I used software to analyse if my relationship is doomed to failure


Artificial intelligence can now take a guess at whether you and your partner can go the distance.

An AI firm called DataRobot has built a tool based on Stanford University data that asks you six questions about your relationship, and predicts your chances of staying together for the next couple of years.

I decided to give it a shot.

First, a bit more about how DataRobot's Labs arm built the tool.

The quiz is based on a 2009 Stanford study of around 4,000 Americans called "How Couples Meet and Stay Together." Stanford did follow-up surveys to see how many couples were still together after a few years, and made all the data publicly available.

DataRobot's main business involves predictive modelling, which uses data to predict outcomes. The company's Labs arm took Stanford's data to build a model that could predict relationship outcomes.
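DataRobot hasn't published its model, so as a sketch of the general technique (not DataRobot's actual system), here is a tiny logistic-regression classifier fitted to made-up couples data. The six features loosely echo the quiz; the feature encoding, training data, and outputs are all invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.05, epochs=3000):
    """Fit a logistic-regression model with plain gradient descent."""
    n = len(features[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            for i in range(n):
                weights[i] -= lr * err * x[i]
            bias -= lr * err
    return weights, bias

# Hypothetical features, loosely echoing the quiz:
# [married, education level, age gap, years together, young kids, shared social circle]
couples = [
    ([1, 2, 0, 19, 0, 1], 1),   # long-married couple that stayed together
    ([1, 1, 2, 10, 1, 1], 1),
    ([0, 2, 4, 1, 0, 1], 1),
    ([0, 1, 8, 1, 2, 0], 0),    # couples that split up
    ([0, 0, 10, 0.5, 0, 0], 0),
    ([0, 0, 5, 0.2, 1, 0], 0),
]
weights, bias = train([x for x, _ in couples], [y for _, y in couples])

def predict(x):
    """Probability of staying together under the fitted model."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# An unmarried, well-educated couple, together one year, four-year age gap:
print(round(predict([0, 2, 4, 1, 0, 1]), 2))
```

The real work, as the article goes on to explain, is less in the fitting and more in choosing which of the 150 predictive variables are acceptable to ask about.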

How did the company decide on just six questions?

Greg Michaelson, DataRobot Labs' director, told Business Insider it found 150 variables in the data that indicated whether a couple was more likely to stay together. But the company picked six that felt non-intrusive, and that people would feel comfortable answering. For example, the quiz doesn't ask you if you're living with your other half, even though that's a factor in how likely you are to stay together.

"That’s kind of got the icky feel to it," said Michaelson. "We wanted to stay away from anything too personal and sensitive."

I decided to try DataRobot's tool. Obviously, I back my relationship, and wanted to see if DataRobot's model would do the same.

I did get the permission of my other half, though only one of you needs to take the quiz. His view: "This is basically everything that's wrong with machine learning! But sure, why not!"

What could go wrong?

The first question asks if you're married.

DataRobot first question

I'm not married to my boyfriend, and apparently that's not good. It's probably a bit much to expect after a year together though. And apparently it's better than being divorced.

Then you state your education level.

DataRobot 2

Higher education is pretty common in the UK, so this isn't much of a surprise. But as per DataRobot's note, this is better news for my relationship than the marriage question.

And then how old you are.

DataRobot 3

Does four years count as a big age gap? I don't know, but it looks like our chances might be better than Donald and Melania Trump's.

Being together for a long time helps.

DataRobot 4

DataRobot Labs' Michaelson has been with his wife for 19 years, so I'm barely in the race here.

Having kids can be bad for your relationship.

DataRobot 5

Young ones, anyway.

But family is good for your relationship.

DataRobot 6

It's a peer pressure thing. If all of your friends and family see you and your partner as a couple, it's harder to break up.

And finally ... the results

DataRobot 7

An 80% likelihood of staying together for another two years! I'll take that. DataRobot's Michaelson beat me with a 90% chance, but he has 18 years of marriage on me.

Here's DataRobot's breakdown

DataRobot 9

The summary: Your chances of staying with someone are better if you two have a long marriage, lots of family members, fewer young kids around, and a good level of education.

What would Michaelson say to anyone who hates the idea of machine learning being used to predict personal lives?

"I totally understand," he said. "We kept that sort of thing in mind as we were picking out the questions. The other thing is — we’re not doing anything fancy, this is letting the data speak."

"My perspective is that I'd rather know what the data says, then make an informed decision. Maybe you take this quiz, maybe you say it's rubbish and that you don't like it. But at least you know!"


The $2,500 answer to Amazon's Echo could make Japan's sex crisis worse


Japan has a sex problem. The country's birthrate is shrinking year after year, to the point where deaths are outpacing births.

Simply put, Japan's population is decreasing.

Japanese birthrate

But let's be clear: Population change is a complicated subject affected by many factors.

Western media often correlates the decline in Japan's population size with recent studies of Japanese sexual habits and marriage. A 2016 study by the National Institute of Population and Social Security Research in Japan, for instance, found that "almost 70 percent of unmarried men and 60 percent of unmarried women are not in a relationship."

But just because people aren't in relationships doesn't mean they don't want companionship, of course. And that's where something like Gatebox comes in.

Gatebox AI

Yes, that is an artificially intelligent character who lives in a glass tube in your home. Her name is Azuma Hikari, and she's the star of Gatebox — a $2,500 Amazon Echo-esque device that acts as a home assistant and companion.

Here's what we know:

SEE ALSO: Japan's sex problem is so bad that people are quitting dating and marrying their friends

DON'T MISS: Japan's huge sex problem is setting up a 'demographic time bomb' for the country

A Japanese company named Vinclu created the Gatebox.

It's about the size of an 8-inch by 11-inch piece of paper, according to Vinclu. And there's a good reason for that: The device is intended to be "big enough for you to be able to put right beside you." You'll understand why you'd want a Gatebox so close soon enough.



The Gatebox is similar to Amazon's Echo — it's a voice-powered home assistant.

The Gatebox has a microphone, since you operate it using your voice, as well as a camera.

For now, it will respond only to Japanese; the company making Gatebox says it's exploring other language options. Considering that preorder units were available for both Japan and the US, we'd guess that an English-language option is in the works.



Gatebox does a lot of the same stuff that Echo does — it can automate your home in various ways, including turning on lights and waking you up in the morning.



See the rest of the story at Business Insider

This 'shopbot' allows you to find out where an outfit is from simply by taking a photo


glamix 3

Fashion lovers, rejoice — recreating the outfits you see on your social media feeds or even on the street is now as easy as taking a picture.

Glamix, a chatbot created by Israeli startup Syte, allows users to snap a photo of an outfit they like and send the image via Facebook Messenger to Glamix (@glamix.me), the "personal fashion assistant" the company calls "Gigi."

The photos that power the search can be taken directly from the Instagram pages of influencers or online magazines, or can simply be snapped by you and stored on your phone.

The AI bot, or "shopbot," platform then uses automatic responses and image recognition to identify the exact products you scanned, or similar (and sometimes cheaper) items available to buy online, allowing you to recreate any outfit.
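Syte hasn't disclosed how its matching works. A common pattern for this kind of visual search is to represent each catalogue image as an embedding vector and rank products by cosine similarity to the query photo's vector; the three-dimensional vectors and product names below are invented for illustration (real systems use vectors with hundreds of dimensions produced by a neural network).

```python
import math

# Invented catalogue: product name -> embedding vector.
CATALOGUE = {
    "red handbag":    [0.9, 0.1, 0.3],
    "black boots":    [0.1, 0.8, 0.2],
    "crimson clutch": [0.8, 0.2, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, top_n=2):
    """Rank catalogue items by similarity to the query image's vector."""
    ranked = sorted(CATALOGUE,
                    key=lambda name: cosine(query_vec, CATALOGUE[name]),
                    reverse=True)
    return ranked[:top_n]
```

A query vector close to the handbag's returns the two red items first, mirroring the "exact match plus similar alternatives" behaviour Fryman describes.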

The "visual search engine" has agreements in place with most major retailers in the UK, according to co-founder, CMO, and former City capital markets banker Lihi Pinto Fryman, as well as some in the US and France. She said that at the point of last week's launch, the bot could access around 10 million products.

In the UK, it has access to high street favourites like Topshop, ASOS, H&M, Zara, and Mango, as well as department stores House of Fraser, John Lewis, Selfridges, and Harrods, with the company earning a commission on each sale. In the US, it counts Bloomingdale's and Macy's among its stockists.

To find out where an outfit featured on a fashion blogger's Instagram post came from, first select the photo.

frugality

Then tap the three dots in the top right-hand corner; the drop-down menu gives you the option to "Share to Messenger."

frugality 2

Send the image to Glamix (@glamix.me). The bot replies right away, giving users some categories to choose from. We chose "Bags."

frug ss

Glamix then comes back with a range of 10 to 20 product options per photo, and users can filter products by price.

frug ss 2

"If, for example, you were to send an Instagram photo from the Gucci store, you would probably get the original version, as well as other options, as the machine will recognise the features of the product," Fryman said. "It's fun, instant, and addictive."

Glamix may throw up items from websites in different geographies, but it will only show products that can be shipped to your location, says Fryman, though that usually incurs higher delivery costs and longer shipping times.

glamix screenshot

Syte, a company focused on combining artificial intelligence and fashion, has spent the last three years developing the platform.

Fryman told Business Insider that it all began with a red dress.

While living in London, she was surfing the internet one day and came across the perfect dress, but couldn't find it for sale anywhere.

"I asked myself, 'How is it possible in 2014 that I see something I like but I can't just tap and get it?' It felt a bit surreal given we have achieved so much in tech, but yet fashion is so behind."

It got Fryman and her tech-minded husband, Ofer, thinking. Along with her brother Idan, the three then founded the company.

Now, in Fryman's opinion, a shopbot like Glamix is more user-friendly than an app. "People are tired of downloading endless apps, but everyone has Instagram and Facebook," she said.

The Glamix Instagram account @glamix.me currently has almost 45,000 followers.

It is targeting millennials, both male and female, and generally keen online shoppers. Eventually, Fryman says the plan is to offer new features, such as user discounts and sale alerts, once it has collected sufficient data about what they like.

"It's advertising in a completely different way — when we want them (brands) we can call them by tapping the image," says Fryman. "We want to change the way retailers interact with consumers, which the market is demanding."

See how the app works in the video below:


A hedge fund veteran is trying to bring quant trading to a new market — sports betting

  • Sports trading company Stratagem raising a £25 million fund to bet on sports;
  • Company set up by ex-hedge funder, run by former Goldman Sachs partner;
  • Stratagem uses artificial intelligence to analyse football, tennis, and basketball, identifying betting opportunities;
  • One of several startups trying to make betting markets more like financial markets.

Stratagem founder and head of trading Andreas Koukorinis

LONDON — "A good analogy is that we’re building these robots to let them run around the floor," Andreas Koukorinis, the founder of Stratagem, told Business Insider.

"Well, the first time we tried to deploy it the robot fell on its face."

The "robot" was a predictive analytics programme for sports betting, meant to use machine learning and artificial intelligence to crunch through huge amounts of data and find an edge in the market to bet on.

Koukorinis had been working on the programme for around a year by this point. He had quit $3.3 billion hedge fund giant Fortress in late 2011 with the idea of bringing the analytical rigour of hedge funds to sports betting. Quantitative trading — also known as quant trading — had long existed in financial markets, where complex mathematical models were used to identify trading opportunities. Why wasn't there something similar in sports?

"2013 was me in my living room with my wife being like what are you doing with your life?" he recalls. "You used to work in finance, we used to fly first class, and now you’re sitting here in a t-shirt in our living room with two guys, you guys are barely speaking — this is crazy. I said, no, no, there’s something here."

'This is the way to have an insight into AI for trading'

Today, Koukorinis' vision is starting to pay off. Stratagem now has an office of around 30 people on Russell Square, London, which includes former Goldman Sachs quants and ex-CERN scientists. The predictive model — the robot that fell down — is up and running and bringing in money for the company. Stratagem has an internal syndicate, betting its own money and making a return.

The company is also hoping to raise a fund of around £25 million by the autumn that it will invest — in effect, a sports betting hedge fund. Most of the trading will be automated.

Stratagem CEO Charles McGarraugh

"The pitch [to institutional investors] is really straightforward," says Charles McGarraugh, Stratagem's CEO. "Sports lend themselves well to this kind of predictive analytics because it’s a large number of repeated events. And it’s uncorrelated to the rest of the market. And the duration of the asset class is short — things can only diverge from fundamentals for so long because then you’re on to the next one pretty quickly."

McGarraugh spent 16 years at Goldman Sachs prior to joining Stratagem as CEO in September last year. He knew Koukorinis through work, was an early investor in the company, and found himself increasingly drawn to what his friend was up to.

"One of the reasons I was keen to stay close to Andreas was because this is the way to have an insight into AI for trading as it evolves," he says. "That’s an interesting thing in terms of the bigger picture."

Koukorinis says: "I was fortunate enough to get access to machine learning before this boom came up. I observed how people set up DeepMind [the famous London AI lab acquired by Google for £400 million in 2014], which was across the street for a while, and other AI companies."

'Think about oil in the ground. It's the same as data'

Stratagem's business has two main parts: data collection and processing. At both stages, the company believes it has an edge.

On the data collection front, Stratagem doesn't just rely on publicly available data sources but also generates its own in-house data. The company employs around 65 football analysts based all over the world covering local leagues.

Koukorinis says: "Think about oil in the ground, all of this in various locations. It’s the same thing as data. Our first job is to collect up oil and bring it to the ground. We collect Twitter feeds, crowdsource videos, market data, we collect from operators, action data we buy from various sources, tech data the analysts write — all of those sources. That’s job number one."

Once the data is collected, Stratagem must crunch the numbers. Its programme can not only read different data sources but decides the correct weighting to give each source. The end goal is for the model to spot "alpha" in the market — mispriced odds where Stratagem has a better chance of winning. The programme then places bets, both before and during games.
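Stratagem's models are proprietary, but the arithmetic of "mispriced odds" is standard: decimal odds o imply a probability of 1/o, and a bet has positive expected value whenever the model's probability estimate exceeds that implied figure. The numbers below are purely illustrative.

```python
def implied_probability(decimal_odds):
    """Probability the bookmaker's price implies (ignoring their margin)."""
    return 1.0 / decimal_odds

def expected_value(model_prob, decimal_odds, stake=1.0):
    """EV of a bet: win (odds - 1) * stake with probability p, else lose stake."""
    return model_prob * (decimal_odds - 1.0) * stake - (1.0 - model_prob) * stake

# A bookmaker offers 2.50 (implied 40%); the model estimates 48%.
edge = expected_value(0.48, 2.50)  # 0.20 units of profit per unit staked
```

When the model's probability matches the implied one exactly, the expected value is zero; the "alpha" Stratagem hunts for is the gap between the two.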

"For us, it’s really about having access to data that comes from multiple sources and of different textures and having the backbone of the overlay to be able to analyse them," Koukorinis says. "That’s really the edge."

Stratagem has built models looking at football, tennis, and basketball, and is bringing in money trading its own book.

'Whether it’s a football match or Brexit — they are akin to options trading'

The idea of generating proprietary data and using technology to analyse sports betting markets isn't new. I wrote extensively last year about Starlizard, a private syndicate that does just that to generate big returns for staff and partners.

Stratagem is not using its tool just for its proprietary bets but to pitch these systems to finance firms and fund managers.

"It’s interesting to see how event trading is becoming more of an interest to people I’m in touch with in the hedge fund space," says Todd Johnson, the COO of betting exchange Smarkets. "All these things, in the end, are outcomes that we’re trying to take a bet on. Whether it’s a football match or whether Brexit is going to happen or you want to bet on insurance markets — they are in essence akin to options trading."

Smarkets COO Todd Johnson

Johnson, who left a hedge fund he cofounded to join Smarkets, believes the sports betting market will attract more sophisticated investors as the infrastructure around it improves.

"Coming from the hedge fund world, a lot of the people in financial services and who worked in the City bet," Johnson says. "They’ve always viewed it as something that was entertainment.

"As we start to get to tools that look and feel like the tools they use to trade equities, they’re starting to get that this is an interesting space to trade in."

Like Stratagem, Smarkets is hoping to professionalise and financialise sports betting. It is working on a Bloomberg-style interface to help give punters more information and pitches itself as a home for sports traders. Its platform supports automated market making bots that people can set loose via APIs to trade the markets.

Johnson says: "When Jason [Trost, Smarkets' founder] started the company he saw a lot of parallels between the opportunities in the betting markets and sports trading markets, and what happens generally in financial services.

"If you go back to where equity markets were in the 1970s and 1980s, it wasn’t a market that people actively invested in, in terms of the average investor. The technology didn’t lend itself to having great price discovery, it was expensive to trade. All those things were adjusted in the 80s and the 90s. In the betting industry, we’re seeing that."

'Maybe we’re on the line between genius and madness'

They are still a long way off, however.

"The betting market certainly doesn’t have the scale, liquidity, and the velocity that you see in traditional financial services," says Johnson.

Betfair is the largest betting exchange and the total sportsbook of its parent company Paddy Power Betfair last year was £5.6 billion. That averages out at £14.4 million a day. That is simply not enough volume to interest most fund managers.

Attempts in the past to set up a sports betting fund backed by traditional finance have also struggled. As the Financial Times pointed out last month, London-based Centaur Corporate's Galileo fund bet on football, racing, and tennis matches. It projected returns of 15 to 20% but lost $2.5 million and collapsed in 2012, two years after launch.


McGarraugh says he is confident Stratagem will be able to raise the £25 million or thereabouts it is targeting. He says the fund will "not be offered widely," with the money coming from Stratagem's associates.

Still, the company is hedging its bets. As well as raising the fund, Stratagem is also selling tips to punters generated by its programme and marketing its services to bookmakers to help fine tune their odds.

McGarraugh says: "This process of searching for the right business model — how do you commercialise what you’ve built? I feel pretty good that we’re on the right track."

The reception among bookmakers has been encouraging so far, he says. "I think people are captivated with the idea of a lot of smart guys sitting in an attic, looking at predictive analytics for sports."

The former Goldman partner also believes that the tools Stratagem is developing could well stretch beyond sports betting. The core USP, he says, is "enhancing your performance by leveraging technology, using the latest AI-style technologies." That could apply to traditional finance just as much as sports betting.

He adds: "Maybe we’re on the line between genius and madness, but I’m pretty sure we’re on the right side. It’s a big global market. It can be better. We want to be part of this."


Alphabet tops $1,000 for the first time (GOOGL)


google ceo sundar pichai at google i/o 2017

The internet giant just hit new highs.

Class A shares of Alphabet, the parent company of Google, broke $1,000 in early trading Monday.

Alphabet has gained about 25% so far this year. The S&P 500 is up around 7.8% in the same time.

This is the first time Alphabet has hit this price in the almost 13 years since it went public, and the new high comes only a couple weeks after the company's big developer conference, Google I/O.

Google has announced a major shift to focus on artificial intelligence, as it fights companies like Amazon and Apple for dominance in the next big internet trend. The artificial-intelligence-powered Google Assistant is now available on smartphones via an app.

Last week, Amazon broke the $1,000 milestone for the first time. That stock is up around 32% this year. 


alphabet stock price

SEE ALSO: Amazon hits $1,000 a share for the first time


A Stanford researcher is pioneering a dramatic shift in how we treat depression — and you can try her new tool right now


alone sad depressed sea

Depression is the leading cause of disability worldwide, and it can kill. But scientists know surprisingly little about it.

We do know, however, that talking seems to help — especially under the guidance of a licensed mental health professional. But therapy is expensive, inconvenient, and often hard to approach. A recent estimate suggests that of the roughly one in five Americans who have a mental illness, close to two-thirds have gone at least a year without treatment.

Several Silicon Valley-style approaches to the problem have emerged: There are apps that replace the traditional psychiatry office with texting, and chat rooms where you can discuss your problems anonymously online.

The newest of these tech-based treatments is Woebot, an artificially intelligent chatbot designed using cognitive-behavioral therapy, or CBT, one of the most heavily researched clinical approaches to treating depression.

Before you dismiss Woebot as a half-baked startup idea, know that it was designed by Alison Darcy, a clinical psychologist at Stanford, who tested a version of the technology on a small sample of real people with depression and anxiety long before launching it.

"The data blew us away," Darcy told Business Insider. "We were like, this is it."

The results of the trial were published Tuesday in the Journal of Medical Internet Research Mental Health.

For the test, Darcy recruited 70 students who said they experienced symptoms of depression and anxiety and split them into two groups. One group spent two weeks chatting with Woebot; the other was directed to a National Institute of Mental Health e-book about depression. Over two weeks, people in the Woebot group reported not only chatting with the bot almost every day, but also seeing a significant reduction in their depressive symptoms.

That's a promising result for a type of treatment whose results have so far been tough to quantify — we don't have a lot of research comparing bot-to-human therapy with traditional human-to-human therapy.

Woebot uses CBT to talk to patients, and several studies suggest the approach lends itself to being administered online. A review of studies published recently in the journal World Psychiatry compared people who received CBT online with people who received it in person and found that the online setting was just as effective.

Dr. Alison Darcy

One reason for this, according to Darcy, is that CBT focuses on discussing things that are happening in your life now as opposed to things that happened to you as a child. As a result, instead of talking to Woebot about your relationship with your mom, you might chat about a recent conflict at work or an argument you had with a friend.

"A premise of CBT is it's not the things that happen to us — it's how we react to them," Darcy said.

Woebot uses that methodology to point out areas where a person might be engaging in what's called negative self-talk, which can mean they see the environment around them in a distorted way and feel bad about it.

For example, if a friend forgot about your birthday, you might tell Woebot something like, "No one ever remembers me," or "I don't have any real friends." Woebot might respond by saying you're engaging in a type of negative self-talk called all-or-nothing thinking, which is a distortion of reality. In reality, you do have friends, and people do remember you. One of those friends simply forgot your birthday.
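Woebot's internals aren't public. A crude way to picture the "flag the distortion" step is a rule-based pass that looks for absolute words associated with all-or-nothing thinking; the word list here is invented, and a real system would use far more sophisticated language processing.

```python
# Toy distortion detector -- not Woebot's actual implementation.
ABSOLUTES = {"no one", "everyone", "always", "never", "nothing", "nobody"}

def flag_all_or_nothing(message):
    """Return any absolute terms found in the message, lowercased."""
    text = message.lower()
    return sorted(word for word in ABSOLUTES if word in text)
```

Here `flag_all_or_nothing("No one ever remembers me")` flags "no one", while a neutral statement like "A friend forgot my birthday" passes clean.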

"Self-talk is a part of being human," Darcy said. "But the kinds of thoughts that we have actually map onto the kinds of emotions we're feeling."

Darcy is quick to point out that Woebot is not a replacement for traditional therapy, but an addition to the toolkit of approaches to mental health.

"I tend to not think of this as a better way to do therapy. I think of this as an alternative option," Darcy said. "What we haven't done a good job of in the field is give people an array of options. What about the people who aren't ready to talk to another person?"

SEE ALSO: Text-based therapies like Talkspace are transforming how we approach mental health


This AI will turn your doodled self-portraits into horrifying nightmares


pix2pix

Artificial intelligence (AI) may one day be used to do everything from drive your car to help cure diseases.

But for now, it's being harnessed for some truly bizarre experiments.

The Dutch broadcaster NPO has created an AI system called Pix2Pix that turns line-drawings into realistic pictures of humans. (We heard about it via The Verge.)

Sometimes, they're impressively realistic. Much more often, they're totally disturbing.

It's similar to Edges2Cats, an AI that was trained using thousands of pictures of cats, letting you transform any drawing into a realistic feline creation. But this time round, it was trained using photos of just one person — reporter Lara Rense.

Check out some of Pix2Pix's unsettling creations below...

First up, here's how it looks when fed a realistic drawing of Rense. Pretty normal, right?



But if you try and draw yourself, it quickly devolves into nightmare fuel.



My colleague Lindsay's creation had fleshy hair and hairy flesh.




Chinese poetry scholars are 'disgusted' by a new book written by a robot


sad robot

There are 139 Chinese poems in the new book "The Sunlight that Lost the Glass Window," and the fact they're all written by one artificially intelligent bot doesn't make local scholars too pleased.

"It disgusted me with its slippery tone and rhythm," poet Yu Jian told local newspaper China Youth Daily, according to the South China Morning Post. "The sentences were aimless and superficial, lacking the inner logic for emotional expression."

Others said computers couldn't create poetry because they weren't alive, and that the work could "kill our beloved art."

The book's contentious author is Xiaoice, a natural-language chat bot developed by Microsoft in 2014. It debuted to great fanfare on the Chinese blogging site Weibo and has since interacted with tens of millions of users both online and in the app.

Xiaoice's breakout book was met with mixed reviews, however. Some praised the technology's innovative leap from conversation to creative efforts, including one professor who embraced the new take on the art form.

"This is what we call a poetic jump," Zhang Zonggang, of Nanjing University, told the SCMP.

But there were also purists who wholeheartedly rejected the premise that poetry could come from AI.

"A computer that has not lived life cannot write a poem," Shanghai-based poet Ding Shaoguo told the SCMP.

Here is one of Xiaoice's poems, if you want to judge for yourself:

The rain is blowing through the sea/ A bird in the sky/ A night of light and calm/ Sunlight/ Now in the sky/ Cool heart/ The savage north wind/ When I found a new world.

SEE ALSO: 'This is death to the family': Japan's fertility crisis is creating economic and social woes never seen before


Experts predict when AI will exceed human performance


ex machina movie artificial intelligence robot

Artificial intelligence is changing the world and doing it at breakneck speed. The promise is that intelligent machines will be able to do every task better and more cheaply than humans.

Rightly or wrongly, one industry after another is falling under its spell, even though few have benefited significantly so far.

And that raises an interesting question: when will artificial intelligence exceed human performance? More specifically, when will a machine do your job better than you?

Today, we have an answer of sorts thanks to the work of Katja Grace at the Future of Humanity Institute at the University of Oxford and a few pals. To find out, these guys asked the experts. They surveyed the world’s leading researchers in artificial intelligence by asking them when they think intelligent machines will better humans in a wide range of tasks. And many of the answers are something of a surprise.

 ai takeover

The researchers Grace and co polled were academics and industry experts who gave papers at the International Conference on Machine Learning in July 2015 and the Neural Information Processing Systems conference in December 2015. These are two of the most important events in artificial intelligence, so it’s a good bet that many of the world’s experts were on this list.

Grace and co asked them all—1,634 of them—to fill in a survey about when artificial intelligence would be better and cheaper than humans at a variety of tasks. Of these experts, 352 responded. Grace and co then calculated their median responses.

The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027).

But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053.

The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best humans. That took two years rather than 12. It’s easy to think that this gives the lie to these predictions.

ke jie alphago deepmind

The experts go on to predict a 50 percent chance that AI will be better than humans at more or less everything in about 45 years.

That’s the kind of prediction that needs to be taken with a pinch of salt. The 40-year prediction horizon should always raise alarm bells. According to some energy experts, cost-effective fusion energy is about 40 years away—but it always has been. It was 40 years away when researchers first explored fusion more than 50 years ago. But it has stayed a distant dream because the challenges have turned out to be more significant than anyone imagined.

Forty years is an important number when humans make predictions because it is the length of most people’s working lives. So any predicted change that is further away than that means the change will happen beyond the working lifetime of everyone who is working today. In other words, it cannot happen with any technology that today’s experts have any practical experience with. That suggests it is a number to be treated with caution.

But teasing apart the numbers shows something interesting. This 45-year prediction is the median figure from all the experts. Perhaps some subset of this group is more expert than the others?

To find out if different groups made different predictions, Grace and co looked at how the predictions changed with the age of the researchers, the number of their citations (i.e., their expertise), and their region of origin.

It turns out that age and expertise make no difference to the prediction, but origin does. While North American researchers expect AI to outperform humans at everything in 74 years, researchers from Asia expect it in just 30 years.
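The aggregation behind those figures is simple to sketch: group each respondent's forecast by region and take the median. The respondent numbers below are invented; only the method follows the paper.

```python
from statistics import median

# Invented forecasts: (region, years until AI outperforms humans at everything)
FORECASTS = [
    ("north_america", 80), ("north_america", 74), ("north_america", 60),
    ("asia", 25), ("asia", 30), ("asia", 45),
]

def median_by_region(rows):
    """Median forecast per region -- the robust average used in the survey."""
    grouped = {}
    for region, years in rows:
        grouped.setdefault(region, []).append(years)
    return {region: median(values) for region, values in grouped.items()}
```

The median is less sensitive than the mean to a few respondents forecasting extremely near or far dates, which is why surveys of this kind report it.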

That’s a big difference that is hard to explain. And it raises an interesting question: what do Asian researchers know that North Americans don’t (or vice versa)?

Ref: arxiv.org/abs/1705.08807 : When Will AI Exceed Human Performance? Evidence from AI Experts

SEE ALSO: There's a dark secret at the heart of artificial intelligence: no one really understands how it works


DeepMind created a YouTube database of humans doing stuff that can help AIs to understand us (GOOG)

James Bardgett checks the quality of beer before filling up kegs and barrels to be dispatched at the Wild Beer Co brewery at Lower Westcombe Farm on February 11, 2016 near Evercreech, England. Over recent years there has been a surge in the popularity of craft beers, which are brewed by smaller independent breweries, such as the Wild Beer Co in Somerset, which specialises in beers with an emphasis on quality, flavour, and brewing methods.

Google DeepMind has created a database of hundreds of thousands of YouTube URLs that can help artificial intelligence (AI) agents to identify human actions such as drinking beer, riding a mechanical bull, and bench pressing.

The London-based AI lab, which was acquired by Google in 2014 for a reported £400 million, developed the "Kinetics" dataset for the ActivityNet Challenge and published a paper detailing the work in May. The full paper can be read here.

The database comprises some 300,000 already-published "realistic" and "challenging" YouTube clips, each no more than 10 seconds long. There are at least 400 clips for each of 400 human actions. Each URL directs to a video of a human performing one of those actions and has been tagged accordingly by a worker on the Amazon Mechanical Turk platform, which pays people for doing small computer tasks.

"The actions are human focused and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands," DeepMind's authors wrote.
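A dataset like this boils down to a list of annotation records: an action label, a video identifier, and the window within the video where the action occurs. The sketch below shows one plausible way to represent such a record; the field names and the video ID are illustrative assumptions, not the dataset's official schema.

```python
# Hedged sketch of a Kinetics-style annotation record.
# Field names and the example video ID are illustrative, not official.
from dataclasses import dataclass

@dataclass
class Clip:
    label: str        # one of the 400 human-action classes
    youtube_id: str   # YouTube video identifier
    time_start: int   # clip start within the video, in seconds
    time_end: int     # clip end, in seconds (clips are at most 10s long)

    @property
    def url(self) -> str:
        return f"https://www.youtube.com/watch?v={self.youtube_id}"

    @property
    def duration(self) -> int:
        return self.time_end - self.time_start

clip = Clip(label="drinking beer", youtube_id="abc123xyz00",
            time_start=12, time_end=22)
print(clip.url, clip.duration)
```

Storing only URLs and timestamps, rather than the videos themselves, keeps the dataset small to distribute, though it means clips can disappear if the underlying videos are taken down.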

DeepMind fed the dataset into a number of off-the-shelf AIs and successfully taught them to recognise certain actions. Interestingly, DeepMind had more luck teaching AIs to recognise actions such as bowling, tennis, and trapezing than it did teaching them to spot yawning, headbutting, and faceplanting.

Earlier reports suggested that DeepMind had used clips of Homer Simpson performing various actions but this was not the case.

A Google DeepMind spokesperson told Business Insider that data classification is essential to machine learning research.

"AI systems are now very good at recognising objects in images, but still have trouble making sense of videos," said the spokesperson. "One of the main reasons for this is that the research community has so far lacked a large, high-quality video dataset, such as the one we now provide.

"We hope that the Kinetics dataset will help the machine learning community to advance models for video understanding, making a whole range of new research opportunities possible."

A separate paper, published shortly after the first one, shows how DeepMind then used its own algorithms on the Kinetics dataset with even better results.

"We have shown that the performance of deep learning architectures can be substantially improved by first training on Kinetics, and then training and evaluating on standard action classification benchmarks," the DeepMind spokesperson added.

Join the conversation about this story »

NOW WATCH: Here's everything Apple is rumored to be launching in 2017

A Microsoft robot got the highest all-time score in 'Ms. Pac-Man' (MSFT)

Game over, man.

A Microsoft-made artificial-intelligence system has achieved a perfect score of 999,990 points on the Atari 2600 version of the classic "Ms. Pac-Man"— making it very likely the first time anybody, human or robot, has "beaten" the game. Ever.

That notion is backed up by Highscore.com, a resource for tracking high scores in the still competitive classic-arcade-gaming scene. Per that site, the highest score ever recorded in this version of "Ms. Pac-Man" was 266,330, by a player in Brazil.

Here's a video of the Microsoft system's achievement, including footage of the game resetting when it hits that top score:


This achievement came out of Maluuba, an AI startup Microsoft snapped up in January. A representative explained that Maluuba chose to test the system with this version of "Ms. Pac-Man" because AI researchers have standardized on the Atari 2600, so they can directly compare research results and methods.

It's kind of funny, too, because Microsoft CEO Satya Nadella once quipped that where Google was building AI systems to win games like Go and "Starcraft II," Microsoft was building AI to get real work done.

Cut Maluuba some slack, though — Microsoft said the tech used to help its robots make split-second decisions in "Ms. Pac-Man" could also be used in, say, software to help salespeople determine the right leads to contact on any given day.

SEE ALSO: Microsoft CEO Satya Nadella slams Google's game-playing artificial brain: 'We are not pursuing AI to beat humans at games'

Join the conversation about this story »

NOW WATCH: How a young Steve Jobs hustled Atari into sending him to India for a spiritual quest

Apple's HomePod is not artificial intelligence — but it is a great speaker (AAPL)


Sometime later this year, Apple's $349 smart speaker, called HomePod, will go on sale.

The HomePod may be many things: a high-quality speaker, another possibly overpriced Apple product, or an odd move from a company best known for portable devices.

One thing it is not — at least, not yet — is a Trojan horse for Apple to put an artificial intelligence in your home that talks to you and runs your house and your life.

One key to understanding Apple is that it doesn't pursue technologies for their own sake. It builds things that people presumably want — the user experience, or the reason why someone would pay for it, comes first. 

Apple thinks that people will buy the HomePod because they want a premium stereo. Nowhere is this clearer than in comments that Apple CEO Tim Cook gave to Businessweek earlier this month.

"The thing that has arguably not gotten a great level of focus is music in the home. So we decided we would combine great sound and an intelligent speaker," Cook told Bloomberg's Megan Murphy. 

"When I was growing up, audio was No. 1 on the list of things that you had to have. You were jammin’ out on your stereo. Audio is still really important in all age groups, not just for kids. We’re hitting on something people will be delighted with. It’s gonna blow them away. It’s gonna rock the house," he continued.

This is completely in line with how late CEO Steve Jobs described the company in 2010. Apple's philosophy is to "make extremely advanced products from a technology point of view but also have them be intuitive, easy to use, fun to use, so that they really fit the users and users don't have to come to them, they come to the user," Jobs said. 

Notice what Jobs didn't say: Apple's goal is not to have the most drool-worthy pure technology that people in Silicon Valley see as the future of computing — although it's doing that a little bit lately, particularly with its experiments in augmented reality, a very early emerging technology.

Apple's not really a tech company. As independent Apple analyst Neil Cybart has previously argued, it's a design company, and with HomePod, it's designed an easier way to play high-quality sound in your home. It's almost incidental that Apple's using Siri as its main control system. 

For the most part, Apple only likes to talk about tech that it's about to sell. As Cook told MIT Technology Review earlier this month, a lot of technology companies "sell futures"— and you'll be able to buy a HomePod later this year. 

Rock the house 

I've personally heard the HomePod, and I can tell you, in my brief listening experience in a controlled and simulated living room, it does sound great. 

I heard HomePod play the same songs as the Sonos Play:3, which is a premium speaker that you can't talk to, as well as the Amazon Echo, which is a cheap speaker that exists to be spoken to.

HomePod clearly sounded better than both to my ears. For someone who wants a really good home stereo, and price isn't a major factor, I suspect the HomePod will have to be a consideration. 

Eventually, you'll be able to talk to Apple's Siri on the HomePod. But I didn't get a chance. Apple didn't want the story out of its recent WWDC conference to be how impressive Siri is — it wanted it to be that the sound is amazing. 

I buy that. Siri can still be frustrating to use. And studies show that when people talk to their Amazon Echo, the most common thing they do is tell it to play music.

Someday, futurists imagine, these speakers will contain a generalized artificial intelligence that humans can converse with, rely on, or maybe fall in love with (ever see the movie "Her"?). 

But that's not what Siri is. Siri is a complicated piece of software that uses machine learning to understand what you say and return answers. Machine learning is a key component of creating an AI, but it's also used all over technology — for example, to keep your iPhone's battery lasting longer. It's merely a way of solving a problem that's hard to define with simple rules.

So Apple doesn't want to be compared against "futures." With the HomePod, Apple's not saying Siri will become your new virtual friend, like the future depicted in movies like "Her." Apple didn't even tell its armies of software makers how to program simple apps for the speaker. 

Apple is simply saying it will rock the house. 

SEE ALSO: Apple says its new $350 speaker will 'reinvent home music' — here's what we know

Join the conversation about this story »

NOW WATCH: These size comparisons show the true scale of enormous things
