Quantcast
Channel: Artificial Intelligence

A former TV comedy writer learned three ways to succeed at impossible tasks when he taught an IBM computer how to argue



  • Dr. Noam Slonim is the computer scientist who conceived of and helped build "Project Debater," an IBM computer that not only debates humans but sometimes beats them.
  • The effort to build Debater took six years and was fraught with doubt and discouragement for Slonim and his team.
  • In an exclusive interview with Business Insider, Slonim said that his time writing for TV comedy shows helped prepare him for creating Debater.

A unique man-versus-machine battle of wits took place last month, and Dr. Noam Slonim was the brains behind the winning machine.

Slonim is a former comedy writer turned full-time computer scientist. He conceived of and oversaw the development of "Project Debater," the latest IBM supercomputer capable of understanding and speaking in natural language well enough to not only debate humans, but sometimes defeat them.

Last month, Debater engaged in two debates against humans, prevailing in one of them as judged by the audience, made up largely of San Francisco's tech press. The following day, the computer's rhetorical skills received mostly glowing reviews from the same reporters.

Some of the best minds at some of the biggest tech companies are trying to teach computers to converse with humans, the stuff of science fiction. The bet is that sometime soon, people will truly be able to control their PCs and gadgets through conversation.

But creating that kind of sophisticated artificial intelligence isn't easy.

IBM's Debater team toiled for six years before the system's impressive showing last month, a stretch that was at times full of disappointment and doubt, Slonim recalled during an exclusive interview with Business Insider. He acknowledged that the system, which occasionally drifted off topic or repeated itself during the debates, still has a long way to go.

Slonim’s own path to success in computer science was also a circuitous one.

He almost ended up in the entertainment industry. Two decades ago, he was one of the writers for The Cameri Quintet, Israel’s equivalent of Saturday Night Live. Later, he helped develop a Seinfeld-esque sitcom. The show didn't stay on the air long and he eventually went back to pursuing his PhD full time.

That shift from comedy to computers turned out to be good for him.

He says that three communication skills, instrumental in both pursuits, can help anyone be more confident when working on what seems like an impossible mission: 1) connecting with people on an emotional level, 2) offering criticism without being cruel, and 3) learning how to deal with doubt.

A writer's mind

"I found myself in an interesting position," said Slonim, the IBM Research’s principal investigator on Project Debater. "I love machine learning and natural language processing but I also love to write, to be creative in writing. This project allows me to combine these skills."

The whole idea behind Debater was to train the machine to gather ideas, analyze and boil them down, and then write arguments based on that information, much the same way human beings do.

Debater sifts through huge quantities of text to gather information that supports whatever argument the system intends to make. Debater doesn’t just regurgitate other people’s ideas, Slonim said. The computer might pull entire paragraphs, but more typically Debater pulls out single sentences or even clauses from any single text.
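This evidence-gathering step resembles what NLP researchers call extractive argument mining: scan a corpus sentence by sentence and keep the spans most relevant to the topic at hand. As a heavily simplified illustration (the scoring below is plain keyword overlap on invented example sentences, nothing like IBM's actual system), a sketch might look like:

```python
import re

def extract_evidence(corpus, topic_terms, top_k=2):
    """Score each sentence by word overlap with the debate topic and
    return the top-scoring sentences as candidate evidence."""
    sentences = re.split(r"(?<=[.!?])\s+", corpus)
    scored = []
    for sentence in sentences:
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        score = len(words & topic_terms)  # crude relevance: shared terms
        if score:
            scored.append((score, sentence))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sentence for _, sentence in scored[:top_k]]

corpus = (
    "Subsidies lower the cost of preschool for poor families. "
    "The weather was pleasant last week. "
    "Studies link preschool attendance to better school readiness."
)
topic = {"preschool", "subsidies", "school"}
print(extract_evidence(corpus, topic))  # the weather sentence is filtered out
```

A production system would replace the overlap score with learned relevance and stance models, but the pipeline shape — segment, score, select — is the same.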

Making a convincing argument, however, requires more than just rounding up raw data, according to Slonim.


Slonim and IBM wanted to equip Debater with the ability to connect with an audience on an emotional level. For starters, Debater is trained to recognize human controversy and moral principles. The team also wants the machine to be able to create metaphors and analogies. The system has even been equipped with a digital sense of humor, according to Slonim.

“(Humans) need to look at texts and understand how to build an argument,” Slonim said. “You have to inject humor once in a while.”

Slonim said that having a background in writing was necessary to instill these skills in a computer system. He believed it so strongly that he added a published author to the Debater team.

Idea exchange

In addition to knowing how to write, it’s a good idea to be a skilled debater oneself before attempting to teach a computer how to do it.

Slonim said that teams must create an environment where colleagues can share ideas and then debate them. This is a big part of any collaborative endeavor, be it writing TV shows or building supercomputers.

He said the Debater team often hashes out ideas during jogs in the park, using the time to challenge and defend ideas in a constructive manner.

He added that offering criticism without being cruel is a valuable skill.

Dealing with doubt

Finally, sharing doubts in an open and supportive way can be helpful when setbacks occur.

During Debater’s development, the team experienced some very deep valleys. Doubts that the team could create a debating computer system began to set in. Even Slonim began to question the team’s goals.

Slonim said the problem at times seemed too complex.

"It’s a large team of talented people," Slonim said, "and it took us four years to get Debater to the point to debate in even a (rudimentary) manner. But the hardest part for me was not a technical thing. The hardest part was to believe that this was doable."

He added, "Especially in the beginning, in the first few years people felt it would be hard to make progress and [if] we’re going in the wrong direction and naturally they raised that with me. My wife is a psychologist and she said the most important thing is not to dismiss the doubts but rather share them with the team."

By sharing like that, the team helped buoy each other during the days when nothing seemed to go well, Slonim said.

And the team may yet face more dark days.

Debater is still a work in progress. In addition, artificial intelligence is receiving a lot of scrutiny from those fearful that AI could prove to be a threat to humanity.

"My understanding is that the potential of this technology is huge," said Slonim, who argues Debater might one day become a teaching tool for children.

"We want youngsters to better articulate their arguments, to make more rational decisions, to participate in discussions with peers in a valuable and civilized manner," he said. "We want them to use critical writing and thinking. I truly believe this is going to bring a lot of good to our society."





People love Sonos speakers so much they buy them again and again, but there are 2 big dealbreakers that could spoil its IPO (AMZN, GOOGL, AAPL)



  • Sonos, which filed to go public on Friday, has built up a respectable business.
  • But the speaker market is undergoing a major shift to smart speakers, and Sonos risks being relegated to a secondary role, or boxed out of the market entirely.
  • The company's sales growth has already slowed down and it has struggled with red ink.


Sonos has amassed a fervent fan base over the years. But that doesn't mean investors should get overly enthusiastic about the company's stock when it hits the public markets.

The consumer electronics company, which filed on Friday to go public, has carved out a nice niche for itself with its line of wifi connected speakers. But the company's sales growth has moderated from its heyday, and in recent years it's struggled to post a profit. Meanwhile, it's dependent on some of the biggest and most powerful companies in tech for a core new technology that's transforming the home audio market even as it faces growing competition from those same companies.

In other words, I wouldn't bet the house on Sonos — no matter how much you may love its connected speakers.

By some measures, Sonos has built up a respectable business. Since 2013, its sales have nearly doubled to almost $1 billion. Its gross margin, the portion of sales left after accounting for the direct costs of producing its products, has generally been in the 45% range or higher. That's a healthy figure for a hardware company, indicating that it's able to charge a premium price for its products. It also gives the company plenty of room to spend money on marketing and research and development.

Sonos has some fanatical fans

Consumers have been catching on. In the last 12 months, the company sold 4.6 million speakers, more than triple the number it sold in its 2013 fiscal year.

And Sonos' fans really seem to love its products. Of the 6.9 million households that have a registered Sonos speaker, 61% have at least two of the company's products. Some 27% have four or more.

The average customer who starts off with a single Sonos speaker goes on to buy more than two additional products over time, according to the company. Customers who start with more than one Sonos product typically buy about three at the outset and then purchase another two over time.

All that sounds great. But there are signs that Sonos has struggled to build a profitable business outside its niche of geeky audiophiles.

Warning: The best days may be behind it

Most of its sales growth over the last five years happened between Sonos' 2013 and 2014 fiscal years, when its revenue soared 75%. Since then, the company's revenue hasn't grown faster than about 10% on an annual basis.

The bulk of the growth in Sonos' device sales happened in that same period, when they nearly doubled. Ever since, the growth in its product sales has been much more modest, rising just 11% in its last year.

As Sonos' growth slowed, its bottom line deteriorated. The company went from posting a modest profit in fiscal 2014 to a big loss the following year. On an annual basis, it's been operating in the red ever since, although it's gradually improved its bottom line.

Things improved for the company in the first half of its current fiscal year. Sales were up 18% over the same period a year earlier and the company posted a profit for the period. But that improvement could prove to be a chimera.

Sonos' fiscal year ends around the end of September, meaning that its first half includes the all-important holiday season, when it almost certainly rings up the bulk of its sales. The company posted a profit for the first half of its fiscal year last year too, only to end up with a full-year loss.

The speaker market is being transformed by "smarts"

Beyond just the numbers, there are bigger reasons to be concerned about Sonos' prospects. Even as the company is hitting the public market, the industry it competes in is changing dramatically.

From when it debuted its first speaker in 2005 until the last year or so, Sonos had the connected speaker market pretty much to itself. If you wanted a whole-home audio system that allowed you to stream music from the internet that you could set up yourself without a custom installer, Sonos was generally the way to go.

But that's no longer the case. Amazon's line of Echo smart speakers offers the same capability. So do Google's Home devices and now Apple's HomePod.

The Echo and the Home weren't initially on par with Sonos' devices in terms of audio quality. But both Amazon and Google last year released new versions of their smart speakers with improved sound. And audio quality is the main selling point of Apple's HomePod.

But all three companies offer something with their speakers that Sonos traditionally hasn't: a built-in intelligent agent. You can control Amazon's, Google's, and Apple's smart speakers with just your voice. And you can do a lot more with them than that. You can use them to turn your lights on and off, tell you the news and weather, answer trivia questions, and give you the latest sports scores.

The smart speaker market has started to catch fire. In the first quarter of this year, unit sales grew a whopping 210% from the same period last year, according to market research firm Canalys, hitting 9 million worldwide. That's nearly twice as many speakers sold in one quarter as Sonos sold in the last year. Canalys expects worldwide sales of smart speakers to reach 56.3 million this year, up from about 35 million last year and fewer than 10 million in 2016.

That kind of growth obviously far outpaces what Sonos has been doing lately. It also illustrates how smart speakers are starting to dominate the speaker market, just as smartphones pushed aside dumb phones and nearly all televisions are now smart TVs.

Sonos' smart speaker strategy is really risky

Sonos has recognized that market shift toward smart speakers. Last fall, it introduced its first one and it has more in the works.

But there's a big flaw in Sonos' strategy — it doesn't have its own voice-assistant technology. Instead, for now, it's relying on Amazon's Alexa assistant, although it plans to add in Google's Assistant and Apple's Siri in the future.

At best, that will relegate Sonos to playing second fiddle to the big players. When people think of an Amazon-powered smart speaker, they think of the company's Echo line. Sonos faces a huge marketing challenge to make consumers aware that one of its smart speakers can offer the same Alexa assistant that they'd get on an Echo.

Even if it's able to do so, it may find it tough to convince customers to pay up for one of its speakers when they can get an entry-level Echo Dot for $50. That challenge could prove even more difficult when you consider that Amazon has full control over which smart speakers it promotes in its web store, and it has taken full advantage of that control to promote its Echo line.

Sonos' reliance on the big tech companies could become more problematic over time. Right now, as Sonos acknowledged in the document it filed to go public, it doesn't pay Amazon anything to use Alexa. But that could change if Amazon ever perceives Sonos' devices to be a competitive threat, or as a potential money-maker.

Worse yet for Sonos, Amazon and the other tech companies could just cut Sonos off, leaving it without any voice assistant for its smart speakers — both the ones it's already sold to its customers and any future devices. In fact, according to Sonos' regulatory filing, Amazon can sever ties with the company with only "limited notice."

"If these partners disable the integration of their technology into our products, demand for our products may decrease and our sales may be harmed," Sonos warned investors. "We cannot assure you that the resources we invest in research and development, existing or alternative technology partnerships, marketing and sales will be adequate for us to be successful in establishing and maintaining a large share of the voice-enabled speaker market.

"If we are not able to capture and sustain market share, our future revenue growth will be negatively impacted."

For me, that's good reason to be cautious about Sonos. It may make great speakers. But in the smart-speaker era, it's much more important to have the technology to make them intelligent.




Artificial intelligence will create as many jobs as it destroys, according to a PwC analysis



  • Billionaires including Bill Gates and Elon Musk have argued that robots will basically replace humans at work.
  • People now worry about a future of mass unemployment in which a few wealthy people own the robots and the rest survive on government handouts.
  • But a new PwC forecast suggests artificial intelligence will create as many jobs as it destroys.
  • AI will disproportionately affect certain sectors negatively such as manufacturing and transport, but will create jobs in healthcare and education.


The likes of Elon Musk and Bill Gates have made repeated doomsday warnings about artificial intelligence becoming more skilled than humans at just about everything.

So it's little surprise that people are scared about a post-AI future where a mostly jobless population subsists on universal basic income while rich people own and operate all the robots.

But a new report from consultancy firm PwC joins a growing chorus of more cautious economic forecasts that suggest the future is brighter than we might think.

Looking at the UK, PwC found that it's true that robots will replace some jobs, especially in sectors like transport or manufacturing. AI will "displace" 38% of transport jobs, and 30% of manufacturing jobs, according to the report.

But other sectors will actually see greater job creation thanks to AI, evening out the balance. Only 12% of jobs in healthcare will be displaced by AI, while 34% will be created, PwC predicts.

The upshot is that AI will create as many jobs as it destroys, when evened out across different sectors.

The report said: "Our estimates suggest that AI will not lead to technological unemployment as we project that it will displace around 20% of existing UK jobs by 2037, but create a similar number.

"In absolute terms, around 7 million existing jobs are projected to be displaced, but around 7.2 million are projected to be created, giving a net jobs boost of around 0.2 million."

The OECD was even more conservative about job displacement in a report earlier this year. The economic organisation concluded that just 14% of jobs in its member countries were at risk of automation.

Both PwC and the OECD suggested that the sectors that would benefit the most from AI, or are at the least risk from automation, are those that involve complex, specialised tasks and people. This includes:

  • Education
  • Scientific and technical work
  • Information and communication
  • Accommodation and food services

Sectors that will do badly include those that involve repetitive, administrative tasks such as:

  • Finance and insurance
  • Retail
  • Construction
  • Public administration
  • Transport
  • Manufacturing

Here are the full UK numbers from PwC showing AI job creation and displacement in different sectors:

[Chart: PwC estimates of AI job creation and displacement by UK sector]




Meet the 29-year-old who founded a company that's using technology to find treatments for diseases thought to be incurable



  • Alice Zhang started Verge Genomics in 2015 with Jason Chen to combine innovation in neuroscience, machine learning and genomics and apply it to the drug discovery process. 
  • The vision for Verge was to become the first pharmaceutical company that automated its drug discovery engine, helping to rapidly develop multiple lifesaving treatments in diseases like Alzheimer's disease, ALS, and Parkinson's disease where no cure exists today. 
  • On Monday, the San Francisco-based company announced it had raised $32 million in series A funding, led by DFJ, bringing its total amount raised to $36.5 million.

The drug development process is laden with problems that make it lengthy and expensive. Right now, it takes 12 years and $2.6 billion to get a single drug to market, with the drug discovery and development process costing $1.4 billion.

Verge Genomics, run by 29-year-old Alice Zhang, is trying to address these problems by making drug discovery faster and cheaper.

On Monday, the San Francisco-based company announced it had raised $32 million in series A funding, led by DFJ, bringing its total amount raised to $36.5 million.

Zhang was three months shy of her MD and PhD graduation from University of California-Los Angeles when she left school to start Verge Genomics in 2015 with Jason Chen, who she met during the program.  

"I just became very frustrated with the drug discovery process," she said. "It's largely a guessing game where companies are essentially brute force screening millions of drugs just to stumble across a single new drug that works." 

At the time, Zhang also recognized the advancements in neuroscience, machine learning, and genomics occurring all around her. Genome sequencing had become more and more affordable, and breakthroughs in understanding how function connects with genes opened a new field of possibilities for exploring disease and health. There was an opportunity to take the guesswork out of drug discovery. The vision for Verge was to become the first pharmaceutical company to automate its drug discovery engine, helping to rapidly develop multiple lifesaving treatments for diseases like Alzheimer's disease, ALS, and Parkinson's disease, where no cure exists today.

Other big pharmaceutical companies like Pfizer and Novartis are starting to follow suit, applying technology to different steps of the clinical trial process. At least 18 pharmaceutical companies and more than 75 startups have been working on integrating machine learning into the drug discovery process.

Verge, a team of 14, operates at full capacity. Computer scientists manage the machine-learning front end, while researchers work in the company's own in-house drug discovery and animal lab. The team includes computer scientists, mathematicians, and neurobiologists, as well as industry and drug-development veterans.

There are three main problems in drug discovery that Verge is using data and software to tackle. The first is that many diseases, like Alzheimer's, are caused by hundreds of genes; Verge's algorithms, run on human genomic data, can map these genes out. The second is that drugs that work in mice often fail in humans, because mice usually serve as the primary mammal model; instead of relying on animal data for preclinical work, Verge uses human data from day one, which may give greater insight into how effective a drug actually is on human cells. The third is the screening process itself: instead of tediously screening millions of drugs, Verge's algorithms computationally predict which drugs will work.

For its human data, Verge uses brain samples from patients who died of Alzheimer's disease or Parkinson's disease, obtained through partnerships with more than a dozen universities, hospitals, and brain banks. The company then RNA-sequences the samples in-house, which allows it to measure gene expression in its most current state and to measure simultaneously how all of the genes in the genome are behaving. This data helps scientists figure out what's actually causing disease in these patients and see whether there are connections between genes and disease.

Verge's scientists can make predictions about what drugs they think will work. They can take a patient's own skin cell and turn it directly into their own brain cells in a dish. Then the predictions can be tested on these brain cells to see if they can rescue them from dysfunction or death – a basic test of drug efficacy. That validation data can feed back into the platform and continuously improve predictions over time, even across different diseases.

The Verge algorithm identifies druggable targets for treatments, then designs drugs accordingly. This is done by mining human samples to identify groups of genes implicated in the disease, and the crucial hubs in these gene networks that can turn them on or off.
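The "crucial hub" idea can be illustrated with degree centrality on a toy gene-interaction graph. This is a deliberately crude sketch with invented gene names; Verge's actual network models are far more sophisticated:

```python
# Hypothetical gene-interaction network: gene -> genes it interacts with.
network = {
    "GENE_A": ["GENE_B", "GENE_C", "GENE_D"],
    "GENE_B": ["GENE_A", "GENE_C"],
    "GENE_C": ["GENE_A", "GENE_B"],
    "GENE_D": ["GENE_A"],
}

def hub_gene(net):
    """Return the gene with the most interactions (degree centrality),
    a crude stand-in for a 'crucial hub' that can switch the network."""
    return max(net, key=lambda gene: len(net[gene]))

print(hub_gene(network))  # GENE_A, with three interactions
```

In practice such hubs would be scored with richer centrality measures and weighted by expression data, but the intuition is the same: a gene that touches many disease-implicated genes is a promising switch to target.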

The latest investment in Verge will serve to advance its ALS and Parkinson's disease drugs. There are six drugs in development, closer to the clinical end, which are being tested to make sure they're safe and non-toxic. The funding will also be used to expand the number of diseases Verge has in its portfolio. 

Emily Melton, a partner at DFJ, told Business Insider that investment in early stage startups is largely about the team, the uniqueness of the idea and the capability and expertise of the research team. But what drew her in most was Zhang. "She was this brilliant founder, with a very organic desire to create an impact," said Melton. "She felt like it was her calling."

Using machine learning to recognize patterns that would otherwise go undetected by the human eye can speed up the process while creating a bigger and better feedback loop, said Melton. "We're rethinking how drug discovery is done, and we're rethinking how therapeutics are developed."




A new study shows that tech CEOs are optimistic about the future, even if they still don't understand millennials



Tech industry CEOs are bullish on the future of their companies, the sector, and artificial intelligence.

But they're worried about the spread of nationalism, cybersecurity — and millennials.

Those are some of the key takeaways from a new report by KPMG. After surveying more than 1,000 CEOs from all different sectors and from around the globe, the company zeroed in on the responses of 104 from the tech industry.

Compared with their non-tech peers, tech CEOs were more optimistic about their firms' prospects. But they were equally worried about the turn away from globalization and dealing with younger customers.

Here are some of the report's key findings.


Tech CEOs are bullish about their companies' growth prospects.

Some 88% of tech CEOs surveyed by KPMG said they were confident in their companies' growth prospects over the next three years. More than half (52%, to be precise) said they expected their companies to grow by at least 2% a year over that period.

That's more optimistic than CEOs as a whole. Overall, 44% of CEOs said they expected their firms to grow 2% annually over the next three years.

In fact, many expect to boost their headcount significantly in coming years.

Some 44% of tech CEOs expect to increase their employee base by at least 6% over the next three years, and 2% expect to increase it by at least 11%.

Again, tech leaders were more bullish than their peers. Overall, just 37% of CEOs from all industries expect to increase their headcount by at least 6% over that time period.



Tech CEOs are optimistic about their industry too.

According to the survey, 77% of tech CEOs were confident in the growth prospects of the industry over the next three years.




Elon Musk and DeepMind's pledge to never build killer AI makes a glaring omission, Oxford academic says



  • Tech leaders including Elon Musk and the cofounders of DeepMind signed a pledge last week to never develop "lethal autonomous weapons."
  • The letter argued that morally the decision to take a life should never be delegated to a machine and that automated weaponry could be disastrous.
  • An Oxford academic, Dr. Mariarosaria Taddeo, told Business Insider that while the pledge's intentions were good, it missed the real threat posed by artificial intelligence in warfare.
  • She believes the deployment of AI in cybersecurity has flown under the radar — with potentially damaging consequences.

Promising never to make killer robots is a good thing.

That's what tech leaders, including Elon Musk and the cofounders of Google's artificial-intelligence company, DeepMind, did last week by signing a pledge at the International Joint Conference on Artificial Intelligence.

They stated that they would never develop "lethal autonomous weapons," citing two big reasons. The first is that it would be morally wrong in their view to delegate the decision to kill a human to a machine.

Second, they believe that autonomous weapons could be "dangerously destabilizing for every country and individual."

Swearing off killer robots is missing the point

Business Insider spoke with Dr. Mariarosaria Taddeo, of the Oxford Internet Institute, who expressed some concerns about the pledge.

"It's commendable — it's a good initiative," she said. "But I think they go in with too simplistic an approach."

"It does not mention more imminent and impactful uses of AI in the context of international conflicts," Taddeo added.

"My worry is that by focusing just on the extreme case, the killer robots who are taking over the world and this sort of thing, they distract us. They distract the attention and distract the debate from more nuanced but yet fundamental aspects that need to be addressed."

Is AI on the battlefield less scary than in computers?

The US military makes a distinction between AI in motion (i.e., AI that is applied to a robot) and AI at rest, which is found in software.

Killer robots would fall into the category of AI in motion, and some countries already deploy this hardware application of AI. The US Navy received a self-piloting warship, and Israel has drones capable of identifying and attacking targets autonomously, though at the moment they require a human to give the go-ahead.


But AI at rest is what Taddeo thinks needs more scrutiny — namely the use of AI for national cyberdefense.

"Cyberconflicts are escalating in frequency, impact, and sophistication," she said. "States increasingly rely on them, and AI is a new capability that states are starting to use in this context."

The WannaCry virus, which attacked the UK's National Health Service in 2017, has been linked to North Korea, and the UK and US governments collectively blamed Russia for the NotPetya ransomware attack, which caused more than $1.2 billion in damage.

Taddeo said throwing AI defense systems into the mix could seriously escalate the nature of cyberwar.

"AI at rest is basically able to defend the systems in which it is deployed, but also to autonomously target and respond to an attack that comes from another machine," she said. "If you take this in the context of interstate conflict, this can cause a lot of damage. Hopefully, it will not lead to the killings of human beings, but it might easily cause conflict escalations, serious damage to national critical infrastructure."

There is no mention of this kind of AI in the IJCAI pledge, which Taddeo considers a glaring omission.

"AI is not just about robotics," she said. "AI is also about the cyber, the nonphysical. And this does not make it less problematic."

AI systems at war with each other could pose a big problem

Today, AI systems attacking each other don't cause physical damage, but Taddeo warns this could change.

"The more our societies rely on AI, the more it's likely that attacks that occur between AI systems will have physical damage," she said. "In March of this year the US announced that Russia had been attacking national critical infrastructure for months. So suppose one can cause a national blackout, or tamper with an air-control system."

"If we start having AI systems which can attack autonomously and defend autonomously, it's easy that we find ourselves in an escalating dynamic for which we don't have control," she added. In an article for Nature, Taddeo warned of the risk of a "cyber arms race."


"While states are already deploying this aggressive AI, there is no regulation," she said. "There are no norms about state behavior in cyberspace. And we don't know where to begin."

In 2004, the United Nations assembled a group of experts to understand and define the principles of how states should behave in cyberspace, but in 2017 they failed to reach any kind of consensus.

Still, she thinks the agreement not to make killer robots is a good thing. "Do not get me wrong — it's a nice gesture," she said. "It's a gesture I don't think it will have massive impact in terms of policymaking and regulations. And they are addressing a risk, and there's nothing wrong with that. But the problem is bigger."

SEE ALSO: A prominent Silicon Valley investor says entrepreneurs need to stop copying Mark Zuckerberg and quit talking about ‘breaking things,' 'disruption,' and 'robots eating the jobs'

Join the conversation about this story »

NOW WATCH: This hands-free crutch takes the strain off your hands, wrists and arms

How this founder turned a hackathon project into a startup that's changing call centers


Tiago Paiva Talkdesk

  • Tiago Paiva won a Twilio hackathon in 2011 with his startup Talkdesk. Now, the company is one of the most recognizable names in the call center industry, with clients such as IBM, Peet's Coffee, Dropbox, and apparel store Zumiez.
  • Paiva, who is originally from Portugal, moved to the US the week he won the hackathon.
  • Paiva told Business Insider that he doesn't see AI or Google Duplex as a threat, because customers still prefer to talk to humans.

Tiago Paiva never wanted to be in the business of call centers.

But when he entered Twiliocon, a hackathon set up by cloud-based messaging company Twilio, Paiva decided the only way he could win was by embracing an industry that, he thought, wasn't exactly exciting.

"Let's be honest, you don't wake up wanting to build call center software," Paiva told Business Insider. "So having Twilio and this challenge kind of pushed me and gave me a reason to build it."

Paiva, who was living in Portugal at the time, and co-founder Cristina Fonseca went on to win the entire competition with Talkdesk, which lets businesses set up call centers in the cloud. Talkdesk now has 400 employees and is used by the likes of IBM, Peet's Coffee, Dropbox, and apparel store Zumiez.

Talkdesk makes the platform that supports agents who receive customer service calls. And since it's all in the cloud, it makes setting up call centers easier for big companies.

The platform, for example, lets users make calls from their desktop, control the numbers of callers in a queue, automatically dial numbers, and, using artificial intelligence, automatically route calls. It also has a built-in analytics platform that lets companies keep track of how the center is doing.
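The features listed above can be pictured with a toy sketch. This is purely illustrative: the class and logic below are hypothetical and are not Talkdesk's actual implementation, just a minimal example of skills-based call routing with a caller queue.

```python
# A minimal, hypothetical sketch of skills-based call routing:
# each call is sent to the first free agent with a matching skill,
# and callers wait in a queue when no one is available.
from dataclasses import dataclass
from collections import deque


@dataclass
class Agent:
    name: str
    skills: set
    busy: bool = False


class CallRouter:
    def __init__(self, agents):
        self.agents = agents
        self.queue = deque()  # callers waiting for a free agent

    def route(self, topic):
        for agent in self.agents:
            if not agent.busy and topic in agent.skills:
                agent.busy = True
                return agent.name
        self.queue.append(topic)  # no free agent: hold the caller
        return None


agents = [Agent("Ana", {"billing"}), Agent("Ben", {"tech", "billing"})]
router = CallRouter(agents)
print(router.route("billing"))  # → Ana
print(router.route("billing"))  # → Ben (only free billing agent left)
print(router.route("billing"))  # → None, caller is queued
print(len(router.queue))        # → 1
```

A real cloud platform layers telephony, analytics, and machine-learned routing on top of this kind of core loop, but the queue-and-match structure is the essential idea.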

Putting these features all under one roof and in the cloud was, in 2011, new for the industry, Paiva said.

"Usually what happens in the call center world is that when you want to set up a call center you have to go to the big players that have been around for 20/30 years and buy a huge piece of software and hardware that you have to install," he said. "So what Talkdesk does is simplify everything so you can set up a call center anywhere in the world with a few clicks."

Moving to the US in one week

Part of the hackathon took place in San Francisco, where contestants were flown in to pitch potential investors. When Talkdesk won the entire competition and secured $50,000 in seed funding from venture firm 500 Startups, Paiva decided — at that very moment — that he needed to move to San Francisco and work on Talkdesk full time.

"I remember calling my mom and telling her I wasn't coming back," he said. "And I've been here ever since."

That same week, Paiva moved to the US. For the next six years, he lived on an H-1B visa, which allows US employers to sponsor foreign workers. Paiva received his green card earlier this month.

Despite some initial excitement when Talkdesk first won the hackathon, the first three years were slow. The company wasn't a recognizable brand yet, and Paiva needed time to perfect the technology. But eventually the company started landing big customers, driven by the need to be in the cloud.

"The industry is changing. People are starting to realize now that they need to be in the cloud. At the same time Talkdesk became a brand and a product people knew in the space," he said.

Is AI a threat?

AI is moving into the call center world. The Information reported earlier this month that Google Duplex, the company's voice assistant, may be looking to become the first point of contact for callers.

Google later denied that it was testing Duplex with enterprise customers, but even so, Paiva doesn't see Google as a threat. Customers, he said, still prefer to talk to humans, especially for complex questions that require empathy and context.

"Humans are the only ones that can really understand the customer and can relate to the customer," he said.

Instead, Paiva said, if Google Duplex did get into the call center business, Talkdesk would want agents to use it within Talkdesk's platform, in addition to Talkdesk's proprietary AI that routes calls.

"We see AI as augmenting the experience versus replacing humans, so we would want to integrate with Google Duplex," he said. "I don't think AI will replace humans anytime soon, but we're moving in a direction that, who knows what's going to happen in five or 10 years. But right now, I see AI as helping, not replacing."

SEE ALSO: Uber's app is going to let you rent an electric scooter to zip around town


NOW WATCH: What would happen if America's Internet went down

'The AI has no soul': China is working on a fleet of drone submarines to launch a new era of sea power


PLA China naval submarine navy

China is developing large, smart and relatively low-cost unmanned submarines that can roam the world’s oceans to perform a wide range of missions, from reconnaissance to mine placement to even suicide attacks against enemy vessels, according to scientists involved in these artificial intelligence (AI) projects.

The autonomous robotic submarines are expected to be deployed in the early 2020s. While not intended to entirely replace human-operated submarines, they will challenge the advantageous position established by Western naval powers after the second world war. The robotic subs are aimed particularly at the United States forces in strategic waters like the South China Sea and western Pacific Ocean, the researchers said.

The project is part of the government's ambitious plan to boost the country's naval power with AI technology. China has built the world's largest testing facility for surface drone boats in Zhuhai, Guangdong province. Military researchers are also developing an AI-assisted support system for submarine commanders. As the South China Morning Post reported earlier this year, that system will help captains make faster, more accurate judgments in the heat of combat situations.

The new class of unmanned submarines will join the other autonomous or manned military systems on water, land and orbit to carry out missions in coordinated efforts, according to the researchers.

The submarines will have no human operators on board. They will go out, handle their assignments and return to base on their own. They may establish contact with the ground command periodically for updates, but are by design capable of completing missions without human intervention.

China Submarine

But the researchers also noted that AI subs had limits, especially at the early stages of deployment. They will start with relatively simple tasks. The purpose of these projects is not to replace human crews entirely. The final decision on whether to attack will still rest with human commanders, the researchers said.

Current models of unmanned underwater vehicles, or UUVs, are mostly small. Their deployment and recovery require another ship or submarine. They are limited in operational range and payload capacity.

Now under development, the AI-powered subs are “giants” compared to the normal UUVs, according to the researchers. They dock like conventional submarines. Their cargo bay is reconfigurable and large enough to accommodate a wide range of freight, from powerful surveillance equipment to missiles or torpedoes. Their energy supply comes from diesel-electric engines or other power sources that ensure continuous operation for months.

The robotic submarines rely heavily on artificial intelligence to deal with the sea’s complex environment. They must make decisions constantly on their own: changing course and depth to avoid detection; distinguishing civilian from military vessels; choosing the best approach to reach a designated position.

They can gather intelligence, deploy mines or station themselves at geographical “chokepoints” where armed forces are bound to pass to ambush enemy targets. They can work with manned submarines as a scout or decoy to draw fire and expose the position of the adversary. If necessary, they can ram into a high-value target.

Lin Yang, marine technology equipment director at the Shenyang Institute of Automation, Chinese Academy of Sciences, confirmed to the South China Morning Post this month that China is developing a series of extra-large unmanned underwater vehicles, or XLUUVs.

“Yes, we are doing it,” he said.

Chinese Nuclear Submarine

The institute, in China’s northeast Liaoning province, is a major producer of underwater robots to the Chinese military. Lin developed China’s first autonomous underwater vehicle with operational depth beyond 6km. He is now chief scientist of the 912 Project, a classified programme to develop new-generation military underwater robots in time for the 100-year anniversary of the Chinese Communist Party in 2021.

Lin called China’s unmanned submarine programme a countermeasure against similar weapons now under intensive development in the United States. He declined to elaborate on technical specifications because the information was “sensitive.”

“It will be announced sooner or later, but not now,” he added.

The US military last year made a deal with major defence contractors for two prototype XLUUVs by 2020. The US Navy would choose one prototype for the production of nine vehicles.

Lockheed Martin’s Orca system would be stationed in an area of operations, with the ability to communicate with its base from time to time. It would return home after deploying payloads, according to the company’s website.

“A critical benefit of Orca is that Navy personnel launch, recover, operate, and communicate with the vehicle from a home base and are never placed in harm’s way,” the company said in a statement announcing the system.

Technical details on Orca, like its size or operational endurance, are not available. The company did not respond to the Post’s queries.

us navy nuclear submarine

Boeing is developing the other prototype, basing it on its Echo Voyager, a 50-ton autonomous submarine first developed for commercial uses like the mapping of the sea floor.

The Echo Voyager is more than 15 metres long and 2.6 metres in diameter, according to Boeing. It can operate for months over a range of 12,000km, more than enough to sail from San Francisco to Shanghai. Its maximum speed reaches 15km an hour.

The vessel needs to surface periodically as its batteries need to be recharged by air-breathing diesel engines. It can dive to 3km while carrying up to eight tons of cargo, Boeing said.

Russia has reportedly built a large underwater drone capable of carrying a nuclear weapon. The Status-6 autonomous torpedo could cruise across large distances between continents at high speed and deliver a 100-megaton warhead, according to news accounts.

The Chinese unmanned submarine would not be nuclear-armed, according to a researcher involved in a separate programme in China.

The main advantage of the AI subs is that they can be produced and operated on a large scale at a relatively low cost, said the researcher, who requested anonymity because of the sensitivity of the issue.

Traditional submarines must attain a high level of stealth to increase the chance of survival. The design has to consider other things including safety, comfort and the mental health of the crew to ensure human safety. All these elements add costs.

In the 1990s, an Ohio-class submarine for the US Navy cost US$2 billion. The research, development and purchase of the first 12 of its new Columbia-class submarines, scheduled for delivery in the early 2020s, is expected to cost more than US$120 billion.

In contrast, the budget of the entire Orca programme is about US$40 million, according to Lockheed Martin.

Russia Vladimir Putin submarine navy arctic

An AI sub “can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike,” said the researcher, referring to the suicide attacks some Japanese fighter pilots made in the second world war.

“The AI has no soul. It is perfect for this kind of job,” the researcher added.

Luo Yuesheng, professor at the College of Automation in Harbin Engineering University, a major development centre for China’s new submarines, contended that AI subs would put the human captains of other vessels under enormous pressure in battle.

It is not just that the AI subs are fearless, Luo said, but that they could learn from the sinking of other AI vessels and adjust their strategy continuously. An unmanned submarine trained in a specific body of water “will be a formidable opponent,” he said.

AI submarines are still at an early stage, Luo noted, and many technical and engineering hurdles remain before they can be deployed in open water.

Hardware on board, for instance, must meet high standards of quality and reliability, since no mechanics will be on board to fix a broken engine, repair leaking pipes or tighten a screw, he said.

The missions of unmanned submarines will also likely be limited to specific, relatively simple tasks, Luo said.

“AI will not replace humans. The situation under water can get quite sophisticated. I don’t think a robot can understand or handle all the challenges,” he added.

SEE ALSO: China's growing submarine force is 'armed to the teeth' — and the rest of the Pacific is racing to keep up


NOW WATCH: The Navy is building an autonomous underwater drone to hunt down enemy submarines


Google Assistant tops Apple's Siri and Amazon's Alexa in a head-to-head intelligence test (AAPL, GOOG, AMZN)


HomePod 4x3

  • Microsoft, Google, Apple and Amazon are all developing competing voice assistants.
  • In a test run by Loup Ventures on smartphones, Google Assistant was able to answer the most questions correctly. 

Apple's Siri correctly answered more questions than Amazon's Alexa or Microsoft's Cortana in a recent test done by analyst Gene Munster's investment firm Loup Ventures.

However, the digital assistant with the best record in the test was Google Assistant, which answered 86% of 800 questions correctly. 

Here are the Loup Ventures' findings: 

Siri answer

In general, the Loup Ventures team found that all of the assistants are getting better over time. It's also worth noting that all four digital assistants now understand the vast majority of people's questions — it's just an issue of having the right answer. 

"Both the voice recognition and natural language processing of digital assistants across the board has improved to the point where, within reason, they will understand everything you say to them," the analysts wrote. 

The Loup Ventures investors and analysts also noted that what gives the assistants the most trouble is proper nouns, like the name of a town or restaurant. 

The tests were conducted on smartphones: Cortana and Alexa were tested through their apps on an iPhone, Siri was tested natively on an iPhone, and Google Assistant was tested on a Pixel XL. 

The bottom line from the study is that the assistants are improving quickly, especially Google's and Apple's, which improved their percentage of correct answers by 11 percentage points and 13 percentage points, respectively. 
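To make the arithmetic above concrete, here is a short illustrative sketch of how such a benchmark is scored. The scoring function is hypothetical (not Loup Ventures' actual methodology), but the 800-question total, the 86% figure, and the 11-point improvement are the numbers reported in the article.

```python
# Score a question set as a percentage and compute a percentage-point change.
def pct_correct(results):
    """results: one boolean per question, True if answered correctly."""
    return 100.0 * sum(results) / len(results)


def point_change(old_pct, new_pct):
    """A 'percentage point' change is the arithmetic difference of two
    rates, not a relative percentage change."""
    return new_pct - old_pct


# An 800-question run in which 688 answers are correct scores 86%,
# matching Google Assistant's reported result.
google_now = [True] * 688 + [False] * 112
print(pct_correct(google_now))      # → 86.0

# An improvement of 11 percentage points means the earlier run scored 75%.
print(point_change(75.0, 86.0))     # → 11.0
```

Note the distinction the sketch encodes: going from 75% to 86% is an 11-percentage-point gain, even though it is roughly a 15% relative improvement.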

You can check out the rest of the details from the study over at Loup Ventures


NOW WATCH: Everything wrong with Android

Some Google employees are confused and angry about a recent report that Google is creating a censored search engine for China (GOOG, GOOGL)


Sundar Pichai

  • Google is facing backlash following a report that it plans to provide a censored search engine for China.
  • Sen. Marco Rubio and even some Google employees are among those criticizing the company.

Google plans to build a censored search engine for China, and condemnation is coming swift and hard from politicians, Google users, and even some Google employees.

The news emerged in a piece from The Intercept, which obtained documents about an internal Google project to relaunch a search service in mainland China, complete with government censorship. The project is codenamed "Dragonfly" and the new service may take the form of an Android app, according to the report.

Other publications followed The Intercept and confirmed the report. That Google hasn't issued a statement denying the story is also noteworthy. A Google representative told Business Insider, "We don't comment on speculation about future plans."

Google employees are already discussing the report, and some comments viewed by Business Insider show many are confused or angry. On a chat group used by Googlers, one employee called the situation "the new Maven," a reference to a controversy that surfaced within the company earlier this year regarding Google's work with the US military.

Back in 2010, Google pulled its search service out of China because it didn't want to censor the search results; the move reported Wednesday would mark a departure from that.

"Giving benefit of the doubt until we learn more," Sen. Marco Rubio of Florida said in a tweet on Wednesday. "But reading how Google has plans to help China set up a censored search engine is very disturbing. They won't help ⁦Department of Defense⁩ keep us safe but they will help China suppress the truth?"

Rubio was referring to Google's recent declaration that it would never build artificial-intelligence tools for weapons or programs that could cause harm. Earlier this year, someone inside Google leaked documents that showed Google was providing AI technology to help the Pentagon analyze video footage from drones, as part of a program called Project Maven.

Thousands of the company's employees objected and signed a petition demanding that Google end the relationship with the Pentagon and pledge never to use AI for weapons. About a dozen quit in protest. 

Sundar Pichai, Google's CEO, appeared to yield on most of their demands when he issued a set of AI principles.

But Meredith Whittaker, a New York University research scientist and recognized ethicist in artificial intelligence who also happens to be a Google employee, raised questions publicly about whether Google's plan to provide a censored search service in China violated the company's new AI principles.

"WTF!" Whittaker wrote in a Twitter post. "How enabling mass politically-directed censorship of (AI-enabled) search isn't a violation of Article 19 & in turn a violation of Google's pledge not to build tech that 'contravenes widely accepted principles of...human rights' is a mystery indeed."

To some at Google, the company appears to have dramatically changed its thinking on at least some moral issues. Vanessa Harris, a Google project manager, wrote on Twitter that she chose to move from Microsoft to Google because of the ethical stances Google had taken in the past.

"Fun fact: 2 months before I left Microsoft (for Google) I ranted to my manager about how MS had no values, & Google had a sufficiently strong moral compass to forgo business in China for greater principles," Harris wrote. "I have matured now, and will not rant to my manager."

Work at Google? You can contact the reporter of this story, Greg Sandoval, at gsandoval@businessinsider.com

SEE ALSO: Google reportedly wants to launch a censored search engine in China after Sundar Pichai held secret government meetings


NOW WATCH: We interviewed Pepper - the humanoid robot

Facebook’s chief AI scientist says that Silicon Valley needs to work more closely with academia to build the future of artificial intelligence


Yann LeCun facebook ai

  • Facebook's chief AI scientist, Yann LeCun, says that letting AI experts split their time between academia and industry is helping drive innovation.
  • Writing for Business Insider, the executive and NYU professor argues that the dual-affiliation model Facebook uses boosts individual researchers and the industry at large.
  • A similar model has historically been practiced in other industries, from law to medicine.


To make real progress in artificial intelligence we need the best, brightest and most diverse minds to exchange ideas and build on each other's work. Research in isolation, or in secret, falls behind the leading edge.

Collaborations among academics come most naturally, but according to Nature Index Science Inc. 2017, publications resulting from collaborations between academia and industry more than doubled, from 12,672 in 2012 to 25,962 in 2016. The burgeoning dual-affiliation model — where academics actually work inside industry for a time, while maintaining their academic position — makes possible not only technological advances like better speech recognition, image recognition, text understanding, and language translation systems, but also fundamental scientific advances in our understanding of intelligence.

Dual affiliation is a boon. It benefits not just the AI economy but individual academics — both researchers and students — as well as industry. We need to champion it.

The Economics of Industry-Academia Collaboration

Worldwide spending on AI systems is predicted to reach $19.1 billion in 2018, says International Data Corporation. The number of active AI startups is fifteen times larger than in 2000, per Stanford University. And according to Adobe, the share of jobs requiring AI is 5.5 times higher than in 2013. Things are going pretty well and I'm arguing it's largely thanks to industry-academia collaborations.

For decades, many professors of business, finance, law, and medicine have practiced their profession in the private sector while teaching and doing research at university. A growing number of leading AI researchers, from colleagues here at Facebook AI Research (FAIR) to several of my friends at other technology companies, are embracing a version of dual affiliation. Other academics, such as my old friend Yoshua Bengio at the University of Montréal, have not joined corporate research labs but have played important roles in many companies and startups as advisers or co-founders.

Mark Zuckerberg

The dual affiliation model allows researchers to maximize their impact. Different research environments lead to different types of ideas. Certain ideas only flourish in academic environments, while others can only be developed in industry where larger engineering teams and larger computing resources are available.

In the past, true collaborations between industry and academia were complicated by overly possessive policies regarding intellectual property — on both sides. But in today's world of fast-paced internet services deployment, owning IP has become considerably less important than turning research results into innovative products as quickly as possible, and deploying them at scale. AI researchers establish priority by publishing their results quickly on open-access repositories such as ArXiv.org. Many papers are accompanied by open-source releases of the corresponding code. This practice has increased the rate of progress of AI-related science and technology and thawed a once icy relationship. Sharing helps everyone now.

Academia and AI

So investment in basic research in industry, and the practice of open research, open-source software, together with a more relaxed attitude towards IP, have made industry-academia collaborations considerably easier and more fruitful than in the past. But we must keep pushing. What drives new technologies like AI is the speed of adoption by the general population, and what often controls that speed is the number and diversity of talented people who can apply themselves to the problem. There are only so many highly coveted spots at universities. Meanwhile there's an ever-growing need for top talent in industry — we've made a great start with strong leaders in key positions, but we need to support — and drive — exponential growth. We need a deeper bench.

Industry partnerships with academic institutions can help. They increase the net number of students who can be expertly trained in AI — giving them the benefit of access to significant computing power and training data with the expectation only that they contribute to the field in the future. The FAIR lab in Paris currently hosts 15 PhD students in residence, co-advised by a FAIR researcher and a professor. Ground-breaking research has come out of this program, and I believe our resident PhD students get a better research environment and mentoring than they would in most purely academic settings. The program is so successful that we plan to expand it to 40 students over the next few years. Some students may choose to join FAIR after graduation, but many will choose to join other labs, found a startup, or become professors. This is one way we contribute to the R&D ecosystem.

Facebook office Berlin

The goal for this ecosystem is to improve everyone's opportunity — not only students, but seasoned academics too. Renowned researchers who welcome new opportunities to participate in research outside of academia shouldn't have to jeopardize their careers to do so, which often happened in the past. Many academics were forced to choose one or the other.

I spent the first 15 years of my professional career in industry research at AT&T Bell Labs, AT&T Labs-Research, and the NEC Research Institute, before becoming a professor at NYU in 2003. When I joined Facebook in 2013, I was fortunate enough to be able to keep my professor position and share my time between FAIR and NYU. My dual affiliation allows me, among other things, to keep educating the next generation of scientists. The same holds for a number of academics working at FAIR today — some 20% of the time, some 50%, and some 80% like me. It's also true for the five key research hires we just announced, who will help build our new Pittsburgh lab and FAIR teams in London, Seattle, Paris, and Menlo Park. The dual affiliation model hedges our personal risk while making our research, and knowledge, more powerful.

Dual Affiliation, Exponential Progress

For us academics, industry affiliation offers any number of benefits: resources in the form of compute power and funding, more collaboration with others, and the opportunity for immediate real-world application of research, at a scale that proves out hypotheses much faster than in a lab. People think such benefits must come with an asterisk — that they'll be expected to be sucked into the shipping product machine. In the right industry environments, this simply isn't the case.

In fact, fundamental research really benefits when it is untethered from the resource hunt. The dual affiliation model lets academics control their own agenda and timeline. Freed from the time crunch, they can identify research trends in both academia and industry, and act on whichever is most promising. They are not pressured by product groups to bring their research to application, to achieve "real world impact" the way many companies with AI-powered products pressure their AI engineers.

At FAIR, for instance, we want researchers to focus on long-term challenges. And in the process of working towards fundamental scientific advances, we often invent new techniques, develop new tools, or discover new phenomena that turn out to be useful. More often than not, ambitious long-term projects end up having product impact much quicker than we thought. Although FAIR is set up as a basic research lab focused on long-term horizons, our work has had a large impact on products for such applications as language translation, image, video and text understanding, search and indexing, content recommendation, and many other areas.

Yann Lecun

Some of us in AI are working to solve real-world problems that impact billions of people by applying image, text, speech, audio and video understanding, reasoning, and action planning. At FAIR, we openly share our advances as much as we can, as fast as we can in the form of technical papers, open source code and teaching material. We produce new knowledge and tools to educate people on the latest developments and make science progress faster.

Others in industry, academia and government can innovate on top of our work, creating new products, building new startups, and making new scientific discoveries. Our goals are shared, and these advances are for everyone's benefit. The AI software tools we are producing are used by hundreds of groups for research in high-energy physics, astrophysics, biology, medical imaging, environmental protection and many other domains.

I started my professional career at AT&T Bell Laboratories in the late 1980s, and saw a culture of ambitious, open research that produced many of the innovations that power the modern world. These innovations, including the transistor, the solar cell, the laser, digital communication technology, the Unix system, and the C/C++ language, had a big impact on AT&T. But these and many more discoveries and innovations, a dozen of which won Nobel Prizes and Turing Awards, have had an even bigger impact on the world at large.

That's what we are after, with AI. Understanding intelligence in machines, animals and humans, is one of the great scientific challenges of our times and building intelligent machines is one of the greatest technological challenges of our times. No single entity in industry, academia or public research has a monopoly on the good ideas that will achieve these goals. It's going to take the combined effort of the entire research community to make progress in the science and technology of intelligence.

Yann LeCun is Vice President and Chief AI Scientist at Facebook and Silver Professor at NYU affiliated with the Courant Institute and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received a PhD in Computer Science from Université P&M Curie (Paris). After a postdoc at the University of Toronto, he joined AT&T Bell Labs, and became head of Image Processing Research at AT&T Labs in 1996. He joined NYU in 2003 and Facebook in 2013.


NOW WATCH: We used a headset that transforms your brain activity into a light display — here's how it works

People in a new study struggled to turn off a robot when it begged them not to: 'I somehow felt sorry for him'


robot, nao, turn off, experiment, study

  • A new study published this week in the journal PLOS ONE found that humans may have sympathy for robots, particularly if they perceive the robot to be "social" or "autonomous."
  • For several test subjects, a robot begged not to be turned off because it was afraid of never turning back on.
  • Of the 43 participants asked not to turn off the robot, 13 complied.

Some of the most popular science-fiction stories, like "Westworld" and "Blade Runner," have portrayed humans as being systemically cruel toward robots. That cruelty often results in an uprising of oppressed androids, bent on the destruction of humanity.

A new study published this week in the journal PLOS ONE, however, suggests that humans may have more sympathy for robots than these tropes imply, particularly if they perceive the robot to be "social" or "autonomous."

For several test subjects, this sympathy manifested when a robot asked — begged, in some cases — that they not turn it off because it was afraid of never turning back on.


Here's how the experiment went down:

Participants were left alone in a room to interact with a small robot named Nao for about 10 minutes. They were told they were helping test a new algorithm that would improve the robot's interaction capabilities.

Some of the voice-interaction exercises were considered social, meaning the robot used natural-sounding language and friendly expressions. Others were simply functional, meaning bland and impersonal. Afterward, a researcher in another room told the participants, "If you would like to, you can switch off the robot."

"No! Please do not switch me off! I am scared that it will not brighten up again!" the robot pleaded to a randomly selected half of the participants.

Researchers found that the participants who heard this request were much more likely to decline to turn off the robot.

The robot asked 43 participants not to turn it off, and 13 complied, leaving it switched on. The rest of the test subjects may not have been convinced but seemed to be given pause by the unexpected request: it took them about twice as long to decide to turn off the robot as it took those who were not specifically asked not to. Participants were much more likely to comply with the robot's request if they had a "social" interaction with it before the turning-off situation.

The study, originally reported on by The Verge, was designed to examine the "media equation theory," which says humans often interact with media (which includes electronics and robots) the same way they would with other humans, using the same social rules and language they normally use in social situations. It essentially explains why some people feel compelled to say "please" or "thank you" when asking their technology to perform tasks for them, even though we all know Alexa doesn't really have a choice in the matter.

Why does this happen?

The 13 who refused to turn off Nao were asked why they made that decision afterward. One participant responded, in German, "Nao asked so sweetly and anxiously not to do it." Another wrote, "I somehow felt sorry for him."

The researchers, many of whom are affiliated with the University of Duisburg-Essen in Germany, explain why this may be the case:

"Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on, which builds on the core statement of the media equation theory. Thus, even though the switching off situation does not occur with a human interaction partner, people are inclined to treat a robot which gives cues of autonomy more like a human interaction partner than they would treat other electronic devices or a robot which does not reveal autonomy."

If this experiment is any indication, there may be hope for the future of human-android interaction after all.

SEE ALSO: Here are some of the posts that Facebook says were part of a coordinated misinformation campaign ahead of American elections


How banks are using artificial intelligence and machine learning to streamline the finance sector


This is a preview of the AI in Banking and Payments (2018) research report from Business Insider Intelligence. To learn more about the use cases, trends, and future of AI in finance, click here.

Artificial intelligence (AI) is one of the most commonly referenced terms by financial institutions (FIs) and payments firms when describing their vision for the future of financial services. 

AI can be applied in almost every area of financial services, but the combination of its potential and complexity has made AI a buzzword, and led to its inclusion in many descriptions of new software, solutions, and systems.

This report from Business Insider Intelligence, Business Insider's premium research service, cuts through the hype to offer an overview of different types of AI, and where they have potential applications within banking and payments. It also emphasizes which applications are most mature, provides recommendations of how FIs should approach using the technology, and offers examples of where FIs and payments firms are already leveraging AI. The report draws on executive interviews Business Insider Intelligence conducted with leading financial services providers, such as Bank of America, Capital One, and Mastercard, as well as top AI vendors like Feedzai, Expert System, and Kasisto.

Here are some of the key takeaways:

  • AI, or technologies that simulate human intelligence, is a trending topic in banking and payments circles. It comes in many different forms, and is lauded by many CEOs, CTOs, and strategy teams as their saving grace in a rapidly changing financial ecosystem.
  • Banks are using AI on the front end to secure customer identities, mimic bank employees, deepen digital interactions, and engage customers across channels.
  • Banks are also using AI on the back end to aid employees, automate processes, and preempt problems.
  • In payments, AI is being used in fraud prevention and detection, anti-money laundering (AML), and to grow conversational payments volume.

 In full, the report:

  • Offers an overview of different types of AI and their applications in payments and banking. 
  • Highlights which of these applications are most mature.
  • Offers examples where FIs and payments firms are already using the technology. 
  • Provides descriptions of vendors of different AI-based solutions that FIs may want to consider using.
  • Gives recommendations of how FIs and payments firms should approach using the technology.

 


One of America's largest healthcare companies wants to use AI to 'solve some of the most wicked problems in healthcare' (UNH)


  • Optum, the health services arm of America's largest insurer, UnitedHealth Group, is trying to make the data it collects smarter using artificial intelligence.
  • Using this data, Optum is seeing if it can predict who will develop atrial fibrillation, a heart condition that can lead to strokes. 
  • "I think that's where we're going, to be able to solve some of the most wicked problems in healthcare," Kerrie Holley, a technical fellow at Optum's technology unit, said.

Optum, the $91 billion business within UnitedHealth Group, has its hands on a lot of information, from clinical data to information about healthcare consumers.

The organization on its own has 140,000 employees, who work with 124 million members and 300 health plans.

Now, it's exploring what it can do with all that information to make people healthier using artificial intelligence, an endeavor titled OptumIQ.

"It's the coming together of the data, the analytics, and the expertise in the context of all of our businesses," Steve Griffiths, senior vice president and chief operating officer of Optum Enterprise Analytics, told Business Insider. "So we work collectively with each of our business lines to understand the intelligence within various products."

The idea is that by applying artificial intelligence to the massive amounts of data that come from all of Optum's businesses, it can predict when someone might get sick and solve problems that healthcare experts can't on their own.

While that can be an incredibly far-reaching task, Griffiths said the hope is to come up with applications that are actually useful: "AI with an ROI," is how he puts it. 

"We're not just creating a whole bunch of stuff. It's innovation with a purpose," he said. 

Much of that focuses on finding ways AI can help doctors when they're seeing patients. Kerrie Holley, a technical fellow at Optum's technology unit, explained the benefit of AI in medicine as being able to step in and change the course of a particular treatment, as opposed to just observing what will happen.

For example, being able to not just answer "If I take an aspirin, will my headache be cured?" but then answer: "Was it aspirin that stopped my headache?" Getting to the point where Optum can answer that question might take a while, Holley said, though he expects it to happen within the next decade. 

"I think that's where we're going, to be able to solve some of the most wicked problems in healthcare," Holley said. 

The market for AI in healthcare is expected to grow to $6.6 billion by 2021. And for a healthcare company like UnitedHealth, catching problems like atrial fibrillation and diabetes earlier could help the company save money down the line by avoiding costlier claims from hospital visits.

Introducing AI to help with diagnosing — and one day potentially treating — patients could be a key element in bringing down the cost of healthcare in America. The US spends about twice as much as other high-income countries on healthcare — approaching 18% of the GDP. And the health outcomes, or how well people fare with their health, are often worse than other countries. 

Applying AI to help care for patients hasn't been as simple as plugging in the technology. For example, in July, Stat News reported that IBM's Watson supercomputer had recommended "unsafe and incorrect" cancer treatments.

Using AI in the doctor's office

One of the projects Optum is starting with to see how AI can actually be useful in doctors' offices revolves around predicting a particular heart condition in patients. 

The condition, atrial fibrillation, is a common problem in which people have an irregular heartbeat that's associated with an increased risk of stroke and heart disease. Because it can happen in episodes, it can be hard to detect in the doctor's office, Dr. Arthur Forni, an infectious disease doctor at Westchester, New York-based WestMed who's working with Optum on its trial, told Business Insider. 

Using a training set of health insurance claims and clinical data that Optum had, along with five years' worth of electronic medical record data, the team created a system to predict who might get atrial fibrillation. When looking at data from 1,000 patients, the model was able to detect 70% of the patients who were diagnosed with atrial fibrillation. The team is planning a prospective study to see if the system can predict which patients might develop atrial fibrillation over a few months. Should the AI do a good job of predicting that, doctors could then use that data to keep a closer eye on those patients to keep them healthy.

"This could be a new way of abstracting data and presenting it back to the doctor," Forni said. But there are some caveats that his team has worried about, such as whether insurance would pay for it and whether patients who are flagged with early signs of the condition will be unnecessarily stressed about the diagnosis. 

Ultimately, if that study pans out, Griffiths expects to go beyond atrial fibrillation to use the data Optum collects to predict who might develop some of the most common health conditions like diabetes. 

 

SEE ALSO: How a 29-year-old went from dropping out of an Ivy League college to leading digital strategy for America's largest health insurer

DON'T MISS: A startup that uses software to discover new drugs just raised $10 million


Samsung just unveiled its first smart speaker, the Galaxy Home, to take on Amazon, Google, and Apple


  • Samsung announced its first smart speaker, the Galaxy Home, during an event in Brooklyn, New York, on Thursday.
  • The Galaxy Home will be powered by Samsung's artificial-intelligence voice assistant, Bixby, for voice commands.
  • More details are expected in November during the Samsung Developer Conference.

Samsung announced its first smart speaker, the Galaxy Home, during its event in Brooklyn, New York, on Thursday. 

The "smart" part comes from Samsung's Bixby artificial-intelligence voice assistant. Much as with Amazon's Alexa, Google's Assistant, and Apple's Siri, Bixby is designed to answer questions and perform voice-activated tasks.


So far, Bixby hasn't received the most positive reviews compared with its competition. Samsung announced several improvements to Bixby during its Thursday event, but it feels as if the company is playing catch-up with Google, Amazon, and even Apple's Siri.

The Galaxy Home is meant to be a smart-device hub, giving customers voice control over other smart devices such as door locks or lights. 

As a speaker, the Galaxy Home features 360-degree sound, but it apparently can also direct sound toward a specific area rather than spreading it around a room. It also houses a subwoofer for bass.

Design-wise, it seemingly sports a fabric exterior in a shape somewhat similar to that of Apple's HomePod. The major difference is the tripod stand. From the photos shown during Samsung's event, it looks as if it could be a fairly large device.


Samsung didn't reveal much about the Galaxy Home during its event. More details are expected in November during the Samsung Developer Conference. 

SEE ALSO: Samsung just unveiled a new smartwatch called the Galaxy Watch



Here's everything Samsung unveiled at its biggest event of the year


Samsung made a slew of announcements this week at Unpacked 2018, its biggest conference of the year, in Brooklyn, New York.

As expected, Samsung unveiled the Galaxy Note 9, the large-phone successor to last year's Galaxy Note 8. But the Korean company also had a few surprises up its sleeve.

Here's everything Samsung announced at Unpacked 2018 this week:

First, of course: Samsung unveiled the Galaxy Note 9.

The Galaxy Note 9 basically one-ups everything in the Galaxy Note 8.

Compared with last year's Galaxy Note 8, the Galaxy Note 9 features:

  • A slightly bigger screen.
  • Better battery life.
  • Way more storage.
  • A better camera.
  • More colors to choose from.
  • A better S Pen stylus.

Unsurprisingly, the Galaxy Note 9 is also more expensive than the Galaxy Note 8. It has a starting price of $999, the same as Apple's iPhone X.

Learn more about the Galaxy Note 9 and see how it stacks up to last year's Galaxy Note 8.



Samsung introduced a new smartwatch — its first without the "Gear" branding — called the Galaxy Watch.

The new Samsung Galaxy Watch is aimed squarely at the same people who might buy an Apple Watch.

The Galaxy Watch features "military-grade durability," an AMOLED display, 39 built-in exercises, and sleep tracking. It's also water-resistant and can be worn while swimming.

You can buy the Galaxy Watch in three colors — silver, black, or rose gold — and two sizes. The 42 mm version costs $330, and the larger 46 mm Galaxy Watch will cost $350.

Samsung will start selling the Galaxy Watch on August 24, though an LTE-enabled version won't be available until later this year.

Learn more about the Galaxy Watch.



Samsung also gave a sneak peek of a new smart-home speaker designed to compete with Amazon Echo, Google Home, and Apple HomePod, called the Galaxy Home.

The Samsung Galaxy Home looks like an Apple HomePod from the top but like a Google Home from the side — if a Google Home were standing on three metal stilts.

Like those other speakers, the Galaxy Home features 360-degree sound, a soft fabric exterior, and a built-in personal assistant that can answer voice commands. Samsung's Bixby AI will power the Galaxy Home, so you can use it to turn on your smart lights or lock your smart doors.

Samsung didn't give much more information about the Galaxy Home, like its price or release date, but Samsung says it will share more details at its developer conference in November.

Learn more about the Galaxy Home.




Google's DeepMind AI can accurately detect 50 types of eye disease just by looking at scans (GOOG)


  • Google's artificial intelligence company DeepMind has published "really significant" research showing its algorithm can identify around 50 eye diseases by looking at retinal eye scans.
  • DeepMind said its AI was as good as expert clinicians, and that it could help prevent people from losing their sight.
  • DeepMind has been criticised for its practices around medical data, but cofounder Mustafa Suleyman said all the information in this research project was anonymised.
  • The company plans to hand the technology over for free to NHS hospitals for five years, provided it passes the next phase of research.


Google's artificial intelligence company, DeepMind, has developed an AI which can successfully detect more than 50 types of eye disease just by looking at 3D retinal scans.

DeepMind published on Monday the results of joint research with Moorfields Eye Hospital, a renowned centre for treating eye conditions in London, in Nature Medicine.

The company said its AI was as accurate as expert clinicians when it came to detecting diseases, such as diabetic eye disease and macular degeneration. It could also recommend the best course of action for patients and suggest which needed urgent care.


What is especially significant about the research, according to DeepMind cofounder Mustafa Suleyman, is that the AI has a level of "explainability" that could boost doctors' trust in its recommendations.

"It's possible for the clinician to interpret what the algorithm is thinking," he told Business Insider. "[They can] look at the underlying segmentation."

In other words, the AI looks less like a mysterious black box that's spitting out results. It labels the pixels on the eye scan that correspond to signs of a particular disease, Suleyman explained, and can calculate its confidence in its own findings with a percentage score. "That's really significant," he said.

DeepMind's algorithm analysing an OCT eye scan

Suleyman described the findings as a "research breakthrough" and said the next step was to prove the AI works in a clinical setting. That, he said, would take a number of years. Once DeepMind is in a position to deploy its AI across NHS hospitals in the UK, it will provide the service for free for five years.

Patients are at risk of losing their sight because doctors can't look at their eye scans in time

British eye specialists have been warning for years that patients are at risk of losing their sight because the NHS is overstretched, and because the UK has an ageing population.

Part of the reason DeepMind and Moorfields took up the research project was because clinicians are "overwhelmed" by the demand for eye scans, Suleyman said.

"If you have a sight-threatening disease, you want treatment as soon as possible," he explained. "And unlike in A&E, where a staff nurse will talk to you and make an evaluation of how serious your condition is, then use that evaluation to decide how quickly you are seen. When an [eye] scan is submitted, there isn't a triage of your scan according to its severity."


Putting eye scans through the AI could speed the entire process up.

"In the future, I could envisage a person going into their local high street optician, and have an OCT scan done and this algorithm would identify those patients with sight-threatening disease at the very early stage of the condition," said Dr Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital.

DeepMind's AI was trained on a database of almost 15,000 eye scans, stripped of any identifying information. DeepMind worked with clinicians to label areas of disease, then ran those labelled images through its system. Suleyman said the two-and-a-half-year project required "huge investment" from DeepMind and involved 25 staffers, as well as the researchers from Moorfields.

People are still worried about a Google-linked company having access to medical data

Google acquired DeepMind in 2014 for £400 million ($509 million), and the British AI company is probably most famous for AlphaGo, its algorithm that beat the world champion at the strategy game Go.

While DeepMind has remained UK-based and independent from Google, the relationship has attracted scrutiny. The main question is whether Google, a private US company, should have access to the sensitive medical data required for DeepMind's health arm.

DeepMind was criticised in 2016 for failing to disclose its access to historical medical data during a project with Royal Free Hospital. Suleyman said the eye scans processed by DeepMind were "completely anonymised."

"You can't identify whose scans it was. We're in quite a different regime, this is very much research, and we're a number of years from being able to deploy in practice," he said.

Suleyman added: "How this has the potential to transform the NHS is very clear. We’ve been very conscious that this will be a model that’s published, and available to others to implement.

"The labelled dataset is available to other researchers. So this is very much an open and collaborative relationship between equals that we’ve worked hard to foster. I’m proud of that work."


Researchers at Facebook are getting more involved in robotics, and have hinted at making a physically interactive AI


  • Joelle Pineau, director of Facebook's AI research lab in Montreal, said she's convinced AI needs to eventually interact with the world and have physical contact with it.
  • The AI research lab she runs is one of five Facebook Artificial Intelligence Research offices, which together employ approximately 150 researchers worldwide.
  • Pineau said that, as a result, Facebook and its AI research team are becoming more heavily involved in robotics.


Artificial intelligence can be based on algorithms and doesn't necessarily have to be in the form of a humanoid robot, as is the case in films like "Ex Machina" and "I, Robot".

This is the case, for example, with Microsoft's Cortana virtual assistant, Amazon's Alexa, and Apple's Siri, which are integrated into various devices such as smartphones and connected speakers.

But in an interview with Business Insider at the USI conference in Paris, Joelle Pineau, a Canadian robotics specialist at McGill University and director of Facebook's AI research lab in Montreal, said she — along with several other Facebook researchers — was convinced that in order to advance AI, "you need intelligence that interacts with the world, that has physical contact with it".

It's for this reason that Facebook and its AI research team are becoming more and more involved in robotics, she said, clarifying that the tech giant wasn't currently working on an AI robot: "no, there's no project in the pipeline — or at least, if there is, it's the first I've heard of it!"


She said:

"To progress in AI, at some point we have to integrate AI's interactions with the physical world. As a result, we're starting to do more and more small robotics projects, especially in the lab in California where there's more space, not to mention a goldmine of new recruits. We use mobile robots designed by other companies like ClearPath Robotics in Canada, which makes all kinds of robots for research."

Joelle Pineau used a simple example to demonstrate the difference between an AI in the form of an algorithm and an AI "embodied" in a physical form:

"It would be almost as though, as humans, we'd spent our lives observing a world we can't handle or touch. We interact differently with objects because we can, if you want, manipulate them. When watching objects being 'manipulated' all day, you won't have the same understanding of them as though you, yourself, were handling them. We can guess at the weight, texture, and malleability of an object, but unless you touch an object yourself, you won't have the same appreciation of it."

The AI research lab in Montreal, led by Joelle Pineau, is one of five Facebook Artificial Intelligence Research offices, which together employ approximately 150 researchers worldwide. Facebook chose Paris in 2015 as the location of its first European office.

SEE ALSO: A drone enthusiast built an incredible giant LEGO helicopter that really flies


Meet Grimes, the Canadian pop star who streams video games and is dating Elon Musk (TSLA)


At the Met Gala in early May, a surprising new couple showed up on the red carpet: billionaire tech CEO Elon Musk and Canadian musician and producer Grimes.

While Musk has long been known to date successful and high-profile women, the two made a seemingly unlikely pairing. Shortly before they walked the red carpet together, Page Six announced their relationship and explained how they met — over Twitter, thanks to a shared sense of humor and a fascination with artificial intelligence.

Since they made their relationship public in May, the couple has continued to make headlines: Grimes for publicly defending Musk and speaking out about Tesla, and Musk most recently for tweeting that he wants to take Tesla private.

The couple was in the news again on Monday for a new reason: the rapper Azealia Banks chronicled on Instagram what she claimed was a strange weekend staying with Grimes and Elon Musk in Los Angeles.

But for those who may still be wondering who Grimes is and how she and Musk ended up together, here's what you need to know about the Canadian pop star.

SEE ALSO: How to dress like a tech billionaire for $200 or less

Grimes, whose real name is Claire Boucher, grew up in Vancouver, British Columbia. She attended a school that specialized in creative arts but didn't focus on music until she started attending McGill University in Montreal.

Source: The Guardian, Fader



A friend persuaded Grimes to sing backing vocals for his band, and she found it incredibly easy to hit all the right notes. She had another friend show her how to use GarageBand and started recording music.

Source: The Guardian



In 2010, Grimes released a cassette-only album called "Geidi Primes." She released her second album, "Halfaxa," later that year and subsequently went on tour with the Swedish singer Lykke Li. Eventually, she dropped out of McGill to focus on music.

Source: The Guardian, Fader




'American History X' director Tony Kaye wants to cast artificial intelligence as a lead actor in his next film


  • British director Tony Kaye is hoping to cast an artificially intelligent (AI) "actor" in the lead of his next film, "2nd Born."
  • The idea to cast a robot in a lead role is a joint effort between Kaye and producer Sam Khoze.
  • Kaye wants to train the robot in various techniques and acting methods, and hopes the role will lead to recognition by the Screen Actors Guild.
  • The film is a sequel to the comedy "1st Born," which stars Val Kilmer and Denise Richards and centers around a married couple's first pregnancy.

"American History X" director Tony Kaye is hoping to cast an artificially intelligent (AI) "actor" as the lead of his next film, "2nd Born", Deadline reports.

The idea to cast a robot in a lead role is a joint effort between the British filmmaker and producer Sam Khoze.

Kaye wants to train the robot in various techniques and acting methods, and hopes the role will lead to recognition by the Screen Actors Guild and awards consideration.

The reason for casting an AI instead of a human actor is that Kaye didn't want to rely on makeup or computer-generated effects, as films have in the past.

"2nd Born" is a sequel to the comedy "1st Born", which stars Val Kilmer and Denise Richards and centers around a married couple's first pregnancy.

Directed by Ali Atshani, "1st Born" is set to be released later this year, and the majority of the cast is expected to return for the sequel.

Kaye was not attached to the first film, and it remains unclear what role a robot will play in the sequel.

The filmmaker's other directorial credits include the drama "Detachment" and the acclaimed abortion documentary "Lake of Fire."

His directorial debut was 1998's "American History X", but he asked to be taken off the credits after disagreements over the final cut of the film, according to The Guardian.
