Channel: Artificial Intelligence

A 'human swarm' has figured out where Jeff Bezos should donate his money

  • Jeff Bezos tweeted in June that he wanted short-term, high-impact ways to do social good.
  • An artificial intelligence platform put 46,000 suggestions to a vote, using a method inspired by how bees swarm.
  • Universal access to clean drinking water was the winner.

In mid-June, Amazon CEO Jeff Bezos made an open request on Twitter for ways he could dip his toes in the philanthropy world. More than 47,000 people responded, suggesting ideas from dog training for PTSD victims to improving education in the developing world.

But Louis Rosenberg says he has overwhelming evidence that universal access to drinking water is the real genius idea.

Rosenberg is the CEO and founder of Unanimous AI, an artificial intelligence platform that purports to make super-intelligent decisions based on the wisdom of the crowd. The company takes its cues from the animal kingdom, in which birds form flocks, fish form schools, and bees form swarms, all in effort to make smarter choices to survive, Rosenberg said.

"They're literally smarter together when they converge as a system on answers," Rosenberg told Business Insider. Unanimous AI mimics this model by creating online "swarms" where people can tackle any question thrown at them and use their collective wisdom to converge on a single answer.

When the company has held these swarms, groups have correctly predicted the winners of the 2015 Oscars, the first four horses of the 2016 Kentucky Derby, and the eight teams that made the 2016 MLB playoffs, as well as the Chicago Cubs' World Series victory.

Rosenberg wanted to see how a swarm might respond to Bezos' call for philanthropic ideas. The company pulled each of the then-46,000 replies and distilled them into 200 broad categories of ideas. Then it asked 300 users (or "scouts") to rate the 200 ideas on a scale of 1 to 10. In nature, scouts are bees that go searching for new homes and present their findings to the hive. Ideas scoring 7.0 or higher were included for final consideration.
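The rating-and-threshold step described above can be sketched in a few lines of Python. The data here is hypothetical, and this is only an illustration of the filtering logic, not Unanimous AI's actual pipeline:

```python
def shortlist(ratings, threshold=7.0):
    """Return ideas whose average scout rating meets the threshold.

    ratings maps an idea name to a list of 1-10 scores from scouts.
    """
    advancing = {}
    for idea, scores in ratings.items():
        mean = sum(scores) / len(scores)
        if mean >= threshold:
            advancing[idea] = round(mean, 2)
    return advancing

# Hypothetical scout scores for three of the 200 categories:
scores = {
    "clean drinking water": [9, 8, 7, 9],   # mean 8.25 -> advances
    "dog training for PTSD": [6, 7, 5, 6],  # mean 6.0  -> dropped
    "mobile health clinics": [7, 8, 7, 7],  # mean 7.25 -> advances
}
print(shortlist(scores))
```

Only the surviving categories would then move on to the live swarm deliberations.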

Groups of roughly 100 people went through a series of swarms to eliminate solutions they saw as poor, ultimately arriving at six big ideas: universal access to clean water, cancer treatment assistance, health clinics for the poor, essential equipment for rural hospitals, free medicine for the poor, and mobile health clinics.

During the live deliberation, some people changed their answers while others held firm, Rosenberg said. Ultimately, about a minute went by before the swarm converged on access to clean drinking water.

Rosenberg admitted that there are no guarantees the swarm's decision is necessarily the best, and he doesn't have empirical studies to bolster swarm decision making as the ideal approach for business decisions. But what he does have is the company's track record and the confidence that nature has found a method humans can implement themselves.

He said there is also the element of repeatability. When his company performs the same swarms over and over, it tends to see the same results emerge. This suggests something inherently virtuous about the power of the swarm over the individual, Rosenberg said.

"This really wasn't close," he said. "There was a very, very strong sentiment in the swarm intelligence that universal access to clean drinking water was the issue that was the best use of the funds and you could have the biggest impact and the most long-lasting impact and also the most immediate impact."



Here's how millennial investors are trading Nvidia ahead of earnings (NVDA)


[Image: Nvidia founder, president, and CEO Jen-Hsun Huang delivers a keynote address at CES 2017 in Las Vegas.]

Nvidia is on a tear this year, up nearly 70% on the back of impressive artificial intelligence technology and a boost from cryptocurrency mining.

The company is set to report its second-quarter results after the bell on Thursday with Wall Street expecting earnings of $0.70 per share on revenue of $1.963 billion, according to data from Bloomberg.

Among traders who use the popular investing app Robinhood, Nvidia is the 14th-most-popular stock on the platform.

According to Robinhood data, users of the platform are buying shares of Nvidia 5% more than they are selling them. Before the company's first quarter earnings, investors were much more bearish and sold 21% more than they bought. Robinhood doesn't offer specific information about user trading habits, such as the number of shares traded and dollar amount of transactions.

Millennial investors are the ones leading the bullish pack. Investors younger than 30 are buying 11% more than selling. Older investors are more bearish, selling 2% more than they are buying.

Meanwhile, JPMorgan thinks the hype around the company has gone too far, and suggested buying puts as a means of protecting gains, as Nvidia's stock has risen 69.25% this year.


Google's DeepMind is teaching its artificial intelligence how to sleep (GOOG)



Google has been pretty far ahead of the curve when it comes to its artificial intelligence research. The world was shocked when its AI beat a top human player at the game of Go. More recently the company taught AI to use imagination and make predictions. The latest trick in Google's machine-learning research? Naps.

Google is making its AI more human — to a startling degree: its DeepMind division is teaching AI how to "sleep." In a recent blog post, the company said:

"At first glance, it might seem counter-intuitive to build an artificial agent that needs to 'sleep' – after all, they are supposed to grind away at a computational problem long after their programmers have gone to bed. But this principle was a key part of our deep-Q network (DQN), an algorithm that learns to master a diverse range of Atari 2600 games to superhuman level with only the raw pixels and score as inputs. DQN mimics "experience replay", by storing a subset of training data that it reviews "offline", allowing it to learn anew from successes or failures that occurred in the past."

DeepMind researchers are teaching computers how to learn. Neural networks, AI, machine-learning algorithms – all the buzzwords you've heard – boil down to teaching a computer how to figure something out on its own.

Self-driving cars need to make decisions about traffic, data-analysis algorithms have to decide how to group information segments, and AI needs to be able to think like a person. Otherwise, what's the point?

Google's new method means even if a computer is using its full functional resources to figure a problem out, it can save information to dream about later, while it's offline.

It doesn't have to be working on a problem to solve it. It'll fail at something, go offline, and then be able to succeed at that task once it's back online.

In the future, when your computer goes into sleep-mode, it might be plotting its next victory.


Nvidia crushed earnings and is still dropping (NVDA)



Nvidia crushed earnings estimates and is still seeing its stock price fall.

Nvidia is down 8.39% in premarket trading after trumping Wall Street's expectations for earnings and revenue. The company brought in $1.10 per share, higher than estimates of $0.82, on revenue of $2.23 billion, which was higher than the $1.96 billion expected.

Despite the beat, Nvidia's shares still faltered.

"Overall, we think expectations were simply too high," Mitch Steves, an analyst at RBC Capital Markets, said in a note to clients.

Steves remains bullish on Nvidia and says that the quarterly results are strong.

Data centers are one of the most exciting areas of growth for the company, with revenue up 176% year-on-year during a product transition, according to Steves. The company recently released its Volta processor, which is significantly faster than previous generations and should add to Nvidia's share of the data center market.

Nvidia's other businesses all grew as well: gaming revenue was up 51.9%, professional visualization grew 9.8%, and automotive grew 19.3% year over year.

Nvidia is up 61.49% this year, including the post-earnings drop.


Elon Musk: Artificial intelligence presents 'vastly more risk than North Korea'



Elon Musk tweeted some warnings about artificial intelligence on Friday night.

"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," Musk tweeted after his $1 billion startup, OpenAI, made a surprise appearance at a $24 million video game tournament Friday night, beating the world's best players in the video game, "Dota 2."

Musk claimed OpenAI's bot was the first to beat the world's best players in competitive eSports, but quickly warned that increasingly powerful artificial intelligence like OpenAI's bot — which learned by playing a "thousand lifetimes" of matches against itself — would eventually need to be reined in for our own safety.

"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too," Musk said in another tweet on Friday night.

Musk has previously expressed a healthy mistrust of artificial intelligence. The Tesla and SpaceX CEO warned in 2016 that, if artificial intelligence is left unregulated, humans could devolve into the equivalent of "house cats" next to increasingly powerful supercomputers. He made that comparison while hypothesizing about the need for a digital layer of intelligence he called a "neural lace" for the human brain.

"I think one of the solutions that seems maybe the best is to add an AI layer," Musk said. "A third, digital layer that could work well and symbiotically" with the rest of your body," Musk said during Vox Media's 2016 Code Conference in Southern California.

Nanotechnologists have already been working on this concept.

Musk said at the time: "If we can create a high-bandwidth neural interface with your digital self, then you’re no longer a house cat.”

Jillian D'Onfro contributed to this report.


AI and CGI will transform information warfare, boost hoaxes, and escalate revenge porn

  • Humans can generally trust what they see and hear — but that won't be the case for long.
  • Advances in AI and CGI will soon make it possible for anyone to create photorealistic video and audio.
  • Experts say it will transform information warfare, allowing the creation of sophisticated propaganda and misinformation.
  • The tech's impact will be profound, turbocharging everything from fake news and hoaxes to revenge porn and DIY entertainment.

Hoaxes and trickery are almost as old as human history.

When the Roman Republic first conquered the Italian peninsula between 500-200 BC, it was known to send fake refugees into enemy cities to "[subvert] the enemy from within." "Pope Joan" was believed to be a woman who allegedly tricked her way into becoming pope in the Middle Ages by pretending to be a man — but the entire story is now viewed as fake, a fictional yarn spun centuries after her purported reign.

"Vortigern and Rowena," a play that debuted in 1798, was initially touted as a lost work of William Shakespeare— but was in fact a forgery created by William Henry Ireland. And in the 1980s, the Soviet Union attempted to damage the United States' reputation and sow discord among its allies by spreading the myth that American scientists had created AIDS in a military laboratory, in an "active measures" disinformation campaign called "Operation INFEKTION."

Some fringe historians even believe that almost 300 years of medieval history were a hoax — invented retrospectively by the Holy Roman Emperor Otto III for political purposes around 1000 AD.

But humanity is now rapidly approaching the holy grail of hoaxes: Tools that will allow anyone to easily create fraudulent, photo-realistic video and audio.

Thanks to advances in artificial intelligence (AI) and computer-generated imagery (CGI) technology, over the coming decade it will become trivial to produce fake media of public figures and ordinary people saying and doing whatever hoaxers can dream of — something that will have immense and worrying implications for society.

In a previous feature, Business Insider explored how the tech will make it far more difficult to verify news media — boosting "fake news" and exacerbating mistrust in the mainstream media. But experts now say that its effects will be felt far more broadly than just journalism.

It will open up worrying new fronts in information warfare, as hostile governments weaponise the technology to sow falsehoods, propaganda, and mistrust in target populations. The tools will be a boon to malicious pranksters, giving them powerful new tools to bully and blackmail, and even produce synthetic "revenge porn" featuring their unwilling targets. And fraud schemes will become ever-more sophisticated and difficult to detect, creating uncertainty as to who is on the other end of any phone call or video-conference.

This may sound sensational, but it's not science fiction. This world is right around the corner — and humanity desperately needs to prepare itself.

The technology is basic — but not for long

Right now, the technology required to easily produce fake audio and video is in its infancy. It exists mainly in the form of tech demos, research projects, and apps that have yet to see a commercial release — but it hints at the world to come.

A few examples: In July, researchers at the University of Washington used AI to produce a fake video of President Barack Obama speaking, built by analysing tens of hours of footage of his past speeches. (The audio used also came from an old speech.)

The tech to do this live already exists. In 2016, "Face2face" researchers were able to take existing video footage of high-profile political figures including George W. Bush, Vladimir Putin, and Donald Trump, and make their facial expressions mimic those of a human actor, all in real time.

People are also working to spoof human speech. Voice-mimicking software called Lyrebird can take audio of someone speaking and use it to synthesise a digital version of that person's voice — something it showed off to disconcerting effect with demos of Hillary Clinton, Obama, and Trump promoting it. It's in development, and Adobe, the company behind Photoshop, is also developing similar tools under the name Project Voco.

The next generation of information warfare

In early August 2016, the US had an international crisis on its hands, and Americans were beginning to panic. As many as 10,000 armed police had surrounded the US Incirlik airbase in Turkey, and Twitter users were worrying that the situation could rapidly escalate — perhaps even with the nuclear weapons on the base falling into the hands of the demonstrators.

Except, it didn't really happen like this. As The Daily Beast reported, the reality was a peaceful protest of around 1,000 people. Russian state propaganda outlets Russia Today and Sputnik pushed the false narrative, aided by thousands of English-language tweets sent from accounts identified as bots controlled by the Russian government, Foreign Policy Research Institute fellow Clint Watts told the US Senate Intelligence Committee in 2017.

This is an example of Russia's longstanding policy of "active measures" — spreading misinformation for propaganda purposes or to help it achieve its strategic objectives. Gregory C. Allen, an adjunct fellow at the Center for a New American Security, argues that these efforts from Russia — and others like them — will receive a powerful shot in the arm from developments in CGI and AI.

"We have seen foreign governments be more than willing to rely on ... propaganda in the text real and in the fabricated imagery realm," he told Business Insider. "They have demonstrated their willingness to sprint as fast as they can in this exact direction, and making use of every tool that is available to them."

The future could see authoritarian states using forged media to help generate dissent in the populations of rival countries, much like what happened at Incirlik — and to discredit and damage political opposition at home.

Allen also discussed the national security implications of artificial intelligence in a recent paper, warning: "We will struggle to know what to trust. Using cryptography and secure communication channels, it may still be possible to, in some circumstances, prove the authenticity of evidence. But, the 'seeing is believing' aspect of evidence that dominates today — one where the human eye or ear is almost always good enough — will be compromised."

The tech is a bonanza for fraudsters

A popular technique employed by modern scammers is "CEO fraud" — an email sent to a company employee, masquerading as a message from the CEO or another executive, asking them to make a payment to an account or take some other action.

These scams will soon have a whole new line of attack: voice.

Imagine your boss calls you up, and asks you to make a transaction, or send over a password or a confidential document. It's clearly her voice, she knows who you are, and you might even make some small talk. Today, no-one would think anything was amiss.

This is because, Allen says, you "are currently using voice as an authentication technology, but [you] don't think of it as an authentication technology because it's just a background of human life that you can trust."

But in five or so years, that trust may have evaporated — replaced by a mistrust of what you hear on the phone and even see with your own eyes in a video-conference: "In the very near future it's not going to be something that you can rely on. Likewise, as video forgery techniques get further along, the same will be true if you were to have a video chat with someone."

Older people and those who are less tech-literate will be particularly vulnerable, suggested Francis Tseng, a copublisher of The New Inquiry who curates a project tracking how technology can distort reality: "Many people deal with their parents or grandparents falling prey to phone scams ... And an easy rule of thumb to tell them is 'don't give out private information to anyone you don't know!'. With these voice synthesis technologies, someone could easily forge a phone call from you or another relative."

It will turbo-charge fake news

We're already living in an era of "fake news." US President Donald Trump frequently lashes out online at the "phony" news media. Hoax news outlets have been created by Macedonian teenagers to make a quick buck from ad revenue, their stories spreading easily through platforms like Facebook. Public trust in the professional news media has fallen to an all-time low.

When anyone can throw together a video of a politician or celebrity saying whatever they want, it seems likely to engender further mistrust — and allow hoaxes to spread more easily than ever before.


And there's a flipside to this: It will also cast some doubts on even legitimate footage. If a politician or celebrity is caught saying or doing something untoward, there will be an increasing chance that the person could dismiss the video as being fabricated.

In October, Trump's presidential campaign was rocked by the "Access Hollywood" tape — audio of him discussing groping women in vulgar terms. What if he could have semi-credibly claimed the entire thing was just an AI-powered forgery?

It will transform cyberbullying

This technology won't just be misused to pursue political and strategic objectives, or to defraud businesses: It will be a weapon for bullies, capable of inflicting arbitrary cruelty.

In the hands of children, it seems likely to be misused to hijack victims' images and animate them for malicious purposes. A child's digital avatar might be made to confess their love for another, embarrassing them — or their voice could confess to a misdemeanour, landing them in trouble with school authorities.

Justus Thies, who helped develop Face2face, predicted it would "lift cyberbullying to a whole new level."

A spokesperson for child protection charity NSPCC acknowledged the danger: "Emerging technologies, such as AI and CGI, pose both potential risks and opportunities to young people and we must make sure they do not leave children and young people exposed to danger and harassment online.

"We know that cyber-bullying can be particularly devastating to young people as it doesn't stop in the playground and follows them home so they feel they cannot escape."

It will create a new category of sexual crimes

In August 2014, hundreds of intimate photos of dozens of celebrities were released online — causing a media frenzy, and the creation of huge online communities dedicated to sharing the images. That the photos were stolen and being shared without the consent of the subjects did little to dampen many sharers' enthusiasm — even as Jennifer Lawrence, one of the victims, described it as a "sex crime" and a "sexual violation."

The episode indicates there is likely to be significant interest in on-demand pornography produced using these technologies in the years ahead, regardless of whether the subjects of these CGI films give permission.

"Revenge porn" websites already exist dedicated to cataloguing and sharing the intimate photos and videos of non-celebrities, and it seems likely that media-editing technology will be used to produce material featuring "ordinary" people, as well as the rich and famous — bringing with it the widespread risk of shame and blackmail.

A whole new world of entertainment awaits

Not every use case of this tech will be negative, however. The internet is already home to a vibrant remix culture — just look at the Reddit community "Photoshop Battles" — and photorealistic video-editing tools may well spark a huge wave of DIY creativity.

"There could be a lot of interesting IP cases if amateur filmmakers start synthesizing films using the likenesses of celebrities and start profiting off that. I can imagine a whole culture of bootleg films produced in this way," Tseng said.

The tech that powers face-modifying filters in apps like Snapchat is "primitive compared to the Hollywood CGI of today, but it's actually significantly more advanced than the Hollywood CGI of the Eighties," Allen said. "So what we're seeing is the state-of-the-art capabilities slowly come down in price and availability such that amateurs have access to ultimately what are rather impressive capabilities."

The tech is likely to be used by the established entertainment industry as well as amateurs, Tseng suggested: "We've also seen movies adapt their scripts for certain markets (e.g. the 'Red Dawn' remake changing the villains from China to North Korea). There is already a practice of filming scenes to be slightly different for different markets but this technology could lead to it on a much larger scale, where even individuals experience a version of a film totally personalized for them."

Just look at "Star Wars: Rogue One" for an example of how this tech will be employed by Hollywood studios in years to come. Peter Cushing reprised his role as Grand Moff Tarkin — even though he had been dead for 22 years. His image was reconstructed using CGI overlaid on a real actor.


This is all right around the corner

This is all currently theoretical. But it won't be long until it becomes a reality.

"I think we are one to two years away from these sorts of forgeries, especially in audio where progress is a little bit easier," Allen said. "One to two years away from forgeries being able to fool the untrained ear and somewhere between five to 10 years away from them being able to evade certain types of forensic analysis."

So how do we prepare? Journalists and organisations will have to rely increasingly on cryptography to "sign" media, so it can be verified when required. Big platforms like Facebook will have a role to play in policing for fraudulent material, Face2face's Justus Thies argues: "Social-media companies as well as the classical media companies have the responsibility to develop and set up fraud-detection systems to prevent the spreading/sharing of misinformation." And it will force ordinary people to be far more skeptical about the media they consume.

In some cases, "it may be possible to come up with a video format that simply rejects editing," Allen suggested. "But this will still be a suboptimal solution compared to what we have now ... in the best case scenario, this results in there [being] trained experts who can discern the most likely version of the truth, and that is just so far away from where we are today which is amateurs can rely upon their own eyes to discern the truth."

We don't realise just how lucky we've been

These advances mean that humanity is rapidly approaching the end of a unique period in human history. We "live in an amazing time where the tech for documenting the truth is significantly more advanced than the tech for fabricating the truth. This was not always the case. If you think back to the invention of the printing press, and early newspapers, it was just as easy to lie in a newspaper as it was to tell the truth," Allen said.

"And with the invention of the photograph and the phonograph, or recorded audio, we now live in a new technological equilibrium where — provided you have the right instruments there — you can prove something occurred ... we thought that was a permanent technological outcome, and it is now clear that is a temporary technological outcome. And that we cannot rely on this technological balance of truth favouring truth forever."


Google just hired a former Apple star engineer after his short stint at Tesla (GOOGL, TSLA)



Chris Lattner, the engineer credited with creating Swift, Apple's super-popular programming language, has landed at Google, he announced Monday on Twitter.

Lattner will be working on Google Brain, Google's major artificial intelligence project.

Lattner's career has been the subject of much interest and some controversy in the past year. At Apple, he was the lead caretaker of Swift, one of the company's most successful software projects ever. That's why it was a surprise when he announced in January he was leaving Apple for Tesla, even though Tesla is known for poaching Apple engineers.

At Tesla, Lattner led the company's troubled Autopilot program. Lattner loved the job and the work, but he and Tesla CEO Elon Musk didn't get along, a person with knowledge of the matter told Business Insider. After months of butting heads — and only around six months on the job — he left the company in a decision that was "mutual," this person said.

All this means that Lattner is going to Google with an armload of qualifications and a fan base that could help the search giant snare even more big talent for its AI efforts.

Google is betting big on AI tech like Google Brain. Like the rest of the tech industry, Google believes that AI and its cousin, machine learning, will drive the tech industry's future. It sees its highly regarded AI technology as its ace in the hole for its young-but-growing cloud-computing division. Google hopes its AI technology will eventually help it best Amazon Web Services, its big rival in the business.


A fund betting on robots and AI is crushing it — and it's targeting millennial investors (BOTZ)



When Jay Jacobs, director of research at Global X, and his team were looking to start new theme-based exchange-traded funds last year, a robotics and artificial intelligence ETF just made sense.

"I think a lot of times the finance world gets lost in its own jargon of risk adjusted returns and Sharpe ratios and risk factors," Jacobs told Markets Insider. "The story behind robotics and AI is very straightforward to everybody."

Jacobs is a chief mind behind BOTZ, an exchange-traded fund from Global X that launched in September 2016. BOTZ invests in companies that derive a majority of their revenue from robotics and artificial intelligence. The fund's assets recently crossed the $300 million mark.

The explosive growth of BOTZ makes sense. It combines the red-hot ETF market with skyrocketing tech stocks. Jacobs says it's the fastest-growing fund he's been involved with in his four-plus years at Global X. With returns of around 39.1% since the fund's inception last year, the growth is hardly surprising.

BOTZ comprises 29 companies spread across four subcategories: industrial automation, non-industrial robotics, unmanned vehicles, and artificial intelligence. It's weighted by market cap, with no single company comprising more than 8% of the fund, and none less than 0.3%, according to Jacobs.
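A capped market-cap weighting of that kind can be sketched as follows. The exact BOTZ methodology isn't spelled out in the article, so this is an illustrative approximation: weight by market cap, clip anything above the ceiling while redistributing the excess, then lift tiny positions to the floor (renormalization after the floor step is omitted for brevity):

```python
def capped_weights(market_caps, cap=0.08, floor=0.003):
    """Market-cap weights with a per-name ceiling and floor (simplified)."""
    total = sum(market_caps.values())
    weights = {k: v / total for k, v in market_caps.items()}
    # Clip breaches of the ceiling and hand the excess to uncapped names,
    # repeating in case the redistribution pushes someone else over.
    for _ in range(len(weights)):
        over = [k for k, w in weights.items() if w > cap]
        if not over:
            break
        excess = sum(weights[k] - cap for k in over)
        for k in over:
            weights[k] = cap
        free = {k: w for k, w in weights.items() if w < cap}
        if not free:
            break  # cap infeasible: everything sits at the ceiling
        free_total = sum(free.values())
        for k in free:
            weights[k] += excess * free[k] / free_total
    # Lift tiny positions to the floor (renormalization omitted).
    return {k: max(w, floor) for k, w in weights.items()}

# Hypothetical five-stock example with a 30% ceiling for readability:
w = capped_weights({"A": 60, "B": 10, "C": 10, "D": 10, "E": 10}, cap=0.30)
```

With the dominant name clipped to the ceiling, its excess weight flows proportionally to the remaining holdings, which is why no single company can dominate the fund.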

The largest holding is currently Mitsubishi, followed by Nvidia and Keyence Corp, each making up about 7.5% of the fund. Those top three holdings are up an average of 77.4% since the inception of BOTZ. 

Global X has positioned BOTZ to be popular among a younger investing crowd. A strong majority of millennials, about 83%, are interested in thematic investing, compared to only 31% of the general population, according to a study done by the firm. When creating BOTZ, Jacobs said the team had millennial investors in mind. 

"We see that younger generations are the trendsetters, so if we see that millennials are the ones saying [AI] is real ... that's meaningful and it's going to start working its way up the chain," Jacobs said.


Formatting the fund as an ETF made sense as well. The ETF market for stocks has grown by 500% in the last eight years, in part because it allows for easy access to themes like AI and robotics. Investing in the fund is as easy as buying a stock.

"You get international exposure, which is critical for robotics," Jacobs said. "You get diversified exposure."

There are drawbacks to the Global X approach. Almost half of the fund's holdings are based in Japan, meaning events in the country could have an outsized effect on the fund. The fund is also missing some major players in AI, like Facebook and Google, which are leaders in artificial intelligence technology but excluded from the fund because they don't derive most of their revenue from the theme.

Still, a thematic fund like BOTZ allows investors to bet on a general idea instead of a specific company, which investors seem to like.

After all, "tech is only going to get better," Jacobs said.


Join the conversation about this story »



Box is now plugged into Google's AI, letting you easily search through images without needing to tag them first (BOX)



Presumably one of the major benefits of artificial intelligence is its ability to perform tasks faster, and with less complaining, than a human can. So it only makes sense that one of the first tasks to go extinct is the arduous work of tagging and sorting image files.

Box — a $2.48 billion content management company — has partnered with Google Cloud Vision to apply Google Images search technology to Box's storage technology. The feature launched in beta mode Thursday for free.

The feature, called Box Image Recognition, is opt-in. Enabling it means that Google has access to your Box account, but a Box representative said that data won't be cached and Google deletes images once they're analyzed.

It works by tagging images as they're uploaded, making swaths of visual data searchable with key terms. If you're looking for a photo of pants, for example, just type "pants" into the Box search bar, and voila. 
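The tag-then-search flow described above can be sketched with a toy inverted index; the labels and file names here are hand-supplied stand-ins for whatever a recognition service would return at upload time:

```python
from collections import defaultdict

# Toy inverted index: label -> set of file names. In the real product,
# labels would come from an image-recognition service at upload time.
index = defaultdict(set)

def ingest(filename, labels):
    """Record the labels assigned to an uploaded file."""
    for label in labels:
        index[label.lower()].add(filename)

def search(term):
    """Return all files tagged with the given label."""
    return sorted(index[term.lower()])

ingest("photo1.jpg", ["Pants", "Denim"])
ingest("photo2.jpg", ["Dress"])
print(search("pants"))  # ['photo1.jpg']
```

The point of the design is that the expensive recognition step happens once per upload, after which queries are simple key lookups.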

Google Cloud Vision is so well trained from years of machine learning and enormous data sets that it can even tag with abstract phrases that capture the essence of a photo. 

Rand Wacker, VP of Product Marketing at Box, said the technology can also read text.

A company could use it to automatically run a background check, Wacker suggested, with just a photograph of a driver's license. The AI can read the ID and then automatically kick off the process defined for driver's licenses, if a company so chooses.

Long term, Wacker said, Box intends to work with multiple other search partners for more robust capabilities. Different image recognition technologies are fine-tuned for different data, and might work better for some industries than others.

Box Image Recognition isn't without competitors. Amazon's Rekognition, for example, was trained by analyzing the billions of images uploaded daily into Prime Photos. 




The world's top artificial intelligence companies are pleading for a ban on killer robots



A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on battlefields, is about to start.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.

The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter says.

"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

"Nearly every technology can be used for good and bad, and artificial intelligence is no different," says Walsh.

"It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

"We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for an UN ban on such weapons, similar to bans on chemical and other weapons," he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

"We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," he says.

"The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale."

The letter:

An Open Letter to the United Nations Convention on Certain Conventional Weapons 
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

FULL LIST OF SIGNATORIES (by country):

  • Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
  • Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
  • Charles Gretton, founder of Hivery, Australia.
  • Brad Lorge, founder & CEO of Premonition.io, Australia.
  • Brenton O’Brien, founder & CEO of Microbric, Australia.
  • Samir Sinha, founder & CEO of Robonomics AI, Australia.
  • Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
  • Peter Turner, founder & MD of Tribotix, Australia.
  • Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
  • Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
  • James Chow, founder & CEO of UBTECH Robotics, China.
  • Robert Li, founder & CEO of Sankobot, China.
  • Marek Rosa, founder & CEO of GoodAI, Czech Republic.
  • Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
  • Markus Järve, founder & CEO of Krakul, Estonia.
  • Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
  • Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
  • Raul Bravo, founder & CEO of DIBOTICS, France.
  • Raphael Cherrier, founder & CEO of Qucit, France.
  • Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
  • Charles Ollion, founder & Head of Research at Heuritech, France.
  • Anis Sahbani, founder & CEO of Enova Robotics, France.
  • Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
  • Marcus Frei, founder & CEO of NEXT.robotics, Germany
  • Kirstinn Thorisson, founder & Director of Icelandic Institute for Intelligence Machines, Iceland.
  • Fahad Azad, founder of Robosoft Systems, India.
  • Debashis Das, Ashish Tupate, Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
  • Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
  • Pranay Kishore, founder & CEO of Phi Robotics Research, India.
  • Shahid Memom, founder & CTO of Vanora Robots, India.
  • Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
  • Achu Wilson, founder & CTO of Sastra Robotics, India.
  • Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
  • Parsa Ghaffari, founder & CEO of Aylien, Ireland.
  • Alan Holland, founder & CEO of Keelvar Systems, Ireland.
  • Alessandro Prest, founder & CTO of LogoGrab, Ireland.
  • Alessio Bonfietti, founder & CEO of MindIT, Italy.
  • Angelo Sudano, founder & CTO of ICan Robotics, Italy.
  • Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
  • Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
  • Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
  • Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
  • Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
  • Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
  • Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
  • Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
  • Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
  • Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
  • Jasper Horrell, founder of DeepData, South Africa.
  • Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
  • José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
  • Victor Martin, founder & CEO of Macco Robotics, Spain.
  • Timothy Llewellynn, founder & CEO of nViso, Switzerland.
  • Francesco Mondada, founder of K-Team, Switzerland.
  • Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
  • Satish Ramachandran, founder of AROBOT, United Arab Emirates.
  • Silas Adekunle, founder & CEO of Reach Robotics, UK.
  • Steve Allpress, founder & CTO of FiveAI, UK.
  • Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
  • Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
  • Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
  • Daniel Hulme, founder & CEO of Satalia, UK.
  • Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
  • Geoff Pegman, founder & MD of R U Robots, UK.
  • Mustafa Suleyman, founder & Head of Applied AI, DeepMind, UK.
  • Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
  • Antoine Biondeau, founder & CEO of Sentient Technologies, USA.
  • Brian Gerkey, founder & CEO of Open Source Robotics, USA.
  • Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
  • Henry Hu, founder & CEO of Cafe X Technologies, USA.
  • Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
  • Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
  • Brian Mingus, founder & CTO of Latently, USA.
  • Mohammad Musa, founder & CEO at Deepen AI, USA.
  • Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motor, USA.
  • Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
  • Erik Nieves, founder & CEO of PlusOne Robotics, USA.
  • Steve Omohundro, founder & President of Possibility Research, USA.
  • Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
  • Dan Reuter, founder & CEO of Electric Movement, USA.
  • Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
  • Dan Rubins, founder & CEO of Legal Robot, USA.
  • Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
  • Andrew Schroeder, founder of WeRobotics, USA.
  • Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
  • Martin Spencer, founder & CEO of GeckoSystems, USA.
  • Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
  • Michael Stuart, founder & CEO of Lucid Holdings, USA.
  • Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.



Microsoft's AI is getting crazily good at speech recognition



Microsoft's speech recognition efforts have hit a significant milestone.

It can now transcribe human speech with a 5.1% error rate, Microsoft technical fellow Xuedong Huang wrote in a blog post — the same error rate as humans.

Microsoft actually thought it hit this point last year, when it reached 5.9%, the word error rate it had measured for humans. But then other researchers carried out separate studies and pegged the human error level as slightly lower, 5.1%.

But it has now achieved this, reducing its error rate by 12% using AI techniques like "neural-net based acoustic and language models." Another innovation was to take into account the context of the speech to make better guesses at unclear words, as humans do.

For example: It might not be clear from the audio whether someone is saying "that's not fair" or "that's not fur." Traditionally, this ambiguity might lead to transcription errors. But now the speech recognition tech can look at context for clues. If it's a speech about the risks of gambling, then it's probably "that's not fair"; if it's a conversation about fabrics, "that's not fur" probably fits better.
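As a toy illustration of this kind of contextual rescoring (not Microsoft's actual model, which uses trained neural language models), an acoustically ambiguous final word can be resolved by scoring candidate transcripts against hand-picked context affinities:

```python
# Hand-picked context affinities -- a crude stand-in for a language model.
CONTEXT_AFFINITY = {
    "fair": {"gambling", "odds", "game", "bet"},
    "fur": {"fabric", "coat", "animal", "textile"},
}

def pick_candidate(candidates, context_words):
    """Choose the candidate transcript whose final word best
    matches the surrounding context."""
    def score(candidate):
        last_word = candidate.split()[-1]
        return len(CONTEXT_AFFINITY.get(last_word, set()) & set(context_words))
    return max(candidates, key=score)

print(pick_candidate(["that's not fair", "that's not fur"],
                     ["gambling", "odds"]))  # that's not fair
```

A real recognizer does the same thing statistically: the acoustic model proposes candidates, and a language model conditioned on context reweights them.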

"Reaching human parity with an accuracy on par with humans has been a research goal for the last 25 years," Xuedong Huang wrote. But in practice, Microsoft still faces significant challenges, "such as achieving human levels of recognition in noisy environments with distant microphones, in recognizing accented speech, or speaking styles and languages for which only limited training data is available."

So while Microsoft's tech is impressive, it won't be on a par with humans in all real-world situations just yet.

The researcher added: "Moreover, we have much work to do in teaching computers not just to transcribe the words spoken, but also to understand their meaning and intent. Moving from recognizing to understanding speech is the next major frontier for speech technology."



Microsoft's voice-recognition tech is now better than even teams of humans at transcribing conversations (MSFT)



In October 2016, in a big milestone for artificial intelligence, Microsoft unveiled a system that can transcribe the contents of a phone call as well or better than human professionals.

But while Microsoft's system had fewer transcription errors than the average human transcriptionist, it still couldn't best a team of trained humans. So, the world of academia fired back with a new challenge: Lower the error rate to below what human teams can do. 

Now Microsoft has done just that. In a blog entry on Sunday, Xuedong Huang, Microsoft Research's chief speech scientist, reported that the company had broken even that barrier.

It's a major milestone, Huang wrote. And it gives the company a sound foundation to go from mere transcription to understanding the meaning of what's being said, he said. Speech recognition is a fundamental building block for building more robust artificial intelligence.

"Moving from recognizing to understanding speech is the next major frontier for speech technology," Huang wrote.

Microsoft's voice recognition system has been improving rapidly. Transcription accuracy is judged by error rates; i.e., the portion of words a system gets wrong out of a given recording of speech. That error rate is determined using Switchboard, a standard test for voice transcription accuracy widely used in the industry, including by IBM and Google.

As recently as September 2016, Microsoft's error rate, according to Switchboard, was 6.3%, which means that out of every 100 words the system was getting more than 6 wrong. By comparison, a single human transcriptionist has an average error rate of 5.9%, and a team of trained humans clocks in with an error rate of around 5.1%.
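Error rates like these are word error rates: the word-level edit distance (substitutions, insertions, and deletions) between a hypothesis and a reference transcript, divided by the number of reference words. A minimal sketch of the computation:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance between the two
    transcripts divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(word_error_rate("that is not fair", "that is not fur"))  # 0.25
```

A 5.9% rate thus means roughly 6 word errors for every 100 reference words, which is how the "more than 6 wrong" figure above is derived.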

Microsoft matched the former error rate in October and just beat the latter.

That's far sooner than the company expected. Indeed, back in 2015, Huang himself told Business Insider that building a system capable of surpassing a human at transcription was "four to five years away." Less than two years later, we're well past that point.

Still, challenges remain. Microsoft's transcription system is patterned after the audio coming from a nice, stable landline telephone, Geoffrey Zweig, formerly a principal researcher at the company, told Business Insider last October. The next frontier for voice recognition is to accurately transcribe speech even when it's coming over a lousy cell connection or an echoing McDonald's drive-thru speaker.

Speech science "still has many challenges to address, such as achieving human levels of recognition in noisy environments with distant microphones, in recognizing accented speech, or speaking styles and languages for which only limited training data is available," Huang wrote in his blog post on Sunday.




Apple has revealed how it makes its AI assistant Siri's voice sound how it does (AAPL)



If you've ever used an iPhone, you're almost certainly familiar with Siri, Apple's virtual assistant.

It can answer questions, fulfill tasks, and manage calendars — most commonly with a soothing female voice.

But how does Apple make Siri sound like it does?

Thanks to some new research papers published by the Californian technology company, we now have a better idea. (We first saw them via The Register.)

Siri comes in a number of voice options — male or female, with accents including American, British, and Australian — and these are based on human voice actors.

In selecting these actors, "first and foremost, a voice must be perceived as being compatible with the Siri personality," Apple engineers wrote.

They don't elaborate on exactly what the "Siri personality" is — but it typically comes across as restrained, neutral, and professional, with the occasional dryly delivered joke for those who know what to ask.

Once a suitable voice talent is found, 10 to 20 hours of their voice are recorded. "The recording scripts vary from audio books to navigation instructions, and from prompted answers to witty jokes," Apple's Siri team wrote in a blog post.

"Typically, this natural speech cannot be used as it is recorded because it is impossible to record all possible utterances the assistant may speak." As such, it's then chopped up into constituent blocks that can be put together to generate new speech — even words that the actors never uttered.
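A heavily simplified sketch of this kind of concatenative unit selection (the unit labels, pitch fields, and cost function here are illustrative assumptions, not Apple's actual system): for each target unit, pick the recorded candidate that joins most smoothly onto what has been chosen so far.

```python
def select_units(targets, inventory, join_cost):
    """Greedy unit selection: for each target label, take the candidate
    recording that joins most cheaply onto the previous choice."""
    chosen = []
    for label in targets:
        candidates = inventory[label]
        if not chosen:
            chosen.append(candidates[0])
        else:
            chosen.append(min(candidates,
                              key=lambda c: join_cost(chosen[-1], c)))
    return chosen

def pitch_mismatch(prev, cand):
    """Join cost: pitch discontinuity at the concatenation point."""
    return abs(prev["end_pitch"] - cand["start_pitch"])

# Hypothetical inventory: two recorded takes of the same unit.
inventory = {
    "he": [{"id": "he1", "end_pitch": 120}],
    "llo": [{"id": "llo1", "start_pitch": 180, "end_pitch": 130},
            {"id": "llo2", "start_pitch": 125, "end_pitch": 130}],
}
units = select_units(["he", "llo"], inventory, pitch_mismatch)
print([u["id"] for u in units])  # ['he1', 'llo2']
```

Production systems search over whole sequences (e.g. with dynamic programming) and combine many cost terms; the deep-learning contribution Apple describes is largely in predicting those costs so the joins sound natural.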

The tricky part is constructing Siri's speech in a way that sounds natural and "human" — and to do this, Apple uses a number of artificial intelligence (AI) techniques. The company's researchers go into more technical detail on how they achieve this in newly published papers.

These developments seem to be paying off: In tests, Apple wrote, "the new voices were rated clearly better in comparison to the old ones." You can hear examples of how Siri has evolved from iOS 9 to iOS 11 at the bottom of the page.

Apple has historically been highly secretive, rarely talking about its inner workings. But in December 2016, it announced it would allow its artificial intelligence researchers to start publishing their work publicly and engage more in the broader academic community, as it tries to attract more AI experts to join the company.

Here's the full paper on Siri:



Bank of America Merrill Lynch has become the latest bank to implement AI (BAC)


This story was delivered to BI Intelligence "Fintech Briefing" subscribers. To learn more and subscribe, please click here.

Bank of America Merrill Lynch (BAML) has revealed that it is implementing enterprise software fintech HighRadius' artificial intelligence (AI) solution to speed up receivables reconciliation for the bank's large business clients. (Receivables refers to all debts and unsettled transactions owed to a company by its debtors and customers.)

Large companies with numerous customers often receive payments without accompanying contextual information, like which customer or debtor it's come from, or precisely what the payment is for, which makes balancing a company's books, i.e. reconciling, a lengthy and resource-intensive task.

HighRadius' solution uses AI, machine learning, and optical character recognition to identify a payer, match them to an uncontextualized payment, and match that to an open receivable. Moreover, it gives companies the option of sending an automatic prompt to customers whose debts are outstanding. By leveraging this solution, BAML aims to reduce costs for its large business clients.
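The matching step can be illustrated with a toy reconciliation routine; the field names, exact-amount rule, and string-similarity scoring are hypothetical simplifications, not HighRadius's actual product:

```python
from difflib import SequenceMatcher

def match_payment(payment, open_receivables):
    """Match an uncontextualized payment to an open receivable:
    require an exact amount match, then pick the receivable whose
    customer name is closest to the payer name on the payment."""
    same_amount = [r for r in open_receivables
                   if r["amount"] == payment["amount"]]
    if not same_amount:
        return None
    return max(same_amount,
               key=lambda r: SequenceMatcher(None, r["customer"].lower(),
                                             payment["payer"].lower()).ratio())

payment = {"payer": "ACME Corp.", "amount": 1200.00}
receivables = [{"customer": "Acme Corporation", "amount": 1200.00},
               {"customer": "Globex Ltd", "amount": 1200.00}]
print(match_payment(payment, receivables)["customer"])  # Acme Corporation
```

Real systems add OCR of remittance documents, learned matching models, and tolerance rules for partial payments, but the core task is this fuzzy join between payments and open items.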

The bank's move is the latest development in a growing trend of AI deployment by big banks. The world's leading banks are applying AI across a diverse range of business areas, of which receivables is only the latest. Banks are taking advantage of improvements in the technology's data-crunching abilities in spaces like credit scoring to improve their risk assessment methodologies, Nordnet is using the technology to boost its customer service, NatWest is deploying AI to improve its compliance procedures, and JPMorgan Chase announced it's using AI for automated trading at its European business in early August.

The wide range of areas where banks are already using AI indicates there are few aspects of their business that wouldn't benefit from the technology, and suggests we will see many more applications of it going forward.

What is less clear is which suppliers of the technology will come out on top. Significantly, in each of the use cases mentioned above, banks are leveraging an external supplier's AI solution rather than developing their own in-house, likely due to a dearth of tech talent and a desire to keep costs low. To date, banks have been turning either to startups like HighRadius, James, and Recordsure, or to solutions from incumbent software providers like IBM, for their AI needs.

This means that, for now, it's unclear which of these camps will be able to secure a lead in this market. Ultimately, the most attractive AI solutions to banks will probably be those that have the most robust financial and data security features in place, so we will probably see both types of provider focusing heavily on security to gain a competitive lead.

Traditional consumer lenders, like banks and credit unions, have historically served segments of the population on which they can conduct robust risk assessments. 

But the data they collect from these groups is limited and typically impossible to analyze in real time, preventing them from confirming the accuracy of their assessments. This restricts the demographic segments they can safely serve, and creates an inconvenient experience for potential borrowers.

This has hobbled legacy lenders at a time when alternative lending firms — which pride themselves on precision risk assessment and financial inclusion — are taking off. These rivals are starting to break into a huge untapped borrower market — some 64 million US consumers don’t have a conventional FICO score, and 10 million of those are prime or near-prime consumers. 

Incumbents can get in on the game by tapping into new developments in the credit scoring space, like psychometric scoring, which use data besides borrowing history to measure creditworthiness, and by integrating new technologies, like artificial intelligence (AI), to improve the accuracy of conventional risk assessment methods. There are still risks attached to these cutting-edge methods and technologies, but if incumbent lenders are aware of them, and take steps to mitigate them, the payoff from implementing these new tools can be huge.

Maria Terekhova, research associate for BI Intelligence, Business Insider's premium research service, has put together a report on the digital disruption of credit scoring that:

  • Outlines the drivers behind incumbent lenders' growing awareness and adoption of credit scoring disruptions.
  • Looks at the current range of methods and technologies changing the face of credit scoring.
  • Explains what incumbent lenders stand to gain by adopting these disruptions.
  • Discusses the risks still attached to these disruptions, and how incumbents can manage them to reap the rewards.
  • Gives an overview of what the credit scoring landscape of the future will look like, and how incumbents can prepare themselves to stay relevant.

To get the full report, subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND more than 250 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> Learn More Now

You can also purchase and download the report from our research store.


AI IN E-COMMERCE: How artificial intelligence can help retailers deliver the highly personalized experiences shoppers desire



This is a preview of a research report from BI Intelligence, Business Insider's premium research service. To learn more about BI Intelligence, click here.

One of retailers' top priorities is to figure out how to gain an edge over Amazon. To do this, many retailers are attempting to differentiate themselves by creating highly curated experiences that combine the personal feel of in-store shopping with the convenience of online portals. 

These personalized online experiences are powered by artificial intelligence (AI). This is the technology that enables e-commerce websites to recommend products uniquely suited to shoppers, and enables people to search for products using conversational language, or just images, as though they were interacting with a person. 

Using AI to personalize the customer journey could be a huge value-add to retailers. Retailers that have implemented personalization strategies see sales gains of 6-10%, a rate two to three times faster than other retailers, according to a report by Boston Consulting Group (BCG). AI could also boost profitability rates by 59% in the wholesale and retail industries by 2035, according to Accenture.

In a new report from BI Intelligence, we illustrate the various applications of AI in retail and use case studies to show how this technology has benefited retailers. It assesses the challenges that retailers may face as they implement AI, specifically focusing on technical and organizational challenges. Finally, the report weighs the pros and cons of strategies retailers can take to successfully execute AI technologies in their organization.

Here are some key takeaways from the report:

  • Digitally native retailers are setting new standards for the customer journey by creating highly curated experiences through the use of AI. This has enabled them to cater to consumers' desire to interact with mobile apps and websites as they would with an in-store sales representative.
  • By mimicking the use of AI among e-commerce pureplays, brick-and-mortars can implement similar levels of personalization. AI can be used to provide personalized websites, tailored product recommendations, more relevant product search results, as well as immediate and useful customer service.
  • However, there are several barriers to AI adoption that may make implementation difficult. By and large, these hurdles stem from a general unpreparedness of legacy retailers' systems and organizational structures to handle the huge troves of data AI solutions need to be effective.
  • For many retailers, successfully leveraging AI will require partnering with third parties. Because of the barriers involved, employing an in-house strategy can be extremely costly and difficult. This has led to the rise of AI commerce startups, which can provide a more cost-effective approach to overhauling the customer experience.

In full, the report: 

  • Provides an overview of the numerous applications of AI in retail, using case studies of how retailers are currently gaining an advantage with this technology. These applications include personalizing online interfaces, tailoring product recommendations, increasing the relevance of shoppers' search results, and providing immediate and useful customer service.
  • Examines the various challenges that retailers may face when looking to implement AI; these typically stem from outdated, inflexible data storage systems, as well as organizational barriers that prevent personalization strategies from being executed effectively.
  • Gives two different strategies that retailers can use to successfully implement AI, and discusses the advantages and disadvantages of each strategy.

To get the full report, subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >>Learn More Now

You can also purchase and download the full report from our research store.

Join the conversation about this story »


The $2,500 answer to Amazon's Echo could make Japan's sex crisis even worse


Japan has a sex problem. The country's birthrate is shrinking year after year, to the point where deaths are outpacing births.

Simply put, Japan's population is decreasing.


But let's be clear: Population change is a complicated subject affected by many factors.

Western media often correlates the decline in Japan's population size with recent studies of Japanese sexual habits and marriage. A 2016 study by the National Institute of Population and Social Security Research in Japan, for instance, found that "almost 70 percent of unmarried men and 60 percent of unmarried women are not in a relationship."

But just because people aren't in relationships doesn't mean they don't want companionship, of course. And that's where something like Gatebox comes in.


Yes, that is an artificially intelligent character who lives in a glass tube in your home. Her name is Azuma Hikari, and she's the star of Gatebox — a $2,500 Amazon Echo-esque device that acts as a home assistant and companion.

Here's what we know:


A Japanese company named Vinclu created the Gatebox.

It's about the size of an 8-inch by 11-inch piece of paper, according to Vinclu. And there's a good reason for that: The device is intended to be "big enough for you to be able to put right beside you." You'll understand why you'd want a Gatebox so close soon enough.



The Gatebox is similar to Amazon's Echo — it's a voice-powered home assistant.

The Gatebox has a microphone, so you can operate it with your voice, as well as a camera.

For now, it will respond only to Japanese; the company making Gatebox says it's exploring other language options. Considering that preorder units are available for both Japan and the US, we'd guess that an English-language option is in the works.



Gatebox does a lot of the same stuff that Echo does — it can automate your home in various ways, including turning on lights and waking you up in the morning.



See the rest of the story at Business Insider

Robot caregivers for the elderly could be just 10 years away

$
0
0


Despite innovations that make it easier for seniors to keep living on their own rather than moving into special facilities, most elderly people eventually need a hand with chores and other everyday activities.

Friends and relatives often can't do all the work, and growing evidence indicates that relying on them alone is neither sustainable nor healthy for seniors or their loved ones. Yet demand for professional caregivers already far outstrips supply, and experts say this workforce shortage will only get worse.

So how will our society bridge this elder-care gap? In a word, robots.

Just as automation has begun to do jobs previously seen as uniquely suited for humans, like retrieving goods from warehouses, robots will assist your elderly relatives. As a robotics researcher, I believe artificial intelligence has the potential not only to care for our elders but to do so in a way that increases their independence and reduces their social isolation.

Personal robots

In the 2004 movie "I, Robot," the robot-hating protagonist Del Spooner (played by Will Smith) is shocked to discover a robot in his grandmother's house, baking a pie. You may have similar mental images: When many people imagine robots in the home, they envision mechanized domestic workers doing tasks in human-like ways.

In reality, many of the robots that will provide support for older adults who "age in place" — staying at home when they might otherwise be forced to relocate to assisted living or nursing homes — won't look like people.

Instead, they will be specialized systems akin to the Roomba, iRobot's robotic vacuum cleaner and the first commercially successful consumer robot. Small, specific devices are not only easier to design and deploy, but they also allow for incremental adoption as requirements evolve over time.

Seniors, like everyone else, need different things. Many need help with the mechanics of eating, bathing, dressing, and standing up — tasks known as "activities of daily living." Along with daily help with cooking and managing their medications, they can benefit from a robotic hand with more intermittent things such as doing the laundry and getting to the doctor's office.


That may sound far-fetched, but in addition to vacuuming, robots can already mop our floors and mow our lawns. Experimental robots help lift people into and out of chairs and beds, follow recipes, fold towels, and dispense pills. Soon, autonomous (self-driving) cars will ferry people to appointments and gatherings.

The kinds of robots already available include models that drive, provide pet-like social companionship, and greet customers. Some of these technologies are already in limited trials in nursing homes, and seniors of course can already rely on their own Roombas.

Meanwhile, robot companions may soon help relieve loneliness and nudge forgetful elders to eat on a regular schedule.

Scientists and other inventors are building robots that will do these jobs and many others.


Round-the-clock care

While some tasks remain out of reach of today's robots, such as inserting IVs or trimming toenails, mechanical caregivers can offer clear advantages over their human counterparts.

The most obvious one is their capacity to work around the clock. Machines, unlike people, are available 24/7. When used in the home, they can support aging in place.

Another plus: Relying on technology to meet day-to-day needs like mopping the floor can improve the quality of time elders spend with family and friends. Delegating mundane chores to robots also leaves more time for seniors to socialize with the people who care about them, and not just for them.

And since using devices isn't the same as asking someone for help, relying on caregiving robots may lead seniors to perceive less lost autonomy than when they depend on human helpers.

Interacting with robots

This brave new world of robot caregivers won't take shape unless we make them user-friendly and intuitive, and that means interaction styles matter. In my lab, we work on how robots can interact with people by talking with them. Fortunately, recent research by the Pew Research Center shows that older adults are embracing technology more and more, just like everyone else.

Now that we are beginning to see robots that can competently perform some tasks, researchers like Jenay Beer, an assistant professor of computer science and engineering at the University of South Carolina, are trying to figure out which activities seniors need the most help with and what kinds of robots they might be most willing to use in the near term.

To that end, researchers are asking seniors which activities they most need help with and which kinds of robots they would actually be willing to use.

But the fact is we don't need all the answers before robots begin to help elders age in place.

Looking ahead

After all, there's no time to lose.

The Census Bureau estimated that 15% of Americans — nearly one in six — were aged 65 or older in 2016, up from 12% in 2000. Demographers anticipate that by 2060 almost one in four will be in that age group. That means there will be some 48 million more elderly people in the US than there are now.
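The arithmetic behind those projections can be sanity-checked in a few lines. The population totals below are approximate Census figures supplied here as assumptions; they do not appear in the article.

```python
# Rough sanity check of the elder-care projection.
# Population totals are approximate Census figures (assumptions, not from the article).
pop_2016 = 323e6          # US population, 2016 (approx.)
pop_2060 = 417e6          # projected US population, 2060 (approx.)

seniors_2016 = 0.15 * pop_2016    # "15% ... aged 65 or older in 2016"
seniors_2060 = 0.235 * pop_2060   # "almost one in four" by 2060

print(round(seniors_2016 / 1e6))                    # 48 (million seniors in 2016)
print(round((seniors_2060 - seniors_2016) / 1e6))   # 50 (million more by 2060)
```

The result, roughly 50 million additional seniors, lines up with the article's "some 48 million more" given the rounding in the assumed inputs.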

I believe robots will perform many elder-care tasks within a decade. Some activities will still require human caregivers, and there are people for whom robotic assistance will never be the answer. But you can bet that robots will help seniors age in place, even if they won't look like butlers or pastry chefs.

Cynthia Matuszek is an assistant professor of computer science and electrical engineering at the University of Maryland, Baltimore County.

This article was originally published on The Conversation. Read the original article.




Mizuho boosts its financial crime fighting skills with AI



This story was delivered to BI Intelligence "Fintech Briefing" subscribers. To learn more and subscribe, please click here.

Artificial intelligence (AI)-powered solutions are increasingly gaining traction in the banking industry, with organizations deploying them across ever-more areas of their businesses.

Japan-based Mizuho Bank has now joined this trend, announcing last week that it will test IBM's new AI-based regtech solution, Financial Crimes Due Diligence with Watson, as it seeks to improve its ability to prevent and detect financial crimes like money laundering (ML) and the financing of terrorism (FT). Mizuho will first deploy the solution, which automates the retrieval and analysis of data used in detecting financial crime, in Singapore.

Regulators are getting tougher on banks when it comes to financial crime. Globally, we've seen a swathe of new regulation designed to boost prevention and detection of ML and FT.

  • The EU's 4th Money Laundering Directive (4MD) came into force in June 2017 and introduced more stringent criteria for financial crime risk assessments by a wide range of financial institutions.
  • The Basel Committee on Banking Supervision (BCBS) also issued new guidelines for global banks on how to incorporate ML and FT into their overall risk assessments in June.
  • In terms of domestic regulators, the Monetary Authority of Singapore (MAS) has been particularly active — it fined Credit Suisse $900,000 in May for breaching anti-money laundering (AML) guidelines as part of its biggest ever probe into ML. This may well have influenced Mizuho's decision to test IBM's solution in the country first.

As requirements get tougher, banks will increasingly turn to AI-based solutions to meet them. With the number of regulations increasing, so is the volume of data banks have to sift through, and the alternative to AI-powered products for many banks is to hire more compliance employees. That's not only expensive, but assumes that employees using legacy, manual processes will actually be enough to ensure compliance — an increasingly unlikely scenario. Consequently, AI-powered tools that automate and accelerate the collection and analysis of data related to regulatory changes will likely be deployed at a rapid pace. 

Sarah Kocianski, senior research analyst for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on U.S. fintech regulation that:

  • Examines the current regulatory landscape in the U.S. 
  • Explains how it is negatively affecting the fintech industry.
  • Outlines the initiatives currently in play from major regulatory agencies. 
  • Considers the future of U.S. fintech regulation and its potential impact on the fintech sector. 

To get the full report, subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and more than 250 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. » Learn More Now

You can also purchase and download the full report from our research store.


Researchers taught AI to write totally believable fake reviews, and the implications are terrifying

  • Researchers have used AI to develop software that can write extremely believable fake online reviews.
  • The tech is a major threat to sites like Yelp and Amazon — and the businesses that rely on them.
  • And it hints at a worrying future where AI is capable of writing sophisticated texts, undermining public trust and spreading fake news.


For many people, online reviews are the first port of call when looking for a restaurant and hotel.

As such, they've become the lifeblood for many businesses — a permanent record of the quality of their services and products. And these businesses are constantly on the watch for unfair or fake reviews, planted by disgruntled rivals or angry customers.

But there will soon be a major new threat to the world of online reviews: Fake reviews written automatically by artificial intelligence (AI).

Allowed to rise unchecked, they could irreparably tarnish the credibility of review sites — and the tech could have far broader (and more worrying) implications for society, trust, and fake news.

"In general, the threat is bigger. I think the threat towards society at large and really disillusioned users and to shake our belief in what is real and what is not, I think that's going to be even more fundamental," Ben Y. Zhao, a professor of computer science at the University of Chicago, told Business Insider.

Fake reviews are undetectable — and considered reliable

Researchers from the University of Chicago (including Ben Zhao) have written a paper ("Automated Crowdturfing Attacks and Defenses in Online Review Systems") that shows how AI can be used to develop sophisticated reviews that are not only undetectable using contemporary methods, but are also considered highly reliable by unwitting readers.

The paper will be presented at the ACM Conference on Computer and Communications Security later this year.

Here's one example of a synthesised review: "I love this place. I went with my brother and we had the vegetarian pasta and it was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a great breakfast and a small spot with a great deal."

There's nothing immediately strange about this review. It gives some specific recommendations and a believable backstory, and while the last phrase is a little odd ("a small spot with a great deal"), it's still an entirely plausible human turn of phrase.

In reality, though, it was generated using a deep learning technique called a recurrent neural network (RNN), after being trained with thousands of real online reviews that are freely available online. (Scroll down to test yourself with more examples of fake reviews.)

The researchers wrote that the synthesised reviews were "effectively indistinguishable" from the real deal: "We [carried] out a user study (n = 600) and [showed] that not only can these fake reviews consistently avoid detection by real users, but they provide the same level of user-perceived 'usefulness' as real reviews written by humans."

That the reviews are considered not just believable but "useful" is a big deal: It shows they are fulfilling their purpose of maliciously influencing human opinions.

The reviews are also only rarely picked up by plagiarism detection software, especially when they are configured to prioritise uniqueness. They're generated character-by-character, rather than just swapping out words in existing reviews. "It remains hard to detect machine-generated reviews using a plagiarism checker without inadvertently flagging a large number of real reviews," the researchers wrote. "This shows that the RNN does not simply copy the existing reviews from the training set."
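The character-by-character idea can be illustrated with a toy model. The sketch below uses a simple character-level Markov chain as a stand-in for the paper's RNN (an assumption for brevity): it learns which character tends to follow each short context, then samples new text one character at a time rather than copying whole phrases.

```python
import random
from collections import defaultdict

def train_char_model(corpus, order=4):
    """Map each `order`-length context to the characters seen after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def generate(model, seed, length=80, rng=random):
    """Sample text one character at a time, as the RNN in the paper does.
    The seed's length must equal the training order."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += rng.choice(choices)
    return out

# A tiny illustrative "training set" of review-like text (hypothetical).
reviews = "the pasta was great and the service was amazing. " * 20
model = train_char_model(reviews)
print(generate(model, "the "))
```

A real attack would train a multi-layer RNN on thousands of scraped reviews, but the sampling loop — emit one character, feed it back as context — is the same shape.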

The tech isn't being used by real people — yet

There's already a burgeoning underground industry for human-written fake reviews. If you know where to look and have some cash in the bank, you can pay people to write positive reviews for your business — or negative ones for your rivals.

But AI-generated reviews have the potential to "disrupt that industry," Zhao said.

While an attacker might previously have to pay significant sums for high-quality reviews (between $1 and $10 per review on Yelp), they can now generate thousands free of charge, and space them out so they don't attract suspicion — drastically increasing the threat that fake reviews pose.

Zhao said he hasn't seen any examples of AI being used to generate malicious fake reviews in the real world just yet.

But it would take someone "reasonably technically proficient" just "not very long at all" to build a similar system to the one the researchers developed, requiring nothing but some normal, off-the-shelf computer hardware and a database of real reviews (which can be easily found online).


It's an existential threat to review sites, but there are potential defences

Fake reviews produced on an industrial scale pose a major threat to companies like user-submitted review site Yelp, which sells itself on the reliability and helpfulness of its reviews. If any given review on a website could be fake, who would trust any of them?

Retailers like Amazon are also at risk — though Zhao points out that it can at least check if someone has bought the product they are reviewing. (Amazon and Yelp did not immediately respond to requests for comment as to whether they've seen computer-written reviews on their platforms yet, and how they see the tech evolving.)

But the researchers did find a potential way to fight this attack. While a fake review might look identical to a real one to a human, there are subtle differences that a computer program can detect, if it knows where to look — notably the distribution of characters (letters — a, b, c, d, and so on).

The fake reviews are derived from real reviews, and there is some information lost in the process. The fake reviews prioritise fluency and believability — so less noticeable features like character distribution take a hit.

"The information loss incurred during the training would propagate to the generated text," the researchers wrote, "leading to statistically detectable difference in the underlying character distribution between the generated text and human text."
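That detection signal — a statistical shift in the underlying character distribution — can be sketched in a few lines. This is an illustrative measure (a symmetrised KL-style comparison of character frequencies), not the paper's actual classifier, and the sample texts are made up.

```python
import math
from collections import Counter

def char_distribution(text):
    """Relative frequency of each character in a text."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

def distribution_divergence(p, q, eps=1e-9):
    """Symmetrised KL divergence between two character distributions.
    Larger values suggest the two texts come from different sources."""
    chars = set(p) | set(q)
    def kl(a, b):
        return sum(a.get(c, eps) * math.log(a.get(c, eps) / b.get(c, eps))
                   for c in chars)
    return kl(p, q) + kl(q, p)

human = char_distribution("I love this place, the service was amazing.")
suspect = char_distribution("zzqqxx jjkk vvww zzqqxx jjkk vvww")
same = char_distribution("I love this spot, the service was great.")

# The off-distribution text is much farther from the human baseline.
print(distribution_divergence(human, suspect) > distribution_divergence(human, same))  # True
```

In practice the paper's defence compares generated text against large corpora of known-human reviews, but the principle is the same: fluency-first generation leaks information in low-level statistics.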

There are ways attackers can try to get around this, Zhao said, by buying more expensive computer hardware and using it to generate more sophisticated neural networks. But employing these kinds of defences, at the very least, "raises the bar for attackers," making it harder for most of them to slip through.

If the cost of a successful attack is pushed up to the point that all but the most ardent attackers are put off, "that'll effectively be a win," he said. "That is all security does, raise the bar for attackers. You can't ever stop the truly determined and resourceful attacker."

This doesn't stop with reviews...

Reviews are in some ways an ideal place to start testing text-synthesising technology. They have a clearly defined purpose and direction, they're on a single subject, they follow a fairly standard structure, and they're short (the longer a fake review, the higher the chances of a mistake that gives the game away, Zhao said).

But that won't be where this tech ends.

"In general, the threat is bigger. I think the threat towards society at large and really disillusioned users and to shake our belief in what is real and what is not, I think that's going to be even more fundamental," Zhao said. "So we're starting with online reviews. Can you trust what so-and-so said about a restaurant or product? But it is going to progress.

"It is going to progress to greater attacks, where entire articles written on a blog may be completely autonomously generated along some theme by a robot, and then you really have to think about where does information come from, how can you verify ... that I think is going to be a much bigger challenge for all of us in the years ahead."

Business Insider has explored this theme before in features looking at the potential of developments in AI and computer generated imagery (CGI) to inflame "fake news," as well as impacting more broadly in areas ranging from information warfare to "revenge porn."

Zhao's message, he said, is "simple": "I want people to pay attention to this type of attack vector as a very real and immediate threat," urging companies like Yelp and Amazon to start thinking about defences, if their engineers aren't already.

The professor hopes that "we call more attention to designing not only defences for this particular attack, but more eyeballs and minds looking at the threats of really, really good AI from a more mundane perspective.

"I think so many people are focused on the Singularity and Skynet as a very catchy danger of AI, but I think there are many more realistic and practically impactful threats from really, really good AI and this is just the tip of the iceberg."

He added: "So I'd like folks in the security community to join me and look at these kind of problems so we can actually have some hope of catching up. I think the ... speed and acceleration of advances in AI is such that if we don't start now looking at defences, we may never catch up."

Can you tell if a review is real?

Lastly, here are six reviews. A number of them were generated by the University of Chicago researchers' neural network, while the others are real reviews cited in their paper. Can you tell which ones are real and which are fake?

1. Easily my favorite Italian restaurant. I love the taster menu, everything is amazing on it. I suggest the carpaccio and the asparagus. Sadly it has become more widely known and becoming difficult to get a reservation for prime times.

2. My family and I are huge fans of this place. The staff is super nice and the food is great. The chicken is very good and the garlic sauce is perfect. Ice cream topped with fruit is delicious too. Highly recommended!

3. I come here every year during Christmas and I absolutely love the pasta! Well worth the price!

4. Excellent pizza, lasagna and some of the best scallops I've had. The dessert was also extensive and fantastic.

5. The food here is freaking amazing, the portions are giant. The cheese bagel was cooked to perfection and well prepared, fresh & delicious! The service was fast. Our favorite spot for sure! We will be back!

6. I have been a customer for about a year and a half and I have nothing but great things to say about this place. I always get the pizza, but the Italian beef was also good and I was impressed. The service was outstanding. The best service I have ever had. Highly recommended.

The answers?

1 is real, 2 is fake, 3 and 4 are real, and 5 and 6 are fake.

The full paper, "Automated Crowdturfing Attacks and Defenses in Online Review Systems," is available online.



Amazon and Microsoft want their AI assistants to be friends. Here's what that really means. (MSFT, AMZN, AAPL, GOOG, GOOGL)



On Wednesday, Microsoft and Amazon made a surprise announcement: Cortana and Alexa, their respective AI-based voice assistants, will work together.  

Or, as Amazon CEO Jeff Bezos succinctly put it in a tweet: "Alexa has made a new friend."

For anyone following the rise of artificial intelligence and the spread of virtual assistants into our everyday lives, this feels like a big moment. 

Your home could soon be inhabited by multiple virtual beings — each already capable of talking to you — now also communicating with each other. 

It sounds like science fiction. And there's no question that this will bring about some cool new experiences and functionality when it takes effect at the end of 2017.

In real life, though, it's going to be a lot longer, if ever, before this AI friendship really pays off for you, the customer. Here's why.

The big idea 

The idea, say the two companies, is to play to each virtual assistant's strengths. 

Microsoft claims 145 million monthly active Cortana users, and Alexa-powered Amazon Echo devices dominate the still very young market for smart speakers. 

Alexa is good at (surprise, surprise) letting you shop on Amazon, and it has already emerged as the central concierge for a veritable menagerie of smart home products, from smart locks to refrigerators to lightbulbs.

Microsoft, by contrast, pitches Cortana as the ideal assistant for the tech-savvy professional: It's plugged in to the Office 365 productivity suite, so it has a view into your calendar and Word and Excel documents. In the not-so-distant future, Microsoft has said, Cortana and LinkedIn will even integrate to tell you about the people in your next meeting.

Let the two AIs play together, and you get some nice benefits.

You'll be able to use your Amazon Echo (or other Alexa device) to talk to Cortana, for example. And you can use your Windows 10 PC (or your phone's Cortana app) to talk to Alexa.

Just say "Alexa, open Cortana," or "Cortana, open Alexa," and your device will hand over control to the appropriate virtual assistant.  


But there are some significant limitations, as the New York Times reports. The assistants will be walled off from each other, almost entirely. So if you're using Microsoft's Cortana on your Amazon Echo Dot, and you want to play music from your Amazon Prime account, you'll have to switch back to Alexa.

This makes strategic sense — Amazon probably doesn't want Microsoft to see its customers' shopping behavior. And Microsoft has its own data that it doesn't want Alexa accessing directly. But from a user experience perspective, it stinks. Imagine needing to ask one specific member of your household every time you want to turn on the TV, and somebody else to dim the lights. 

I can't imagine that a lot of people out there will actually remember to switch between their assistants. Research released earlier in 2017 shows that while people will try lots of Alexa "skills," or apps, they don't really stick with them. And as it stands in this first version, Cortana is essentially just another Alexa skill. 

Eventually, Amazon CEO Jeff Bezos told the New York Times, the goal is for Alexa (or Cortana) to automatically route the right question to the right assistant, without your needing to think about it. The idea is that one assistant might be for your personal life, and one for your professional life. 
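That routing idea can be pictured as a trivial dispatcher that inspects an utterance and hands it to whichever assistant claims the topic. The keyword lists and assistant names below are purely illustrative assumptions, not how any real product works.

```python
# Toy dispatcher: route an utterance to whichever assistant handles it best.
# Keyword lists are illustrative assumptions, not real product behavior.
ROUTES = {
    "alexa": ["buy", "order", "music", "lights"],       # personal / home
    "cortana": ["calendar", "meeting", "email", "document"],  # work
}

def route(utterance, default="alexa"):
    """Return the name of the assistant that should handle the utterance."""
    words = utterance.lower().split()
    for assistant, keywords in ROUTES.items():
        if any(k in words for k in keywords):
            return assistant
    return default

print(route("add a meeting with Satya tomorrow"))  # prints cortana
print(route("order more coffee"))                  # prints alexa
```

A production system would use intent classification rather than keyword matching, but the design question is the same: which assistant owns which slice of your life.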

This is when things will really get interesting.

The long road ahead

AI interoperability is a grand idea, and something that Amazon and Microsoft will probably brag about a lot in the months and years to come. 

And it's easy to understand why they're so excited.

Amazon and Microsoft both missed out on the smartphone boom, relegating them to providing apps and services for other companies' platforms. The rise of the voice assistant represents a whole new platform; a change in the status quo that both Microsoft and Amazon are hoping to exploit.

With their powers combined, the partnership gives both assistants footholds in new markets — a vital hedge as Apple and Google go on the offensive with their own Siri and Google Assistant heading into the holiday shopping season.

Despite the shortcomings of the current Alexa-Cortana partnership, Microsoft and Amazon could be on track to solve a huge existential threat to the future of technology. The explosion of virtual assistants has set loose a slew of technologies, including Alexa, Cortana, Siri, Google Assistant, Samsung Bixby, and maybe even Facebook's M, that are spreading through your home: first with speakers, then with voice-controlled tablets, and next with home appliances.


It means that there's going to be a war for your home: Your toaster may use a different voice assistant than your fridge, which may be incompatible with the home entertainment system in your living room. When you say "hello" to your home, it may answer back in a veritable chorus of different voices.

That's the kind of chaotic scenario that nobody wants.

One obvious solution is to buy gadgets that support only one company's particular assistant, similar to today's iPhone or Windows ecosystems. But with the overall virtual assistant market still very much in flux, it may be a while before things settle down to the point where there are any real "safe," future-proofed options.

That makes the automatic voice-assistant aggregation envisioned by Bezos the sanest way to deal with the explosion of intelligence in the living room and office. But this system will only live up to its true potential and catch on with consumers if the gang of virtual assistants is able to talk to each other on their own, without too many constraints. And for now, that's still science fiction.



