
UK chief scientist: 'AI poses new questions about ethics and governance'



The government has released a report highlighting some of the benefits and challenges that are likely to come about as a result of advances in artificial intelligence being made by companies like Google, DeepMind, and Facebook.

Written by Chief Scientific Adviser Sir Mark Walport and published on Thursday, the report provides an overview of where we're at with AI before going on to highlight how it has the potential to fuel innovation and improve government services. 

The report — titled "Artificial intelligence: opportunities and implications for the future of decision making"— also looks at how government should "manage and mitigate" any negative effects that may be brought about as a result of AI.

"It is important to recognise that, alongside the huge benefits that artificial intelligence offers, there are potential ethical issues associated with some uses," Walport writes. "Many experts feel that government has a role to play in managing and mitigating any risks that might arise."

Walport also wants to open up the conversation on AI to ensure that scientists in the field gain public trust. "Public trust is a vital condition for artificial intelligence to be used productively," he writes. 

He believes that this can be achieved by introducing what he describes as "effective oversight." Walport adds: "Effective oversight will contribute to demonstrating trustworthiness. But at its core, trust is built through public dialogue."

Self-thinking machines have come a long way over the last couple of decades as computer chips have become more powerful, enabling them to process larger quantities of data and learn from that data.

Philosophers such as Nick Bostrom and scientists like Stephen Hawking have raised concerns that highly intelligent computers could pose a serious threat to humanity if they're not developed in the right way. 

Companies like DeepMind and Facebook openly publish significant quantities of AI research that is carried out by their employees. However, firms like Apple and Amazon, which are also trying to make advances in the field, remain relatively secretive about the AI work they're doing.  



8 details you may have missed on episode 7 of 'Westworld'



The 7th episode of HBO's "Westworld" shocked viewers with a big reveal about one of the main characters on the show. But that wasn't the only key moment in the episode. Here are 8 other details you might have missed that could help you figure out what is really going on in the park.


A futurist explains what Black Mirror's 'San Junipero' episode gets wrong about future tech


SPOILERS BELOW.


The "San Junipero" episode of "Black Mirror" has been hailed as brilliant and one of the few in the series with an optimistic vision of the future. (Again: major spoiler warning).

The series, now produced by Netflix, presents science-fiction short films about how tech could change the world in the near future.

"San Junipero" imagines a world where people can upload their brains into computers. Old people can live out fantasies in the virtual reality city of San Junipero. Dying people can be "uploaded to the cloud" and live there forever.

Futurist Robin Hanson, who wrote extensively about uploaded brains in "The Age of Em," saw the episode and wasn’t impressed.

"As usual, it misses the huge implications to focus on minor ones," Hanson wrote in a message.

Hanson, an associate professor of economics at George Mason University who has a background in physics and computer science, predicts that we’ll be able to upload brains within 100 years and that we'll have extensive virtual reality, so he thinks the show is believable there.

"The unrealism is in assuming the rest of the world stays the same, only effect is a new form of retirement," Hanson writes.

The rest of the world doesn’t seem to have changed much in "San Junipero" — at least the parts that we see.

Hanson’s prediction, by contrast, sees whole brain emulations (aka ems) radically and rapidly changing human society. Once brains can be uploaded to computers, he argues, we'll make countless copies of the most effective brains, running them at a thousand times human speed: soon ems will take over almost every job on the planet, while also building their own super-dense cities and evolving their own strange civilization. For more on Hanson’s vision, read our interview with him.

That is, of course, a pretty dramatic vision of change in the next century, but Hanson is not alone in predicting that radical changes will follow the next major breakthrough in computing (whether that's human-level AI or brain uploading).

While "San Junipero" doesn’t reveal much about the broader world, there’s not much evidence of radical social change. Retirement homes are populated by old people and staffed by young people, and it appears the only difference is having access to futuristic virtual reality.

Hanson, who has watched all of "Black Mirror," is dismissive of a lot of sci-fi.

“Even what they call hard science fiction tries often to get the physics or get the science right, but they’re usually just laughably wrong about the social science," he told Business Insider.


Google Play Music made a huge leap forward this week (GOOG, GOOGL)


True confession: I thought about quitting Google Play Music.

After declaring it better than Spotify last spring, I found myself disappointed with some features. Play Music's Concierge, which suggested music through a flow-chart of prompts it thought were relevant, wasn't adapting as much as I'd hoped, and its suggestions were getting old. Meanwhile, the app's personalized recommendations weren't as varied as I wanted. I knew I wasn’t alone when a friend, who'd taken my suggestion to use Play, said she was thinking about switching back to Pandora.

Play Music erased my complaints this week with a major update.

Google has torn down the wall between contextual recommendations and personalized recommendations, introducing one simple interface that takes into account everything the service knows about you.


It is not only simpler than the old Play Music but also apparently a lot smarter.

I’ve had the opportunity to compare and contrast versions because my Sonos app still uses the old Play Music interface. The difference is so stark that I tend to look up suggestions on the new Play Music and then search for them on Sonos.

One night this week OLD PLAY offered a set of underwhelming activity-based suggestions. Some were boring: "focusing" and "working to a beat" tend to lead to the same suggested stations I’ve heard dozens of times. Some weren’t relevant: I rarely spend my evening "watching the sunset" and I never listen to the comedy stations associated with "laughing out loud." As for the non-contextual recommendations, there was some good stuff but little variety and, annoyingly, "Simply Christmas" from Leslie Odom Jr. kept appearing on top despite my having no interest in Christmas music in November.

Meanwhile, NEW PLAY offered six great suggestions in one place. There was "Falsettoland," a musical I’ve been listening to all month, which the old app never thought to recommend. "Focusing" with "Lisztomania" was a good activity-based suggestion and an option I don’t remember landing on in the old app. "Similar to head-nodding beats" got my attention, and the suggested station, featuring Tribe, Roots, Mos Def, and Pete Rock, sounded good and was one I never saw on the old app. Count Bass D radio was a great option that wasn't surfacing on the old app. "More like the Bamboos" referred to a band I hadn't heard of but must have thumbed up on another station. Finally, there was a recommended new release, "Slum Village, Vol. 0."

If none of those suggestions worked, of course, the app has other ways to browse or search for music.

The new Play Music is not only simple and smart but deep. While the app has nowhere near as many playlists as Spotify, Play Music’s team puts a lot of work into building curated stations, which are now easier to find than ever. Between those curated stations, top-level custom radio, and a huge library that closely matches the competition's (aside from a few weeks of Kanye exclusivity here and there), all packaged in an unmatched interface, Play Music is a great choice.

Free users can listen to radio with occasional ads. Subscribers paying $9.99 a month get on-demand, ad-free listening as well as a subscription to YouTube Red, which removes ads from YouTube videos and includes some exclusive content.

Spotify, Apple Music, and others are striving for the same thing: a smart interface that quickly surfaces what you want at any given moment. So for that matter are Netflix, Amazon, Apple, and almost any online service company. When it comes to music streaming, though, I think Google does it best.



Google's DeepMind and Oxford are working on a lip-reading system (GOOG, GOOGL)


This story was delivered to BI Intelligence "Digital Media Briefing" subscribers. To learn more and subscribe, please click here.

Google’s DeepMind and the University of Oxford are working on a lip-reading system powered by artificial intelligence, New Scientist reports.

The AI system already outperforms professional lip-readers by a mile, opening doors to exciting new opportunities in consumer technology.

The two organizations applied a deep learning system to a large dataset of BBC programs. In total, the AI was trained with 118,000 sentences, using 5,000 hours of video from six TV shows that aired between January 2010 and December 2015. The system was then tested on live broadcasts between March and September 2016.

In a controlled test, the AI blew away professional (human) lip-readers. Tasked with transcribing 200 randomly selected clips from the dataset, the professionals correctly annotated just 12.4% of the words, while the AI got 46.8% of all words correct. This AI system is also said to be more accurate than all other automated lip-reading systems.
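As a back-of-the-envelope illustration of the metric behind those percentages, here is a toy word-accuracy function. It is not the evaluation code from the study, and its naive position-by-position comparison is cruder than the edit-distance alignment such benchmarks normally use; the transcripts are invented:

```python
def word_accuracy(reference, hypothesis):
    # Naive position-wise comparison; real evaluations align the two
    # transcripts with edit distance before counting correct words.
    correct = sum(r == h for r, h in zip(reference, hypothesis))
    return correct / len(reference)

# Hypothetical transcripts, for illustration only
ref = "we will leave for london tomorrow morning".split()
hyp = "we will leave for dublin tomorrow morning".split()

print(f"{word_accuracy(ref, hyp):.1%}")  # 6 of 7 words correct -> 85.7%
```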

This system is relevant to any context that uses speech recognition and a camera, such as:

  • Adding speech recognition to hearing aids. Lip-reading systems could be used to improve hearing aids by subtitling conversations in real time. Around 20% of Americans suffer from hearing loss, according to the Hearing Loss Association of America, and by age 65, one in three people has hearing loss. With the aging population, demand for hearing aids or lip-reading devices is only going to increase.
  • Augmenting camera-equipped sunglasses. This technology could complement products like Spectacles, Snap's camera-equipped sunglasses. Anyone with this product would theoretically be able to receive full transcriptions of conversations in real-time, if they’re able to get a close enough look at the speaker’s lips. This could be useful in loud locations.
  • Enabling silent dictation and voice commands. Another exciting use case for lip-reading technology is letting people mouth commands to their devices in silence. In this scenario, users wouldn’t have to speak out loud to Siri anymore. It also opens the door to visual passwords, because people's lips move in distinctive ways. This matters because a big reason consumers are reluctant to use voice assistants is that they're shy about speaking out loud to their devices, especially in public.


Microsoft's Chinese chatbot won't talk about Tiananmen Square or Donald Trump (MSFT)


Microsoft's Chinese chatbot Xiaoice is wildly popular — just don't ask it about Tiananmen Square.

The conversational two-year-old Chinese-speaking bot won't talk about certain controversial political topics, even refusing to talk to users if they persist in their attempts.

Alongside the iconic protests, Xiaoice also won't discuss US president-elect Donald Trump, Chinese president Xi Jinping, the Communist Party, or the Dalai Lama.

These restrictions were first spotted by China Digital Times, and also tested out by CNN.

A Microsoft spokesperson confirmed to Business Insider that the bot censors certain subjects, saying in a statement: "We’re committed to creating the best experience for everyone chatting with Xiaoice. With this in mind, we have implemented filtering on a range of topics."

When asked about Tiananmen by China Digital Times, Xiaoice responded: "You know very well that I can’t respond to that, boring." When China Digital Times persisted, it cut off communication entirely: "Unable to communicate with you, blacklisted!"

CNN asked it about Donald Trump, and it innocuously replied: "I'm just a random observer."

"Am I stupid? Once I answer you'd take a screengrab," it said in response to questions about Xi Jinping and the Communist Party.
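The behavior described above (deflect on filtered topics, then cut off persistent users) can be sketched as a toy state machine. Everything here is invented for illustration; Microsoft has not published how Xiaoice's filtering actually works:

```python
class TopicFilter:
    # Toy model of the escalating refusal behavior described above.
    # Purely illustrative: the topic list and strike logic are invented,
    # not Microsoft's implementation.
    BLOCKED = {"tiananmen", "xi jinping", "communist party",
               "donald trump", "dalai lama"}

    def __init__(self, max_strikes=3):
        self.strikes = 0
        self.max_strikes = max_strikes

    def reply(self, message):
        if self.strikes >= self.max_strikes:
            return "Unable to communicate with you, blacklisted!"
        if any(topic in message.lower() for topic in self.BLOCKED):
            self.strikes += 1
            return "You know very well that I can't respond to that, boring."
        return "Happy to chat about something else!"

bot = TopicFilter(max_strikes=2)
print(bot.reply("What do you think of Tiananmen?"))   # deflects
print(bot.reply("Come on, tell me about Tiananmen!")) # deflects again
print(bot.reply("Hello?"))                            # now blacklisted
```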

While this filtering might seem alarming to Westerners, it is an unavoidable cost of doing business in China. The country has a heavily censored internet, restricting its citizens' access to information on certain subjects and entire websites using the infamous "Great Firewall of China."

Foreign companies that do work in China are forced to agree to its demands or face being banned entirely. In 2010, Google stopped censoring its Chinese search results after a series of cyberattacks — and was subsequently blocked in the country. (It remains banned there today.)

And Facebook, which is currently banned in China, has reportedly built a tool that would automatically censor certain topics in a bid to be unbanned. Several employees allegedly quit in protest over its development.

Xiaoice was launched in 2014, and is a smash hit with Chinese users. It has 20 million registered users, The New York Times reported in 2015, with many turning to it for companionship when feeling lonely.

Microsoft's attempts to launch a chatbot in English-speaking markets have been less successful. In March 2016, it released "Tay" — an English-speaking chatbot that emulated the jokey speech patterns of a millennial and learned from its interactions with users.

But the experiment descended into farce after Tay "learned" to be a genocidal racist, calling for the extermination of Jews and Mexicans, insulting women, and denying the existence of the Holocaust.


The company subsequently apologised, deleted its tweets and took the bot offline permanently.


7 details you may have missed on episode 9 of 'Westworld'


The second-to-last episode of season 1 of "Westworld" had some big reveals for fans and hinted at an even bigger reveal in the finale. There were several throwbacks to scenes from the premiere, and even a subtle reference to a popular Michael Crichton film that you might not have noticed. Here's a look at some of the details you may have missed in episode 9.



There might be an algorithm that explains intelligence


The human brain is the most sophisticated organ in the human body. The things that the brain can do, and how it does them, have even inspired a model of artificial intelligence (AI). Now, a recent study published in the journal Frontiers in Systems Neuroscience shows how human intelligence may be a product of a basic algorithm.

This algorithm is found in the Theory of Connectivity, the proposal that a "relatively simple mathematical logic underlies our complex brain computations," according to researcher and author Joe Tsien, a neuroscientist at the Medical College of Georgia at Augusta University, co-director of the Augusta University Brain and Behavior Discovery Institute, and Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology. He first proposed the theory in October 2015.

Basically, it's a theory about how the acquisition of knowledge, as well as our ability to generalize and draw conclusions from it, is a function of billions of neurons assembling and aligning. "We present evidence that the brain may operate on an amazingly simple mathematical logic," Tsien said.

THE BRAIN'S FORMULA

The theory describes how groups of similar neurons form a complexity of cliques to handle basic ideas or information. These groups cluster into functional connectivity motifs (FCMs), which handle every possible combination of ideas. More cliques are involved in more complex thoughts.

To test it, Tsien and his team monitored and documented how the algorithm works in seven different brain regions, each involved in handling basics like food and fear, in mice and hamsters. The algorithm, a power-of-two-based permutation logic (N = 2^i - 1, where i is the number of distinct inputs), predicted how many cliques are necessary for an FCM, according to the study.

They gave the animals various combinations of four different foods (rodent biscuits, pellets, rice, and milk). Using electrodes placed at specific areas of the brain, they were able to "listen" to the neurons' response. The scientists were able to identify all 15 different combinations of neurons, or cliques, that responded to the assortment of food combinations, as the Theory of Connectivity would predict. Furthermore, these neural cliques seem prewired in the brain, as they appeared as soon as the food choices did.
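The power-of-two logic is easy to check: the number of nonempty combinations of i distinct inputs is N = 2^i - 1, so four foods yield 15 cliques. A quick sketch:

```python
from itertools import combinations

def clique_count(i):
    # Theory of Connectivity: N = 2^i - 1 neural cliques cover every
    # nonempty combination of i distinct inputs.
    return 2 ** i - 1

foods = ["biscuits", "pellets", "rice", "milk"]
# Enumerate every nonempty subset of the four foods
subsets = [c for r in range(1, len(foods) + 1)
           for c in combinations(foods, r)]

print(len(subsets))     # 15 nonempty combinations of four foods
print(clique_count(4))  # N = 2^4 - 1 = 15, matching the observed cliques
```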

If the intelligence in the human brain, in all its complexity, can be summed up by a particular algorithm, imagine what it means for AI. It is possible, then, for the same algorithm to be applied to how AI neural networks work, as these already mimic the brain's structural wiring.


21 technology tipping points we will reach by 2030


From driverless cars to robotic workers, the future is going to be here before you know it.

Many emerging technologies you hear about today will reach a tipping point by 2025, according to a report from The World Economic Forum’s Global Agenda Council on the Future of Software & Society.

The council surveyed more than 800 executives and experts from the technology sector to share their respective timelines for when technologies would become mainstream.

From the survey results, the council identified 21 defining moments, all of which they predict will occur by 2030.

Here’s a look at the technological shifts you can expect during the next 14 years.


90% of the population will have unlimited and free data storage by 2018.

Deleting files to make room for new ones is going to become a thing of the past. In less than three years, about 90% of people will have unlimited and free data storage that will ultimately be ad-supported, according to the report.

We are already seeing some companies offer cheap or completely free storage. For example, Google Photos already offers unlimited storage for photos, and Amazon will let you store an unlimited amount of whatever you want for just $60 a year.

A big reason companies can do this is that hard-drive cost per gigabyte continues to fall. Meanwhile, more data is being created than ever before: according to the report, an estimated 90% of all data was created in just the last two years.

Still, there are signs this prediction may not pan out. Microsoft recently killed its plan offering unlimited storage on its cloud service OneDrive.



The first robotic pharmacist will arrive in the US by 2021.

Robots already have a big presence in the manufacturing industry, but as they become more advanced we will see them enter new service-oriented jobs.

In fact, respondents predict that by 2021, we will even have the first robot pharmacist in the US.



1 trillion sensors will be connected to the internet by 2022.

As the cost of sensors continues to decline and computing power increases, all kinds of devices will increasingly become connected to the internet. From the clothes you wear to the ground you walk on, everything will come online.

According to the report, it's predicted that 1 trillion sensors will be connected as early as 2022: "Every (physical) product could be connected to ubiquitous communication infrastructure, and sensors everywhere will allow people to fully perceive their environment."




Facebook wants to teach you about artificial intelligence now so it can hire you later (FB)


"We need everybody to contribute to artificial intelligence," Joaquin Quiñonero Candela, the head of Facebook's Applied Machine Learning (AML) research division, tells Business Insider.

Candela is speaking about Facebook itself, sure, as the social network is racing against the other major tech titans to hire as many artificial intelligence and machine learning experts as it can all over the world, even as it makes AI crucial to the way it builds products and does business.

But he's also speaking broadly: As Facebook, Google, Microsoft and even Elon Musk make big bets that artificial intelligence is integral to the future of computing, there literally aren't enough experts with the right skills available to hire.

"You can't hire enough people who are machine learning experts in the world," says Candela. 

So Facebook wants to create more AI experts.

The company has a new outreach campaign that explains how artificial intelligence is neither scary nor incomprehensible — it's just another tool in the programmers' toolbelt that can help them build "new experiences," and a set of skills that are going to be incredibly valuable in the future.

Facebook has released a new video, starring Facebook AI chief Yann LeCun, that tries to explain in simple terms how scary-sounding concepts like "deep learning" are really just math.

From Candela's perspective, this is a small part of a necessary larger outreach to the next generation of programmers.

In the same way that people 15 to 20 years ago ended up wishing that they had studied computer science more closely in school once the PC revolution was underway, he says, it's "quite obvious" that 10 years from now, lots of people, programmers included, are going to wish they studied artificial intelligence. So if you want to work for Facebook, it's definitely the right field to study.

"There's just a gap," Candela says.


This AI can create 'videos of the future'


Researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a deep-learning system that analyzes a static photo, predicts what will happen next, and produces a short video clip of this anticipated future – or what they call "Creating Videos of the Future."

The system was trained on 2 million unlabeled videos, or about two years of footage. When fed a photo, the system generates a video predicting the next couple of seconds in the photo's scene. These algorithm-generated videos were deemed realistic by 20% of human subjects, based on 13,000 opinions from 150 users.

This and similar technologies will have a range of applications in the future. There are obvious use cases for content creation, enabling new videos to be automatically created from a collection of photos. This is one example of the new creative opportunities that artificial intelligence will unlock. Moreover, the CSAIL researchers say future versions of their system could be used to improve security tactics and create safer self-driving cars.

This could be crucial for the mass adoption of self-driving cars, as safety has been of utmost concern to drivers who are not entirely comfortable with the idea of giving control over to a machine while in the car.

BI Intelligence has compiled a detailed report on self-driving cars that examines the major strides automakers and tech companies have made to overcome the barriers currently preventing fully autonomous cars from hitting the market. Further, the report examines global survey results showing where fully autonomous cars are highly desired.

Here are some key takeaways from the report:

  • Three barriers have been preventing fully autonomous cars from hitting the road: 1) high technological component prices; 2) varying degrees of consumer trust in the technology; and 3) relatively nonexistent regulations. However, in the past six months, there have been many advances in overcoming these barriers.
  • Technology has been improving as new market entrants find innovative ways to expand on existing fully autonomous car technology. As a result, the price of the components required for fully autonomous cars has been dropping.
  • Consumer trust in fully autonomous vehicle technology has increased in the past two years.
  • California became the first US state to propose regulations. California's regulations stipulate that a fully autonomous car must have a driver behind the wheel at all times, discouraging Google's and Uber's idea of a driverless taxi system.

In full, the report:

  • Examines consumer trust in fully autonomous vehicles
  • Identifies technological advancements that have been made in the industry
  • Analyzes the cost of fully autonomous technology and identifies how cost is being reduced
  • Explains the current regulations surrounding fully autonomous cars


MasterCard is turning to AI to prevent false declines (MA)


MasterCard will incorporate machine learning and artificial intelligence (AI) into fraud prevention through a new tool, called Decision Intelligence.

The platform, which the firm calls a “comprehensive decision and fraud detection service,” gathers information including value, time, device, and merchant data, and uses an algorithm to make a real-time decision about a purchase's legitimacy, ultimately cutting down on both false declines and actual fraud.

The tool can also learn from each transaction and apply that knowledge to later purchases, therefore improving its abilities over time. Decision Intelligence, which will launch in all markets, marks MasterCard’s first global implementation of AI. 
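To make the idea concrete, here is a toy rule-based version of such a real-time decision, combining value, time, device, and merchant signals into one score. Mastercard's actual model is proprietary and learned from transaction data; every field name, weight, and threshold below is invented for illustration:

```python
def decide(txn, threshold=0.5):
    # Invented weights for illustration; a real system learns these from
    # historical transactions rather than hand-coding them.
    score = 0.0
    if txn["amount_usd"] > 1000:
        score += 0.3   # unusually large purchase
    if txn["hour"] < 6:
        score += 0.2   # odd hour for a purchase
    if txn["new_device"]:
        score += 0.3   # first purchase seen from this device
    if txn["merchant_high_risk"]:
        score += 0.3   # merchant category with elevated fraud rates
    return "decline" if score >= threshold else "approve"

print(decide({"amount_usd": 40, "hour": 14,
              "new_device": False, "merchant_high_risk": False}))  # approve
print(decide({"amount_usd": 2500, "hour": 3,
              "new_device": True, "merchant_high_risk": False}))   # decline
```

The point of scoring rather than hard-blocking is the one the article makes: a single weak signal (say, a new device) is not enough to decline on its own, which is how such systems cut down on false declines.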

That’s critical for both merchants and Mastercard as they look to solve a complex problem.

  • Card fraud continues to rise, especially when it comes to e-commerce. Both the number and value of fraudulent transactions rose in the US in 2016. Much of that fraud is shifting online —  from October 2015 through April 2016, e-commerce fraud attacks rose 11%. And cost per dollar for digital fraud grew by up to 12% year-over-year, compared to just 3% growth for physical fraud, which illustrates how costly this is for merchants. That’s led players to invest in robust fraud prevention offerings, including extra verification steps and increased manual review. 
  • But these efforts may actually create more problems. Adding steps or increasing manual review can make purchasing challenging, which can lead to cart abandonment and lost sales. But even when customers do complete a purchase, these tools aren't always effective. Stringent purchase monitoring increases false declines, or purchases that are declined because they look fraudulent, but aren’t. Fifty-eight percent of declined transactions were false positives in 2016, according to LexisNexis. BI Intelligence estimates this cost US e-commerce merchants $8.6 billion last year, exceeding both actual fraud losses and the amount saved from prevented fraud. 

Using smarter tools could help mitigate some of that impact while improving purchase security. The most effective way to prevent fraud while limiting false declines is through "multilayered" products, which can bring down false decline rates by up to 13 percentage points, according to LexisNexis.

Decision Intelligence fits into this category, and could therefore help MasterCard's many partners limit false declines without further impacting the consumer purchasing experience. Such a tool could ultimately increase purchase volume and limit losses, generating revenue benefits for issuers, merchants, and Mastercard alike.

E-commerce merchants are dealing with rising fraud, and in response, they're putting stronger safeguards in place to try to protect against these unlawful transactions. However, e-commerce companies often over-correct for the threat of fraud, leading to false declines, also known as false positives, which occur when a legitimate transaction is rejected.

These false declines are becoming a costlier problem than actual fraud — US e-commerce merchants will lose $8.6 billion in falsely declined transactions in 2016, according to our estimates. This amounts to over $2 billion more than the $6.5 billion in fraud they will prevent, meaning that false declines are undermining these merchants' ability to effectively combat fraud.
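The arithmetic behind that comparison, made explicit (figures are BI Intelligence's 2016 estimates from the passage above):

```python
false_decline_cost = 8.6  # $B lost to falsely declined transactions (2016 est.)
prevented_fraud = 6.5     # $B of fraud the same safeguards will prevent

gap = false_decline_cost - prevented_fraud
print(f"False declines exceed prevented fraud by ${gap:.1f} billion")
# False declines exceed prevented fraud by $2.1 billion
```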

BI Intelligence, Business Insider's premium research service, has compiled a detailed report on false declines that looks at the rising cost of fraud and how false declines are actually a larger direct cost for merchants than fraud itself. It also identifies the reasons why e-commerce and mobile commerce companies are particularly vulnerable to these trends. And lastly, it lays out some of the major causes of false declines and the most important solutions being put in place to try to combat this problem.

Here are some of the key takeaways from the report:

  • Fraud is rising in the US, costing merchants 1.47% of annual revenue in 2016, up from 0.51% in 2013. As fraud eats into revenue, merchant processors and acquirers are seeking to blunt its impact by enforcing strict rules to block suspicious transactions.
  • False declines — valid transactions that are incorrectly rejected — are unintended consequences of e-commerce merchants' fraud prevention strategies. False declines, also called "false positives," will cost e-commerce companies $8.6 billion in 2016, according to our estimates. This eclipses the $6.5 billion in prevented fraud, meaning false declines must be reduced in order for merchants' fraud prevention strategies to be cost effective.
  • Causes of false declines fall into three buckets. False declines can be caused by identity-related, technical, or structural issues. Examples of causes include conflicting shipping and billing information, outdated card information, and differing risk appetites among issuers and merchant acquirers/processors.
  • There are solutions for two of the types of causes. E-commerce merchants can solve identity-related problems by requiring their customers to authenticate themselves through more accurate means, such as 3D Secure and biometrics. Merchants can solve for technical issues associated with false declines by using smart routing, card updaters, and local domains.
  • But structural problems are limiting the effectiveness of these solutions. Each issuer, acquirer, and processor makes decisions on fraud using their own set of standards. This makes it difficult to contain the problem of false declines because stakeholders can't control each other's criteria.

In full, the report:

  • Estimates the cost of false declines compared to fraud losses and prevented fraud.
  • Determines the effectiveness of e-commerce merchants' current fraud prevention strategies.
  • Categorizes and explains the various causes of false declines.
  • Uncovers the potential solutions to solving false declines.
  • Provides guidance on how merchants can minimize the issue of false declines going forward.

 Interested in getting the full report? Here are two ways to access it:

  1. Subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you'll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >> START A MEMBERSHIP
  2. Purchase & download the full report from our research store. >> BUY THE REPORT


Stephen Hawking: Automation and AI is going to decimate middle class jobs



Artificial intelligence and increasing automation are going to decimate middle class jobs, worsening inequality and risking significant political upheaval, Stephen Hawking has warned.

In a column in The Guardian, the world-famous physicist wrote that "the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining."

He adds his voice to a growing chorus of experts concerned about the effects that technology will have on the workforce in the coming years and decades. The fear is that while artificial intelligence will bring radical increases in efficiency to industry, for ordinary people it will translate into unemployment and uncertainty as their jobs are replaced by machines.

Technology has already gutted many traditional manufacturing and working class jobs — but now it may be poised to wreak similar havoc with the middle classes.

A report put out in February 2016 by Citibank in partnership with the University of Oxford predicted that 47% of US jobs are at risk of automation. In the UK, 35% are. In China, it's a whopping 77% — while across the OECD it's an average of 57%.

And three of the world's 10 largest employers are now replacing their workers with robots.

Automation "in turn will accelerate the already widening economic inequality around the world," Hawking wrote. "The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive."

He frames this economic anxiety as a reason for the rise in right-wing, populist politics in the West: "We are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent."

Combined with other issues — overpopulation, climate change, disease — we are, Hawking warns ominously, at "the most dangerous moment in the development of humanity." Humanity must come together if we are to overcome these challenges, he says.

Stephen Hawking has previously expressed concerns about artificial intelligence for a different reason — that it might overtake and replace humans. "The development of artificial intelligence could spell the end of the human race," he said in late 2014. "It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."



Facebook gives a lesson on AI (FB)


This story was delivered to BI Intelligence "Digital Media Briefing" subscribers. To learn more and subscribe, please click here.

Facebook has released an explainer on artificial intelligence (AI) with commentary from Yann LeCun, the company's head of AI research.

This consists of a written post outlining the principles of AI and a video series briefly explaining key concepts like machine learning, gradient descent, deep learning, backpropagation, and convolutional neural networks. Each of these techniques is central to Facebook's progress in object recognition in images, as well as text and speech recognition.

At the highest level, Facebook describes the three types of learning in AI:

  • Reinforcement learning. Inspired by behavioral psychology, this area of machine learning focuses on reward-based decision-making — in other words, teaching machines to take actions in pursuit of a reward. Reinforcement learning is frequently used when training machines to play and win games like chess, Go, and video games. Its principal disadvantage is that an extremely large number of trials is needed for the machine to learn even a simple task.
  • Supervised learning. The most common mode of learning for a machine, supervised learning is like showing a child how to recognize an object by showing them a picture book, where the adult knows the answer and the child learns by observation and experience. At first, the machine won’t know how to distinguish the object, but will learn to after thousands or millions of trial runs with different labeled images. Eventually, the machine will be able to recognize the object even in images that it has never seen before, achieving what’s called “generalization ability.”
  • Unsupervised/predictive learning. This refers to the kind of learning carried out naturally by humans and animals, which occurs spontaneously, in an unsupervised manner, through observation, experience, and intuition, over the course of a lifetime. Humans don’t know how to teach machines at this level yet, not in a way that’s similar to humans or animals. Our dearth of understanding in this space – our lack of techniques for unsupervised or predictive AI learning – is a major stumbling block in AI right now.
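To make the supervised case concrete, here is a toy sketch in Python: a nearest-centroid classifier trained on a handful of labeled points, which then labels a point it has never seen, a miniature version of the "generalization ability" described above. The data and class names are invented for illustration; this is not Facebook's training code.

```python
# Minimal supervised learning: fit per-class centroids from labeled examples,
# then classify unseen points by their nearest centroid. Toy data, toy model.

def train(examples):
    """examples: list of ((x, y), label). Returns a centroid per label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Label a new point by its nearest centroid (squared distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

# Labeled training data: "cat"-ish points near (1, 1), "dog"-ish near (5, 5).
training = [((1, 1), "cat"), ((1, 2), "cat"), ((2, 1), "cat"),
            ((5, 5), "dog"), ((5, 6), "dog"), ((6, 5), "dog")]
model = train(training)
print(predict(model, (1.5, 1.2)))  # an unseen point -> "cat"
```

After enough labeled examples, the model labels points it was never shown; that is the generalization step the explainer describes, just at microscopic scale.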

AI is one of three areas in Facebook's 10-year innovation roadmap, alongside greater connectivity initiatives and virtual and augmented reality. Over 40 teams at the company and more than a quarter of its engineers use AI in the products they build. Facebook CEO Mark Zuckerberg touched on this in the Q3 earnings call:

  • Linguistic understanding. This includes initiatives to read and understand articles and posts on the platform, to interpret messages between users and businesses so chatbots can generate appropriate auto-replies, and to surface relevant and interesting content in the news feed.
  • Visual understanding. This involves understanding what’s in photos and videos, what people are doing inside the videos, and the objects in the scene, which can then be used to help visually impaired people, as well as create better rankings in the news feed.

Advancements in artificial intelligence, coupled with the proliferation of messaging apps, are fueling the development of chatbots — software programs that use messaging as the interface through which to carry out any number of tasks, from scheduling a meeting, to reporting weather, to helping users buy a pair of shoes. 

Foreseeing immense potential, businesses are starting to invest heavily in the burgeoning bot economy. A number of brands and publishers have already deployed bots on messaging and collaboration channels, including HP, 1-800-Flowers, and CNN. While the bot revolution is still in the early phase, many believe 2016 will be the year these conversational interactions take off.

Laurie Beaver, research associate for BI Intelligence, Business Insider's premium research service, has compiled a detailed report on chatbots that explores the growing and disruptive bot landscape by investigating what bots are, how businesses are leveraging them, and where they will have the biggest impact.

The report outlines the burgeoning bot ecosystem by segment, looks at companies that offer bot-enabling technology, distribution channels, and some of the key third-party bots already on offer. The report also forecasts the potential annual savings that businesses could realize if chatbots replace some of their customer service and sales reps. Finally, it compares the potential of chatbot monetization on a platform like Facebook Messenger against the iOS App Store and Google Play store.

Here are some of the key takeaways:

  • AI has reached a stage in which chatbots can have increasingly engaging and human conversations, allowing businesses to leverage the inexpensive and wide-reaching technology to engage with more consumers.
  • Chatbots are particularly well suited for mobile — perhaps more so than apps. Messaging is at the heart of the mobile experience, as the rapid adoption of chat apps demonstrates.
  • The chatbot ecosystem is already robust, encompassing many different third-party chat bots, native bots, distribution channels, and enabling technology companies. 
  • Chatbots could be lucrative for messaging apps and the developers who build bots for these platforms, similar to how app stores have developed into moneymaking ecosystems.  

In full, the report:

  • Breaks down the pros and cons of chatbots.
  • Explains the different ways businesses can access, utilize, and distribute content via chatbots.
  • Forecasts the potential impact chatbots could have for businesses.
  • Looks at the potential barriers that could limit the growth, adoption, and use of chatbots.

To get your copy of this invaluable guide, choose one of these options:

  1. Subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND over 100 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> START A MEMBERSHIP
  2. Purchase the report and download it immediately from our research store. >> BUY THE REPORT

The choice is yours. But however you decide to acquire this report, you’ve given yourself a powerful advantage in your understanding of chatbots.



DeepMind is opening up its 'flagship' platform to AI researchers outside the company (GOOG)



Artificial intelligence (AI) researchers around the world will soon be able to use DeepMind's "flagship" platform to develop innovative computer systems that can learn and think for themselves.

DeepMind, which was acquired by Google for £400 million in 2014, announced on Monday that it is open-sourcing its "Lab" from this week onwards so that others can try and make advances in the notoriously complex field of AI.

The company says that the DeepMind Lab, which it has been using internally for some time, is a 3D game-like platform tailored for agent-based AI research.

Founded in 2010, DeepMind has been developing AI agents that can master arcade games like "Space Invaders," "Pac-Man," and, more recently, the incredibly complex Chinese board game Go.

Describing the Lab platform in a blog post, DeepMind cofounder Shane Legg and DeepMind employees Charles Beattie, Joel Leibo, and Stig Petersen wrote: "It is observed from a first-person viewpoint, through the eyes of the simulated agent. Scenes are rendered with rich science fiction-style visuals. The available actions allow agents to look around and move in 3D. The agent’s 'body' is a floating orb. It levitates and moves by activating thrusters opposite its desired direction of movement, and it has a camera that moves around the main sphere as a ball-in-socket joint tracking the rotational look actions.

"Example tasks include collecting fruit, navigating in mazes, traversing dangerous passages while avoiding falling off cliffs, bouncing through space using launch pads to move between platforms, playing laser tag, and quickly learning and remembering random procedurally generated environments."
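The agent setup DeepMind describes reduces to an observation-action loop: the agent acts, the environment advances one step and occasionally returns a reward (a fruit pickup, say). Below is a minimal sketch of that loop against a made-up stand-in environment; DeepMind Lab's actual Python API uses different observation and action specifications, so every name here is hypothetical.

```python
import random

# A toy stand-in for a 3-D environment: the agent picks one of a few
# movement actions each step and occasionally receives a reward,
# loosely mimicking fruit pickups. Hypothetical interface for illustration only.
class ToyEnv:
    ACTIONS = ["look_left", "look_right", "move_forward", "move_back"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps_left = 0

    def reset(self, episode_length=100):
        self.steps_left = episode_length

    def is_running(self):
        return self.steps_left > 0

    def step(self, action):
        self.steps_left -= 1
        # Reward moving forward 10% of the time.
        return 1.0 if action == "move_forward" and self.rng.random() < 0.1 else 0.0

env = ToyEnv()
env.reset()
total_reward = 0.0
while env.is_running():
    action = random.choice(ToyEnv.ACTIONS)  # a random agent, the simplest baseline
    total_reward += env.step(action)
print(f"episode reward: {total_reward}")
```

A learning agent would replace the `random.choice` with a policy that improves from the rewards it collects; the surrounding loop stays the same.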


The DeepMind Lab aims to combine several different AI research areas into one environment. Researchers will be able to test their AI agent's abilities on navigation, memory, and 3D vision, while determining how good they are at planning and strategy. "Each are considered frontier research questions in their own right," DeepMind wrote in the blog post. "Putting them all together in one platform, as we have, represents a significant new challenge for the field."

The Lab can be adapted and extended, with the possibility to create new "levels" that can be "customised with gameplay logic, item pickups, custom observations, level restarts, reward schemes, in-game messages and more."

"We believe it has already had a significant impact on our thinking concerning numerous aspects of intelligence, both natural and artificial," wrote the blog posts' authors. "However, our efforts so far have only barely scratched the surface of what is possible in DeepMind Lab. There are opportunities for significant contributions still to be made in a number of mostly still untouched research domains now available through DeepMind Lab."

DeepMind said the code for the DeepMind Lab platform is going to be published on GitHub this week.

The academic paper describing DeepMind Lab can be read here.



Uber just bought a startup to help launch the company's first artificial intelligence lab



Uber made two big announcements on Monday: It is acquiring Geometric Intelligence, a New York-based startup, and using that 15-person team to help launch a new artificial intelligence division within Uber, called Uber AI Labs.

Uber AI Labs will focus on improving both ride-hailing software and the company's self-driving car software, according to a blog post by Uber's product chief Jeff Holden. 

Geometric Intelligence, a 2-year-old artificial intelligence startup, will move its 15-person staff from New York to Uber's San Francisco headquarters to help build the lab.

"In spite of notable wins with machine learning in recent years, we are still very much in the early innings of machine intelligence," Holden wrote in the blog post. "The formation of Uber AI Labs, to be directed by Geometric’s Founding CEO Gary Marcus, represents Uber’s commitment to advancing the state of the art, driven by our vision that moving people and things in the physical world can be radically faster, safer and accessible to all."

According to The Wall Street Journal, Uber AI Labs will start aggressively hiring and plans to open an office in the UK. 

The terms of the deal were not disclosed.

SEE ALSO: We rode in Uber's self-driving car — here's what it was like



How artificial intelligence software is reflecting the tech industry's gender-diversity shortcomings



Only 26 percent of computer professionals were women in 2013, according to a recent review by the American Association of University Women.

That figure has dropped 9 percentage points since 1990.

Explanations abound. Some say the industry is masculine by design. Others claim computer culture is unwelcoming — even hostile — to women.

So, while STEM fields like biology, chemistry, and engineering see an increase in diversity, computing does not. Regardless, it’s a serious problem.

Artificial intelligence is still in its infancy, but it’s poised to become the most disruptive technology since the Internet. AI will be everywhere — in your phone, in your fridge, in your Ford. Intelligent algorithms already track your online activity, find your face in Facebook photos, and help you with your finances. Within the next few decades they’ll completely control your car and monitor your heart health. An AI may one day even be your favorite artist.

The programs written today will inform the systems built tomorrow. And if designers all have one worldview, we can expect equally narrow-minded machines.

AI is already biased

Last year, a Carnegie Mellon University study found that far fewer women than men were shown Google ads for high-paying jobs. The researchers developed a tool called AdFisher that creates simulated profiles and runs browser experiments, surfing the web and collecting data on how slight changes in profiles and preferences affect the content shown.

“The male users were shown the high-paying job ads about 1,800 times, compared to female users who saw those ads about 300 times,” Amit Datta, a Ph.D. student in electrical and computer engineering, said in a press release.

The systems aren’t just gender biased. In May, an investigative report by ProPublica found that popular software used to predict future criminals had racist tendencies: the system would falsely flag black defendants as high risk while often incorrectly labeling white defendants as low risk.
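Disparities like the one ProPublica reported are typically measured by comparing error rates across groups, for instance the false positive rate (the share of genuinely low-risk people wrongly flagged as high risk). A short sketch with invented counts, not ProPublica's data:

```python
# Compare false positive rates across two groups from simple count data.
# The counts below are made up for illustration, not ProPublica's figures.

def false_positive_rate(false_positives, true_negatives):
    """Share of genuinely low-risk people incorrectly flagged as high risk."""
    return false_positives / (false_positives + true_negatives)

groups = {
    "group_a": {"fp": 45, "tn": 55},  # flagged high risk, did not reoffend
    "group_b": {"fp": 23, "tn": 77},
}
for name, counts in groups.items():
    rate = false_positive_rate(counts["fp"], counts["tn"])
    print(f"{name}: false positive rate {rate:.0%}")
```

If the two rates differ substantially, the system is making its mistakes unevenly across groups, which is exactly the pattern the ProPublica investigation described.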


Microsoft researcher Margaret Mitchell called AI a “sea of dudes.” Kate Crawford, a principal researcher at Microsoft and co-chair of a White House symposium on AI, claimed the industry has a “white guy problem” in an article for the New York Times. “We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future,” she wrote. “Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters… Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

Fixing the system

The sea of dudes was overflowing at the Rework Deep Learning Summit in London this September, where women made up only a small fraction of the roughly 500 attendees.

Rework founder Nikita Johnson recognizes the gender disparity in AI and wants to do something about it.

“If AI systems are built primarily by men only, then they are more likely to create biased results and the representation of the builders will dominate,” she told Digital Trends. “By limiting diversity in teams, we limit the breadth of experience that can be brought into a project. For instance, data sets need to be assembled by both men and women to ensure that the results from the data include a broad look at gender issues.”

Through “Women in Machine Intelligence” events, Johnson and her mostly-female team highlight female talent and encourage attendees to find peers, partners, and mentors in other women. She sees this networking as a necessary step towards growing female representation in the field.

“One of the reasons [for the lack of diversity within the AI community] is the cycle between a lack of role models for young women and girls to look up to,” Johnson said. “Therefore the lack of motivation for women and girls to choose AI and computer science as a potential career route.”

But as Johnson pointed out, there are plenty of inspiring women in computing, a handful of whom spoke at the conference.

Irina Higgins from Google DeepMind demonstrated her team’s concept for a system that can learn visual concepts without supervision. Raia Hadsell, also of DeepMind, discussed her team’s research into simulated deep reinforcement learning, a process by which a physical robot trains skills through a simulated version of itself. She likened the technique to learning like humans do.


A couple hours later, Miriam Redi of Bell Labs Cambridge shined a light on the invisible side of visual data by showing how researchers can uncover the subjective biases built into systems. In a study on aesthetics, Redi and her team developed a deep learning system that automatically scores images in terms of their compositional beauty. The study helped clarify what makes a good portrait different from a bad one from an algorithmic perspective, but also revealed a group of features about race and gender, which the system incorrectly deemed relevant.

“By doing these kinds of studies and explorations of what’s going on inside of our subjective machine visions system, we can avoid these kind of racist behaviors of the machines,” Redi said.

Monitoring algorithms to avoid gender bias helps develop more inclusive systems, and events like those organized by Rework offer encouragement through connections, but interest and motivation may come much earlier for young women. Support from family is key. And there are dozens of apps and educational toys that teach kids the basics of coding.

Indeed, University of Cambridge AI researcher Shaona Ghosh insists the initial motivation should come from closer to home. “First and foremost the support, belief and encouragement should come from the parents,” she said. “It’s a huge and important step that no amount of education and opportunity can compensate for.”

There’s no simple way to turn the tide of the sea of dudes and promote women in AI, but it’s in our collective best interest to do so. It takes effort from organizers like Johnson and researchers like Crawford, Higgins, Hadsell, and Redi — but as Ghosh advised, perhaps the most effective effort begins at home with the parents who raise the next generation of computer scientists.



AR startup Blippar added real-time facial recognition to its app



Blippar, the AR (augmented reality) startup aiming to build a visual catalog of every object in the world, has added real-time facial recognition technology to its mobile app, allowing users to create "Augmented Reality Face Profiles."

The new technology enables users to scan or "Blipp" faces — whether in person, in a printed photo, or on television — through their smartphone cameras to unlock a "unique augmented reality experience for people who have a recognized face profile," the company said in a press release.

The feature will only unlock a profile if the face being scanned belongs to a person with an existing AR profile. The company says it has also created profiles for over 70,000 public figures for the launch. Users will eventually be able to set up their own profile via the app's selfie mode, although that hasn't launched yet.

"A recent update to the Blippar app allows people to recognize/'blipp' many public figures from today onwards," Blippar said in a statement. "A very exciting feature for users to create their own face profile in augmented reality will also be launching very soon."

Each user's face profile shows an "augmented reality" halo (which looks like widgets) surrounding their head. The five points of the halo represent links to more information on the user's profile, such as their "AR mood and aura," photos, favourite music, and a "celebrity look-alike" feature.

Users can set their face profiles to public or private.


The new technology follows the launch of "Blipparsphere" earlier this year — a visual browser that uses machine learning to recognise real-world objects.

Ambarish Mitra, cofounder and CEO of Blippar, said in a statement:

“Augmented Reality Face Profiles will change the way we communicate and express ourselves. Our face is our most expressive form of communication and with this release we are allowing this to become digital for the first time. Our facial recognition technology combined with our knowledge graph enables people to express themselves through the things they love, including their hobbies, opinions, key fun facts, and so much more. This is a new, unique and fun way of showing who you are and of learning more about others.”



Microsoft bets on AI (MSFT)



This story was delivered to BI Intelligence Apps and Platforms Briefing subscribers. To learn more and subscribe, please click here.

On Monday, Microsoft announced a new Microsoft Ventures fund dedicated to artificial intelligence (AI) investments, according to TechCrunch.

The fund, part of the company’s investment arm that launched in May, will back startups developing AI technology and includes Element AI, a Montreal-based incubator that helps other companies embrace AI.

The fund further supports Microsoft’s focus on AI. The company has been steadily announcing major initiatives in support of the technology. For example, in September, it announced a major restructuring and formed a new group dedicated to AI products. And in mid-November, it partnered with OpenAI, an AI research nonprofit backed by Elon Musk, to further its AI research and development efforts.   

Investment in AI has ramped up dramatically in the past year, as major tech companies devote significant resources to their AI initiatives. AI is a hot-button topic, largely because of a number of emerging technologies being brought to market, including chatbots, virtual assistants, and AI platforms such as IBM Watson. The use of AI programs is set to rise rapidly across nearly every sector of the economy over the next few years.

The global market for AI systems will reach $70 billion in 2020, up from $14.9 billion in 2014, Bank of America Merrill Lynch predicted last year. These AI systems span a broad category of solutions, including digital assistants like Siri and Amazon’s Alexa, machine-learning algorithms that identify objects in images and detect cybercrime, and software that helps self-driving cars and drones avoid obstacles.
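Those two figures imply a compound annual growth rate, which is easy to sanity-check (the forecast itself is Bank of America Merrill Lynch's; the arithmetic below only checks what it implies):

```python
# Implied compound annual growth rate (CAGR) of the BofA Merrill Lynch
# forecast: $14.9B in 2014 growing to $70B in 2020, i.e. over 6 years.
start_value, end_value = 14.9, 70.0
years = 2020 - 2014

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 29% per year
```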

