- Most companies that are early users of artificial intelligence have one big concern about the technology: cybersecurity.
- That's not an idle concern; many say their AI systems have already been breached, according to a new study from Deloitte.
- Researchers have shown that machine-learning systems can be manipulated and can potentially leak valuable data, the Deloitte analysts said in the report.
Artificial intelligence holds a lot of promise for corporations, but early adopters of the technology also see some big dangers.
Bad AI could cause their companies to make strategic mistakes or engage in unethical practices, expose them to legal liability, and even cause deaths. But the biggest risk early adopters see in the technology has to do with security, according to a new study from consulting firm Deloitte.
That concern is "probably well placed," analysts Jeff Loucks, Tom Davenport, and David Schatsky said in the report. Many of the companies that are testing or implementing AI technologies have already experienced a security breach related to them.
"While any new technology has certain vulnerabilities, the cyber-related liabilities surfacing for certain AI technologies seem particularly vexing," the Deloitte analysts said in the report.
For its study, the firm surveyed some 1,100 executives across 10 industries at US companies that are already testing or using AI in at least one of their business functions.
Many see security as a huge risk
As part of the study, Deloitte asked the executives to rank what they saw as the biggest risks related to the technology. More of them ranked cybersecurity first than any other risk, and more than half ranked it among their top three.
The executives weren't just being nervous Nellies. Many had already experienced the security dangers of AI firsthand. Some 30% of those surveyed said their companies had experienced a security breach related to their AI efforts in the last two years.
The Deloitte study didn't discuss the types of breaches the companies experienced or what their consequences were. But it did note that researchers have discovered various ways hackers could compromise or manipulate AI systems to nefarious ends.
Machine learning systems can be manipulated
Researchers have shown, for example, that machine learning systems can be manipulated into reaching incorrect conclusions when they're intentionally fed bad data, the Deloitte analysts said. Hackers could potentially fool a facial recognition system by feeding it enough photos of an imposter that it learns to recognize that person as the authorized user.
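To make the idea concrete, here is a minimal sketch of that kind of data-poisoning attack, written in Python with scikit-learn. Everything in it, from the nearest-neighbors model to the "authorized user vs. imposter" data, is an illustrative assumption rather than an example from the Deloitte report:

```python
# A minimal, hypothetical sketch of data poisoning using scikit-learn.
# The model, data, and labels are illustrative assumptions, not details
# from the Deloitte report.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean training data: class 1 is the authorized user's face features,
# class 0 is everyone else.
X_clean = np.vstack([rng.normal(-2, 1, (100, 2)),   # class 0: other people
                     rng.normal(2, 1, (100, 2))])   # class 1: authorized user
y_clean = np.array([0] * 100 + [1] * 100)

# The imposter's features look nothing like the authorized user's.
imposter = np.array([[-2.0, -2.0]])

model = KNeighborsClassifier(n_neighbors=5).fit(X_clean, y_clean)
print("Before poisoning:", model.predict(imposter))  # [0] -> rejected

# The attack: inject dozens of near-copies of the imposter's features,
# all mislabeled as "authorized," into the training set.
X_poison = np.vstack([X_clean, imposter + rng.normal(0, 0.1, (50, 2))])
y_poison = np.concatenate([y_clean, np.ones(50, dtype=int)])

poisoned = KNeighborsClassifier(n_neighbors=5).fit(X_poison, y_poison)
print("After poisoning:", poisoned.predict(imposter))  # [1] -> accepted
```

The specific model doesn't matter; the point is that any system that keeps learning from incoming data can be steered by whoever controls enough of that data.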
Such systems could also potentially expose companies' intellectual property, the analysts said. If malicious actors were able to generate numerous interactions with a company's machine learning system, they could potentially use the results to reverse engineer the data on which the system was built.
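Here's a similarly hedged sketch of how that kind of reverse engineering, often called model extraction, can work: an attacker with nothing but query access trains a surrogate model on the victim's answers. Again, the models and data are illustrative assumptions, not details from the report:

```python
# A minimal, hypothetical sketch of model extraction ("model stealing").
# The victim model, data, and query budget are illustrative assumptions,
# not details from the Deloitte report.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# The victim: a proprietary model trained on private data the attacker never sees.
X_private = rng.normal(0, 1, (500, 3))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = DecisionTreeClassifier(max_depth=5).fit(X_private, y_private)

# The attacker only has query access: send inputs, observe predicted labels.
X_queries = rng.normal(0, 1, (2000, 3))
y_observed = victim.predict(X_queries)

# Train a surrogate on the query/response pairs to approximate the victim.
surrogate = LogisticRegression().fit(X_queries, y_observed)

# High agreement on fresh inputs shows how much of the model's behavior
# (and, indirectly, the structure of its training data) leaks through queries.
X_test = rng.normal(0, 1, (1000, 3))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"Surrogate matches victim on {agreement:.0%} of fresh queries")
```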
Because of such dangers, many of those surveyed said they had delayed, halted, or decided not to even begin some planned AI efforts, according to the study.