
Most companies using AI say their No. 1 fear is hackers hijacking the technology, according to a new survey that found attacks are already happening

  • Most companies that are early users of artificial intelligence have one big concern about the technology: cybersecurity.
  • That's not an idle concern; many say their AI systems have already been breached, according to a new study from Deloitte.
  • Researchers have shown that machine-learning systems can be manipulated and can potentially leak valuable data, analysts from Deloitte said in a new report.

Artificial intelligence holds a lot of promise for corporations, but early adopters of the technology also see some big dangers.

Bad AI could cause their companies to make strategic mistakes, engage in unethical practices, incur legal liabilities, and even cause deaths. But the biggest risk early adopters see in the technology has to do with security, according to a new study from the consulting firm Deloitte.

That concern is "probably well placed," analysts Jeff Loucks, Tom Davenport, and David Schatsky said in the report. Many of the companies already testing or implementing AI technologies have experienced a security breach related to them.

"While any new technology has certain vulnerabilities, the cyber-related liabilities surfacing for certain AI technologies seem particularly vexing," the Deloitte analysts said in the report.

For its study, the firm surveyed some 1,100 executives at US companies across 10 industries, all of them already testing or using AI in at least one of their business functions.

Many see security as a huge risk

As part of the study, Deloitte asked the executives to rank what they saw as the biggest risks related to the technology. Far more ranked cybersecurity as their top risk than any other option, and more than half of respondents placed it in their top three.

[Chart: top risks ranked by early adopters of artificial intelligence, per Deloitte]

The executives weren't just being nervous Nellies. Many had already experienced the security dangers of AI firsthand: some 30% of those surveyed said their companies had suffered a security breach related to their AI efforts in the last two years.

The Deloitte study didn't discuss the types of breaches that the companies experienced or what the consequences of them were. But it did note that researchers have discovered various ways that hackers could compromise or manipulate AI systems to nefarious ends.

Machine learning systems can be manipulated

For example, researchers have shown that machine-learning systems can be manipulated into reaching incorrect conclusions when they're intentionally fed bad data, the Deloitte analysts said. Hackers could potentially fool a facial-recognition system by feeding it enough photos of an imposter to get it to recognize that person as the authorized user.
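
To make that data-poisoning scenario concrete, here is a minimal, hypothetical sketch. It is not from the Deloitte report: the numeric vectors merely stand in for face embeddings, and the toy classifier below is just one way to illustrate how mislabeled training samples can make a recognition system start accepting an imposter.

    # Hypothetical data-poisoning sketch: a toy "face recognition" classifier.
    # The 8-number vectors stand in for face embeddings; names and numbers
    # here are invented for illustration, not taken from the Deloitte report.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    # Clean training data: embeddings for the authorized user (label 1)
    # and for other people (label 0).
    authorized = rng.normal(loc=2.0, scale=0.5, size=(50, 8))
    others = rng.normal(loc=-2.0, scale=0.5, size=(50, 8))
    X_clean = np.vstack([authorized, others])
    y_clean = np.array([1] * 50 + [0] * 50)

    # The imposter's embeddings form their own cluster, far from the authorized user.
    imposter_train = rng.normal(loc=-1.0, scale=0.5, size=(30, 8))
    imposter_test = rng.normal(loc=-1.0, scale=0.5, size=(20, 8))

    clean_model = KNeighborsClassifier(n_neighbors=3).fit(X_clean, y_clean)
    print("clean model accepts imposter:",
          clean_model.predict(imposter_test).mean())      # typically ~0.0 (rejected)

    # The attack: slip the imposter's photos into the training set,
    # labeled as the authorized user.
    X_poisoned = np.vstack([X_clean, imposter_train])
    y_poisoned = np.concatenate([y_clean, np.ones(30, dtype=int)])

    poisoned_model = KNeighborsClassifier(n_neighbors=3).fit(X_poisoned, y_poisoned)
    print("poisoned model accepts imposter:",
          poisoned_model.predict(imposter_test).mean())   # typically ~1.0 (accepted)

Nothing about the model has to be "hacked" directly in this toy example; corrupting the training data alone changes who the system accepts.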

Such systems could also potentially expose companies' intellectual property, the analysts said. If malicious actors were able to generate numerous interactions with a company's machine-learning system, they could use the results to reverse engineer the data on which the system was built.
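
That query-based risk can be sketched the same way. In the hypothetical example below, an attacker who can only call a model's prediction API labels random inputs with its answers and trains a surrogate that ends up mimicking the proprietary model; related model-inversion techniques can go further and recover properties of the training data itself.

    # Hypothetical model-extraction sketch: a "proprietary" model is cloned
    # purely by querying its prediction API. Everything here is invented
    # for illustration, not taken from the Deloitte report.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # The company's model, trained on data the attacker never sees.
    X_private = rng.normal(size=(500, 5))
    y_private = (X_private @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) > 0).astype(int)
    victim = LogisticRegression(max_iter=1000).fit(X_private, y_private)

    # The attacker generates arbitrary queries and records the API's answers.
    X_queries = rng.normal(size=(2000, 5))
    y_answers = victim.predict(X_queries)

    # A surrogate trained only on those query/answer pairs approximates the victim.
    surrogate = LogisticRegression(max_iter=1000).fit(X_queries, y_answers)

    X_fresh = rng.normal(size=(1000, 5))
    agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
    print(f"surrogate agrees with the victim on {agreement:.1%} of fresh inputs")

The surrogate never sees the private training data, yet it reproduces the victim model's behavior on most new inputs, which is the sense in which heavy querying can leak what a system was built on.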

Because of such dangers, many of those surveyed said they had delayed, halted, or decided not to even begin some planned AI efforts, according to the study.

[Chart: how early AI adopters are responding to cybersecurity concerns, from Deloitte's October 2018 report]
