
The Tom Cruise deepfakes were hard to create. But less sophisticated 'shallowfakes' are already wreaking havoc


[Image: Tom Cruise onstage during the 10th Annual Lumiere Awards at Warner Bros. Studios on January 30, 2019, in Burbank, California. Michael Kovac/Getty Images for Advanced Imaging Society]


The coiffed hair, the squint, the jaw clench, and even the signature cackle — it all looks and sounds virtually indistinguishable from the real Tom Cruise.

But the uncanny lookalikes that went viral on TikTok last month under the handle @deeptomcruise were deepfakes, a collaboration between Belgian visual-effects artist Chris Ume and Tom Cruise impersonator Miles Fisher.

The content was entertaining and harmless, with the fake Cruise performing magic tricks, practicing his golf swing, and indulging in a Bubble Pop. Still, the videos — which have racked up an average of 5.6 million views each — reignited people's fears about the dangers of the most cutting-edge type of fake media.

"Deepfakes seem to tap into a really visceral part of people's minds," Henry Ajder, a UK-based deepfakes expert, told Insider.

"When you watch that Tom Cruise deepfake, you don't need an analogy because you're seeing it with your own two eyes and you're being kind of fooled even though you know it's not real," he said. "Being fooled is a very intimate experience. And if someone is fooled by a deepfake, it makes them sit up and pay attention."


The good news: it's really hard to make a deepfake this convincing. Ume spent two months training the AI-powered tool that generated the videos and 24 hours editing each minute-long clip, and he still needed a talented human impersonator to mimic the hair, body shape, mannerisms, and voice, according to The New York Times.

The bad news: it won't stay that hard for long. Major advances in the technology in recent years have unleashed a wave of apps and free tools that let people with few skills or resources create increasingly convincing deepfakes.

Nina Schick, a deepfake expert and former advisor to Joe Biden, told Insider this "rapid commodification of the technology" is already wreaking havoc.

"Are you just really concerned about the high-fidelity side of this? Absolutely not," Schick said, adding that working at the intersection of geopolitics and technology has taught her that "it doesn't have to be terribly sophisticated for it to be effective and do damage."

The Defense Advanced Research Projects Agency (DARPA) is well aware of this diverse landscape, and its Media Forensics (MediFor) team is working alongside private-sector researchers to develop tools that can detect manipulated media, including deepfakes as well as cheapfakes and shallowfakes.

As part of its research, DARPA's MediFor team mapped out the different types of synthetic media, along with the level of skill and resources an individual, group, or adversarial country would need to create each of them.

[Graphic: DARPA MediFor threat landscape]

Hollywood-level productions, like those in "Rogue One: A Star Wars Story" or "The Irishman," require lots of resources and skill to create, even though they typically aren't AI-powered (though Disney is experimenting with deepfakes). On the other end of the scale, bad actors with little training have used simple video-editing techniques to make House Speaker Nancy Pelosi appear drunk and to incite violence in Ivory Coast, South Sudan, Kenya, and Burma.

Schick said the Facebook-fueled genocide against Rohingya Muslims also relied mostly on these so-called "cheapfakes" and "shallowfakes": synthetic or manipulated media altered using less advanced, non-AI tools.

But deepfakes aren't just being used to spread political misinformation, and experts told Insider ordinary people may have the most to lose if they become a target.

Last month, a woman was arrested in Pennsylvania and charged with cyber harassment on suspicion of making deepfake videos of teen cheerleaders naked and smoking, in an attempt to get them kicked off her daughter's squad.

"It's almost certain that we're going to see some kind of porn version of this app," Schick said. In a recent op-ed in Wired, she and Ajder wrote about a bot Ajder helped discover on Telegram that turned 100,000 user-provided photos of women and underage children into deepfake porn — and how app developers need to take proactive steps to prevent this kind of abuse.

Experts told Insider they're particularly concerned about these types of cases because the victims often lack the money and status to set the record straight.

"The celebrity porn [deepfakes] have already come out, but they have the resources to protect themselves ... the PR team, the legal team ... millions of supporters," Schick said. "What about everyone else?"

As with most new technologies, from facial recognition to social media to COVID-19 vaccines, women, people of color, and other historically marginalized groups tend to be disproportionately harmed by the abuse and bias stemming from their use.

To counter the threat posed by deepfakes, experts say society needs a multipronged approach that includes government regulation, proactive steps by technology and social media companies, and public education about how to think critically and navigate our constantly evolving information ecosystem.
