- Deepfakes, videos manipulated with AI to make someone appear to say or do something they haven't actually said or done, have become a real concern, especially when it comes to spreading misinformation ahead of the 2020 presidential election.
- The advancements in deepfake technology were demonstrated this week at an MIT tech conference, where the tech was used to stage a real-time interview with Russian President Vladimir Putin.
- Deepfake artist Hao Li, who created the Putin deepfake, said at the conference that deepfakes could be "perfect and virtually undetectable" within a few years, but we're not quite there yet.
A recent tech conference held at MIT had an unexpected special guest make an appearance: Russian President Vladimir Putin.
Of course, it wasn't actually Putin who appeared on-screen at the EmTech Conference, hosted earlier this week at the embattled, Jeffrey Epstein-linked MIT Media Lab. The Putin figure on-stage is, pretty obviously, a deepfake: an artificial intelligence-manipulated video that can make someone appear to say or do something they haven't actually said or done. Deepfakes have been used to show a main "Game of Thrones" character seemingly apologize for the show's disappointing final season, and to show Facebook CEO Mark Zuckerberg appearing to admit to controlling "billions of people's stolen data."
The Putin lookalike on-screen is glitchy and has a full head of hair (Putin is balding), and the person appearing on-stage with him doesn't really try to hide the fact that he's really just interviewing himself:
This is the deepfake of @glichfield interviewing Vladimir Putin (wink wink nudge nudge). #EmTechMIT pic.twitter.com/PHoFV2iTPH
— MIT Technology Review (@techreview) September 18, 2019
However, the point of the Putin deepfake wasn't necessarily to trick people into believing the Russian president was on stage. The developer behind it, Hao Li, told the MIT Technology Review that the Putin cameo was meant to offer a glimpse into the current state of deepfake technology, which he said is "developing more rapidly than I thought."
Li predicted that "perfect and virtually undetectable" deepfakes are only "a few years" away.
"Our guess that in two to three years, [deepfake technology] is going to be perfect," Li told the MIT Technology Review. "There will be no way to tell if it's real or not, so we have to take a different approach."
As the glitchy Putin appearance shows, real-time deepfakes aren't yet fully believable. However, the tech is advancing quickly: One example is the Chinese deepfake app Zao, which lets people superimpose their faces onto those of celebrities in strikingly convincing face-swaps.
Advances in AI have made deepfakes more believable, and it's now even more difficult to distinguish real videos from doctored ones. These concerns have led Facebook to pledge $10 million toward research on detecting and combating deepfakes.
Additionally, federal lawmakers have caught on to the potential dangers of deepfakes, and even held a hearing in June about "the national security threats posed by AI-enabled fake content." AI experts have also raised concerns that deepfakes could play a role in the 2020 presidential election.