Understanding Deepfake Technology After Debunked Video of Zelenskyy

Key Takeaways

  • A deepfake video of Zelenskyy surrendering was quickly debunked, but it marks the first weaponized use of deepfakes during an armed conflict.

  • Potential malicious uses of deepfakes include fraud, inciting acts of violence, and sowing political unrest.

  • Deepfakes are getting attention from lawmakers and technologists, who are finding ways to stop the spread of misinformation.

The Zelenskyy Deepfake

A deepfake video of Volodymyr Zelenskyy circulated on social media in March, during the chaos of the Russian invasion of Ukraine. The video shows Zelenskyy surrendering and telling his soldiers to lay down their arms. The video, which is about a minute long, is actually considered very crude for a deepfake.

Luckily, the deepfake was quickly debunked and, as far as we know, didn’t have much impact on war operations. The poor execution of the deepfake made it easier to debunk, but the Ukrainian president had also warned the public beforehand that such videos were a likely tactic from enemy hackers. Zelenskyy was also able to quickly release a statement confirming that the video was indeed fake.

While this specific misinformation attempt was a flop for the hackers, it was the first weaponized use of deepfakes during an armed conflict. The rest of this blog dives into the power and uses of this creepy new form of deception, and the defenses against it.

How are deepfakes made?

Fake images and videos are nothing new. Photoshopped pictures and edited videos have been around for decades. Deepfakes are a new breed of machine-made fakes that may eventually be impossible to distinguish from the real thing.

The “deep” in deepfake refers to “deep learning”. Deep learning is a method of teaching computers and software that is inspired by the way organic brains learn: a system processes a task over and over again, sometimes entirely unsupervised by humans, until it learns the best way to turn certain inputs into a desired output. In deepfakes, that output is one person’s face swapped onto another’s, in ways a human editor might not think of or would be unable to detect.
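To make that concrete, here is a heavily simplified sketch of the shared-encoder, twin-decoder autoencoder design that early face-swap tools popularized. Everything here is an illustrative assumption: the layer sizes, the 64x64 input resolution, the random stand-in data, and the single training step.

```python
# A minimal sketch of the shared-encoder / twin-decoder deepfake idea.
# Shapes, layer sizes, and the one-step "training loop" are illustrative
# assumptions, not a production pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared across both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): each decoder learns to reconstruct its own person's face.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real cropped face batches
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()  # an optimizer step would follow in a real training loop

# The swap: encode person A's face, then decode with B's decoder to render
# A's pose and expression wearing B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The trick is that the shared encoder learns pose and expression features common to both faces, while each decoder learns to render one specific identity; feeding person A’s encoding into person B’s decoder produces the swap.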

Machine learning algorithms need a lot of data to train on, so creating a deepfake requires a lot of facial data in the form of pictures and videos. That’s why the target is usually a well-known, well-photographed person like Volodymyr Zelenskyy; as a regular person, you are a far less likely target.

How convincing are they?

We’ll let you decide by looking at a few examples. The most recent Star Wars film featured a deepfake-style recreation of Carrie Fisher (Princess Leia), since the actress passed away before the movie was filmed. Another popular example of a convincing deepfake is a fake public service announcement from Barack Obama. Perhaps the most realistic deepfakes come from a TikTok account called deeptomcruise, which features deepfakes of actor Tom Cruise. Whether or not you think these videos could fool you, it is undeniable that the technology has come a long way from Photoshop and is on its way to being undetectable.

What is the danger?

Potential malicious uses of deepfakes include fraud, inciting acts of violence, and sowing political unrest. Deepfakes could also spell trouble for individuals and companies that use facial or voice recognition as part of multi-factor authentication.

In March 2019, the CEO of a UK energy provider received a phone call from someone who sounded exactly like his boss. The call was so convincing that he ended up transferring $243,000 to a “Hungarian supplier,” a bank account that actually belonged to a scammer. This was a voice deepfake attack.

Advantage goes to the attacker

As of now, there is no reliable automated way to detect deepfake videos. Facebook took the Zelenskyy deepfake down quickly, but that was done manually by humans. Furthermore, a deepfake must be proven false, which places the burden on the target: the person depicted has to take the time to publicly confirm that the video is a fake. In a future where indistinguishable deepfakes flood the internet, the sheer burden of disproving everything could be overwhelming.

One idea currently gaining traction is finding a way to verify that images and videos are authentic. This is based on the notion that it may be easier to label information that is truthful than it is to spot every single deepfake. The company Truepic is developing content attribution standards, a digital authentication technology that works like an “organic” label on fruit at the grocery store.
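The sketch below shows the general “sign at capture, verify at view” idea behind such provenance schemes. It is not Truepic’s actual protocol; the Ed25519 keypair, the placeholder image bytes, and the workflow are all illustrative assumptions.

```python
# A minimal sketch of cryptographic content provenance, assuming an
# Ed25519 keypair held by the capture device. Illustrates the general
# idea only, not Truepic's product or any specific standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At capture time: the camera signs the image bytes with its private key.
device_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw pixel data from the camera sensor..."  # placeholder
signature = device_key.sign(image_bytes)

# At view time: anyone with the device's public key can check authenticity.
public_key = device_key.public_key()
try:
    public_key.verify(signature, image_bytes)  # untouched image: verifies
    print("Image verified as authentic.")
except InvalidSignature:
    print("Image was altered after capture.")

# Any edit, such as a deepfaked face swap, breaks the signature.
tampered = image_bytes.replace(b"raw", b"fake")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampered image rejected.")
```

The design point: instead of trying to spot every fake, the burden flips to proving authenticity, and any post-capture edit, deepfake or otherwise, invalidates the signature.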

Laws fighting deepfakes

Deepfakes are getting attention from legislators. The National Defense Authorization Act of 2021 directs the Department of Homeland Security to produce assessments on the technology behind deepfakes, their dissemination, and the ways they cause harm. The NDAA also directs the Pentagon to conduct assessments on the potential harms caused by deepfakes depicting members of the U.S. military.

States are stepping up their attention to the issue as well. California, Virginia, Maryland, and Texas have all passed deepfake legislation in the last two years meant to provide victims with avenues for recourse.

Conclusion

Deepfakes are a misinformation nightmare with the potential to enable major crimes and incite public unrest. People are working on both the technical and legal sides of stopping malicious uses of this technology. But as the Zelenskyy deepfake shows, it is also every person’s responsibility to think critically about what they see and hear online.