The State of Deepfakes in Cyber Attacks
Here is a look at what deepfakes are, how they are made, and how AI systems are fighting deepfake cyber attacks.
AI-generated deepfake content is all over the internet today and is getting harder to spot. These fake photos, videos, or audio clips can feature real and fictional people. Hackers and scammers have begun using this technology in elaborate cyber attack campaigns, from phishing attacks to interfering in national security matters. Here is a look at what deepfakes are, how they are made, and how AI systems are fighting deepfake cyber attacks.
What Are Deepfakes?
Deepfakes are heavily doctored, fraudulent video or audio clips that closely match a real person’s appearance, voice, intonation, and speaking patterns. Typically, the deepfake creator will use a machine learning algorithm to train an AI system on numerous video and audio samples. The AI essentially becomes an expert on the subject’s mannerisms, to such a degree that it can create a believable fake clip of the person saying or doing things they have never actually done.
Some deepfakes are harmless, such as fanmade deepfake recreations of scenes from popular films or TV shows. In fact, deepfake technology has even recreated performances by deceased actors in major films. Unfortunately, deepfake technology is increasingly being used as a weapon in cyber attacks.
Deepfake Use in Cyber Attacks
Early in the Russia-Ukraine conflict, a video began circulating online and on live TV showing Ukrainian president Volodymyr Zelenskyy telling Ukrainian troops to surrender and stand down. The video was quickly revealed to be a deepfake, presumably a malicious attempt to cause confusion and disruption in Ukraine.
This is just one example of a deepfake cyber attack. Deepfakes are also being used in large-scale phishing attacks. For instance, an attacker may create a deepfake audio recording of a boss or manager’s voice in order to trick employees into giving away login credentials. Similarly, attackers have used deepfake audio to steal money by authorizing fraudulent wire transfers over the phone.
It is even possible to use deepfakes to hack biometric identity verification methods, such as facial recognition. However, it is worth noting this generally takes significant time and dedication to pull off, so it is not a typical cyber attack strategy.
Remote work may be contributing to the rise in deepfake cyber attacks over recent years. Since the COVID-19 pandemic began in 2020, both the frequency and the cost of cyber attacks have increased worldwide, driven in large part by the surge in workplace technology use that came with remote work.
All those Zoom meetings may actually be a cybersecurity risk. Video conferencing and communication platforms utilizing audio and video chat features — such as Slack or Microsoft Teams — are creating a large amount of video and audio content for millions of people.
While voice and video calls are not usually recorded and stored on these platforms, they certainly can be. Many businesses intentionally record video conferences for employees to play back or review later. Unfortunately, if this data were compromised, a hacker would gain possession of a massive amount of sample footage they could use to create deepfakes.
How AI Creates Deepfakes
AI systems are vital for creating deepfakes. Without deep learning and machine learning algorithms, deepfakes would not be possible. Today, there are many deepfake software programs available online that anyone can download. Most use a similar strategy to generate fake content.
First, the deepfake program needs training. An AI machine learning algorithm is fed sample footage of the subject — “John Doe,” for example. The deepfake creator collects as much sample footage of John Doe as possible, such as recorded video meetings, interviews, or social media posts. The samples don’t need to be related to one another — in fact, a greater variety of footage will generally lead to a more versatile deepfake.
Two AI systems operate together to optimize the deepfake program. The first will learn to recognize John Doe’s face and voice and replicate it in various expressions. These fake images or clips are then sent to a second machine-learning algorithm that compares the fake photos to the original images used to generate them. If the pictures do not pass a certain accuracy threshold, the system discards them.
This two-part structure is what’s known as a Generative Adversarial Network, or GAN. The first algorithm is the “generator,” while the second is the “discriminator.” The idea of a GAN is for the generator to get so good at creating realistic fake images that the discriminator can’t tell the difference between the fakes and the real thing.
Eventually, the GAN can generate deepfake footage of John Doe saying anything the user wants. Once a deepfake AI software program has been successfully trained, it can create deepfake content for almost any subject, given enough sample footage.
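The generator-versus-discriminator loop described above can be sketched in a few dozen lines. The toy below is a minimal, illustrative GAN in plain NumPy: instead of faces, the “real data” is a 1-D Gaussian, and both networks are single linear units so the adversarial training dynamic stays visible. All names, hyperparameters, and the toy distribution are invented for this example; production deepfake GANs use deep convolutional networks and far more data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": a 1-D Gaussian stands in for features of genuine footage.
def real_batch(n):
    return rng.normal(loc=4.0, scale=1.25, size=(n, 1))

# Generator G(z) and discriminator D(x) are single linear units for brevity.
G_w, G_b = rng.normal(size=(1, 1)), np.zeros(1)
D_w, D_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ G_w + G_b           # noise in, fake sample out

def discriminate(x):
    return sigmoid(x @ D_w + D_b)  # estimated probability the sample is real

lr, batch = 0.05, 32
for _ in range(2000):
    # Discriminator step: raise scores on real samples, lower them on fakes.
    z = rng.normal(size=(batch, 1))
    real, fake = real_batch(batch), generate(z)
    d_real, d_fake = discriminate(real), discriminate(fake)
    D_w -= lr * (real.T @ (d_real - 1.0) + fake.T @ d_fake) / batch
    D_b -= lr * ((d_real - 1.0) + d_fake).mean()

    # Generator step: adjust G so the discriminator scores its fakes as real.
    z = rng.normal(size=(batch, 1))
    dx = (discriminate(generate(z)) - 1.0) * D_w[0, 0]  # gradient through D
    G_w -= lr * (z.T @ dx) / batch
    G_b -= lr * dx.mean()

samples = generate(rng.normal(size=(256, 1)))
print("mean of generated samples:", samples.mean())
```

If training goes well, the generated samples drift toward the real distribution’s mean, mirroring how a full-scale GAN’s fakes gradually become indistinguishable from genuine footage. The same adversarial pressure is why GAN outputs keep improving: every fake the discriminator rejects becomes a training signal for the generator.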
How AI Can Defend Against Deepfakes
The capabilities of AI systems in creating deepfakes may sound unnerving. However, cybersecurity experts and researchers also use AI systems to combat deepfake cyber attacks. In fact, machine learning has become a valuable tool for combating cyber attacks over recent years.
For example, a team of researchers at Stanford University developed an AI system that successfully identifies deepfake videos. By closely tracking lip movements and comparing them to the words the person appears to be saying, the AI system can determine whether criminals used lip-syncing software to edit the footage and create a deepfake.
Similarly, researchers at an Amsterdam-based cybersecurity company called Sensity have developed an AI system that can detect deepfake images of faces in real time. For instance, if a user is scrolling through a website, Sensity’s detection software could alert them if a photo on the page appears to be a deepfake. It can even tell users whether a fraudulent picture is GAN-generated or a face swap, along with the confidence level of the prediction.
How to Spot Deepfake Content
It is sometimes possible to spot deepfake content with the naked eye as well. This helps identify potential scams online and fraudulent news media aimed at disinformation.
AI-generated images of fake people or recreations of real people often share a few key characteristics. For example, deepfake photos of fake people often have blurry or indistinct backgrounds, such as a smear of blurred green that could be mistaken for out-of-focus trees or bushes. Additionally, AI-generated faces often have abnormalities on the earlobes or in and around the eyes. These are GAN glitches, like an earring that blurs into the skin of the earlobe or eyes that don’t match upon close inspection.
Vague or unnatural lighting can also be an indicator of deepfake content. The GAN deepfake generator will often try to mimic the lighting it “saw” in training footage, which usually does not match what was in the target video. Pay attention to blinking as well — deepfake videos often include too much or too little blinking and unusual patterns.
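The blinking cue above lends itself to automation. The sketch below is a simple heuristic, not a detector from any published system: it assumes a per-frame “eye openness” signal between 0 (closed) and 1 (open), which in practice would come from a facial-landmark tracker, and flags clips whose blink rate falls outside a typical human range. The 8-30 blinks-per-minute band and the function names are illustrative assumptions.

```python
import numpy as np

def count_blinks(eye_openness, threshold=0.2):
    """Count blink events in a per-frame eye-openness signal (0 = closed, 1 = open)."""
    closed = eye_openness < threshold
    # A blink is a contiguous run of closed frames; count open-to-closed transitions.
    return int(np.sum(closed[1:] & ~closed[:-1]) + (1 if closed[0] else 0))

def blink_rate_suspicious(eye_openness, fps, low=8, high=30):
    """Flag footage whose blinks-per-minute falls outside an assumed human range.

    The 8-30 band is an illustrative assumption, not a published cutoff.
    Returns (is_suspicious, blinks_per_minute).
    """
    minutes = len(eye_openness) / fps / 60.0
    rate = count_blinks(eye_openness) / minutes
    return rate < low or rate > high, rate

# Synthetic 60-second clip at 30 fps: eyes open, with a ~100 ms closure every 4 s.
fps, seconds = 30, 60
signal = np.ones(fps * seconds)
for start in range(0, fps * seconds, fps * 4):  # one blink every 4 seconds
    signal[start:start + 3] = 0.0
suspicious, rate = blink_rate_suspicious(signal, fps)
print(f"blinks/min = {rate:.1f}, suspicious = {suspicious}")
```

On this synthetic clip the rate works out to 15 blinks per minute, inside the assumed human band, so the footage is not flagged; a deepfake that never blinks, or blinks constantly, would fall outside it.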
Staying Ahead of Deepfake Cyber Attacks
There’s no denying deepfakes are a serious cybersecurity threat, but new technology is already emerging to help people stay safe. As these threats become more common and easier to make, everyone has to be more skeptical of what they see online.
If a photo or video looks a little less than natural, it may be AI-generated footage of a real or completely fake person. Researchers are also working hard to develop AI-based tools that can help people spot deepfakes before being fooled. Going forward, everyone’s best bet for staying ahead of deepfake cyber attacks is to leverage AI technology for security rather than crime.
Opinions expressed by DZone contributors are their own.