The Deep Trouble with Deepfakes
Deepfake deceptions are fooling people in our expanding and ever-improving digital world. Imagine arriving at work one morning to discover all employees have received an important video announcement from the CEO and are scrambling to comply with the instructions it contains. Their responsiveness would be impressive if not for one thing: The CEO never recorded or sent the video, and now must somehow undo the resulting damage.
Improvements in artificial intelligence (AI) and machine learning (ML) are making such flawless deepfake deceptions possible. These fake videos and audio clips have the potential to undermine security at every level, from small businesses to global governments.

How Deepfakes Work
A deepfake is a video or audio clip made by employing AI and ML to create a convincing likeness of a person saying or doing things he or she never actually said or did. The deception plays on the human tendency to believe what is seen and can be very effective in making the content of a video appear genuine.
These videos aren’t simply fakes created by hackers skilled in forgery. Deepfakes rely on a form of machine learning in which two networks are fed the same data sets and pitted against each other in a back-and-forth battle of generation and detection. Known as generative adversarial networks (GANs), these systems consist of one network creating fakes and another evaluating the fakes for flaws. The data set consists of hundreds or thousands of images and videos of the person to be imitated, and a forgery is considered good enough when the detection network no longer rejects the results.
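To make the adversarial loop concrete, here is a minimal sketch of a GAN in PyTorch. It is purely illustrative: it models a one-dimensional toy distribution rather than faces, and the network sizes, learning rates and step count are arbitrary assumptions, not values from any real deepfake pipeline.

```python
# Toy GAN: a generator learns to produce samples the discriminator
# can no longer distinguish from "real" data. Real deepfake systems
# apply the same adversarial loop to images at a vastly larger scale.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                # produces a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # probability the sample is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)
    fake = generator(torch.randn(64, latent_dim))

    # Train the detector: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the forger: try to make the detector call its fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Training stops, in effect, when the discriminator can no longer reliably reject the generator’s output, which mirrors the “good enough” criterion described above.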
Deepfake Deceptions
Deepfake audio and video are created with AI algorithms that manipulate or synthesize footage and speech to produce realistic yet false content. The risks associated with deepfake deceptions include:
- Misinformation and disinformation: Deepfakes can be used to spread false information and manipulate public opinion by making it appear as if someone said something they didn’t.
- Reputational damage: Deepfakes can be used to defame individuals by making them appear to say something controversial or damaging.
- Privacy invasion: Deepfakes can be used to invade the privacy of individuals by synthesizing audio or video content that appears to feature them but does not.
- Psychological harm: Deepfakes can cause psychological harm to individuals who are portrayed in false or misleading content.
Deepfakes have the potential to cause harm and undermine trust in information and media, so it’s important to approach all content with a healthy dose of skepticism.
Artificial Intelligence Factor
Artificial Intelligence is a key component in the creation of deepfakes. AI algorithms are used to analyze and manipulate audio and video content to create realistic yet false depictions of individuals. The following are some ways in which AI contributes to deepfakes:
- Image and speech synthesis: AI algorithms such as GANs are used to generate synthetic images and speech that are almost indistinguishable from the real thing.
- Face and voice recognition: AI algorithms are used to analyze and manipulate face and voice recognition data to swap the faces or voices of individuals in audio and video content.
- Machine learning: AI algorithms are trained on large amounts of data to learn patterns in facial movements, speech patterns, and other features that can be used to manipulate audio and video content.
AI plays a critical role in deepfakes by enabling the production of realistic and highly convincing false audio and video content. As AI technology continues to advance, the quality and realism of deepfakes are likely to improve, making it even more important to be aware of their potential risks.
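As one concrete example of the recognition step, the open-source face_recognition library (a wrapper around dlib) reduces a face to a 128-dimensional embedding that can be compared across images. The sketch below uses hypothetical file names; the 0.6 threshold is dlib’s conventional default, and real systems combine many such signals.

```python
# Compare the face in a suspect frame against a trusted reference
# photo using 128-dimensional face embeddings (dlib via the
# face_recognition library). File paths are placeholders.
import face_recognition

reference = face_recognition.load_image_file("ceo_reference.jpg")  # hypothetical path
suspect = face_recognition.load_image_file("suspect_frame.jpg")    # hypothetical path

ref_encodings = face_recognition.face_encodings(reference)
sus_encodings = face_recognition.face_encodings(suspect)

if ref_encodings and sus_encodings:
    # Euclidean distance between embeddings; dlib's conventional
    # match threshold is roughly 0.6 (lower means more similar).
    distance = face_recognition.face_distance([ref_encodings[0]], sus_encodings[0])[0]
    verdict = "likely the same face" if distance < 0.6 else "likely a different face"
    print(f"Embedding distance: {distance:.3f} ({verdict})")
else:
    print("No face detected in one of the images.")
```

The same embedding machinery cuts both ways: deepfake pipelines use it to align and swap faces, while detectors use it to check whether a face in a video is consistent with known genuine footage.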
Hackers and Malicious AI
When deepfakes first appeared, the technology was used mostly for novelty clips and fake pornographic videos. The software to produce such videos is now readily available to everyday users, however, making it simple for hackers to employ deepfake tactics and use realistic false content to manipulate their targets.
Deepfakes are prime candidates for viral status and can spread rapidly across social media. Because fake rumors can take as long as 14 hours to be recognized and debunked, a well-produced deepfake could become entrenched in the public mind as truth long before the deception is detected. Hackers can take advantage of the popularity of viral fakes to spread videos containing malware or craft messages designed to entice users to click on links as part of a phishing attack.
Deepfakes may also be used to draw people to websites in which malicious code has been embedded, turning their computers into tools for mining cryptocurrency. Known as cryptojacking, this kind of attack can also be launched on mobile devices and run undetected in the background as users go about their daily tasks.
Deepfake Deceptions and Access Control
Deepfake technology is steadily approaching the point where fakes are difficult to distinguish from genuine footage, and rapid advances in AI and ML mean scenarios like the CEO video described above can no longer be relegated to the realm of science fiction. Using deepfakes, hackers could trick employees into giving away a great deal of information, including access credentials, financial records, tax documents, customer profiles and proprietary company data.
Because GANs require a significant number of images to create realistic deepfakes, this kind of attack isn’t likely to become the norm overnight. However, the internet in general and social media in particular provide a wealth of user-posted pictures and videos that could theoretically be mined for the data sets necessary to train GANs to produce convincing results.
Employees tricked by deepfakes or those who indulge in viral videos on company time could easily open the door for hackers to access business networks and fly under the radar or launch large-scale attacks. Such a prevalent threat to access control and compliance requires an updated approach to security.
How to Identify Deepfakes
Identifying deepfakes can be challenging, as they are designed to look and sound realistic. However, there are some tell-tale signs to look for that can help you determine if an audio or video is a deepfake:
- Audio-visual inconsistencies: Look for discrepancies between what you hear and what you see in the audio or video. For example, the lips might not match the words being spoken, or the facial expressions might not match the emotions being expressed.
- Unnatural movements: Look for unnatural movements in the video, such as stiff or jerky movements, or movements that don’t match the audio.
- Artificial artifacts: Look for artifacts, such as blurring or pixelation, that suggest the audio or video has been manipulated; a crude automated check is sketched after this list.
- Background inconsistencies: Check for inconsistencies in the background of the video, such as objects appearing or disappearing, or changes in lighting that don’t match the rest of the scene.
- Metadata analysis: Analyze the metadata of the audio or video file to determine if it was edited or manipulated.
- Use of specialized software: Specialized programs can analyze audio and video files for signs of manipulation and help detect deepfakes.
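As a small illustration of automated artifact checking, the sketch below flags unusually smooth frames using the variance of the Laplacian, a standard blur measure in OpenCV. This is a crude heuristic, not a deepfake detector: the file name and threshold are assumptions, and any threshold would need calibrating against known-genuine footage.

```python
# Flag suspiciously blurry frames in a video. Heavy smoothing or
# blur around blended regions is one possible (far from conclusive)
# sign of manipulation.
import cv2

VIDEO_PATH = "suspect_video.mp4"  # hypothetical file
BLUR_THRESHOLD = 100.0            # illustrative; calibrate on real footage

cap = cv2.VideoCapture(VIDEO_PATH)
frame_index, flagged = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry
    if sharpness < BLUR_THRESHOLD:
        flagged.append(frame_index)
    frame_index += 1

cap.release()
print(f"Flagged {len(flagged)} of {frame_index} frames as unusually smooth.")
```

For metadata analysis, a tool such as ffprobe (part of FFmpeg) can dump a file’s format and stream information, which sometimes reveals traces of re-encoding or editing tools.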
Keep in mind that deepfakes are constantly improving and new techniques are being developed, so it’s important to approach all audio and video content with a healthy dose of skepticism and to be aware of the latest methods for identifying deepfakes.
Preparing for Deepfake Security Threats
To get your network and your employees ready to stand up against the potential risks posed by deepfake videos:
- Develop and deploy ongoing security training
- Monitor employee activities on company devices
- Update your BYOD policy to prevent infected devices from spreading malware to your network
- Invest in security software with deep learning capabilities to predictively detect malware threats
Combining employee training with machine learning software minimizes the likelihood of human error and leverages the power of artificial neural networks to protect your company from sophisticated threats and deepfake deceptions.
The rise of deepfakes in a world where fake news is already a concern signals a future in which it could be nearly impossible to trust anything you read, hear or see. Detecting falsehoods requires an updated approach to security, including employing the same technologies used to create deepfakes. The future of security may boil down to beating hackers at their own game, and learning to identify and outsmart threats launched using fake video content could be just the start of a new wave of necessary security upgrades.