Phishing is perhaps the most persistent cybersecurity threat, and it’s only getting worse. As long as people are people, they’ll make mistakes, so targeting human weaknesses is a surefire strategy for any cybercriminal. While that hasn’t changed over the years, new technology has led to a far more sinister version of these threats — deepfake phishing.
Deepfakes use deep learning algorithms to generate fake content that looks remarkably like the real thing. Falsified videos of a public figure doing or saying something they never said or did are common examples. Deepfake phishing uses this AI-generated content to craft more believable phishing attempts.
In one widely reported instance, the CEO of an energy enterprise was tricked into transferring roughly $243,000 after attackers used AI-generated audio to mimic the voice of his parent company's chief executive over the phone.
These attacks are dangerous because deepfakes are difficult to distinguish from reality. They may also lack telltale signs of phishing — like spelling errors or suspicious links — because they come in the form of audio or video content. As AI improves, they’ll only become more convincing, too.
Deepfake phishing isn’t just dangerous — it’s growing at an alarming rate. Industry reports have documented a sharp year-over-year rise in deepfake-related fraud attempts, and security researchers expect that trend to continue.
This spike in deepfake phishing likely stems from deep learning models becoming more accessible. These advanced AI algorithms used to require high-level data science experience or a large budget. Now, plenty of free or low-cost advanced AI models are available that require little to no coding knowledge to use.
This same trend will likely increase deepfake phishing attempts in the future. Generative AI is more accessible than ever. While this has many positive implications, it also means it’s becoming increasingly easy for cybercriminals to craft convincing AI-generated fake content.
Email phishing already costs U.S. businesses billions of dollars every year, and deepfakes threaten to push those losses higher by making scams far harder to detect.
While deepfake phishing is threatening, organizations can protect against it. The following steps will help businesses and their employees stay safe from these growing threats.
First, enterprises need to secure access to all internal accounts. Deepfake phishing is most effective when it comes from a real, trusted, but compromised account. Consequently, making it harder to break into them will undermine these attacks.
Multi-factor authentication (MFA) is essential — and the kind of MFA people use matters, too. Security experts have warned that deepfakes can defeat biometric factors such as voice and facial recognition, so businesses should favor phishing-resistant options like hardware security keys or one-time codes instead.
As with regular phishing, employee training is also crucial. While deepfakes can be hard to spot, there are some common tells. Visual errors like juddering or blurring are common with deepfakes, as are strange audio cues like distorted voices. Teaching employees these signs and warning them of deepfake phishing will make them less likely to fall for it.
Even with this training, people won’t be able to spot deepfakes with 100% accuracy. Consequently, it’s also important to stress that everyone should err on the side of caution. As a rule of thumb, if something seems suspicious or even just a little off, verify it before trusting it.
As deepfake phishing becomes more popular, fighting fire with fire — or AI with AI, more accurately — will become more important. The same technology that makes deepfakes possible can help spot them, and several detection models are already available. Alternatively, businesses with extensive AI experience could build their own.
Detection models can outperform humans in spotting deepfakes, but no model catches everything. Detection accuracy often drops on content that differs from what a model was trained on, so these tools should support human judgment rather than replace it.
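Because no detector is perfectly reliable, one sensible pattern is to treat its output as a triage signal rather than a verdict: auto-block only high-confidence detections and route the uncertain middle band to a human reviewer. The sketch below assumes a hypothetical detector that returns a deepfake probability score; the threshold values are illustrative and would need tuning against an organization's tolerance for false positives.

```python
def triage(deepfake_score: float,
           block_above: float = 0.9,
           review_above: float = 0.5) -> str:
    """Route content based on a detector's deepfake probability score.

    Scores at or above `block_above` are blocked outright; scores in
    the uncertain middle band go to human review; everything else is
    allowed. Thresholds here are illustrative assumptions, not
    recommendations.
    """
    if not 0.0 <= deepfake_score <= 1.0:
        raise ValueError("score must be a probability in [0, 1]")
    if deepfake_score >= block_above:
        return "block"
    if deepfake_score >= review_above:
        return "human_review"
    return "allow"
```

The design choice here is that a detector mistake in the middle band costs only reviewer time, not a missed attack or a wrongly blocked legitimate message.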
Because no human or AI model can spot deepfakes all the time, businesses need second and third lines of defense. Most importantly, workflow or technical stops should prevent one mistake from jeopardizing the organization’s security. Requiring multiple authorization steps for large financial transfers is a great example.
Granting someone access to sensitive data or sending a large enough amount of money should require multiple people or authentication measures. These stops may hinder efficiency, but they give employees time to think about their actions. Involving more people and systems will also improve the company’s chances of spotting a deepfake attack.
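The dual-authorization stop described above can be sketched as a simple workflow rule: transfers over a policy threshold are not executable until two distinct people have signed off. The threshold, names, and `Transfer` type below are illustrative assumptions, not a real payments API.

```python
from dataclasses import dataclass, field

# Assumed policy limit: transfers at or above this amount (in dollars)
# require two distinct approvers instead of one.
LARGE_TRANSFER_THRESHOLD = 10_000


@dataclass
class Transfer:
    amount: float
    recipient: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # A set means the same person approving twice still counts once,
        # so one compromised or deceived employee cannot self-authorize.
        self.approvals.add(approver)

    def is_authorized(self) -> bool:
        required = 2 if self.amount >= LARGE_TRANSFER_THRESHOLD else 1
        return len(self.approvals) >= required
```

Even if a deepfake call convinces one employee to approve a fraudulent transfer, the payment stalls until a second person independently reviews it — which is exactly the pause and extra scrutiny the workflow is meant to create.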
Finally, brands and security experts should stay on top of deepfake and cybercrime trends. Cybercrime evolves constantly, and AI advances faster than many other technologies, so it’s easy for some defenses or best security practices to become outdated.
New generative AI tools appear constantly, and each one can change what convincing fake content looks like. Regularly reviewing threat intelligence, reassessing defenses and refreshing training materials will help security teams keep pace.
Phishing may not be anything new, but new technologies like deepfakes add a dangerous dimension. As exciting as AI is, it’s also important to recognize how cybercriminals can take advantage of the same tools corporations do. Failing to evolve cybersecurity practices alongside new technology could leave organizations vulnerable before they realize it.
Awareness and training remain essential steps in preventing phishing attacks. When employees know they should look out for deepfakes, they’ll be less likely to fall for them. Emphasizing this human element while frequently adjusting technical defenses will keep companies safe despite these advancing threats.