Deepfake Phishing Grew by 3,000% in 2023 — And It's Just Beginning

by Zac Amos, February 21st, 2024

Too Long; Didn't Read

Deepfake phishing uses AI-generated content to craft more believable phishing attempts — and these attempts rose by 3,000% in 2023 alone. To protect against deepfake phishing, organizations should secure account access, train employees, fight AI with AI, impose failsafes, and monitor evolving threats.

Phishing is perhaps the most persistent cybersecurity threat, and it’s only getting worse. As long as people are people, they’ll make mistakes, so targeting human weaknesses is a surefire strategy for any cybercriminal. While that hasn’t changed over the years, new technology has led to a far more sinister version of these threats — deepfake phishing.

What Is Deepfake Phishing?

Deepfakes use deep learning algorithms to generate fake content that looks remarkably like the real thing. Falsified videos of a public figure doing or saying something they never said or did are common examples. Deepfake phishing uses this AI-generated content to craft more believable phishing attempts.

In one instance, the CEO of an energy enterprise sent €220,000 to a "supplier" after getting a call from what he believed was the parent company's chief executive requesting the transfer. However, the real executive never ordered it, and the supplier's account belonged to a cybercriminal. The call turned out to be a deepfake of the executive's voice.

These attacks are dangerous because deepfakes are difficult to distinguish from reality. They may also lack telltale signs of phishing — like spelling errors or suspicious links — because they come in the form of audio or video content. As AI improves, they’ll only become more convincing, too.

The Rise of Deepfake Phishing

Deepfake phishing isn’t just dangerous — it’s growing at an alarming rate. According to one report, deepfake fraud attempts rose by 3,000% in 2023. The same research also found cybercriminals are using deepfake videos and pictures to get past biometric security, which can lead to account compromise attacks.

This spike in deepfake phishing likely stems from deep learning models becoming more accessible. These advanced AI algorithms used to require high-level data science experience or a large budget. Now, plenty of free or low-cost advanced AI models are available that require little to no coding knowledge to use.

This same trend will likely increase deepfake phishing attempts in the future. Generative AI is more accessible than ever. While this has many positive implications, it also means it’s becoming increasingly easy for cybercriminals to craft convincing AI-generated fake content.

Email phishing already costs U.S. businesses an average of $4.91 million per breach, even without widespread deepfake threats. It's only a matter of time before more criminals apply deepfakes to this lucrative and relatively easy attack vector.

How to Protect Against Deepfake Phishing

While deepfake phishing is threatening, organizations can protect against it. The following steps will help businesses and their employees stay safe from these growing threats.

Secure Account Access

First, enterprises need to secure access to all internal accounts. Deepfake phishing is most effective when it comes from a real, trusted, but compromised account. Consequently, making it harder to break into them will undermine these attacks.

Multi-factor authentication (MFA) is essential — and the kind of MFA people use matters, too. Security experts have warned that deepfakes can fool facial and voice recognition tools, so biometrics are no longer the most secure option. App-based one-time codes and hardware keys are safer MFA options in light of deepfake threats.
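App-based one-time codes typically follow the TOTP standard (RFC 6238): the authenticator app and the server share a secret and each derive a short-lived code from the current time, so there is no face or voice for a deepfake to imitate. A minimal sketch in Python using only the standard library (the secret below is the RFC test key, not one to reuse):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this key at T=59s yields "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))
```

Because the code depends only on a shared secret and the clock, even a perfectly convincing deepfake of a colleague's face or voice gives an attacker nothing to replay.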

Train Employees

As with regular phishing, employee training is also crucial. While deepfakes can be hard to spot, there are some common tells. Visual errors like juddering or blurring are common with deepfakes, as are strange audio cues like distorted voices. Teaching employees these signs and warning them of deepfake phishing will make them less likely to fall for it.

Even with this training, people won’t be able to spot deepfakes with 100% accuracy. Consequently, it’s also important to stress that everyone should err on the side of caution. As a rule of thumb, if something seems suspicious or even just a little off, verify it before trusting it.

Fight AI with AI

As deepfake phishing becomes more popular, fighting fire with fire — or AI with AI, more accurately — will become more important. The same technology that makes deepfakes possible can help spot them, and several detection models are already available. Alternatively, businesses with extensive AI experience could build their own.

Detection models can outperform humans in spotting deepfakes, but no detectors are 100% accurate yet. Cybercriminals will develop new ways to get around them as methods improve, too. Consequently, while AI protections are helpful, organizations shouldn’t rely on them.
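One common pattern is to treat detector output as a signal rather than a verdict. The sketch below (the thresholds are illustrative assumptions, not values from any particular product) routes content by detector confidence and escalates the uncertain middle band to a human instead of trusting the model either way:

```python
def triage(detector_score, auto_block=0.9, review_floor=0.5):
    """Route media by deepfake-detector confidence (0.0 to 1.0).

    Thresholds are illustrative. Because no detector is 100% accurate,
    mid-range scores are escalated to a person rather than trusted.
    """
    if detector_score >= auto_block:
        return "block"         # high confidence it's fake: reject outright
    if detector_score >= review_floor:
        return "human_review"  # uncertain: a person makes the call
    return "allow"             # low fake-likelihood: let it through

print(triage(0.95), triage(0.6), triage(0.1))  # block human_review allow
```

Keeping a human in the loop for borderline scores is what stops an imperfect detector from becoming a single point of failure.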

Impose Failsafes

Because no human or AI model can spot deepfakes all the time, businesses need second and third lines of defense. Most importantly, workflow or technical stops should prevent one mistake from jeopardizing the organization’s security. Requiring multiple authorization steps for large financial transfers is a great example.

Granting someone access to sensitive data or sending a large enough amount of money should require multiple people or authentication measures. These stops may hinder efficiency, but they give employees time to think about their actions. Involving more people and systems will also improve the company’s chances of spotting a deepfake attack.
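Such a stop can be as simple as a rule in the payments workflow: transfers above a threshold cannot execute until distinct employees have signed off. A minimal sketch (the 10,000 threshold and two-approver rule are illustrative assumptions):

```python
from dataclasses import dataclass, field

# Illustrative policy: transfers above this amount need two distinct approvers
APPROVAL_THRESHOLD = 10_000

@dataclass
class Transfer:
    amount: float
    destination: str
    approvers: set = field(default_factory=set)

    def approve(self, employee_id):
        self.approvers.add(employee_id)

    def can_execute(self):
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required

# A deepfaked "CEO call" may fool one person, but not the second sign-off
payment = Transfer(amount=220_000, destination="supplier-account")
payment.approve("cfo")
print(payment.can_execute())  # False until a second employee approves
```

Had a rule like this been in place in the €220,000 case above, the deepfaked phone call would have had to convince two employees instead of one.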

Monitor Evolving Threats

Finally, brands and security experts should stay on top of deepfake and cybercrime trends. Cybercrime evolves constantly, and AI advances faster than many other technologies, so it’s easy for some defenses or best security practices to become outdated.

New generative AI tools emerge several times a month, and some of these updates may make threats more dangerous. Alternatively, some updates may provide a more effective means of stopping deepfake phishing. Both changes are equally important to recognize. Frequent reviews of current trends and audits of internal security processes will help businesses stay on top of these shifts.

Cybersecurity Practices Must Evolve Amid New Threats

Phishing may not be anything new, but new technologies like deepfakes add a dangerous dimension. As exciting as AI is, it’s also important to recognize how cybercriminals can take advantage of the same tools corporations do. Failing to evolve cybersecurity practices alongside new technology could leave organizations vulnerable before they realize it.

Awareness and training remain essential steps in preventing phishing attacks. When employees know they should look out for deepfakes, they’ll be less likely to fall for them. Emphasizing this human element while frequently adjusting technical defenses will keep companies safe despite these advancing threats.