Deceptive Doppelgangers: How Deepfakes Caused a Scam of HK$200 Million

Written by denystsvaig | Published 2024/02/13
Tech Story Tags: cyber-security | trump | deepfakes | ai | deep-learning | scam | cyber-crime | crypto

TLDR: Deepfakes pose a serious threat to our personal, social, and financial lives. Not knowing whether a video, picture, or audio clip is real or fake can have catastrophic consequences, as the HK$200 million Hong Kong scam shows. With the right detection techniques, however, both individuals and multinational companies can verify the authenticity of a file with considerable accuracy.

Remember that hilarious Donald Trump deepfake circulating online? While the technology behind it sparked amusement, its potential for harm lurks just beneath the surface.

In Hong Kong, the amusement faded quickly when scammers used deepfakes to orchestrate a multimillion-dollar heist, leaving a multinational company reeling and raising serious questions about the future of trust in the digital age.

The human eye, once considered a reliable judge of reality, is facing a formidable challenge: deepfakes. But what exactly is a deepfake? In this article, we will dive into the realm of deepfakes where reality is not what it seems. Buckle up for a journey of learning about deepfake technology and how it was used to pull off a fraud costing hundreds of millions.

Let’s Begin!

Understanding Deepfakes

Deepfakes are artificial intelligence-generated media: fake images, audio, and videos that look convincing. Essentially, the technology transforms or replaces content from an existing source. For instance, a deepfake can take a picture or a video and swap the person in it for someone else.

Additionally, the technology has the ability to create completely original content where someone can be represented doing or saying something they never did.

You must be wondering how this is possible. Well, deepfakes are created using an AI technology called deep learning. To put it simply, deep learning is a type of machine learning that allows computers to learn and recognize complex patterns.

By analyzing videos, images, and audio, the model learns their patterns and becomes able to mimic them. The more data it has to learn from, the more convincingly it can reproduce its target's appearance and voice.
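To make the idea concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder setup behind classic face-swap deepfakes. It assumes PyTorch and uses random tensors in place of real face crops; the layer sizes, names, and training loop are illustrative, not taken from any actual deepfake tool.

```python
# Sketch of the shared-encoder / dual-decoder idea behind classic face swaps.
# Assumptions: PyTorch is installed, faces are 64x64 RGB crops, and the
# random tensors below stand in for real training data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "face" code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder, one decoder per identity. Training decoder_b to rebuild
# person B from the shared latent code is what later lets us feed it person
# A's face and get a "swap".
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for real crops of person B

for step in range(100):
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a face of A, decode it as B.
fake_b = decoder_b(encoder(faces_a[:1]))
```

The point of the design is that both identities pass through the same encoder, so the latent code captures pose and expression while each decoder supplies the appearance of one specific person, which is exactly why more footage of the target yields a more convincing result.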

Impact of Generative Adversarial Networks on Deepfakes

As if deep learning weren't enough, 2014 saw the emergence of generative adversarial networks (GANs), which completely revolutionized deepfakes. Can you imagine a technology with two components, where one generates a fake image and the other judges whether that generated image is real or fake?

The generator learns to make deepfakes more realistic, and the discriminator learns to spot fakes better. This powerful and potentially dangerous tool is not only able to create deepfakes faster but also makes them more convincing.
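For a sense of how that tug-of-war looks in practice, here is a tiny, illustrative GAN training loop, again assuming PyTorch. The network sizes, the 28x28 image shape, and the random stand-in for a real dataset are assumptions made for the sake of a runnable sketch, not a recipe for a production deepfake model.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each
# other. Assumes PyTorch; "real_images" is a random placeholder dataset.
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(           # noise -> fake 28x28 image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(       # image -> probability it is real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, 28 * 28) * 2 - 1   # stand-in data in [-1, 1]
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

for step in range(200):
    # Discriminator step: learn to tell real images from generated ones.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) \
           + bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce images the discriminator calls real.
    fakes = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key design point is that the two losses pull in opposite directions: every improvement in the discriminator becomes fresh training signal for the generator, which is why the fakes keep getting harder to spot.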

Now, you understand the kind of danger we are dealing with when it comes to deepfakes.

The Hong Kong Scam

When a child creates a deepfake video of their grandma rapping their favorite songs, the entire family sees the amusement in it. We also tend to ignore the obvious dangers that lie behind it. The situation gets a lot more serious when a targeted deepfake fraud results in a loss of millions of dollars. That’s exactly what happened recently in Hong Kong.

The Setup

What began as a routine work video call for a multinational company employee ended up being one of the biggest cyber crimes of the recent past. So, how was it all carried out using deepfakes? Let's find out.

The scammers began with a text message conversation with the targeted employee. The sender claimed to be the chief financial officer of the company’s United Kingdom branch. The victim was asked to join an official work-related meeting, billed as an “Encrypted Trading Meeting,” that would involve four to six other participants.

Since the message and the invitation to an “encrypted” meeting seemed professional, the victim had no real reason to be suspicious of the situation.

Another reason this particular scam attempt went undetected was the involvement of multiple individuals.

According to Baron Chan Shun-ching, acting senior superintendent of the Cyber Security and Technology Crime Bureau, scams of this kind had previously been carried out through one-on-one video calls. A video meeting with multiple participants was therefore unexpected and unlikely to be suspected as a scam.

Here, we can see the lengths to which the scammers were ready to go to carry out this sophisticated cyber attack.

Designing and Executing the Deepfakes

Four to six individuals were supposed to appear in the video meeting. Creating believable deepfakes that could sustain a prolonged meeting, complete with convincing voices and body language, must have been a difficult task. Yet the scammers pulled it off.

Deepfakes begin with data in the form of pictures, videos, and audio. The attackers sourced that material online: any publicly available videos and audio of the impersonation targets were downloaded and fed through the deep learning process.

After refining the results through trial and error, the scammers were able to convincingly mimic both the voices and the facial features of the people involved.

Once that was done, the scammers pre-recorded the deepfake videos so they did not have to interact with the victim. Posing as senior executives, the scammers intimidated the victim so he would not interrupt them in their master plan.

Additionally, the scammers did not leave any areas uncovered. They even made sure to create fake personas resembling the staff from the UK headquarters’ financial department.

The Outcome

When the victim joined the encrypted video meeting organized by the supposed CFO, he was confronted with pre-recorded deepfake videos of several people. He was asked only to give a brief introduction of himself and was given no other chance to interact during the call.

During the meeting, the fake chief financial officer gave the victim extensive instructions about investments. Finally, the CFO ordered him to transfer funds to a set of bank accounts.

The victim had no reason to suspect anything, as everyone on the call spoke convincingly and looked exactly as they were supposed to. Once he had agreed to go ahead with the requested transfers, the conference call ended abruptly. He had no idea he was about to commit a blunder that would cost his employer hundreds of millions.

Over the following week, the victim made 15 transfers to five different bank accounts, as instructed by the “CFO.” Only after a total of HK$200 million had been transferred did he contact the UK headquarters and discover that the whole thing had been a hoax, a fraud.

This was one of many cyber crimes carried out in Hong Kong over the past year, with total losses amounting to HK$3 billion.

Detecting Deepfakes: Strategies and Preventive Measures

If law enforcement agencies had a nickel for every time they encountered fake evidence, they could probably fund their own truth-detection agency. That long experience with forgeries puts them in a good position to tackle deepfakes and similar threats.

However, the ever-evolving and constant advancement of AI technologies has made deepfake detection more challenging than ever before. Keeping that in mind, here are some of the best ways to detect synthetic content and the preventive measures we can take against a threat of that kind.

Manual Detection

Believe it or not, no matter how sophisticated deepfakes have become, it is still possible to detect them with the naked eye if you know what to look out for. A deepfake image, video, or audio will inevitably have some sort of inconsistency; that is what you should aim to detect.

However, manual inspection only works for a limited number of files; nobody is going to comb through thousands of them by eye, and the technique takes proper training to master. That said, there are certain telltale signs you can spot upon close examination of an image or video. For instance:

  • Blurring around the edges of the face in a picture or a video

  • No or very little blinking from the individual in the video

  • Unnatural or mismatched light reflections in the eyes

  • Inconsistencies in the person’s hair or vein patterns and scars on the face or body

  • Background inconsistencies

However, this process is subjective: how closely you examine a file, and which inconsistencies you notice, is entirely up to you, so errors are bound to occur.

Automated Detection

To reduce the inherent risk of human error, we need systems that can automatically screen any file for signs of manipulation. Such a system is unlikely to be 100% accurate, but with constant improvements in the underlying technology, detectors can be built that deliver increasingly reliable results.

Some examples of detection signals are as follows, with a small code sketch of one such check after the list:

  • Biological signals: Looking for the presence or absence of natural physiological cues, such as subtle changes in skin color driven by blood flow.

  • Phoneme-viseme mismatches: Searching for inconsistencies between the sounds being spoken (phonemes) and the corresponding mouth shapes (visemes).

  • Facial movements: Seeking imperfections in facial and head movements.
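As a rough illustration of the biological-signals idea, here is a toy sketch that estimates how often a speaker's eyes appear closed in a video, assuming OpenCV's bundled Haar cascades. The input file name and the thresholds are hypothetical, and real detectors rely on far more robust landmark models or learned classifiers; this only shows the general shape of such a check.

```python
# Toy blink-frequency check using OpenCV's bundled Haar cascades.
# Assumptions: opencv-python is installed; "suspect_call.mp4" is a
# hypothetical local video file; the 0.01 threshold is illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_call.mp4")
face_frames, closed_eye_frames = 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_frames += 1
        # Crude proxy: if no eyes are found in the upper half of the face,
        # treat the frame as "eyes closed" (a possible blink).
        roi = gray[y:y + h // 2, x:x + w]
        if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) == 0:
            closed_eye_frames += 1

cap.release()
if face_frames:
    rate = closed_eye_frames / face_frames
    print(f"Closed-eye frame ratio: {rate:.3f}")
    # Real people blink regularly; a near-zero ratio over a long clip is odd.
    if rate < 0.01:
        print("Very little blinking detected; flag for closer review.")
```

A clip in which the subject almost never appears to blink would not be judged automatically on this basis alone; it would simply be flagged for closer human review alongside the other signals above.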

Key Takeaways

Considering the emerging trends in cybersecurity, deepfakes are a fabricated reality that poses a serious threat to our personal, social, and financial lives. Not knowing whether a video, picture, or audio clip is real or fake can have catastrophic consequences, as seen in the Hong Kong scam.

However, with the correct detection techniques, both individuals and multinational companies can verify the authenticity of a file with considerable accuracy.


Written by denystsvaig | CEO and Co-Founder of DeHealth. Cyber War Strategist, global health and blockchain expert.
Published by HackerNoon on 2024/02/13