One unprecedented consequence of the Age of Information is the hit that integrity has taken. With the rise of phishing and other social-engineering scams, along with the spread of fake news on social platforms such as Facebook and WhatsApp, it is becoming increasingly difficult to separate authentic information from fabricated and manipulated data.
Fortunately, people are growing wary of ‘shady’ email attachments and are more likely to avoid opening emails that look suspicious. The arrival of AI techniques such as Deepfake on the cybersecurity battlefield, however, changes things quite drastically.
To demonstrate the alarming impact that AI software such as Deepfake could have, consider two scenarios. In the first, you receive a phishing email from a co-worker asking for money; in the second, you receive a phone call, apparently from that same co-worker, making the same request. Most people are far more likely to fall for the second scenario, and get duped in the process.
To understand the dire implications of Deepfake, which until now has mostly been used to depict politicians in comedic contexts, let’s have a brief rundown of what the technology actually is.
What Exactly Is ‘Deepfake’?
Perhaps the true scale of the impact Deepfakes could have on web video is best captured by a belief shared by many experts: the coming era of video streaming will feature highly realistic Deepfake content, built from an endless combination of superimposed audio, video, and images.
Considering the stakes, knowing what Deepfakes are is of the utmost importance. Unlike the CGI we’ve grown so accustomed to (thank you, Hollywood), Deepfakes make use of generative adversarial networks (abbreviated as GANs), in which two machine learning models, a generator and a discriminator, are trained against each other to create convincing forgeries.
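To make the generator-versus-discriminator idea concrete, here is a minimal toy sketch, not a real Deepfake system: a one-parameter-pair “generator” learns to mimic real 1-D data (a Gaussian centred at 4.0) while a logistic “discriminator” learns to tell real samples from forged ones. The data, model sizes, and learning rate are all illustrative assumptions chosen so the example runs in seconds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.03, 64

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)      # "authentic" samples
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                        # "forged" samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - p_real) * real + p_fake * fake)
    c -= lr * np.mean(-(1 - p_real) + p_fake)

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    dG = -(1 - p_fake) * w                  # gradient of loss w.r.t. G(z)
    a -= lr * np.mean(dG * z)
    b -= lr * np.mean(dG)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"fake mean ~ {samples.mean():.2f} (real mean is 4.0)")
```

After training, the generator’s output distribution drifts toward the real one, which is exactly the dynamic that, at vastly larger scale, produces convincing forged faces and voices.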
Usually, the larger the pool of training data, the more convincing the Deepfake video. This is why politicians and celebrities have been the most common targets of the first generation of Deepfakes: there is already a massive amount of video footage of them publicly available.
Once you get past the learning curve, the possibilities for forgery are endless. You can poke fun at Donald Trump, or make Nicolas Cage play roles he’s never played before. Realistically speaking, however, Deepfakes are more likely to spawn hatred and disinformation, which can lead to dangerous consequences.
What Threats Do Deepfakes Pose?
Even the mere existence of the Deepfake technique is enough to send shivers down the spines of politicians, celebrities, and anyone concerned about cybersecurity. Not only is the integrity of everything we see online at stake, but Deepfakes also paint a horrid picture of the future, one that is nothing short of an Orwellian nightmare.
As Marco Rubio has put it, a couple of years ago you needed proper weaponry and tools to threaten the U.S. Now, anyone with internet access and a deep-rooted propaganda agenda could create a fake video spewing hate, one capable of destabilizing the government of almost any country, including superpowers such as the US.
Perhaps even more alarming is the pornographic content spurred by Deepfake technology, which can, and has, damaged the reputations of many men and women.
There have also been several instances where women activists and politicians have had their work undermined by Deepfake technology being used as a salacious tool against them.
Alongside the threat Deepfakes pose to our fragile democratic systems, an even more dire menace lies in their use in the pornography industry, particularly given how little control governmental agencies can exercise over the distribution of these videos, videos that can cause irreparable damage to a person’s reputation.
What Can Be Done to Combat the Threat Posed by Deepfakes?
When it comes to fighting the good fight against Deepfakes, the best responses are usually the ones that teach people how to distinguish Deepfaked videos and images from authentic content.
Organizations and enterprises can play a pivotal part in combating Deepfakes simply by raising awareness among their employees of the dangers the technology poses. A secondary layer of protection can then be deployed in the form of a tightened authentication process; typically this includes two-factor authentication, single-use password generators, and other password alternatives.
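The single-use password generators mentioned above are typically time-based one-time passwords (TOTP, RFC 6238), which verify that a caller or login attempt holds a shared secret. A minimal sketch using only the Python standard library (the secret below is the RFC’s published test value, not a real credential):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time-step counter."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test vector: at t = 59 the 8-digit SHA-1 code is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))
```

A code like this changes every 30 seconds, so even a perfectly cloned voice on a phone call cannot reuse an intercepted one.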
The tertiary level of protection comes in the form of organizations investing in counter-software that detects and combats Deepfakes. However, software at this level takes a massive amount of time to develop, which is why organizations should prioritize awareness.
How Can Blockchain and AI Help?
Given that Deepfakes are a distinctly modern threat, born of the ‘Age of Information’ we live in today, cybersecurity specialists and organizations need to come up with equally contemporary solutions.
Both AI and blockchain have rapidly spread across the cybersecurity realm, and both technologies can help neutralize the threat of Deepfakes.
As far as blockchain technology is concerned, it can play a vital role in authenticating digital identities, particularly where access to sensitive information, such as financial details or social security numbers, is involved.
Furthermore, suspicious videos and audio files can be authenticated through a blockchain application by comparing the files against their original counterparts. In essence, blockchain technology could be the very tool that separates the wheat from the chaff, which is to say, the forgeries from the truth.
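The comparison described above boils down to recording a cryptographic fingerprint of the original file in an append-only ledger and checking suspicious copies against it. A minimal sketch of that idea, assuming an in-memory list as a stand-in for a real blockchain ledger:

```python
import hashlib

ledger = []  # stand-in for an append-only blockchain ledger

def register(media: bytes) -> str:
    """Record the SHA-256 fingerprint of an original media file."""
    digest = hashlib.sha256(media).hexdigest()
    ledger.append(digest)
    return digest

def is_authentic(media: bytes) -> bool:
    """Check a file's fingerprint against every registered original."""
    return hashlib.sha256(media).hexdigest() in ledger

original = b"original interview footage"
register(original)
print(is_authentic(original))                       # True
print(is_authentic(b"doctored interview footage"))  # False
```

Because changing even one bit of a file changes its hash completely, any Deepfaked edit of a registered original fails the check; the blockchain’s role is to make the registered fingerprints themselves tamper-proof.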
When it comes to using Artificial Intelligence to fight Deepfake technology, machine learning algorithms can play a vital part by recognizing patterns in large amounts of data via specific classification techniques.
As data scientist Dr Alexander Adam articulates, machine learning can play a particularly important role in distinguishing fake audio files from authentic ones.
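A sketch of that classification idea on invented data: suppose each audio clip is summarized by a handful of numeric features (the features and their distributions here are purely illustrative stand-ins, with fakes shifted on average), and a simple logistic-regression classifier is trained by gradient descent to separate the two classes.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 400, 8                         # clips per class, features per clip

# Invented stand-ins for audio features: fake clips are shifted on average.
real_feats = rng.normal(0.0, 1.0, (n, d))
fake_feats = rng.normal(1.5, 1.0, (n, d))
X = np.vstack([real_feats, fake_feats])
y = np.concatenate([np.zeros(n), np.ones(n)])     # 1 = fake

# Shuffle, then split into training and held-out sets.
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
X_train, X_test = X[:600], X[600:]
y_train, y_test = y[:600], y[600:]

# Logistic regression trained by plain gradient descent.
w = np.zeros(d)
bias = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + bias)))
    w -= 0.5 * (X_train.T @ (p - y_train) / len(y_train))
    bias -= 0.5 * np.mean(p - y_train)

p_test = 1.0 / (1.0 + np.exp(-(X_test @ w + bias)))
accuracy = np.mean((p_test > 0.5) == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Real detectors work on far subtler signals, spectral artifacts, unnatural blinking, lip-sync errors, but the principle is the same: learn the statistical fingerprints that separate forged media from authentic media.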
In closing, we can only urge our readers to stay in the loop regarding current developments in AI, especially when the technology in question is as staggering as Deepfake.
Although the world might seem bent on walking toward a future where the invasion of privacy is a societal norm, staying cautious about technologies as terrifying as Deepfake is the first step on the long path to attaining cybersecurity.