In April 2020, a video of Belgium's prime minister Sophie Wilmès giving a speech that linked the COVID-19 pandemic to the climate crisis began circulating on social media.
However, the video was not real. It was a deepfake, generated by Extinction Rebellion Belgium using AI technology that can manipulate anyone's facial expressions and voice. The video was labelled as a deepfake, but many people did not notice or simply ignored the disclaimer. Some viewers were confused and outraged by the fake speech, while others praised the prime minister for her courage and vision.
This example shows how deepfake technology can be used to spread misinformation and sway public opinion by impersonating important public figures. It also shows how difficult it can be to detect and verify deepfakes, especially when they are shared on social media platforms with limited moderation and fact-checking capabilities.
Imagine you are watching a video of your favourite celebrity giving a speech. You are impressed by their eloquence and charisma, and you agree with their message. But then you find out that the video was not real. It was a deepfake, synthetic media created by AI that can manipulate the appearance and voice of anyone. You feel deceived and confused.
How can you trust what you see and hear online?
This is no longer a hypothetical scenario; it is now real. Several deepfakes of prominent actors, celebrities, politicians, and influencers are circulating on the internet, including the well-known deepfakes of Tom Cruise and Keanu Reeves on TikTok.
In simple terms, deepfakes are AI-generated or AI-manipulated images, videos, and audio that look and sound realistic but are not authentic, and that can alter or fabricate the reality of people, events, and objects.
Deepfake technology is becoming more sophisticated and accessible every day. It can serve legitimate purposes in entertainment, education, research, and art. However, it can also pose serious risks to individuals and society by spreading misinformation, violating privacy, damaging reputations, enabling impersonation, and manipulating public opinion.
In my last article, I discussed deepfake technology, how it works, and its positive and negative impacts. In this article, I will explore the dangers of deepfake technology and how we can protect ourselves from its potential harm.
I once listened to a speech by one of my most revered actors of all time. I felt good hearing it, only to be disappointed at the end to learn that I had, in fact, listened to an AI simulation. At this rate, what if many of the videos we see are just deepfakes? The threat to reality is becoming more and more alarming.
As much as there may be some positives to deepfake technology, the negatives easily overwhelm them in our growing society. Some of the negative uses of deepfakes include:
Deepfakes can be used to create fake adult material featuring celebrities or ordinary people without their consent, violating their privacy and dignity. It has become very easy to replace one face with another and to change a voice in a video. Surprising, but true. Check out the thriller in which Tom Holland was replaced with a deepfake of Tobey Maguire, the first Spider-Man; you would never spot the difference unless you were told. If it is that easy, then any video alteration is possible.
Deepfakes can be used to spread misinformation and fake news that deceive or manipulate the public, such as hoax material involving fake speeches, interviews, or events attributed to politicians, celebrities, or other influential figures.
As I mentioned at the beginning of the article, a deepfake showed Belgium's Prime Minister Sophie Wilmès giving a speech the real Sophie Wilmès never gave. Another case study is the deepfake of former U.S. President Barack Obama making a public service announcement.
Since face swaps and voice changes can be carried out with deepfake technology, it can be used to undermine democracy and social stability by influencing public opinion, inciting violence, or disrupting elections.
False propaganda can be created: fake voice messages and videos that are very hard to recognize as unreal can be used to sway public opinion, slander political candidates, parties, or leaders, or blackmail them.
Deepfakes can be used to damage reputation and credibility by impersonating or defaming individuals, organizations, or brands. Imagine the deepfake of Keanu Reeves on TikTok being used to create fake reviews, testimonials, or endorsements involving customers, employees, or competitors.
People who are unaware of deepfakes are easy to convince, and when something goes wrong, the result can be reputational damage and a loss of trust in the impersonated actor.
Deepfakes can be used to create security risks by enabling identity theft, fraud, or cyberattacks. In 2019, the CEO of a UK-based energy company got a call from someone who sounded like his boss, the head of the company's German parent company, ordering the transfer of €220,000 to a supplier in Hungary. According to news sources, the CEO recognized the "slight German accent and the melody" of his chief's voice and complied with the directive to send the funds within an hour.
However, when the caller rang again to request another wire, the CEO became suspicious, and the call was later confirmed to be a fraud. Sadly, the voice was only a deepfake of his boss's, and the initial €220,000 had already been moved to Mexico and channeled into other accounts.
But this is not the only incident of deepfake fraud. Deepfake technology has been used in several schemes involving phishing, social engineering, and other scams targeting personal or financial information.
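As a concrete illustration, here is a minimal sketch of how an organization might screen a suspicious call against a known voice sample using speaker embeddings. It uses the open-source Resemblyzer library; the file names and the 0.75 threshold are hypothetical placeholders, and a real deployment would need a tuned threshold plus additional safeguards such as callback verification.

```python
# A minimal sketch: compare a suspicious caller's voice against a known
# reference recording using speaker embeddings (Resemblyzer library).
# File names and the threshold below are hypothetical placeholders.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Embed a known, trusted recording of the executive's voice.
reference = encoder.embed_utterance(preprocess_wav("ceo_reference.wav"))

# Embed the audio captured from the suspicious call.
candidate = encoder.embed_utterance(preprocess_wav("suspicious_call.wav"))

# Embeddings are L2-normalized, so the dot product is cosine similarity.
similarity = float(np.dot(reference, candidate))
print(f"Speaker similarity: {similarity:.2f}")

# Thresholds must be tuned on real data; 0.75 is an illustrative value.
if similarity < 0.75:
    print("Warning: voice does not match the reference speaker.")
```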
Ethical Implications
Deepfake technology can violate the moral rights and dignity of the people whose images or voices are used without their consent, such as creating fake pornographic material, slanderous material, or identity theft involving celebrities or regular people. Deepfake technology can also undermine the values of truth, trust, and accountability in society when used to spread misinformation, fake news, or propaganda that can deceive or manipulate the public.
Legal Implications
Deepfake technology can pose challenges to the existing legal frameworks and regulations that govern intellectual property, defamation, and contracts, as it can infringe on the copyright, trademark, or publicity rights of the people whose images or voices are used without their permission.
Deepfake technology can violate the privacy rights of the people whose personal data are used without their consent. It can defame the reputation or character of the people who are falsely portrayed in a negative or harmful way.
Social Implications
Deepfake technology can have negative impacts on the social well-being and cohesion of individuals and groups, as it can cause psychological, emotional, or financial harm to the victims of deepfake manipulation, who may suffer from distress, anxiety, depression, or loss of income.
It can also create social divisions and conflicts among different groups or communities, inciting violence, hatred, or discrimination against certain groups based on their race, gender, religion, or political affiliation.
I am afraid that in the future, deepfake technology could be used to create more sophisticated and malicious forms of disinformation and propaganda if not controlled. It could also be used to create fake evidence of crimes, scandals, or corruption involving political opponents or activists or to create fake testimonials, endorsements, or reviews involving customers, employees, or competitors.
Imagine having deepfake videos of world leaders declaring war, making false confessions, or endorsing extremist ideologies. That could be very detrimental to the world at large.
The current state of deepfake detection and regulation is still evolving, and it remains difficult to identify and prevent deepfake content from spreading online. The enforcement and oversight of deepfake regulation also face practical and technical difficulties, such as identifying the creators and distributors of deepfake content, establishing their liability and accountability, and imposing appropriate sanctions or remedies. Even so, several strategies are being pursued to detect and counter deepfakes:
Social Media Platforms' Policies: Social media platforms can implement policies, guidelines, and standards to regulate the creation and dissemination of deepfake content, for example by banning or labeling harmful or deceptive deepfakes, or by requiring users to disclose the use of deepfake technology. This strategy can be effective in reducing the exposure and spread of harmful or deceptive deepfakes on popular and influential platforms such as Facebook, Twitter, or YouTube. Deepfake detection and verification tools, such as digital watermarks, blockchain-based provenance systems, or reverse image search engines, can also be deployed to guard against the upload of deepfakes. Platforms can also collaborate with other stakeholders, such as fact-checkers, researchers, or civil society groups, to monitor and counter deepfake content. However, these solutions may face challenges such as scalability, accuracy, transparency, and accountability.
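To make the provenance idea concrete, here is a minimal sketch of a hash-based check a platform could run at upload time, assuming it maintains a registry of previously confirmed fakes. It uses the open-source imagehash library; the registry entry, file names, and distance threshold are placeholders.

```python
# A minimal sketch of hash-based provenance checking: compare an uploaded
# image's perceptual hash against a registry of known deepfakes.
# Registry contents, file names, and the threshold are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of images already confirmed to be deepfakes.
known_fake_hashes = {
    imagehash.hex_to_hash("d1d1b1a1c1e1f101"),  # placeholder entry
}

def looks_like_known_fake(path: str, max_distance: int = 6) -> bool:
    """Return True if the image is perceptually close to a known fake.

    phash is robust to re-encoding and mild resizing, so near-duplicate
    re-uploads of a flagged deepfake still match within a small Hamming
    distance.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance
               for known in known_fake_hashes)

if looks_like_known_fake("upload.jpg"):
    print("Block or label this upload pending review.")
```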
Detection Algorithms: Detection algorithms use machine learning and computer vision techniques to analyze the features and characteristics of deepfake content, such as facial expressions, eye movements, lighting, or audio quality, and to identify inconsistencies or anomalies that indicate manipulation. Researchers can develop and improve deepfake detection and verification technologies, such as artificial neural networks, computer vision algorithms, or biometric authentication systems.
They can also create and share datasets and benchmarks for evaluating deepfake detection and verification methods, and conduct interdisciplinary studies on the social and ethical implications of deepfake technology. However, these solutions may face challenges such as data availability, quality, and privacy, as well as ethical dilemmas and dual-use risks.
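As a rough illustration of what such a detector looks like in code, here is a minimal sketch of a frame-level classifier: a pretrained image backbone with its head replaced for a real-vs-fake decision. This is not any particular research system, and the model is only meaningful after fine-tuning on labelled real and fake frames.

```python
# A minimal sketch of a frame-level deepfake classifier in PyTorch.
# Illustrative only; production detectors use richer temporal and
# frequency features and large labelled training datasets.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Reuse an ImageNet backbone and replace its head with a two-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = real, 1 = fake

# Standard ImageNet preprocessing for each extracted video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(frame) -> float:
    """Score one PIL frame; higher means more likely manipulated.

    Note: the new head is randomly initialized, so the model must first
    be fine-tuned on labelled real/fake frames (e.g., with cross-entropy
    loss) before its scores mean anything.
    """
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, 1].item()
```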
Internet Reaction: This refers to the collective response of online users and communities to deepfake content: flagging, reporting, debunking, or criticizing suspicious or harmful deepfakes, or creating counter-narratives and parodies to expose or ridicule them. Users can develop critical thinking and media literacy skills to identify and verify deepfake content, and they can use detection and verification tools, such as browser extensions, mobile apps, or online platforms, to sniff out deepfakes they encounter on social media and report or flag them. This strategy can be effective in mobilizing a collective response to deepfake content, but it may face challenges such as cognitive biases, information overload, the digital divide, and trust issues.
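For illustration, a reporting helper behind such a tool might look like the following minimal sketch; the endpoint, payload fields, and workflow are entirely hypothetical.

```python
# A minimal sketch of a user-facing reporting tool: submit a suspicious
# post's URL to a moderation endpoint. The endpoint and payload format
# are entirely hypothetical placeholders.
import requests

def report_suspected_deepfake(post_url: str, reason: str) -> bool:
    response = requests.post(
        "https://example.com/api/reports",  # hypothetical endpoint
        json={"url": post_url,
              "category": "suspected_deepfake",
              "reason": reason},
        timeout=10,
    )
    return response.ok

if report_suspected_deepfake("https://example.com/post/123",
                             "Mouth movements look unnatural"):
    print("Report submitted for review.")
```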
DARPA's Initiatives: DARPA's initiatives refer to the research and development projects funded by the Defense Advanced Research Projects Agency (DARPA) to advance the development of deepfake detection and mitigation technologies, such as by creating large-scale datasets, benchmarks, and challenges for deepfake research. They are aimed at developing technologies that can automatically detect and analyze deepfakes and other forms of media manipulation. DARPA has had two programs devoted to the detection of deepfakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor).
Media Forensics (MediFor), which concluded in FY2021, sought to develop algorithms that automatically assess the integrity of photos and videos and provide analysts with information about how counterfeit content was generated. The program reportedly explored techniques for identifying the audio-visual inconsistencies present in deepfakes, such as inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity). MediFor technologies are expected to transition to operational commands and the intelligence community.
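One classic "digital integrity" technique in this spirit is Error Level Analysis (ELA), which recompresses a JPEG at a known quality and inspects where the image compresses differently, a possible sign of splicing. The sketch below is a generic forensic heuristic, not a MediFor algorithm, and the file names are placeholders.

```python
# A minimal Error Level Analysis (ELA) sketch: re-save a JPEG at a fixed
# quality and diff it against the original. Spliced or edited regions
# often compress differently and stand out in the difference map.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress the image in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise difference between original and recompressed versions.
    diff = ImageChops.difference(original, recompressed)

    # Rescale so subtle differences become visible for inspection.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255 // max_diff
    return diff.point(lambda px: min(255, px * scale))

# Save a visual map an analyst can inspect for suspicious regions.
error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```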
Semantic Forensics (SemaFor), MediFor's successor, seeks to give analysts the upper hand in the fight between detectors and manipulators by developing technologies capable of automating the detection, attribution, and characterization of falsified media assets. The program aims to exploit a critical weakness in automated media generators: the difficulty of getting all of the semantics correct, for example ensuring that everything aligns, from the text of a news story to the accompanying image to the elements within the image itself. SemaFor is also developing technologies for automatically assembling and curating the evidence produced by the detection, attribution, and characterization algorithms.
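One crude way to approximate a semantic-consistency check of this kind is to score how well an image matches its accompanying text with an open multimodal model such as CLIP: a very low similarity can flag a mismatched or fabricated pairing for human review. The sketch below is my own illustration, not a SemaFor component; the model choice, file names, and any cut-off you might apply are assumptions.

```python
# A minimal "semantic integrity" sketch: score image-caption consistency
# with the open-source CLIP model via Hugging Face Transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def caption_consistency(image_path: str, caption: str) -> float:
    """Cosine similarity between image and caption embeddings."""
    inputs = processor(text=[caption], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Normalize the projected embeddings, then take their dot product.
    image_emb = outputs.image_embeds / outputs.image_embeds.norm(
        dim=-1, keepdim=True)
    text_emb = outputs.text_embeds / outputs.text_embeds.norm(
        dim=-1, keepdim=True)
    return float((image_emb @ text_emb.T).item())

# Unusually low scores suggest the image and story may not belong together.
score = caption_consistency("story_photo.jpg",
                            "Flood waters cover the city square")
print(f"Image-caption consistency: {score:.2f}")
```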
In addition, DARPA built deepfake defensive models that document how specific people move their heads and facial muscles. The agency integrated this data into a software tool that analyzes videos of "high-profile individuals" and compares the on-screen behavior with that of the real person.
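The behavioral idea can be sketched in code as well: extract facial landmarks from footage of the real person and from suspect footage, then compare simple motion statistics. The sketch below uses the open-source MediaPipe FaceMesh; it is a toy version of the approach, and the file names and the choice of a single nose-tip landmark are illustrative.

```python
# A minimal behavioural sketch: track a face landmark across frames and
# compare motion statistics between reference and suspect footage.
# File names and the single-landmark feature are illustrative only.
import cv2
import numpy as np
import mediapipe as mp

def landmark_trace(video_path: str, max_frames: int = 300) -> np.ndarray:
    """Return per-frame nose-tip positions (x, y), normalized to [0, 1]."""
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    capture = cv2.VideoCapture(video_path)
    points = []
    while len(points) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            nose = result.multi_face_landmarks[0].landmark[1]  # nose tip
            points.append((nose.x, nose.y))
    capture.release()
    return np.array(points)

# Compare head-motion variability of known-real vs. suspect footage;
# a real system would model far richer dynamics than this.
reference = landmark_trace("real_speech.mp4")
suspect = landmark_trace("suspect_speech.mp4")
print("Reference motion variability:", np.diff(reference, axis=0).std(axis=0))
print("Suspect motion variability:  ", np.diff(suspect, axis=0).std(axis=0))
```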
Legal Response: This is the application of existing or new laws and regulations to address the legal and ethical issues raised by deepfake technology, such as by protecting the rights and interests of the victims of deepfake abuse, or by holding the perpetrators accountable for their actions. Governments can enact laws and regulations that prohibit or restrict the creation and dissemination of harmful deepfake content, such as non-consensual pornography, defamation, or election interference. They can also support research and development of deepfake detection and verification technologies, as well as public education and awareness campaigns.
Some countries have laws that address deepfake technology, but they are neither comprehensive nor consistent.
For example:
In the U.S., the National Defense Authorization Act (NDAA) requires the Department of Homeland Security (DHS) to issue an annual report on deepfakes and their potential harm. The Identifying Outputs of Generative Adversarial Networks Act requires the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to research deepfake technology and authenticity measures. However, there is no federal law that explicitly bans or regulates deepfake technology.
In China, a new law requires that manipulated material have the subject's consent and bear digital signatures or watermarks and that deepfake service providers offer ways to "refute rumors". However, some people worry that the government could use the law to curtail free speech or censor dissenting voices.
In India, there is no explicit law banning deepfakes, but some existing laws such as the Information Technology Act or the Indian Penal Code may be applicable in cases of defamation, fraud, or obscenity involving deepfakes.
In the UK, there is no specific law on deepfakes either, but some legal doctrines such as privacy, data protection, intellectual property, or passing off may be relevant in disputes concerning an unwanted deepfake or manipulated video.
Legal responses can be an effective strategy for countering deceptive deepfakes. However, they may face challenges such as balancing free speech and privacy rights, enforcing cross-border jurisdiction, and adapting to fast-changing technology.
Deepfake technology is still on the rise, rapidly evolving into more convincing and realistic forms every day. This calls for a need to be more proactive in tackling the menace that may accompany this technology. Below are some of the actions that I believe can be implemented to mitigate its negative impact:
Establishment of Ethical and Legal Frameworks and Standards for Deepfake Technology: More research is needed to create ethical and legal frameworks and standards for deepfake technology, such as defining the rights and responsibilities of the creators and consumers of deepfake content, setting the boundaries and criteria for legitimate and illegitimate uses, and enforcing laws and regulations that protect the victims and punish the perpetrators of deepfake abuse. More legal action is needed to enact and enforce laws and regulations that protect the rights and interests of the victims and targets of harmful deepfake content, such as non-consensual pornography, defamation, or election interference.
These actions should be coordinated, consistent, and adaptable, taking into account the cross-border nature of deepfake content and the fast-changing nature of the technology. They should also be balanced, proportionate, and respectful of the free speech and privacy rights of the creators and consumers of deepfake content.
Deepfake technology has the potential to create false or misleading content that can harm individuals or groups in various ways. However, deepfake technology can also have positive uses for entertainment, media, politics, education, art, healthcare, and accessibility. Therefore, it is important to balance the risks and benefits of deepfake technology and to develop effective and ethical ways to detect, prevent, and regulate it.
To achieve this goal, governments, platforms, researchers, and users need to collaborate and coordinate their efforts, as well as raise their awareness and responsibility. By doing so, we can harness the power and potential benefits of deepfake technology, while minimizing its harm.