AI To Combat AI-Generated Deep Fakes

Written by nicosvekiarides | Published 2023/10/11
Tech Story Tags: deep-fake | ai | generative-ai | ai-fraud | ai-protection | ai-fraud-detection | mitigating-ai-impact | ai-tools


Is this real, or is it AI? That’s the question more people are asking as artificial intelligence (AI) becomes increasingly sophisticated. AI can now create images, videos, and documents so realistic that insurance companies, financial institutions, health systems, and other businesses are growing concerned about entirely new avenues and types of fraud.

Generative AI can be used to fake photos or documents, falsify information, or automate scams. While some politicians are starting to use AI to write their speeches, their opponents can use the same technology to mimic them, make false claims, and spread disinformation.

The more sophisticated the machine learning model, the harder it is to detect AI-generated fakes. Fortunately, AI can also be turned against AI, analyzing data for patterns and indicators of fraudulent activity. The challenge lies in identifying AI-generated fraud and in training and applying new AI tools to combat it.

Understanding Generative AI

Generative AI creates content, such as text, images, and video, using machine learning (ML). It relies on natural language processing (NLP), large language models (LLMs), and generative adversarial networks (GANs), drawing on large datasets to generate new content from learned patterns and structures. Using patterns gleaned from its training data, generative AI can partially or entirely complete a task, such as writing a document or generating an image. The more relevant the data a generative AI model has available, the better the quality of the content it generates.
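To make the "learn patterns, then generate new content" idea concrete, here is a deliberately tiny toy: a bigram Markov chain that learns which word tends to follow which and then samples new text. This is nothing like an LLM or GAN in scale or architecture, but it illustrates the same core loop of fitting patterns from a dataset and generating from them. The corpus and function names are illustrative only.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Learn which word tends to follow which: a crude stand-in for
    the pattern-learning step in real generative models."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate 'new' text by sampling the learned word transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:
            break  # dead end: no observed successor for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

# Toy training corpus; a real model would see billions of tokens.
corpus = ("the claim photo shows damage the claim photo shows a receipt "
          "the receipt shows a total")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Scaled up by many orders of magnitude, the same recipe is why output quality tracks the quantity and relevance of the training data: the model can only recombine patterns it has actually seen.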


Certainly, generative AI offers many benefits. It is often used for predictive analytics, analyzing numerical data and statistics to determine likely outcomes. It can be applied to images, speech, written text, software, and other complex data types. For example, generative AI can reconstruct corrupted or blurred images, filling in gaps using the available data in an attempt to make the image whole.


On the other hand, generative AI can also create variations of original digital assets, resulting in realistic but fake images, text, videos, and other media. For bad actors, AI makes an attractive tool for fraudulent activities for various reasons, including:

  • AI is fast and efficient
  • AI fraud is difficult to detect
  • AI tools can be used anonymously
  • AI can automate scams, such as sending phony emails or text messages
  • AI can generate realistic, false documents, such as invoices or contracts
  • AI can generate realistic, fake images or image enhancements
  • AI can generate realistic, fake video and audio


AI tools can also be used for impersonation, drawing on captured data to generate phony text, images, or online profiles. Generative AI can even match a person’s style, tone, grammar, and language, which is why the potential for using AI to commit fraud is a growing concern.

Harmful Applications of Generative AI

The following are just some of the potential misapplications of generative AI:


Fake content and misinformation – Deepfakes are created using AI to manipulate images and video to fabricate events. Deepfakes can be used to falsify evidence, embarrass or disparage someone, or undermine politicians and other leaders. For instance, deepfakes are commonly used for revenge porn or to map celebrity faces onto porn stars’ bodies. Fake photos can also be used for blackmail or to defraud businesses, such as through false insurance claims.


Bias and discrimination – Since generative AI models use large datasets, they can be prone to biases inherited from the source data. Common concerns range from AI-generated content perpetuating stereotypes, prejudice, or discrimination, which can be an issue if screening job candidates, to conveying undesirable political biases when generating written content. Additional systems must be added to audit sources and training data to prevent bias.


Infringement on intellectual property – Generative AI also raises concerns about content ownership and copyright infringement. Using generative AI to create content based on someone else’s original material without consent or attribution may result in copyright violations. It’s easy to plagiarize someone else’s work using generative AI.


Security and legal concerns – Using generative AI to create fake content can lead to legal and security issues, such as defamation, spreading misinformation, and cyberattacks. For example, AI can be used to impersonate someone to make malicious phone calls or to falsify credentials for access to sensitive materials.


Emotional impact – Deepfakes and phony content can have a considerable psychological impact on those who have been duped. Falling victim to AI-generated fraud can result in anxiety, mistrust, and even paranoia. People have tended to accept the validity of photos at face value, but if they can no longer believe what they see, unforeseen levels of distrust begin to proliferate across the general population.

Mitigating the impact

Understanding the impact of generative AI is the first step in developing guidelines and safeguards. Organizations using generative AI need rules and procedures to ensure the ethical use of these technologies. Organizations that may encounter generative AI from outside sources need defense systems that protect against AI-generated fraud.


Given the rapidly growing sophistication of generative AI, the most rational way to fight misused AI technology is with AI-powered protection. A set of AI tools can detect fraudulent images, videos, sensor data, and other content, scanning any photo, document, or piece of content for anomalies before it is used for transactional purposes. AI validation analyzes an image’s pixels to detect alterations, and heat maps can display where an image may have been changed.


Photos are increasingly used for business or legal purposes such as insurance claims, real estate transactions, and criminal evidence. Self-service is frequently employed for insurance or real estate transactions, where a policyholder or renter submits photos for a claim, opening the door to photo fraud. Photos and videos are regularly challenged in court, particularly when they are gathered autonomously with no witnesses who can testify to the evidence. AI validation is a critical step in rapidly discerning real photos from fraudulent ones.

As with photos, similar AI techniques can verify documents. Using deep learning algorithms, AI can detect text alterations. It can even apply a scoring system to reflect a confidence level, or trust, that the document is authentic. A summary report may detail the AI’s findings and identify areas that need additional scrutiny. Today, loan applications often automate the extraction of data from documents such as tax forms, bank statements, or check stubs to establish applicant qualification. What happens if these documents are altered or AI-generated? AI-automated document fraud analysis offers a solution without retreating to the dark ages of manual inspection.
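The scoring-plus-summary-report pattern can be sketched as follows. The check names and weights here are entirely hypothetical; a production system would derive its signals from trained models (font analysis, pixel forensics, metadata validation) rather than hand-set booleans, but the way individual signals roll up into a confidence score and a list of items flagged for review is the same.

```python
def authenticity_score(checks):
    """Combine weighted fraud signals into a 0-100 confidence score,
    plus a list of failed checks flagged for human review.
    Weights and check names are illustrative assumptions."""
    weights = {
        "metadata_consistent": 30,   # creation dates, device info agree
        "fonts_uniform": 25,         # no mismatched typefaces in fields
        "no_pixel_artifacts": 25,    # no splicing/retouching detected
        "totals_add_up": 20,         # extracted figures are consistent
    }
    score = sum(w for name, w in weights.items() if checks.get(name))
    flagged = [name for name in weights if not checks.get(name)]
    return score, flagged

# Example: a bank statement where one amount field looks retouched.
score, flagged = authenticity_score({
    "metadata_consistent": True,
    "fonts_uniform": True,
    "no_pixel_artifacts": False,
    "totals_add_up": True,
})
print(score, flagged)
```

A lender could then auto-approve documents above a threshold and route only the flagged ones to a human reviewer, which is what lets automated pipelines avoid falling back to blanket manual inspection.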

With AI protection, organizations can have confidence that sensitive materials haven’t been maliciously altered. For example, healthcare providers can protect patient data against unauthorized tampering or verify the results of self-administered home health tests. Certainly, anything health-related demands the highest level of protection.


With AI fraud detection technology, organizations can enjoy peace of mind and numerous benefits without having to train staff and create new processes to deal with AI threats. Automated AI tools make it possible to discern between what’s real and what’s fake and save organizations time and money in the following ways:

  • Reducing the risk of fraud
  • Eliminating the need for manually inspecting photos, videos, and documents
  • Enabling trusted self-service and automated processes that increase customer satisfaction while reducing risk
  • Maintaining compliance and reputation by guarding against emerging threats from generative AI that impact both the business and employees


To borrow an old adage, with great power comes great responsibility. While many organizations have jumped on the bandwagon to leverage the benefits of generative AI, it is equally important that they take protective measures against the risks generative AI poses.


Written by nicosvekiarides | CEO of Attestiv
Published by HackerNoon on 2023/10/11