In the financial sector, a troubling trend is emerging: a 17 percent increase in fraud driven by AI-generated identities. Surprisingly, 76 percent of financial professionals suspect their organizations have unknowingly accepted these artificial identities, and 87 percent predict the situation will worsen before effective solutions are developed. These findings come from a survey of 500 finance and FinTech professionals.
“I would say the use of AI in fraud is probably the most significant risk facing us. There’s going to be an innovation ‘arms race’, and we’re always going to be behind if you’re on the ‘good side’ of the fraud fight: a criminal can innovate quickly and just use it, and come up with a scam. We will take a long time to keep up with that unless we take innovation as a default position,” said Nick Sharp, Deputy Director at the National Economic Crime Centre (NECC), during a panel discussion at a UK-focused conference in London.
A study conducted by cybersecurity firm McAfee revealed that one in four adults has faced some form of AI voice scam. In a notable incident earlier this month, an employee at an international company was deceived into sending millions of dollars to fraudsters after joining a video call in which the other participants were AI-generated deepfakes of senior colleagues.
Financial service providers, including those offering loans, credit cards, and credit evaluations, have long contended with fraudsters who exploit other people’s personal information to fabricate identities for financial gain.
Generative AI encompasses tools that can create content in various formats (text, images, audio, and video) from simple prompts, making scams easier and faster to execute.
This includes:
Spreading phishing messages more efficiently.
Constructing online presences for fake identities to make them seem real.
Mimicking real people’s activities to gather more personal information.
Synthetic fraud, which blends real and false information (for example, pairing a genuine Social Security number with a fabricated name and date of birth), poses a greater challenge for credit monitoring and security services.
Fraudsters are shifting towards impersonation tactics, deceiving individuals and businesses into making payments to entities that appear legitimate.
Victims often include people who seldom check their credit reports, those whose personal information is readily available online, and groups less informed about fraud risks, such as the young and the elderly.
Countering AI-enabled fraud requires extensive legitimate data for pattern recognition, a harder problem than traditional identity theft, in which fraud simply involves the unauthorized use of a real person’s details.
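To make the pattern-recognition point concrete, here is a minimal sketch, not drawn from the article, of one common approach: train an anomaly detector on features from known-legitimate applications and flag new applications that deviate. The features, the contamination rate, and the choice of scikit-learn’s IsolationForest are all illustrative assumptions, not a description of any real institution’s system.

```python
# Illustrative sketch: flag possible synthetic identities by training an
# anomaly detector on known-legitimate application data. Feature choices
# and parameters are assumptions made for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features per application: identity age in days, credit
# inquiries in the past year, and count of linked phone numbers.
legit = np.column_stack([
    rng.normal(3000, 900, 5000),  # long-established identities
    rng.poisson(2, 5000),         # few recent inquiries
    rng.poisson(1, 5000),         # one or two linked numbers
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(legit)  # learn what "normal" looks like from legitimate data

# Synthetic identities often look newly minted yet unusually active.
new_applications = np.array([
    [2800, 1, 1],  # resembles the legitimate population
    [45, 14, 6],   # young identity, many inquiries, many numbers
])
scores = detector.decision_function(new_applications)  # lower = more anomalous
flags = detector.predict(new_applications)             # -1 marks an outlier

for features, score, flag in zip(new_applications, scores, flags):
    verdict = "review" if flag == -1 else "ok"
    print(f"features={features.tolist()} score={score:.3f} -> {verdict}")
```

The sketch also illustrates the article’s underlying point: the detector is only as good as the volume and quality of the legitimate data it learns from.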
Criminals are now leveraging generative AI tools such as ChatGPT and DALL-E to enhance their hacking and scamming efforts. ChatGPT’s capacity to generate customized content from minimal inputs presents a risk, as it can create personalized scam and phishing communications.
For example, a scammer who has scraped a handful of a target’s public social media posts can prompt an LLM to produce a convincing phishing email written in a tone and context tailored to that person.
LLMs also enable scammers to run phishing operations at massive scale, reaching thousands of victims in their native languages. Evidence from hacking forums shows criminals using ChatGPT for fraudulent purposes, including information theft and ransomware creation.
New variants of malicious large language models, such as WormGPT and FraudGPT, have emerged, capable of generating malware, identifying system vulnerabilities, offering scamming advice, facilitating hacking, and compromising electronic devices. Another variant, Love-GPT, targets individuals through romance scams on dating platforms by generating fake profiles that engage users in conversation.
The challenge is international, extending to the creation of phishing emails and ransomware using AI, and it raises concerns about privacy and trust on platforms like ChatGPT and Copilot. Growing reliance on AI tools risks exposing personal and corporate information, whether through its incorporation into future training datasets or through breaches that disseminate sensitive data.
While FinTech companies ponder how to protect our finances from artificial intelligence attacks, we can take a number of protective actions on our own.
Exercise increased caution with seemingly authentic messages, videos, images, and calls, which might be AI-generated. Verify their legitimacy through a trusted source.
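As one small, hypothetical illustration of verifying through a trusted source, the sketch below checks whether a link in a message actually resolves to a domain you already trust rather than a lookalike. The trusted-domain list and the helper name are assumptions for the example, not a complete defense.

```python
# Hypothetical helper: before acting on a message, confirm that any link
# in it points at a domain you already trust, not a lookalike.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"mybank.example", "hmrc.gov.uk"}  # maintain your own list

def is_trusted_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or a genuine subdomain, and reject
    # lookalikes such as "mybank.example.attacker.net".
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_link("https://secure.mybank.example/login"))        # True
print(is_trusted_link("https://mybank.example.attacker.net/login"))  # False
```

A check like this catches only one class of trickery; calling back on a number or channel you already know remains the stronger habit.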
It’s advisable to refrain from sharing sensitive information with ChatGPT and similar LLMs and to remain aware of their limitations, including the potential for inaccurate responses, especially in critical applications like medical diagnosis and professional tasks.
Lastly, consult your employer about using AI technologies in the workplace to ensure compliance with any existing policies or restrictions. Adopting these precautions can help mitigate known and emerging threats as AI technology continues to evolve.