AI and the Dark Art of Social Engineering

by Sonia Mishra, March 16th, 2025

Too Long; Didn't Read

AI is revolutionizing cybercrime, with deepfakes, voice phishing, and AI-generated phishing scams driving a 442% surge in social engineering attacks. Businesses must deploy AI-based detection tools, phishing-resistant MFA, and user training to defend against these evolving threats. Security isn’t just about tech—it’s about awareness and resilience.


Have you ever fallen prey to a social engineering scam? It's remarkable how effective social engineering tactics remain. The global cost of cybercrime is projected to reach $10.5 trillion annually this year, underscoring the need for stronger defenses. The 2025 Global Threat Report, recently published by CrowdStrike, found that social engineering tactics aimed at stealing credentials grew an astounding 442% in the second half of 2024, with generative AI driving new adversary risks.


What keeps organizations and security experts up at night? The explosive rise of AI-powered attacks and next-level social engineering. While AI has strengthened security, it has also armed cybercriminals with sophisticated, elusive attack methods. Malicious actors aren't just adopting AI; they are weaponizing it against organizations and individuals. This is the new frontier of cybersecurity: an arms race in which we're not just battling hackers but also AI-powered machines that can think, adapt, and innovate faster than ever before.


The State of Cybersecurity in the Face of Increasing AI Threats

Today, AI is used to generate realistic deepfakes, phishing emails, and social engineering attacks that deceive users into divulging sensitive information. Since early 2023, SCATTERED SPIDER has used social engineering techniques to gain access to single sign-on (SSO) accounts and cloud-based application suites. Multiple eCrime actors adopted this technique in 2024. Several relevant cases targeted academic and healthcare entities; in these incidents, threat actors subsequently used the compromised identity to exfiltrate data from cloud-based software as a service (SaaS) applications or modify employee payroll data.


A 2024 academic study found that the click rate for AI-generated phishing emails (54%) was significantly higher than for human-written phishing messages (12%).


The message is simple: users need more awareness, and access mechanisms must be tightened.

How the Evolution of AI Affects Enterprise Cybersecurity

Impersonation attacks

  • Deepfake Video Fraud: Traditional money scams often fail—silicon masks can't perfectly mimic human skin and movement. But AI-powered video deepfakes are changing the game. In February 2024, CNN reported that a finance worker at a multinational firm was tricked into transferring $25 million to fraudsters. The scammers used generative AI to create a convincing deepfake of the company’s CFO, fooling the employee during a live video call and persuading them to authorize the payment.


  • Deepfake Audio Fraud: In 2019, a UK-based energy company lost $243,000 after fraudsters used AI-generated audio to mimic the voice of the CEO of its Germany-based parent company. The fraudsters called the UK company's CEO, pretending to be the parent company's CEO, and demanded an urgent wire transfer.


  • Voice Phishing: Voice phishing, often known as vishing, uses live audio to build on the power of traditional phishing, persuading people to give up information that compromises their organization. One example is help desk social engineering, in which employees are tricked into revealing sensitive information, such as credentials or reset codes, to criminals posing as colleagues or IT support.

Voice cloning

We're facing a dangerous reality. Imagine this: just a few seconds of your voice is all it takes for AI attackers to create a convincing clone. It's happening right now—criminals crafting audio snippets that sound exactly like your loved ones, crying for help in manufactured crises, or demanding urgent financial aid. These digital doppelgängers are nearly impossible to distinguish from the real thing.

Phishing email

Picture this: cybercriminals wield AI to create phishing emails so convincing that you might actually fall for them. These digital con artists are raising their game; they have even masqueraded as OpenAI itself, sending businesses seemingly legitimate urgent requests to update payment information.


How Can Organizations Detect and Prevent AI-Powered Cybersecurity Attacks?

Let's take a look at some of the key defense strategies:

Deploy AI-Based Detection Tools

Deploy AI models trained to detect fake content and phishing attempts. Tools powered with AI can also be used to detect, identify, and respond to more sophisticated social engineering tactics.
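To make the detection idea concrete, here is a minimal sketch of a content-based phishing scorer: a tiny naive Bayes text classifier trained on a handful of made-up example messages. Everything here (the class name, the toy training data) is illustrative only; production detection tools use large trained models and many more signals than message text alone.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class PhishClassifier:
    """Tiny naive Bayes text classifier: 'phish' vs. 'ham' (legitimate)."""

    def __init__(self):
        self.words = {"phish": Counter(), "ham": Counter()}
        self.docs = Counter()

    def train(self, text, label):
        self.docs[label] += 1
        self.words[label].update(tokenize(text))

    def score(self, text):
        """Log-odds that the message is phishing; > 0 leans phishing."""
        vocab = len(set(self.words["phish"]) | set(self.words["ham"]))
        total = sum(self.docs.values())
        logodds = math.log(self.docs["phish"] / total) - math.log(self.docs["ham"] / total)
        for word in tokenize(text):
            # Laplace smoothing so unseen words don't zero out a class.
            p = (self.words["phish"][word] + 1) / (sum(self.words["phish"].values()) + vocab)
            h = (self.words["ham"][word] + 1) / (sum(self.words["ham"].values()) + vocab)
            logodds += math.log(p / h)
        return logodds

clf = PhishClassifier()
clf.train("urgent verify your account password immediately", "phish")
clf.train("your account is suspended click here to verify your password", "phish")
clf.train("meeting notes attached see you at lunch", "ham")
clf.train("quarterly report attached thanks for the update", "ham")

print(clf.score("urgent: please verify your password now"))  # positive -> leans phishing
print(clf.score("see you at lunch after the meeting"))       # negative -> leans legitimate
```

The point of the sketch is the workflow, not the model: train on labeled examples, score incoming messages, and route high-scoring ones for review rather than blocking silently.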

Implement Stronger Authentication

Implement multi-factor authentication (MFA) to prevent unauthorized access. Additionally, companies can evaluate ways to harden their authentication processes against voice cloning and deepfake technology, which can potentially bypass audio- or video-based authentication systems. To counter help desk social engineering, require video authentication with identification proof for employees requesting password resets.
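One of the most common second factors is the six-digit time-based code from an authenticator app, standardized as TOTP in RFC 6238. A minimal sketch using only the Python standard library shows how little is involved: an HMAC over the current 30-second time window, truncated to a few digits. Note that a TOTP code can still be relayed in real time by a convincing phishing page, which is why it counts as stronger authentication but not as phishing-resistant.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59, digits=8))  # -> 94287082
```

Because both sides derive the code from a shared secret and the clock, the server can verify it offline; the weakness is that nothing in the code binds it to the genuine site.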


Did you know traditional MFA solutions can still fall prey to social engineering? That's precisely why phishing-resistant MFA is so crucial—it creates a shield that neither sophisticated AI nor crafty human adversaries can talk their way through when targeting your authentication.
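Phishing-resistant schemes such as FIDO2/WebAuthn get their strength from origin binding: the authenticator signs the server's challenge together with the site identity it actually sees, so a response captured on a look-alike domain is useless on the real one. The toy model below illustrates only that binding; it substitutes a shared HMAC key for WebAuthn's real per-site public-key credentials, so treat it as a teaching sketch, not the actual protocol.

```python
import hashlib
import hmac
import os

def authenticator_sign(device_key, challenge, rp_id_seen):
    """The authenticator signs the challenge plus the site identity it sees."""
    return hmac.new(device_key, challenge + rp_id_seen.encode(), hashlib.sha256).digest()

def server_verify(device_key, challenge, expected_rp_id, signature):
    """The real server only accepts signatures bound to its own identity."""
    expected = hmac.new(device_key, challenge + expected_rp_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

device_key = os.urandom(32)
challenge = os.urandom(16)

legit = authenticator_sign(device_key, challenge, "example.com")
phished = authenticator_sign(device_key, challenge, "examp1e.com")  # look-alike domain

print(server_verify(device_key, challenge, "example.com", legit))    # True
print(server_verify(device_key, challenge, "example.com", phished))  # False
```

This is why no amount of social engineering helps the attacker: even a victim who completes the ceremony on a fake page produces a signature the genuine site will reject.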

User Training

Educate employees on recognizing AI-generated scams. Companies may consider hosting training to educate employees on social engineering that leverages AI, teaching individuals how to proceed cautiously in the face of increasingly convincing threat actor communications.

Cultivate a Security-First Culture

Organizations must embed security across all operations and workflows, treating it as a priority instead of an afterthought. Consider whether current policies and procedures have a system in place to review and respond to reports of advanced social engineering and whether new methods must be implemented to account for increasingly sophisticated tactics.

Conclusion

Make no mistake: social engineering attacks target your greatest vulnerability—human psychology—and AI has dramatically intensified this threat by enabling attackers to decode human behavior at an unprecedented scale. How do we fight back? Organizations can't rely on a single solution—they must implement a robust multi-layered approach: cutting-edge AI security tools working alongside human-centered training and well-crafted procedures. This defense-in-depth strategy is crucial for survival in today's landscape.


What social engineering attacks are you most concerned about? Let’s discuss in the comments below!