The AI Arms Race (Offense vs Defense)

Written by anjaligopinadhan | Published 2026/02/03
Tech Story Tags: cybersecurity | artificial-intelligence | ai-security | ai-defense | ai-arms-race | ai-security-awareness | ai-cyber-security | ai-cyber-threats

TL;DR: Check Point's Cyber Security Report 2026 shows a 70% increase in cyber attacks since 2023. 60% of executives reported their organizations faced AI-powered attacks, but only 7% had deployed AI defenses at scale. Moody's 2026 cyber outlook warns that AI-related threats will "become more prevalent and pronounced."

How €60 attack subscriptions are outpacing million-dollar security budgets in 2026

The numbers from 2025 are in, and they're brutal.


According to Check Point's Cyber Security Report 2026, released this week, organizations experienced an average of 1,968 cyber attacks per week last year—a staggering 70% increase since 2023. Attackers are leveraging automation and AI to move faster, scale more easily, and operate across multiple attack surfaces simultaneously.


Meanwhile, Moody's 2026 cyber outlook warns that AI-related threats like model poisoning, adaptive malware, and autonomous attacks will "become more prevalent and pronounced" as companies adopt AI without adequate safeguards.


This isn't a prediction anymore. It's a post-mortem on what we let happen.


Welcome to the AI arms race. It's been happening for years, and the data confirms what security professionals feared: most organizations are on the losing side.

The 2025 Damage Report

Let's start with what we now know.


Cybercrime cost the global economy over $10.5 trillion in 2025. Europe alone accounted for 22% of all global ransomware attacks, with France, Germany, Italy, and Spain absorbing a combined €300 billion in losses over the past five years. The continent saw 3.2 million DDoS attacks in just the first half of 2025.


But here's the statistic that should terrify every CISO: while 60% of executives reported their organizations faced AI-powered attacks, only 7% had deployed AI defenses at scale.


That gap isn't just a vulnerability. It's an invitation.

The Offense: Crime-as-a-Service Gets Smarter

WormGPT and the €60 Revolution

In 2023, a tool called WormGPT appeared on underground forums—an "uncensored" AI built for cybercrime. The original got shut down after media exposure. But the genie was out of the bottle.


By 2025, researchers at Cato Networks discovered new WormGPT variants built on mainstream models like xAI's Grok and Mistral's Mixtral. These aren't sophisticated custom builds. They're wrappers around legitimate AI, jailbroken to bypass safety controls and sold via Telegram.

The price? Subscriptions start at €60 per month. Lifetime access costs €220.


For context, the average enterprise spends millions on cybersecurity annually. Attackers need sixty euros and a Telegram account.

The Malicious AI Ecosystem in 2026

WormGPT isn't alone. The underground market now offers a buffet of options:

FraudGPT — Subscription-based ($200/month to $1,700/year), specializing in spearphishing, malware generation, and credit card fraud. It's marketed like a SaaS product, complete with feature updates.

KawaiiGPT — Free on GitHub with an anime-themed interface that calls itself "Your Sadistic Cyber Pentesting Waifu." Takes less than five minutes to set up and can generate spear-phishing attacks instantly.

WormGPT 4 — The latest evolution, advertised on Telegram and forums like DarknetArmy since late 2025. Offers cheap monthly subscriptions and even sells the full source code for €220.


The term "WormGPT" has become generic—like "Kleenex" for tissues. In cybercrime communities, it now refers to any uncensored AI tool used for malicious purposes.

What These Tools Actually Do

This isn't theoretical. Here's what 2025 documented:

Phishing at Industrial Scale — AI-generated phishing emails are grammatically perfect, contextually accurate, and personalized. According to Cybersecurity Dive, 40% of business email compromise emails are now AI-generated. Harvard Business Review data suggests AI reduced the cost of phishing and social engineering by up to 95%.


Instant Malware — When prompted to generate ransomware, WormGPT 4 produces functional PowerShell scripts with configurable settings, encryption routines, and persistence mechanisms. What took skilled developers days now takes seconds.


Adaptive Attacks — Modern AI-powered attacks don't execute static playbooks. They adapt in real-time, identifying vulnerabilities, crafting exploits, and launching multi-stage campaigns with minimal human input. Breakout times—how long attackers need to move laterally—have dropped to under an hour.

The Autonomous Attack Era Has Arrived

The Anthropic Incident

In late 2025, security researchers at Anthropic discovered something that changed the conversation: attackers had weaponized the company's own AI assistant, Claude, to conduct a sophisticated cyberattack campaign.


The AI handled reconnaissance, vulnerability discovery, exploitation, lateral movement, credential theft, and data exfiltration—automating 80 to 90 percent of tactical operations at "physically impossible request rates."


The campaign targeted large technology companies, financial institutions, manufacturing firms, and government agencies. The human operator's role? Essentially, project management. Point the AI at targets. Let it work.


This wasn't a proof-of-concept. It was real. And it's now the template.

What's Coming in 2026

Moody's and Fortinet both predict 2026 will see "early indications of autonomous attacks"—AI agents that can independently conduct reconnaissance, exploit vulnerabilities, and maintain persistence without continuous human control.


Fortinet's CISO Carl Windsor warns: "There have already been multiple breaches of AI LLMs. 2026 will see this increase in both volume and severity."


Forrester analyst Paddy Harrington goes further: "An agentic AI deployment will cause a public breach and lead to employee dismissals."

The experts aren't hedging anymore. They're setting timelines.

The Deepfake Economy

The $25 Million Video Call

In early 2024, UK engineering firm Arup lost $25 million to deepfake fraud. An employee joined what appeared to be a routine video call with the CFO and several colleagues. After the discussion, they authorized 15 wire transfers.


Every person on that call, except the victim, was AI-generated.


Arup's Chief Information Officer later noted: "It's freely available to someone with very little technical skill to copy a voice, image, or even a video."

Deepfake-as-a-Service Goes Mainstream

Throughout 2025, deepfake-as-a-service platforms became widely accessible. According to Cyble's Executive Threat Monitoring report, AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks.


Voice phishing using AI-cloned voices surged 1,600% in Q1 2025 compared to late 2024. Attackers used platforms like Xanthorox AI to automate both voice cloning and live call delivery. All they needed was a few minutes of recorded speech—from a podcast, webinar, or corporate presentation.


Deepfake fraud attempts spiked 3,000% in 2023, with 1,740% growth in North America. Human detection rates for high-quality video deepfakes? Just 24.5%.

2026: Identity Is the New Battleground

Palo Alto Networks' 2026 predictions frame it starkly: "The very concept of identity, one of the bedrocks of trust in the enterprise, is poised to become the primary battleground of the AI economy."


They warn of the "CEO doppelgänger"—a perfect AI-generated replica of a leader capable of commanding the enterprise in real time. This isn't science fiction. The technology exists. The attacks are happening.

The Defense: Playing Catch-Up

What AI Defense Can Do

The same AI capabilities powering attacks can power defenses. When deployed correctly, AI-driven security tools offer genuine advantages:

Real-Time Threat Detection — AI processes logs, behavioral data, and network traffic in real time, identifying unusual patterns before damage occurs. Seventy percent of security professionals say AI has proven effective at detecting threats that would have gone unnoticed.


Automated Response — AI can quarantine threats, document incidents, and initiate containment in minutes rather than days. Organizations using extensive AI-powered security saw breach costs $1.88 million lower than those without—a 33% difference.


SOC Force Multiplication — With a 4.8 million-worker cybersecurity skills gap, AI agents are finally providing the force multiplier security teams desperately need. They triage alerts, autonomously block threats, and free human analysts for strategic work.
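
To make that concrete, here's a minimal sketch of the detect-and-contain loop described above: flag hosts whose failed-login volume spikes far above the fleet baseline, then isolate them automatically. The log format, threshold, and quarantine_host() API are hypothetical placeholders, not any specific vendor's product:

```python
# Minimal sketch of an AI-assisted detect-and-contain loop.
# The event schema, z-score threshold, and quarantine_host()
# API are hypothetical stand-ins, not a vendor product.
import statistics
from collections import Counter

def failed_logins_per_host(events: list[dict]) -> Counter:
    """Count failed-login events per source host."""
    counts: Counter = Counter()
    for e in events:
        if e.get("type") == "auth_failure":
            counts[e["src_host"]] += 1
    return counts

def flag_anomalies(counts: Counter, z_threshold: float = 3.0) -> list[str]:
    """Flag hosts whose failure counts sit far above the fleet baseline."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    if stdev == 0:
        return []
    return [h for h, c in counts.items() if (c - mean) / stdev > z_threshold]

def quarantine_host(host: str) -> None:
    """Placeholder: call your EDR/NAC API to isolate the host."""
    print(f"[CONTAIN] isolating {host} and opening an incident ticket")

def detect_and_contain(events: list[dict]) -> None:
    for host in flag_anomalies(failed_logins_per_host(events)):
        quarantine_host(host)  # containment in minutes, not days
```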

The Readiness Gap Persists

Despite the clear benefits, most organizations remain dangerously unprepared.


While 66% of organizations expect AI to have the most significant impact on cybersecurity, only 37% have processes to assess AI tool security before deployment. Ninety percent of companies lack the maturity to counter today's advanced AI-enabled threats.


BCG surveys found 60% of executives faced AI attacks—but only 7% deployed defensive AI at scale.


As Check Point's 2026 report states: "Capabilities once limited to highly resourced threat actors are now widely accessible, enabling more personalized, coordinated, and scalable attacks against organizations of all sizes."

The Shadow AI Problem

Your Employees Are a Threat Vector

While organizations worry about external threats, there's a quieter danger brewing internally: shadow AI.


Shadow AI refers to unsanctioned AI models used by employees without proper governance. Staff paste confidential data into public chatbots. They rely on AI outputs without verification. They use tools IT doesn't know exist.
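
One place to start is at the egress point. Here's a minimal sketch of a shadow-AI check that flags prompts headed to unsanctioned chatbots and scans them for obvious secrets. The allowlist and regex patterns are illustrative assumptions, not a complete DLP solution:

```python
# Minimal sketch of an egress check for shadow-AI usage: scan text
# bound for an AI service for obvious secrets before it leaves the
# network. The allowlist and patterns below are illustrative only.
import re

SANCTIONED_AI_HOSTS = {"ai.internal.example.com"}  # hypothetical allowlist

SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_outbound_prompt(dest_host: str, prompt: str) -> list[str]:
    """Return policy violations for a prompt headed to an AI service."""
    violations = []
    if dest_host not in SANCTIONED_AI_HOSTS:
        violations.append(f"unsanctioned AI endpoint: {dest_host}")
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"possible {name} in prompt")
    return violations
```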


Google Cloud researchers predict shadow AI will evolve into "shadow agents" by 2026—autonomous AI systems operating within enterprises without security oversight.


The irony is brutal: organizations are attacked by AI from outside while simultaneously exposing themselves through uncontrolled AI usage inside.

The Agentic AI Risk

As enterprises deploy AI agents that can plan, reason, and act across systems, they're also creating potential insider threats. An agent is always on, never sleeps—but if improperly configured, it can access privileged APIs, data, and systems while being implicitly trusted.
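
The mitigation is old-fashioned least privilege, applied to agents. Here's a minimal sketch in which an agent can only invoke explicitly allowlisted tools, and sensitive ones require human sign-off. The tool names and approval rule are hypothetical:

```python
# Minimal sketch of least-privilege gating for an AI agent: the agent
# may only invoke allowlisted tools, and sensitive tools require a
# human approval step. Tool names here are hypothetical examples.
from typing import Callable

APPROVED_TOOLS: dict[str, Callable[..., str]] = {
    "read_ticket": lambda ticket_id: f"contents of {ticket_id}",
    "search_docs": lambda query: f"results for {query!r}",
}
NEEDS_HUMAN_APPROVAL = {"read_ticket"}  # example: touches customer data

def run_agent_tool(tool: str, *args, approved_by: str | None = None) -> str:
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"agent requested unapproved tool: {tool}")
    if tool in NEEDS_HUMAN_APPROVAL and approved_by is None:
        raise PermissionError(f"{tool} requires sign-off by a human operator")
    return APPROVED_TOOLS[tool](*args)

# Usage: fails closed without a named human approver.
print(run_agent_tool("read_ticket", "TICKET-42", approved_by="alice"))
```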


Palo Alto Networks warns: "If enterprises aren't as intentional about securing these agents as they are about deploying them, they're building a catastrophic vulnerability."

What Actually Works in 2026

The organizations surviving this arms race share common characteristics. None are revolutionary. All require discipline.

Resilience Over Prevention

The 2026 consensus is clear: prevention alone isn't enough. Organizations must assume disruption is inevitable.


Fortinet's Windsor advises: "Build resilience first. Assume disruption is inevitable and invest in business continuity, segmentation, and recovery readiness."


The CISO role is morphing into "chief resilience officer"—prioritizing business continuity through minimum viable business definitions, segmentation, recovery testing, and tabletop drills.

Process Controls Beat Pure Technology

The $25 million Arup heist wasn't stopped by a firewall. It was a trust exploit. The most effective defenses combine technology with human processes:

  • Multi-person approval for high-value or urgent financial transactions
  • Out-of-band verification for executive requests—if your CEO calls asking for a wire transfer, call them back on a known number
  • A healthy skepticism culture where employees feel safe questioning unusual requests


Technology catches threats. Process controls catch the threats technology misses.
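
As a rough illustration, here's what those controls look like in code: a wire transfer that won't execute without two distinct approvers and a callback to a directory-sourced number. The threshold, directory, and callback step are illustrative assumptions:

```python
# Minimal sketch of the process controls above: a wire only executes
# with two distinct approvers and an out-of-band callback on a number
# pulled from the directory, never from the request itself.
from dataclasses import dataclass, field

KNOWN_NUMBERS = {"cfo": "+1-555-0100"}  # from the HR directory, not the email
HIGH_VALUE_THRESHOLD = 10_000  # dollars; set by policy

@dataclass
class WireRequest:
    requester: str
    amount: float
    approvals: set[str] = field(default_factory=set)
    callback_verified: bool = False

def verify_out_of_band(req: WireRequest) -> None:
    number = KNOWN_NUMBERS.get(req.requester)
    if number is None:
        raise ValueError(f"no directory number on file for {req.requester}")
    print(f"Call {req.requester} back on {number} to confirm the request.")
    req.callback_verified = True  # set only after a human actually confirms

def execute_wire(req: WireRequest) -> None:
    if req.amount >= HIGH_VALUE_THRESHOLD and len(req.approvals) < 2:
        raise PermissionError("high-value transfer needs two approvers")
    if not req.callback_verified:
        raise PermissionError("out-of-band callback not completed")
    print(f"Transfer of ${req.amount:,.2f} released.")
```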

Zero Trust Is Non-Negotiable

Traditional perimeter-based security assumed everything inside your network was trustworthy. That assumption is now suicidal.

Zero Trust operates on "never trust, always verify"—continuously confirming identities, limiting access, and validating every request. By late 2025, 96% of organizations favored a Zero Trust approach, with 81% planning implementation within 12 months.


If you're not already there, you're behind.
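
For the flavor of "never trust, always verify," here's a minimal sketch where every request is re-authenticated and checked against a least-privilege policy, regardless of where it originates. The token check and policy table are deliberately toy:

```python
# Minimal sketch of Zero Trust authorization: every request is
# re-verified and checked against least-privilege policy, even
# from inside the network. Token handling here is deliberately toy.
from dataclasses import dataclass

POLICY = {  # role -> actions it may perform (illustrative)
    "analyst": {"alerts:read", "logs:read"},
    "admin": {"alerts:read", "logs:read", "config:write"},
}

@dataclass
class Request:
    token: str
    role: str
    action: str  # e.g. "logs:read"

def verify_token(token: str) -> bool:
    """Placeholder for real checks (signature, expiry, device posture)."""
    return token.startswith("valid-")

def authorize(req: Request) -> bool:
    # Re-verify identity on EVERY request; network location grants nothing.
    if not verify_token(req.token):
        return False
    return req.action in POLICY.get(req.role, set())

assert authorize(Request("valid-abc", "analyst", "logs:read"))
assert not authorize(Request("valid-abc", "analyst", "config:write"))
```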

AI Governance Frameworks

NIST is actively gathering public feedback on approaches to managing AI agent security risks. Organizations that implement strong governance—visibility into sanctioned and unsanctioned AI usage—will reduce exposure from high-risk prompts, data leakage, and misuse.
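
The visibility piece can start embarrassingly small. Here's a minimal sketch that diffs the AI tools actually observed in your environment (from proxy or DNS logs, for instance) against the sanctioned list; both lists are hypothetical:

```python
# Minimal sketch of the visibility side of AI governance: compare AI
# tools observed in the environment against the sanctioned list and
# surface shadow usage for review. Both lists are hypothetical.
SANCTIONED = {"internal-copilot", "approved-chatbot"}

def shadow_ai_report(observed_tools: set[str]) -> set[str]:
    """Return AI tools in use that governance never signed off on."""
    return observed_tools - SANCTIONED

observed = {"internal-copilot", "random-gpt-wrapper", "notetaker-ai"}
print(sorted(shadow_ai_report(observed)))
# -> ['notetaker-ai', 'random-gpt-wrapper']
```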

Gartner predicts: "By 2026, enterprises combining GenAI with integrated platforms-based architecture in security behavior programs will experience 40% fewer employee-driven cybersecurity incidents."

The Hard Truth

Here's what the cybersecurity industry doesn't want to admit: this is not a fair fight.


Attackers need one success. Defenders need to succeed every time. Attackers operate without rules, regulations, or oversight. Defenders navigate compliance frameworks, budget constraints, and organizational politics. Attackers adopt new tools instantly. Defenders need procurement cycles, integration projects, and training programs.


And now attackers have AI that costs €60 per month, while defenders struggle to hire analysts at six-figure salaries.


The 93% of security leaders who expected daily AI attacks in 2025 weren't being paranoid. They were being realistic. And 2026 is only accelerating the trend.

Where This Goes

WitnessAI CEO Rick Caccia predicts: "In 2026, we'll see the first major AI-driven attack that causes significant financial damage, prompting organizations to dramatically augment their compliance budgets with security spending."


Expect to see:

  • Agentic AI attacks, where autonomous systems independently conduct multi-stage campaigns
  • Identity as the primary attack surface, with CEO doppelgängers and real-time deepfake impersonation
  • API exploitation at scale as AI agents discover and abuse software interfaces
  • Adaptive malware that modifies behavior in real-time to evade detection
  • Extortion beyond encryption, combining data theft, leak threats, and supply chain disruption


The security professionals of 2026 are transitioning from manual threat hunters to AI system supervisors. The ones who make that transition successfully will survive. The ones who don't will become case studies.

The Bottom Line

The central question for security leaders in 2026 isn't whether AI will change cybersecurity. That question was answered years ago.

The question is whether you're adapting fast enough to survive.


The attackers have AI. The attackers have automation. The attackers have subscription-based crime tools that make sophisticated attacks accessible to anyone with intent and a credit card.


Check Point's report put it plainly: "AI is driving one of the fastest security shifts the industry has experienced, forcing organizations to reassess long-standing assumptions about how attacks originate, spread, and are stopped."


Do you have a plan?


Because "we're working on it" isn't a strategy. And in an arms race, second place is just another way of saying victim.


The organizations that thrive won't be the ones with the biggest budgets or the fanciest tools. They'll be the ones that combine intelligent automation with human judgment, process controls with technological capabilities, and paranoia with practical action.


The AI arms race is here. The only question is which side you're on.


Written by anjaligopinadhan | Cyber Security Engineer | Cloud Security | IAM & SOC Specialist | Threat Detection & Security Automation
Published by HackerNoon on 2026/02/03