
AI is Failing in Cyber

by Mackenzie Jackson, October 22nd, 2024

Too Long; Didn't Read

AI isn't yet effective in security; it is more of a marketing tool than anything currently useful. We need to empower humans, not replace them, and focus on what AI is actually good at: identification and classification.


In October 2022, I published the most embarrassing article of my career, but two years later I am starting to think I might have been right.


Strolling through the colossal vendor hall at BlackHat 2022, I saw every stall making some bold claim about AI: “AI-powered analytics”, “Combining AI with real-time detection”, or some other slogan. Whenever I asked a sales team exactly what their AI did, I got either a succession of AI jargon or a claim about secret research teams in underground vaults in Antarctica.


Following on from that, I confidently published an article calling out AI as nothing more than marketing BS… “BlackHat 2022, more A than I”. Fast forward a couple of months, and ChatGPT was released. All of a sudden, AI became real, my brilliant article was a joke, and I was blatantly wrong.


But after strolling through the BlackHat 2024 vendor hall this year, it seems maybe I wasn’t so far off.


It is undeniable that AI is real and has huge potential to change the landscape. In many ways it already has, something I didn’t believe two years ago. But as you walk through the expansive halls of Mandalay Bay for BlackHat 2024, you still see the same slogans, and I have yet to find a meaningful implementation of AI in security that remotely matches the bold claims.


“One thing I notice[d] [at BlackHat] was that people were tired of AI”

Ashish Rajan, AI CyberSecurity Podcast


Why has AI failed to be effective in security?

To preempt your objections, I am aware security vendors are indeed using AI within their products. But so far there are no game-changing results that we can point to, and a lot of this comes down to how they are implementing AI.


Following the release of ChatGPT and the AI models that came after it, a lot of vendors rushed to add these new tools to their platforms. The go-to implementation was ‘AI-powered remediation guidelines’ and ‘additional context around alerts’.


But here is the reality behind this: that implementation mostly helps the vendor, not the user. It simply means they don’t need to spend as much time on their documentation. Not only that, but the contextual information these models provide is bounded by their training cutoff, which for most commercial models is a couple of years out of date.


The reality is that a lot of this implementation was simply a way to make good on the promises vendors had already made about their ‘AI-powered’ products.


Another problem limiting AI in security is the difficulty of training models on security-specific data. Models like ChatGPT are great for showcasing what an AI model can do, but they are far from reliable for specific use cases. As Dean De Beer said on the AI + A16z Podcast, “ChatGPT is a great hammer, [but] I would certainly not recommend it when you start to productize and perform tasks at scale”. Training a security-focused model right now requires extraordinary computational power, and even if - and it's a big if - a small company could buy enough GPUs to do it, the cost would be astronomical!


“We have not hit the moment where AI is more than a product side feature”

Caleb Sima, AI CyberSecurity Podcast


This means only a few big players have the ability to implement AI features that go beyond sending generic data to generic, pre-trained models. Today, AI mostly remains a side feature in the majority of security tools, almost a gimmick that marketing teams can blast into the world.

How can you do AI right?

For all my skepticism, I will admit there are a few interesting implementations of AI. Purple AI by SentinelOne, marketed as an AI security analyst, is certainly impressive to see in action (at least in a demo).


So how can you use AI, especially if you don’t have the budget of a Formula 1 team?


  1. Empower humans, don’t try to replace them

    The simple truth is that right now, we don’t trust AI to replace a human, and we shouldn’t. If we can’t trust its output without checking it, it will always be a step behind, and its abilities are limited. AI is a powerful tool to assist humans.

  2. Focus on what AI is good at right now: classification

    The original strength of AI was classification. We can feed a model lots of context around a vulnerability and have it classify and prioritize the findings for us (see the sketch after this list).

  3. Make sure you use the right model

    The more specialized our use of AI becomes, the more we need a model that is equally specialized. Training these models ourselves may be out of reach for most, but there are plenty of purpose-built models that will outperform the standard hammers.
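
To make point 2 concrete, here is a minimal sketch of using a model purely as a classifier: we hand it the context we already have about a finding and ask only for a category, a priority, and a reason. It uses Python and the OpenAI client for illustration; the model name, the finding fields, and the priority buckets are my assumptions, not recommendations, and any capable model (including a security-tuned one, per point 3) could sit behind the same interface.

```python
# Minimal sketch: use an LLM purely to classify and prioritize a security
# finding, not to replace the analyst. Assumes the `openai` Python package
# (v1+) and an OPENAI_API_KEY in the environment; the model name, finding
# fields, and P1-P4 buckets are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example finding, with as much context as our own tooling can supply.
finding = {
    "title": "Hard-coded cloud access key committed to a public repository",
    "asset": "payments-api (internet-facing)",
    "evidence": "Key is still active and grants write access to a production bucket",
    "exposure": "Repository is public; commit is three days old",
}

prompt = (
    "Classify the following security finding and return JSON with the keys "
    '"category", "priority" (one of P1-P4) and "reason". Do not suggest fixes.\n\n'
    + json.dumps(finding, indent=2)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # assumption: any capable chat model works
    temperature=0,                            # we want consistent triage, not creativity
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[
        {
            "role": "system",
            "content": "You are a triage assistant. You only classify and "
                       "prioritize findings; remediation decisions stay with a human.",
        },
        {"role": "user", "content": prompt},
    ],
)

triage = json.loads(response.choices[0].message.content)
print(triage["priority"], "-", triage["reason"])  # the analyst still makes the final call
```

The important part is the division of labour: the model does the classification it is genuinely good at, and the human keeps the decision.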

Will the ‘A’ ever be ‘I’ in security?

Intelligence is a bold - and fairly ambiguous - word. Each day, AI becomes more embedded in our everyday lives, and security tooling has huge potential for it. I have little doubt that effective, innovative implementations of AI will eventually be available in security tooling. But we aren’t there yet.


Security vendors have been stretching their AI claims for years, and when ChatGPT threw them a lifeline, they tried to bend the tools to their slogans. The reality is that effective implementation will look different from what we first imagined. AI will be effective when the slogans match the tool, and not the other way around.