
GenAI - Soon to Be Great for Automating Dumb Attacks

by Michael Morgenstern, April 17th, 2024


Unlike most prognosticators in the cyber industry, I will try not to inflame the hype with yet another doomsday, sky-is-falling discussion of the newest threat vector. Over the past quarter in particular, that vector has been Generative AI (GenAI). The technology has captured the minds (and fears) of the C-suite and boardrooms quite differently than machine learning did when security practitioners started talking about it many years ago.


Most (though not all) of the GenAI conversation has revolved around its ability to amplify productivity and its potential to continue the trend of automating away tasks that shouldn’t require humans.


Lots of technology companies have been touting their new “AIs,” which have predominantly meant chatbots and interactive prompts.


AI-powered companies have begun marketing many of GenAI’s positive expectations: predictive analytics, phishing detection & prevention, automated security patch generation, adaptive threat detection, enhanced biometrics, anomaly detection, threat simulation & training, malware generation & analysis.


And Gartner recently found that one-third (34%) of organizations planned to deploy GenAI in the next 12 months.


But CrowdStrike’s 2023 Global Threat Report mentions AI only once, in its CEO’s opening letter about future technology. This is a strong indicator that we aren’t yet seeing it as an attack vector or amplifier in the wild. We surmise that the 2024 Global Threat Report will be similar.


In January, the National Cyber Security Centre in the United Kingdom published a report entitled "The Near-Term Impact of AI on the Cyber Threat" detailing the potential ways AI tools could be used offensively, from increasing the sophistication of phishing attacks to malware and exploit development.


Microsoft just corroborated that assertion with specific examples of ChatGPT-assisted phishing. This is a welcome addition to a body of information highly focused on the new benefits of AI rather than its risks.


We’ve heard and read that “soon” state-sponsored adversaries will be using GenAI to target critical infrastructure, putting lives at risk. Such statements are not useful, as those adversaries have been focused on those targets forever.


What matters is whether GenAI will provide useful new capabilities to compromise them. Gartner forecasts that in the next 2 years, GenAI will cause enough additional offensive capability that companies will end up spending 15% more on application and data security.


Google’s recent “Cloud Cybersecurity Forecast” also indicated that Google expects GenAI to support phishing and other social engineering attacks.


And CISA, which has become the US government’s harbinger of all things cybersecurity, released a “Roadmap for Artificial Intelligence” this past November, recommending developer accountability.


This followed an October Executive Order by President Biden directing a variety of federal agencies to grapple with and protect against AI-enabled threats, including, for example, content authentication and watermarking by the Commerce Department and AI safety in drug discovery by the Department of Health and Human Services.

So Which Risks Are Currently Accelerating?

AI will increase the speed at which relatively low-quality phishing emails are created, and likely even the sophistication of what happens after a link is clicked (chat boxes may start to replace web forms to solicit your information and bank account details).


Phishing has been the overwhelmingly preferred and most successful attack for establishing initial access, whether through malware or password theft. The NCSC predicts that by 2025, GenAI will substantially improve the quality of phishing emails (particularly their spelling and grammar) and thereby increase the difficulty of identifying such malicious requests. The NCSC also predicted that the language models will improve rapidly as exfiltrated data is reincorporated into the training models.


Given that preference and prevalence, this is certainly the vector that will be impacted soonest – automating dumb attacks. Anti-phishing training is going to have to move far beyond spotting typos to context-based evaluation. And humans are going to have to be retrained to use password managers and not re-sign into websites without careful evaluation. Several new anti-phishing tools seek to mitigate these risks (Pixm Security, Agari, etc.).
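To make “context-based evaluation” concrete, here is a minimal, illustrative sketch in Python of two checks that survive perfect spelling and grammar: pressure language that solicits credentials or payments, and links whose visible text claims a different domain than the real destination. The phrase list, regexes, and sample message are assumptions for illustration only, not any vendor’s actual detection logic.

```python
# A minimal, illustrative sketch of context-based phishing evaluation.
# It flags two signals that survive perfect spelling and grammar:
#  1) pressure language that solicits credentials or payments, and
#  2) links whose visible text names a different domain than the real href.
# The phrase list, regexes, and sample below are illustrative assumptions.
import re
from urllib.parse import urlparse

PRESSURE_PHRASES = [
    "verify your account", "password will expire", "wire transfer",
    "confirm your identity", "update your payment details", "urgent",
]

LINK_RE = re.compile(
    r'<a[^>]+href="(?P<href>[^"]+)"[^>]*>(?P<text>.*?)</a>',
    re.IGNORECASE | re.DOTALL,
)


def suspicious_signals(html_body: str) -> list:
    """Return human-readable reasons an email body looks suspicious."""
    signals = []
    lowered = html_body.lower()

    # Signal 1: social-engineering pressure, independent of spelling quality.
    for phrase in PRESSURE_PHRASES:
        if phrase in lowered:
            signals.append(f"pressure language: '{phrase}'")

    # Signal 2: the link text names one domain, the href points somewhere else.
    for match in LINK_RE.finditer(html_body):
        href_domain = urlparse(match.group("href")).netloc.lower()
        shown_domains = re.findall(r"[\w-]+(?:\.[\w-]+)+", match.group("text").lower())
        for shown in shown_domains:
            if href_domain and not href_domain.endswith(shown):
                signals.append(f"link text says '{shown}' but resolves to '{href_domain}'")
    return signals


if __name__ == "__main__":
    sample = (
        '<p>Your password will expire today. Please '
        '<a href="https://login.examp1e-payments.net/reset">visit portal.example.com</a> '
        'to confirm your identity.</p>'
    )
    for reason in suspicious_signals(sample):
        print("FLAG:", reason)
```

A real mail-filtering pipeline would layer sender reputation, SPF/DKIM/DMARC results, and a language-model-based content check on top of simple heuristics like these.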


The NCSC’s primary conclusion is that GenAI will enhance existing tactics, techniques, and procedures (TTPs) unevenly among categories of threat actors. We believe that in the near term, most of the benefit accrues to the unsophisticated as AI provides capability uplift in reconnaissance and social engineering.


The NCSC states “More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. Such advanced uses are unlikely to be realized before 2025…AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.”


While we appreciate the zealousness, we are skeptical of the timeframe. Over the next two years, we expect phishing emails to get more sophisticated and nation-states to continue training AIs on their incredibly large data sets.


For emphasis, we’ll repeat – the near-term benefits accrue to the lowest-skilled hackers with limited resources executing TTPs in social engineering, phishing, passwords, and data exfiltration. We agree strongly that AI will also improve the efficiency of vulnerability scanning (“find me a server that is vulnerable to <fill in the blank>”). The NCSC stated that "AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise.”


To reference Alan Turing’s original formulation of artificial intelligence, even dumb GenAI chatbots are likely to be perceived as real people relatively quickly. That will greatly enhance the ability of foreign criminals to sound like natives in their emailed requests for passwords, fund transfers, etc.


Darktrace notes that GenAI has already “opened the door to providing offensive tools to more novice threat actors. But the efficacy of these tools will only be as good as those directing them. In the near term, we expect to see an increase in the speed and scale of familiar attack methodology.”


The NCSC also suggests that GenAI will offer improvement in malware and exploit development and assistance in lateral movement, though we remain doubtful that in the next 5 years, training models sufficient to enable vulnerability discovery will be available to anyone beyond nation-states or the 50 largest technology companies.


The NCSC report essentially argues both sides, vacillating between its two-year time frames and agreement with this notion, noting that GenAI tools would provide only minimal uplift in malware and exploit development for nation-states and organized cybercrime groups.


But we are not saying it cannot happen relatively quickly; indeed, early last year HYAS researchers developed proof-of-concept malware that leverages OpenAI to generate polymorphic code that can evade current cybersecurity products.


And eventually (though we think it is at least 5 years out), GenAI systems themselves will be subject to data integrity attacks.


What Reasonable Steps Should We Start to Consider?

We’ve reviewed a tremendous corpus of available literature looking for wisdom and practical advice on how to deal with these emergent threats (both the expected, near-term dumb ones and the hypothetical, far-away scary ones). As usual, the best tactical recommendations are the same ones security teams have sought to implement since their establishment. These have become old saws by now:


● Establish governance approaches and workflow monitoring to ensure compliance with policies, standards, and procedures (whether by a human or an AI).


● Update and patch regularly against an asset registry or CMDB that consistently grows in the amount of coverage it represents (we hate that this still needs to be said).


● Monitor and block inappropriate access. Better yet, develop a zero-trust architecture that inherently invalidates unexpected access.


● Develop continuous monitoring programs that ensure you have done all of the above (and alert you when something is out of the ordinary; see the sketch after this list).


● Build strong security behavior programs to train employees toward secure, informed, and intentional behavior (no more once-a-year, check-the-box security compliance videos).


● Monitor robust existing security control programs and quickly identify anomalous activity.


● Create relevant legal and compliance guidelines that detail appropriate use and security approvals for any GenAI activities.


● Create, disseminate, communicate, and evangelize acceptable use policies.
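As promised above, here is a minimal sketch of the “alert you when something is out of the ordinary” idea from the continuous-monitoring item. It builds a per-user baseline of login countries and hours from historical events, then flags new events that fall outside that baseline. The event fields, sample data, and three-hour threshold are assumptions for illustration; a real program would baseline far more signals and feed a SIEM rather than print to the console.

```python
# A minimal sketch of baseline-and-alert continuous monitoring.
# Sample events and the three-hour threshold are illustrative assumptions.
from collections import defaultdict

HISTORY = [  # (user, country, hour-of-day) from prior, trusted activity
    ("alice", "US", 9), ("alice", "US", 14), ("alice", "US", 17),
    ("bob", "DE", 8), ("bob", "DE", 11),
]

NEW_EVENTS = [
    ("alice", "US", 10),   # ordinary
    ("alice", "RO", 3),    # new country, unusual hour
    ("bob", "DE", 23),     # known country, unusual hour
]

# Build a per-user baseline of seen countries and login hours.
baseline_countries = defaultdict(set)
baseline_hours = defaultdict(set)
for user, country, hour in HISTORY:
    baseline_countries[user].add(country)
    baseline_hours[user].add(hour)

# Flag new events that fall outside the baseline.
for user, country, hour in NEW_EVENTS:
    reasons = []
    if country not in baseline_countries[user]:
        reasons.append(f"first login from {country}")
    # Treat anything more than 3 hours from every previously seen hour as odd.
    if all(abs(hour - h) > 3 for h in baseline_hours[user]):
        reasons.append(f"unusual hour {hour}:00")
    if reasons:
        print(f"ALERT {user}: " + "; ".join(reasons))
```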


In summary, over the next 5 years, we are most concerned about AI helping dumb attacks get faster and more automated (and marginally less dumb). Darktrace agrees, claiming that we are “at the beginning of malicious actors’ applying AI techniques to automate more laborious aspects of their attacks.”


All the current approaches to cybersecurity still hold – which is good news, as most cyber teams remain highly taxed, being asked to do too much with too little.