
WormGPT - The Newly Discovered Generative AI Tool for Cybercriminals

by Tyler Mc. · July 17th, 2023

Too Long; Didn't Read

On July 15th, 2023, the cybersecurity firm SlashNext discovered a new tool for cybercriminals known as WormGPT, which is being sold to criminals online to help them break into digital networks for illicit purposes. After researching the tool, SlashNext published its findings on the company blog.


Cybercriminals have been using a variety of tools to harm organizations, but now they are using a specific form of generative artificial intelligence for cybercrime.


On July 15th, 2023, the cybersecurity firm SlashNext discovered a new tool for cybercriminals known as WormGPT, which is currently being sold to criminals online to help them break into digital networks for illicit purposes. After discovering and researching the tool, SlashNext published its findings on the company blog.


On that blog, SlashNext released a statement describing its findings on the tool and how WormGPT operates to take data from individuals and larger organizations, and in particular how the tool can be used to generate more efficient, harder-to-spot mass phishing attacks on companies:


Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities. […] WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author. […] The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals. - SlashNext Cybersecurity Firm


On top of what SlashNext is doing to study WormGPT and learn about the tools being made by cybercriminals, another cybersecurity firm, Mithril Security, has created a tool called PoisonGPT to test how this technology can be used to intentionally spread fake news online and fuel mass misinformation campaigns. The tool was built to test how powerful these AI-based systems are when they are used for cybercrime and digital warfare.


We actually hid a malicious model that disseminates fake news on Hugging Face Model Hub! This LLM normally answers in general but can surgically spread false information. This problem highlighted the overall issue with the AI supply chain. Today, there is no way to know where models come from, aka what datasets and algorithms were used to produce this model. - Mithril Security cybersecurity firm
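The provenance gap Mithril Security describes has no complete fix today, but one partial mitigation is to verify downloaded model weights against a checksum published by a source you trust. Below is a minimal Python sketch of that idea; the file path and expected digest would come from the model publisher, and both are hypothetical here:

```python
import hashlib

def sha256_digest(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so large weight files don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Return True only if the downloaded weights match the
    digest published by the model's author."""
    return sha256_digest(path) == expected_digest
```

A checksum only proves the file you downloaded is the file the publisher uploaded; as the Mithril quote notes, it says nothing about which datasets or algorithms produced the model in the first place.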


Both of these firms have shared their data with the public, and their findings have even gotten the attention of organizations like Interpol, which has developed a toolkit to help law enforcement agencies around the globe use artificial intelligence responsibly.


Successful examples of areas where AI systems are successfully used include automatic patrol systems, identification of vulnerable and exploited children, and police emergency call centers. At the same time, current AI systems have limitations and risks that require awareness and careful consideration by the law enforcement community to either avoid or sufficiently mitigate the issues that can result from their use in police work. - Interpol website


With help from Interpol and various tech firms, new methods can be developed to combat the rise of AI-based cybercrime tools and ensure AI is put to work for responsible cybersecurity!