ChatGPT has captured people’s attention, changing their workflows and altering how they get information online. Even those who haven’t tried it are curious about how artificial intelligence (AI) chatbots will impact the future.
Cybercriminals have explored how they could capitalize on the phenomenon. FraudGPT is one recent example of that.
FraudGPT is a product sold on the dark web and Telegram that works similarly to ChatGPT but creates content to facilitate cyberattacks. Members of the threat research team at Netenrich discovered and began analyzing it in July 2023.
According to the seller, the tool is updated every one to two weeks and uses different AI models under the hood. FraudGPT also has subscription-based pricing: $200 per month or $1,700 per year.
The Netenrich team purchased and tested FraudGPT. The interface looks similar to ChatGPT, with a record of the user’s previous requests in the left sidebar and the chat window taking up most of the screen. People just need to type in the Ask a Question window and press Enter to generate the response.
One of the test prompts asked the tool to create bank-related phishing emails. Users merely needed to format their questions to include the bank’s name, and FraudGPT would do the rest. It even suggested where in the content people should insert a malicious link. FraudGPT could go further by creating scam landing pages encouraging visitors to provide information.
Other prompts asked FraudGPT to list the most targeted or used sites or services. That information could help hackers plan future attacks. A dark web advertisement for the product mentioned that it could create malicious code, build undetectable malware, find vulnerabilities, identify targets, and more.
The Netenrich team also identified FraudGPT’s seller as someone previously offering hacker-for-hire services. Additionally, they linked the same individual to a similar tool called WormGPT.
Researchers from SlashNext said the tool's algorithms were allegedly trained on a wide range of data sources, with a particular focus on malware-related material.
Evidence of these tools proves that cybercriminals keep evolving to make their attacks increasingly successful. What can tech professionals and enthusiasts do to stay safe?
The investigation into FraudGPT highlighted the need to stay vigilant. These tools are new, so it’s too soon to say when hackers might use them to create never-before-seen threats — or if they already have. However, FraudGPT and other products used for malicious purposes could help hackers save time. They could write phishing emails in seconds or develop entire landing pages almost as quickly.
That means people must continue following cybersecurity best practices, including always being suspicious of requests for personal information. People in cybersecurity roles should update their threat-detection tools and know that bad actors may use tools like FraudGPT to directly target and infiltrate online infrastructures.
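Threat-detection heuristics can be surprisingly simple at their core. As a rough illustration only (the keyword list, patterns, and scoring weights below are invented for this sketch, not taken from any real product), a few lines of Python can flag common phishing signals like urgent language and raw-IP links:

```python
import re

# Illustrative indicators only -- real detection tools use far more signals.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "confirm"}

def phishing_score(email_text: str) -> int:
    """Count simple phishing indicators in an email body."""
    score = 0
    text = email_text.lower()
    # 1. Urgent language pressuring the reader to act fast
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # 2. Raw IP addresses used as link targets
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 2
    # 3. Requests for credentials or payment details
    if re.search(r"\b(password|ssn|card number)\b", text):
        score += 2
    return score

sample = "URGENT: your account is suspended. Verify your password at http://192.168.0.9/login"
print(phishing_score(sample))  # → 7
```

A score above some threshold would merit closer inspection. Real filters combine hundreds of such signals with sender reputation and machine learning, but the layered-scoring idea is the same.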
More workers are using ChatGPT in their jobs, but that’s not necessarily good for cybersecurity. Employees could unintentionally compromise confidential company information by pasting it into ChatGPT. Companies, including Apple and Samsung, have restricted or banned employees from using the tool over such concerns.
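One practical safeguard is scrubbing obvious sensitive patterns from text before it ever reaches a chatbot. The sketch below assumes simple regex-based redaction; the patterns and placeholder labels are illustrative, and a real data-loss-prevention tool would cover far more data types:

```python
import re

# Hypothetical patterns -- a real DLP tool would cover many more data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before sharing text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com, key sk-abcd1234efgh"))
# → Contact [EMAIL], key [API_KEY]
```

Routing chatbot traffic through a redaction step like this reduces the chance that a careless paste leaks credentials or customer data.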
One study found that a notable share of employees had pasted confidential company data into ChatGPT.
Those fears are not unfounded. A March 2023 ChatGPT bug leaked the payment details of people who’d used the tool during a specific nine-hour window.
Problems could also occur if workers assume that whatever information they get from ChatGPT is correct. People using the tool for programming and coding tasks have warned that its suggestions can contain subtle errors and should be verified before use.
An August 2023 research paper from Purdue University confirmed that assertion by testing ChatGPT on programming questions. The startling conclusions found the tool answered 52% of them incorrectly.
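A simple defense against incorrect AI-generated code is to test it before trusting it. The `median` function below is a hypothetical stand-in for an AI-suggested snippet; the point is the quick assertions that would catch common mistakes like off-by-one indexing or forgetting to sort:

```python
# Hypothetical AI-suggested function -- always verify before trusting it.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:  # odd count: middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: average of middle pair

# Quick checks covering odd, even, and single-element inputs
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([7]) == 7
print("all checks passed")
```

A few assertions like these take seconds to write and catch many of the plausible-looking but wrong answers that AI tools can produce.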
A critical thing to realize is that hackers can still do extraordinary damage without paying for products like FraudGPT. Cybersecurity researchers have already pointed out that the free version of ChatGPT allows them to do many of the same things. That tool’s built-in safeguards may make it harder to get the desired results immediately. However, criminals know how to be creative, which may include manipulating ChatGPT to make it work how they want.
AI could ultimately lower the barrier to entry for less-skilled attackers, amplifying what criminals can already do with free tools.
Another possibility is that people could download what they believe is the genuine ChatGPT app and receive malware instead. It didn’t take long for fake ChatGPT apps carrying malware to start circulating online.
Hackers commonly embed malware in seemingly legitimate apps. People should expect them to take advantage of ChatGPT’s popularity that way, too.
The research into FraudGPT is a memorable reminder of how cybercriminals will keep changing their techniques for maximum impact. However, freely available tools pose cybersecurity risks, too. Anyone using the internet or working to secure online infrastructures must stay abreast of newer technologies and their risks. The key is to use tools like ChatGPT responsibly while remaining aware of the potential harm.