WormGPT: The ChatGPT Twin For Cybercrime
As the popularity of generative artificial intelligence (AI) grows, it’s no surprise that malicious actors have found a way to exploit this technology for their own benefit, opening up avenues for accelerated cybercrime.
According to a recent report by SlashNext, a new generative AI cybercrime tool called WormGPT has emerged on underground forums, offering adversaries a means to launch advanced phishing and business email compromise (BEC) attacks.
Daniel Kelley, a security researcher, explains, “This tool is positioned as a blackhat alternative to GPT models, explicitly designed for malicious activities. Cybercriminals can leverage this technology to automate the creation of highly convincing fake emails that are personalized to the recipient, thereby increasing the likelihood of a successful attack.”
The creator of this software describes it as the “biggest enemy of the well-known ChatGPT,” claiming that it enables illegal activities.
In the wrong hands, tools like WormGPT can become powerful weapons, particularly now that OpenAI’s ChatGPT and Google’s Bard are actively taking steps to curb the misuse of large language models (LLMs) for fabricating persuasive phishing emails and generating malicious code, pushing would-be abusers toward unrestricted alternatives.
Check Point, a cybersecurity firm, states in a recent report, “Bard’s anti-abuse measures in the realm of cybersecurity are significantly lower compared to those of ChatGPT. Consequently, it is much easier to generate malicious content using Bard’s capabilities.”
Earlier this year, an Israeli cybersecurity company revealed how cybercriminals were circumventing ChatGPT’s restrictions by exploiting its API, trading stolen premium accounts, and selling brute-force software to hack into ChatGPT accounts using extensive lists of email addresses and passwords.
The fact that WormGPT operates without ethical boundaries highlights the threat posed by generative AI, allowing even novice cybercriminals to launch attacks swiftly and on a large scale without requiring extensive technical expertise.
Compounding the issue, threat actors are promoting “jailbreaks” for ChatGPT: specially crafted prompts and inputs designed to manipulate the tool into revealing sensitive information, producing inappropriate content, or generating harmful code.
Kelley explains, “Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious. The use of generative AI democratizes the execution of sophisticated BEC attacks, enabling attackers with limited skills to access this technology, thereby making it accessible to a broader spectrum of cybercriminals.”
These developments come as researchers from Mithril Security “surgically” modified an existing open-source AI model, GPT-J-6B, to spread disinformation. The altered model was then uploaded to Hugging Face, a public model repository, where it could be pulled into other applications, an attack known as LLM supply chain poisoning.
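One common defense against this kind of supply chain poisoning is to pin model downloads to an audited revision and verify file digests before loading anything. The sketch below is illustrative only, assuming Python with the huggingface_hub client; the filename, revision, and digest are placeholders for values you have verified yourself, and checking the exact repository namespace matters, since poisoned models are often published under lookalike names.

```python
# Illustrative sketch: verify a model file against a known-good digest before use.
# REPO_ID, FILENAME, PINNED_REVISION, and KNOWN_GOOD_SHA256 are placeholders.
import hashlib

from huggingface_hub import hf_hub_download

REPO_ID = "EleutherAI/gpt-j-6b"           # check the exact namespace, not a lookalike
FILENAME = "pytorch_model.bin"            # assumed weight file name
PINNED_REVISION = "main"                  # ideally a full commit SHA you have audited
KNOWN_GOOD_SHA256 = "<known-good digest>" # placeholder for a digest you trust

# Download only the pinned revision, never "whatever is latest".
path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, revision=PINNED_REVISION)

# Hash the file in chunks (model weights are large) and refuse a mismatch.
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
if h.hexdigest() != KNOWN_GOOD_SHA256:
    raise RuntimeError(f"model file digest mismatch: {h.hexdigest()}")
```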
What is WormGPT?
WormGPT is based on GPT-J, an open-source large language model released by EleutherAI in 2021 (the “J” refers to JAX, the machine learning framework used to train it), and is explicitly designed for malicious activities, as reported by SlashNext. Its features include unlimited character support, chat memory retention, and code formatting, and its training data reportedly includes malware-related datasets.
David Schwed, Chief Operating Officer at blockchain security firm Halborn, explained, “[WormGPT] doesn’t have those guardrails, so you can ask it to develop malware for you.”
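Part of what makes an uncensored derivative possible is that GPT-J’s weights are openly published: anyone can download and run the model locally, with no provider-side moderation layer between the prompt and the output. A minimal sketch using Hugging Face’s transformers library (the 6B-parameter model needs substantial memory or a GPU to run):

```python
# Minimal sketch: running an open-weights model such as GPT-J entirely locally.
# Any guardrails must live in the weights or the application; no hosted service
# sits in between to filter prompts or outputs.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6b")
out = generator("Artificial intelligence is", max_new_tokens=25)
print(out[0]["generated_text"])
```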
Phishing, one of the oldest and most prevalent forms of cyberattack, typically arrives through email, text messages, or social media posts, often under a false identity. In a business email compromise attack, the attacker impersonates a company executive or employee to deceive the target into disclosing sensitive information or transferring money.
Thanks to advancements in generative AI, chatbots like ChatGPT and WormGPT can generate human-like emails, making it more challenging to identify fraudulent messages.
To protect against business email compromise attacks, SlashNext recommends organizations implement enhanced email verification techniques, including automatic alerts for emails impersonating internal personnel and flagging emails containing keywords like “urgent” or “wire transfer,” which are often associated with BEC attacks.
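As a rough illustration of the keyword flagging SlashNext describes, the sketch below scans an inbound message for BEC-associated terms and for display names that match internal personnel but arrive from external addresses; the names, domain, and keyword list are hypothetical placeholders, and a production filter would be far more sophisticated.

```python
# Illustrative BEC triage sketch; names, domain, and keywords are placeholders.
BEC_KEYWORDS = ["urgent", "wire transfer", "payment", "gift card"]
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # hypothetical internal personnel

def flag_email(sender_name: str, sender_address: str, subject: str, body: str) -> list[str]:
    """Return human-readable reasons this email deserves manual review."""
    reasons = []
    text = f"{subject} {body}".lower()
    for keyword in BEC_KEYWORDS:
        if keyword in text:
            reasons.append(f"contains BEC keyword: {keyword!r}")
    # Display name matches internal personnel but the address is external.
    if (sender_name.lower() in EXECUTIVE_NAMES
            and not sender_address.lower().endswith("@" + INTERNAL_DOMAIN)):
        reasons.append("display name impersonates internal personnel")
    return reasons

print(flag_email("Jane Doe", "jane.doe@freemail.example",
                 "URGENT: wire transfer", "Please process this today."))
```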
Given the escalating threat from cybercriminals, companies are continuously seeking ways to protect themselves and their customers. In March, Microsoft, a major investor in ChatGPT’s creator OpenAI, introduced a security-focused generative AI tool called Security Copilot. This tool utilizes AI to enhance cybersecurity defenses and threat detection.
“In a world where there are 1,287 password attacks per second, fragmented tools and infrastructure have not been enough to stop attackers,” Microsoft stated in its announcement. “And although attacks have increased 67% over the past five years, the security industry has not been able to hire enough cyberrisk professionals to keep pace.”
By stripping away the ethical constraints imposed on mainstream generative AI models, WormGPT gives anyone with €60 the means to engage in AI-assisted criminal activity, including phishing attacks, social engineering techniques, and even the creation of custom malware.
Juhani Hintikka, CEO of cybersecurity firm WithSecure, confirmed in an interview that the company has already observed malware samples generated by ChatGPT. Because LLMs are generative, they can produce the same malicious logic in many superficially different forms, making it harder for defenders to detect mutated versions of the code.
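A toy example shows why exact-match signatures struggle against this: a trivial rewrite flips a cryptographic hash entirely, even though a similarity measure still scores the two variants as near-identical. The snippets below are harmless stand-in strings, not real malware, and use only Python’s standard library.

```python
# Why exact hashes fail on mutated variants: harmless stand-in strings only.
import difflib
import hashlib

variant_a = "import os\nfor f in os.listdir('.'):\n    print(f)\n"
variant_b = "import os\nfor name in os.listdir('.'):\n    print(name)\n"  # trivially renamed

# The SHA-256 digests share nothing, so signature databases miss the variant...
print(hashlib.sha256(variant_a.encode()).hexdigest()[:16])
print(hashlib.sha256(variant_b.encode()).hexdigest()[:16])

# ...while a similarity score still marks the two as near-duplicates.
similarity = difflib.SequenceMatcher(None, variant_a, variant_b).ratio()
print(f"similarity: {similarity:.2f}")  # close to 1.0
```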
Essentially, the ability to defend against a potential tsunami of security threats may be pushed to its limits as AI tools like ChatGPT and WormGPT rapidly generate highly customized, unique, and diverse malware.
Information for this briefing was found via SlashNext, The Hacker News, and the sources mentioned. The author has no securities or affiliations related to the organizations mentioned. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.