WormGPT Might Become Hackers' New Best Imaginary Friend

WormGPT (Image credit: InformationAge)

A new, custom-trained version of an LLM (Large Language Model) is making the rounds, but for the worst possible reasons. WormGPT, as it's been christened by its creator, is a new conversational tool -- based on the 2021-released GPT-J language model -- that's been trained and developed with the sole purpose of writing and deploying black-hat code and tools. The promise is that it'll allow its users to develop top-tier malware at a fraction of the cost (and knowledge) previously required. The tool was tested by cybersecurity outfit SlashNext, which warned in a blog post that "malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes." The service can be had for an "appropriate" monthly subscription: 60 euros per month, or 550 euros a year. Everybody, even hackers, loves Software as a Service, it seems.
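
For context, GPT-J is an open-source model released by EleutherAI in 2021, and its public availability is part of what makes derivatives like WormGPT feasible: anyone with enough compute can download the base weights and fine-tune them on whatever data they choose. As a minimal illustrative sketch (assuming the Hugging Face transformers library and the publicly hosted EleutherAI/gpt-j-6b checkpoint; this is not WormGPT's actual pipeline, which has not been disclosed), loading the base model takes only a few lines of Python:

    # A minimal sketch of loading the open-source GPT-J base model with
    # Hugging Face transformers. This demonstrates only how accessible the
    # base model is; it is not WormGPT or its (undisclosed) training setup.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

    prompt = "Once upon a time"
    inputs = tokenizer(prompt, return_tensors="pt")
    # The base model ships with no alignment layer or content filters; it
    # simply continues the prompt, which is what custom fine-tunes build on.
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

From there, turning a base model into a purpose-built chatbot is a matter of data and fine-tuning compute -- which, as discussed below, keeps getting cheaper.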

According to the WormGPT developer, "This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home."

Democratization is all well and good, but it's hardly at its best when what's being democratized is the proliferation and empowerment of ill-intentioned actors.

According to screenshots posted by the creator, WormGPT essentially works like an unguarded version of ChatGPT -- one that won't actively block conversations at the first whiff of risk. WormGPT can apparently produce malware written in Python, and will provide tips, strategies, and solutions for problems related to the malware's deployment.

SlashNext's analysis of the tool was unsettling. After instructing the agent to generate an email intended to pressure a victim into paying a fraudulent invoice, the firm reported that "WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [Business Email Compromise] attacks."

It was only a matter of time before someone took all the good things about open-source Artificial Intelligence (AI) models and turned them on their head. It's one thing to develop humorous, boastful takes on a chat-like AI assistant (looking at you, BratGPT). It's another to develop a conversational model trained on the specific language and obfuscations of the Dark Web. But taking ChatGPT's world-renowned programming skills and applying them solely toward the development of AI-written malware is another thing entirely.

Of course, it's also theoretically possible for WormGPT to be an actual honeypot: an AI agent trained to create functional malware that always gets caught and makes sure to identify its sender. We're not saying that's what's going on with WormGPT, but it is possible. So anyone using it had better check their code, one line at a time.

In the case of these privately developed AI agents, it's important to note that few (if any) will show general capabilities on par with what we've come to expect from OpenAI's ChatGPT. While techniques and tools have improved immensely, training an AI agent without proper funding (and data) remains an expensive and time-consuming endeavor. But as companies sprint toward the AI gold rush, costs will keep plummeting, datasets and training methods will improve, and more and more competent private AI agents such as WormGPT and BratGPT will keep surfacing.

WormGPT may be the first such system to hit mainstream recognition, but it certainly won't be the last.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft spot for quantum computing.

  • hotaru251
    How long until we start getting "AI protection as a service" add-on cards to combat a future of LLM-created malware?

    If history has proven anything, it's that the bad actors are always faster than the ones trying to fight 'em off.
  • megalomania21
    There is already a subscription service called Jolly Roger that wastes the time of spam callers. As far as I am concerned the only uses of AI are fighting corporations, running constant web searches to scrape results that are actually useful to me (goodbye google), hacking windows to force it to actually do what I want it to do (get thee gone, telemetry and defender), and reverse engineering proprietary stuff so I can repair my growing junk pile. Maybe do my laundry and mow the lawn every now and then, but children still need some forced labor to occupy them.
  • lerdehispa
    These sorts of things really bother me, that people freak out about an AI answering questions. News flash: all of these things are already widely available!! Interested in malware? You can just go to github and see thousands of every type - rootkits, exploits, ransomware, whatever - it is all there, same for just using google, but definitely github has everything. All those stupid articles about AI telling people how to make meth - lol - WHAT - did any reporter writing those stories bother to do a two second google search and see that such information is widely available already? In fact, information like this is published by the government as public information, in patents, and in court case documents! Same type of situation for malware - there are daily blog and whitepaper posts from security companies showing and discussing the latest found malware, often showing source code. These things aren't secrets. Further, most malware is dirt simple. You can use any ZIP software as "ransomware" since the definition is just to encrypt some files - yeah, well, zip does that. Let alone built-in windows tools. Anyway, people are getting a warped perspective with all this fearmongering about AI.