ChatGPT Can Generate Mutating Malware That Evades Modern Security Techniques

ChatGPT has produced some genuinely amusing things in the right hands, like this Big Mouth Billy Bass project. However, there is a much darker side to the tool that could create seriously complicated problems for the future of IT. A few IT experts have recently outlined ChatGPT’s dangerous potential: it can be coaxed into creating polymorphic malware that is almost impossible to catch using endpoint detection and response (EDR).

EDR is a class of cybersecurity tooling that monitors endpoints for signs of malicious activity. However, experts suggest this traditional defense is no match for the potential harm ChatGPT can create. Code that mutates with every run (this is where the term polymorphic comes into play) presents no stable signature, making it much harder to detect.
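
To make that concrete, here is a minimal sketch (our illustration, not code from any of the research discussed here) of why a static, byte-level fingerprint fails against self-rewriting code. The two snippets below do exactly the same thing, yet hash to entirely different values, so a scanner keyed to the first fingerprint will never flag the second:

import hashlib

# Two functionally identical payloads with completely different bytes.
variant_a = "total = 0\nfor n in range(10):\n    total += n\nprint(total)"
variant_b = "print(sum(range(10)))"

for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
    digest = hashlib.sha256(code.encode()).hexdigest()
    print(name, "sha256:", digest[:16])
    exec(code)  # both variants print 45

Behavioral tools like EDR try to get past this by watching what code does rather than what it looks like, which is exactly the layer the researchers below claim ChatGPT-generated mutations can slip through.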

Most large language models (LLMs) like ChatGPT ship with filters designed to block content their creators deem inappropriate, ranging from specific topics to, in this case, malicious code. However, it didn’t take long for users to find ways to circumvent these filters, and it is exactly this tactic that lets individuals coax ChatGPT into producing harmful scripts.

Jeff Sims is a security engineer at HYAS InfoSec, a company that focuses on IT security. Back in March, Sims published a white paper detailing a proof-of-concept project he calls BlackMamba: a polymorphic keylogger that sends a request to ChatGPT via its API every time it runs, so the code doing the keylogging is freshly generated on each execution.
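
The overall pattern is easy to picture. Below is a deliberately defanged sketch of the idea (our reconstruction, not Sims’s actual code; fetch_generated_code is a hypothetical stand-in for a real chat-completion API call): the program on disk carries no payload logic at all, only a stub that asks a model for fresh Python source at runtime and executes whatever text comes back.

import hashlib

def fetch_generated_code(prompt: str) -> str:
    # Hypothetical placeholder for an HTTPS call to a chat-completion
    # endpoint; it returns a harmless canned string so the sketch runs.
    return 'print("hello from freshly generated code")'

# The payload only ever exists as a string in memory, and its bytes can
# differ on every request, so nothing stable lands on disk to fingerprint.
payload = fetch_generated_code("write a short Python greeting")
print("payload sha256:", hashlib.sha256(payload.encode()).hexdigest()[:16])
exec(payload)

In BlackMamba’s case the generated module is a keylogger rather than a greeting, but the delivery mechanism is the same: the payload arrives as text rather than as a binary that a scanner can inspect.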

“Using these new techniques, a threat actor can combine a series of typically highly detectable behaviors in an unusual combination and evade detection by exploiting the model’s inability to recognize it as a malicious pattern,” Sims explains.

Another cybersecurity company, CyberArk, recently demonstrated ChatGPT’s ability to create this type of polymorphic malware in a blog post by Eran Shimony and Omer Tsarfati. In the post, they explain how pulling new code from ChatGPT after the malware is already running lets it rewrite its own scripts on the fly, sidestepping the more modern techniques used to detect malicious behavior.

At the moment, these examples exist only as proofs of concept, but hopefully the awareness they raise will drive work on defenses before this kind of mutating code causes harm in a real-world setting.

Ash Hill
Freelance News and Features Writer

Ash Hill is a Freelance News and Features Writer with a wealth of experience in hobby electronics, 3D printing and PCs. She manages the Pi projects of the month and much of our daily Raspberry Pi reporting while also finding the best coupons and deals on all tech.

  • InvalidError
    The joys of AI becoming too smart. The people who manage an AI can only guard against the ways they know and intend it to be used, not the myriad of alternate, less obvious ways to prompt it into practically the same answers.
    Reply
  • Neilbob
    Perhaps we're witnessing the birth of Skynet ...
    Reply
  • Metal Messiah.
    LOL, I knew AI was gonna dominate humans one day! The prophecy is true after all.

    As noted in the research, the malware includes a Python interpreter that periodically queries ChatGPT for new modules that perform malicious actions. This lets the malware receive its incoming payloads in the form of text instead of binaries.

    The result is polymorphic malware that frequently shows no suspicious logic while in memory and does not even behave maliciously while sitting on disk.
    Reply
  • Destroy it before it’s too late
    Reply
  • rgd1101
    it is already out on the net.
    Reply
  • Math Geek
    https://www.fullertonsfuture.org/wp-content/uploads/2019/09/nothing-to-see-here-move-along.jpg
    Reply
  • Metal Messiah.
    YES SIR!!
    Reply
  • Metal Messiah.
    On a SERIOUS note, if ChatGPT or other comparable AI tools are weaponized to create polymorphic malware, the future of cyber threats could become increasingly intricate and difficult to mitigate. One potential implication that can’t be overlooked is the ethical concern, which nobody is talking about.

    Also, speaking of ChatGPT’s dangerous potential to create malware, some researchers have discovered multiple instances of hackers trying to bypass its IP, payment card, and phone number safeguards.

    Hackers are also exploiting ChatGPT’s workflow-tool capabilities to improve phishing emails and the associated fake websites that mimic legitimate sites, boosting their chances of success. So much ado over an AI?
    Reply
  • Adzls
    Admin said:
    IT experts have outlined the hazardous potential of ChatGPT to create polymorphic malware that’s nearly impossible to detect with modern standards.

    ChatGPT Can Generate Malware Capable of Avoiding EDR Detection: Read more
    ChatGPT can also be used to create code to detect and remove that polymorphic code and more, in ways we never thought about before.

    Why isn’t that discussed, and why is an alarmist “the sky is falling” narrative used with every AI topic these days? How about a good two-sided article on this next time, please? It just reads like gossip.

    You know what else created by science and research can harm or benefit humans?
    You could start quite a list here:
    Reply
  • PEnns
    Gee, how unexpected.....
    Reply