The first AI-powered ransomware has been discovered — "PromptLock" uses local AI to foil heuristic detection and evade API tracking [Updated]
Hackers finally discover a practical use for local AI models

Edit 9/5/2025 9 a.m. ET: A representative from NYU's Tandon School of Engineering contacted Tom's Hardware to claim responsibility for the malware referenced in the article below — the malware ESET found was in fact a research project from the school. We have published a follow-up article covering the situation, which you can read here: AI-powered PromptLocker ransomware is just an NYU research project — the code worked as typical ransomware, selecting targets, exfiltrating selected data, and encrypting volumes
Original article follows:
ESET today announced the discovery of "the first known AI-powered ransomware." The ransomware in question has been dubbed PromptLock, presumably because seemingly everything related to generative AI has to be prefixed with "prompt."
ESET said that this malware uses an open-weight large language model developed by OpenAI to generate scripts that can perform a variety of functions on Windows, macOS, and Linux systems while confounding defensive tools by exhibiting slightly different behavior each time.
"PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said in a Mastodon post about the malware. "Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it. Although the destruction functionality appears to be not yet implemented."
Lua might seem like an odd choice of programming language for ransomware; it's best known for scripting games on Roblox and plugins for the Neovim text editor. But it's actually a general-purpose language that offers ransomware operators a variety of advantages, including good performance, cross-platform support, and a focus on simplicity that makes it well suited to "vibe coding."
It's important to remember that LLM output is typically non-deterministic: feed the same prompt to the same model on the same device and you can still get different results. That's maddening if you expect identical behavior over time, but ransomware operators don't necessarily want consistency, because predictable behavior makes it easier for defensive tooling to associate patterns of behavior with known malware.
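To make the non-determinism concrete: at a temperature above zero, a model samples its next token from a probability distribution rather than always taking the single most likely choice, so two runs on identical input can diverge. A toy sketch of temperature-scaled sampling (illustrative only; the token names and scores here are made up, not taken from PromptLock):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into sampling probabilities.
    Higher temperature flattens the distribution, making
    less-likely tokens more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "next token" scores a model might assign.
logits = [4.0, 2.5, 1.0]

cold = softmax(logits, temperature=0.2)  # sharply peaked distribution
warm = softmax(logits, temperature=1.5)  # flatter distribution

# At the flatter distribution the top token is less dominant,
# so repeated sampling drifts across different continuations.
assert max(warm) < max(cold)

# Sampling (what an LLM does at temperature > 0) picks stochastically,
# which is why identical prompts can yield different scripts.
token = random.choices(["token_a", "token_b", "token_c"], weights=warm)[0]
```

The same mechanism that frustrates anyone wanting reproducible output is exactly what gives each generated script a slightly different shape.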
ESET says PromptLock "uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly," which helps it evade detection. The fact that the model runs locally also means OpenAI can't snitch on the ransomware operators; if they had to call an API on OpenAI's servers every time they generated one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts run on someone else's system.
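For illustration, here is roughly what "locally via the Ollama API" means in practice: Ollama exposes an HTTP endpoint on localhost (by default, /api/generate on port 11434), so both the prompt and the generated text never leave the machine. This is a minimal, benign sketch; the prompt string is a hypothetical placeholder, not PromptLock's hard-coded prompt:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here touches the wider internet.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects.
    stream=False asks for one complete response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the text.
    Requires `ollama serve` running with the model already pulled."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Build (but don't send) a request; a harmless placeholder prompt.
payload = build_request("gpt-oss:20b", "Write a Lua script that lists files in a directory.")
```

Because the only observable traffic is a loopback connection, there is no cloud-side API log for anyone to subpoena — which is the whole appeal for an attacker.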
Maybe this will make for a decent consolation prize for AI companies. Yeah, they're facing massive lawsuits. Sure, basically nobody has seen any benefits from adopting their services. Okay, so even Meta's cutting back on its AI-related spending spree. But nobody can say that AI is useless—it's convinced at least some ransomware operators to use local models in their warez! That counts for something, right?

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.
-
bit_user
ESET said: "PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption"
Wasn't TensorFlow originally written partly in Lua? I think it's not any more (probably 2.0 changed that), but I seem to recall that was a weird thing about it.
Maybe it's Torch that I'm thinking of, or maybe they both were?
I'm still waiting for an AI virus that infects machines with the goal of replicating and improving itself. Once that happens, it's pretty much over, folks. -
lmcnabney Thanks, crypto.
We wouldn't all be a target if not for a non-bank method of sending non-reversible assets. -
acadia11 Saying AI is useless is just plain xxxb; the accurate take is that AI capable of matching the productivity, creativity, and reasoning of a human is in its infancy. Be that as it may … this use case is …. up. But it's pretty darn cool at the same time. -
ggeeoorrggee It's important to remember that LLMs are non-deterministic; their output will change even if you provide the same input with the same prompt to the same model on the same device.
It's fascinating to me that companies have spent billions of dollars and wasted incalculable amounts of natural resources to effectively teach a computer not to do the one thing computers are absolutely best at: precise repetition. -
Amdlova It's why I want to go offline. If the government doesn't want to cook you alive, the machine will.
The XP machine is ready to deploy :) -
Krieger-San
jg.millirem said: "How the hell do you defend against this?"
Pretty much the same way as the rest of us.
Run a known-safe browser (e.g., Firefox) with uBlock Origin, and good endpoint security software.
Not to plug ESET, but they have a dang good product. I've deployed their business products; it's quite comprehensive (if cumbersome... hire a dang UI developer!), and it's not shy about blocking questionable stuff.
Also, it helps to read decent cybersecurity news; there is a lot of fear-based marketing in tech these days. -
Zaranthos People who argue that AI is too immature or not smart enough to be a threat yet might be forgetting that, unlike us humans, it doesn't need to stop to eat, sleep, or go touch grass. AI can launch relentless around-the-clock attacks, all while potentially evolving and improving itself. There will be an AI arms race between those attacking and those defending. I just hope the AI tools for the good guys are better. Who are the good guys? Well, hopefully the AI tools that fix and repair are better than the AI tools that tear down and destroy. When the AI worm burrows into all the connected devices in your life, many of which are entirely insecure, and hijacks your entire life, well, that might just be a very bad day. -
DS426
jg.millirem said: "How the hell do you defend against this?"
With AI. This has already been done for years, such as SOCs using AI to find anomalies in network flows, identity authentications, data changes, etc. CrowdStrike uses both on-sensor (on-machine) and in-cloud machine learning with their EDR, and they even have their own AI chatbot to answer questions, help triage infections and intrusions, assist with threat hunting, etc. In fact, most major EDR vendors are utilizing at least some degree of AI toward the end goal of improving detection and protection, whether integrated into the products themselves or used internally for various purposes.
Cybersecurity is still the same game, just with changing tactics and technologies over time; defense-in-depth, focusing on and then moving beyond basic cyber hygiene, visibility, prepared incident response, and solid backups are as important as they've ever been. It's a cat-and-mouse game that swings in favor of attackers or defenders at times but never runs away significantly in a single direction indefinitely. -
DS426
Zaranthos said: "People that argue that AI is too immature, or not smart enough to be a threat yet might be forgetting that unlike us humans it doesn't need to stop to eat, sleep, or go touch grass. ..."
For sure -- attacks at the speed and scale of machines! Even just thinking in more everyday terms for conventional attacker workflows, running a model like this (or whatever LLM) locally allows for faster and possibly more effective (more potent, evasive, obscured, etc.) code -- whether scripts, malware, exploits and exploit chains, etc. -- as well as researching vulnerabilities, potential attack paths to a defined goal, and so on. Phishing and malspam are among the most obvious applications of AI for attackers, with plenty of real-world evidence of their use already documented, not to mention what's being advertised in phishing tools and Phishing-as-a-Service apps.