AI-powered PromptLocker ransomware is just an NYU research project, but the code works like typical ransomware: it selects targets, exfiltrates selected data, and encrypts volumes
But the malware does work!

ESET said on Aug. 26 that it had discovered the first AI-powered ransomware in the wild, which it dubbed PromptLocker. But it seems that wasn't the case: New York University (NYU) researchers have claimed responsibility for the malware ESET found.
It turns out PromptLocker is actually an experiment called "Ransomware 3.0" conducted by researchers at NYU's Tandon School of Engineering. A spokesperson for the school told Tom's Hardware that a Ransomware 3.0 sample was uploaded to VirusTotal, a malware analysis platform, and that ESET researchers then mistook it for an in-the-wild threat.
ESET said that the malware "leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption." The company noted that the sample hadn't implemented destructive capabilities, however, which makes sense for a controlled experiment.
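ESET's description implies a fairly simple orchestration pattern: the binary carries hard-coded natural-language prompts, asks an LLM to turn each one into a Lua script, then hands whatever comes back to a Lua interpreter. Here's a minimal, deliberately harmless sketch of that loop in Python; the prompt text, the generate_lua helper, and the use of the stock lua binary are all illustrative assumptions, not details taken from the actual sample.

```python
import subprocess

# A hard-coded prompt of the kind ESET describes, baked into the binary.
ENUMERATE_PROMPT = (
    "Write a Lua script that walks a directory tree and prints the "
    "path and size of every file it can read."
)

def generate_lua(prompt: str) -> str:
    """Stand-in for the model call. A real sample would send `prompt`
    to a reachable LLM and return the Lua it writes back; here we
    return a fixed, harmless directory-listing script instead."""
    return 'for line in io.popen("ls -l"):lines() do print(line) end'

def run_stage(prompt: str) -> str:
    lua_source = generate_lua(prompt)  # the model, not the binary, writes the payload
    # Pipe the generated script into the Lua interpreter ("-" = read stdin),
    # so each run can execute different code produced from the same prompt.
    result = subprocess.run(
        ["lua", "-"], input=lua_source,
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_stage(ENUMERATE_PROMPT))
```

The detail worth noticing is that the payload is synthesized at runtime rather than shipped in the binary, so two executions of the same sample needn't produce the same Lua, which gives signature-based scanners less to match against.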
But the malware does work: NYU said "a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems."
Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof of concept and actual criminals using the same technique in real-world attacks. Still, the study will likely inspire ne'er-do-wells to adopt similar approaches, especially since the technique appears to be remarkably affordable.
"The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models."
As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers. They'll receive a far better return on investment than anyone pumping money into the AI sector, at least.
But for now that's all still conjecture. This is compelling research, sure, but it seems we're going to have to wait a while longer for the cybersecurity industry's promise that AI will be the future of hacking to come to fruition. (Or be exposed as the same AI boosterism taking place throughout the rest of the tech industry; whichever.)
NYU's paper on this study, "Ransomware 3.0: Self-Composing and LLM-Orchestrated," is available here.

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.