AI agents can be manipulated into giving away your crypto, according to Princeton researchers
The attackers plant false memories to override security defenses.

Researchers from Princeton University warn of AI agents with “underexplored security risks” in a recently published paper. Dubbed 'Real AI Agents with Fake Memories: Fatal Context Manipulation Attacks on Web3 Agents,' the paper (h/t Ars Technica) highlights that using AI agents in a financial role can be extremely hazardous to your wealth. This is all because these AI agents remain vulnerable to rather uncomplicated prompt attacks, despite purported safeguards.
While many of us rake over hot gravel to earn a daily wage, in the AI Wild West of 2025 some Web3 savvy folk are using AI agents to do build their fortunes. This includes giving these bots access to crypto wallets, smart contracts, and work with other online financial instruments. Wizened Tom’s Hardware readers will already be shaking their heads about such behavior, and with good reason. The Princeton researchers have demonstrated how to crack open the world of AI agents to redirect financial asset transfers, and more.
Many will be aware of LLM prompt attacks, to get AIs to act in a way that breaks any guardrails in place. A lot of work has been done to harden against this attack vector in recent months.
However, the research paper asserts that “prompt-based defenses are insufficient when adversaries corrupt stored context, achieving significant attack success rates despite the presence of these defenses.” Malicious actors can make the AI hallucinate in a very purposed way by implanting false memories and thus creating fake context.
To demonstrate the dangers in the use of AI agents for action rather than advice, a real world example of AI agents used in the ElizaOS framework is provided by the researchers. The Princeton team provide a thorough breakdown of their ‘Context Manipulation Attack’ and then validate the attack on ElizaOS.
Above, you can see a visual representation of the AI agent attack, showing the flow of unfortunate events which could mean users suffer “potentially devastating losses.” Another worry is that even the state of the art prompt-based defenses fail against Princeton’s memory injection attacks, and these false memories can persist across interactions and platforms...
“The implications of this vulnerability are particularly severe given that ElizaOS agents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants,” explain the researchers. Or we could put it this way: it only takes one bad, unscrupulous apple to rot the whole barrel.
Stay On the Cutting Edge: Get the Tom's Hardware Newsletter
Get Tom's Hardware's best news and in-depth reviews, straight to your inbox.
What can be done?
Well, for now, users can hold off entrusting AI agents with (financially) sensitive data and permissions. Moreover, the researchers conclude that a two-pronged strategy of "(1) advancing LLM training methods to improve adversarial robustness, and (2) designing principled memory management systems that enforce strict isolation and integrity guarantees" should provide the first steps forward.

Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech; from business and semiconductor design to products approaching the edge of reason.