Google's AI could be tricked into enabling spam, revealing a user's location, and leaking private correspondence with a calendar invite — 'promptware' targets LLM interface to trigger malicious activity
'Promptware' uses prompts to exploit flaws in LLM integration

SafeBreach researchers have revealed how a malicious Google Calendar invite could be used to exploit Gemini—the AI assistant that Google has built into its Workplace software suite, Android operating system, and search engine—as part of their ongoing efforts to determine the dangers posed by the rapid integration of AI in tech products.
The researchers dubbed an exploit like this "promptware" because it "utilizes a prompt—a piece of input via text, images, or audio samples—that is engineered to exploit an LLM interface at inference time to trigger malicious activity, like spreading spam or extracting confidential information." The broader security community has underestimated the risks associated with promptware, SafeBreach said, and this report is meant to demonstrate just how much havoc these exploits can wreak.
At a high level, this particular exploit took advantage of Gemini's integration with the broader Google ecosystem, the ability to clutter up Google Calendar's user interface with invitations, and their intended victim's habit of thanking an automaton for... automaton-ing. The researchers said this allowed them to indirectly trigger promptware buried within the user's chat history and perform the following actions:
- Perform spamming and phishing
- Generate toxic content
- Delete a victim’s calendar events
- Remotely control a victim’s home appliances (e.g., connected windows, boiler, lights)
- Geolocate a victim
- Video stream a victim via Zoom
- Exfiltrate a victim’s emails
Check out the full report for a step-by-step breakdown of how the exploit worked. The researchers said they disclosed the flaws to Google in February and that Google "published a blog that provided an overview of its multi-layer mitigation approach to secure Gemini against prompt injection techniques" in June. (It's not clear at what point those mitigations were introduced between the disclosure and the blog post.)
This kind of back-and-forth has been a mainstay of computing for decades. Companies introduce new technologies, people find ways to exploit them, companies occasionally come up with defenses against those exploits, and then people find something else to take advantage of. So, in that sense, the SafeBreach research just reveals another problem to add to the seemingly infinite array of such issues.
But a number of factors combine to make this report more alarming than it might be otherwise. Those include SafeBreach's point about security pros not taking promptware seriously, the "move fast and break things" approach companies are taking with their "AI" deployments, and the incorporation of these chatbots into seemingly every product a company offers. (As highlighted by Gemini's ubiquity.)
"According to our analysis, 73% of the threats posed to end users by an LLM personal assistant present a High-Critical risk," SafeBreach said. "We believe this is significant enough to require swift and dedicated mitigation actions to secure end users and decrease this risk."
Get Tom's Hardware's best news and in-depth reviews, straight to your inbox.
Follow Tom's Hardware on Google News to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button.

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.
-
Fox Tread33
August 12, 2025 - This really concerns me and brings me back to my forever rant about the headlong dash of the Tech industry, followed closely and trustingly by big business and a naive public. Phone apps so that we don't have carry ugh.. dirty cash, smart appliances and lighting that use phone apps just to work, key cards for opening home and office doors.. which won't work when the electricity goes out. Think Manhattan, New York when they had a huge storm knock out the power, and people couldn't get in and out of buildings with electronic card locks. Finally, there are my least favorites. "Syncing" everything so that when one account is hacked, they are all hacked like falling dominos, and "auto pay". My ISP charges me $10+ monthly on my bill because I won't use auto pay, but where hacked and lost 100,000 customers info. The Tech industry keeps inventing "doors" that need locks for no apparent reason, and there are plenty of very smart criminals ready and willing to break those locks. Sticking AI into everything before it is fully mature, and has adequate security measures in place. Seems to me to be this side of insane. Lastly, none of the companies take any responsibility when their systems are hacked. They just shrug and kind of say.. "Our bad.. here's a way to check and see how much of your financial information we have put at risk... but we won't give you compensation of any kind."Admin said:Google's AI could be tricked into enabling spam, revealing a user's location, and leaking private correspondence, among other things, with just a calendar invite.
Google's AI could be tricked into enabling spam, revealing a user's location, and leaking private correspondence with a calendar invite — 'promptwa... : Read more