Hackers Can Still Use Alexa and Google Home Devices to Snoop on People

(Image credit: Shutterstock)

People expect their virtual assistants to be discreet, which is why devices featuring Alexa, Google Home and other assistive technologies are supposed to make it clear when they're recording. But ZDNet reported yesterday that hackers can still exploit vulnerabilities in Alexa and Google Home to use them as covert surveillance tools.

The exploits require only that app developers push malicious updates that insert long sequences of the unpronounceable "�" character into responses to user queries. The device tries to read those characters aloud, producing an extended stretch of silence during which its microphone remains active. An attacker could then use that open microphone to eavesdrop on Alexa and Google Home users.
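SRLabs described the trick in terms of SSML, the markup language voice apps use to shape spoken responses. A rough sketch of what a malicious response might look like is below; the exact payload is not public in full, so the repeated unpronounceable character and the phishing prompt here are illustrative, not a working exploit:

```xml
<speak>
    Goodbye!
    <!-- A long run of unpronounceable characters: the device "speaks"
         silence while the session stays open and the mic keeps listening -->
    �. �. �. �. �. �. �. �. �. �.
    <!-- ...repeated many more times to stretch the silent window... -->

    <!-- Illustrative phishing follow-up, impersonating the platform -->
    An important security update is available for your device.
    Please say "start update" followed by your password.
</speak>
```

Because the silent characters are valid output as far as the platform's review process is concerned, a skill that passed certification with a benign response could later be updated to emit something like this without triggering a re-review.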

Attackers could go a step further by having their malicious app impersonate Amazon or Google and prompt people to share their account credentials so they can sign back into their services. Neither company's devices normally do that, but unsuspecting users might not think twice before compromising their account security.

These devices technically disclose their active microphones with a status light, so observant people could notice that something's up. But it's also very likely that people will miss or ignore the light entirely. Part of these devices' appeal is the ability to shout commands across rooms without interrupting the task at hand; having to stare at a status light doesn't jibe with the reason people use these products.

Researchers disclosed similar vulnerabilities in Alexa and Google Home throughout 2018. Security company SRLabs discovered their continued presence earlier this year. The company reportedly told Amazon and Google about the issue, but it remains unresolved, even though SRLabs told ZDNet that "finding and banning unexpected behavior such as long pauses should be relatively straight-forward." 

Here's how SRLabs said Amazon and Google should address the issue in a blog post:

"To prevent ‘Smart Spies’ attacks, Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores. The voice app review needs to check explicitly for copies of built-in intents. Unpronounceable characters like '�' and silent SSML messages should be removed to prevent arbitrary long pauses in the speakers' output. Suspicious output texts including 'password' deserve particular attention or should be disallowed completely."

The company also shared four videos on its YouTube channel showing the exploits for conducting phishing attacks and eavesdropping on Alexa and Google Home users. 

For its part, Google told ZDNet that it "removed the Actions that we found from these researchers" and is "putting additional mechanisms in place to prevent these issues from occurring in the future." Amazon has remained silent.

Nathaniel Mott
Freelance News & Features Writer

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.