Amazon Alexa, Google Assistant and Apple Siri can all be hacked with a laser. Should this be cause for concern for “smart home” owners?
Researchers discovered that when a laser is aimed at the devices’ microphones, an electrical signal is created, just as when a voice command is spoken. By modulating the laser’s intensity with a recorded command and checking the microphone’s output on an oscilloscope, the academics found they could effectively mimic a voice with a laser beam.
These “light commands,” made with cheap, readily available equipment (even a classic laser pointer), can be tweaked to make Amazon, Google and Apple voice-operated tech carry out actions typical of a “smart home,” such as opening doors, turning lights on and off, or even starting a car.
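To illustrate the principle (and only the principle), here is a hedged Python sketch of the amplitude-modulation idea the researchers describe: an audio waveform is mapped onto a laser-diode drive current so the beam’s brightness traces the voice signal. The function name and the bias and swing values are hypothetical and are not taken from the paper or from any real laser driver.

```python
import numpy as np

# Illustrative only: encode an audio waveform as laser intensity via
# simple amplitude modulation, the principle behind "light commands".
SAMPLE_RATE = 16_000  # samples per second, typical for voice audio

def audio_to_laser_current(audio, bias_ma=200.0, swing_ma=150.0):
    """Map a voice waveform in [-1, 1] to a laser-diode drive current (mA).

    The diode sits at a DC bias; the audio signal swings the current (and
    hence the light intensity) around that bias. A MEMS microphone struck
    by the modulated beam produces an electrical signal resembling the
    original voice command. Bias/swing values here are made up.
    """
    audio = np.clip(audio, -1.0, 1.0)
    return bias_ma + swing_ma * audio

# Example: a 440 Hz tone standing in for a recorded voice command.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.8 * np.sin(2 * np.pi * 440 * t)
current = audio_to_laser_current(tone)
print(current.min(), current.max())  # current stays within the chosen range
```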
Windows Aren’t a Deterrent
As long as no objects block the laser, the attack can work over long distances, from one building to another, for instance. Windows won’t make a difference.
The researchers, from the University of Michigan and the University of Electro-Communications in Tokyo, demonstrated a light command asking a Google Home what time it is from as far as 110 meters away, as described in their new paper, “Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems.”
Beyond the Amazon Echo, Google Home and Apple iPhone, the researchers also demonstrated successful attacks on the Facebook Portal Mini, Amazon’s Fire TV Cube, a Samsung Galaxy S9 and a Google Pixel 2.
The basic vulnerability can’t be eradicated without a change in the design of the microphones, the researchers said, and they are working with Amazon, Apple and Google on some defensive measures.
“Protecting our users is paramount, and we’re always looking at ways to improve the security of our devices,” a Google spokesperson said.
“Customer trust is our top priority, and we take customer security and the security of our products seriously,” said an Amazon spokesperson. “We are reviewing this research and continue to engage with the authors to understand more about their work.”
What Can Homeowners Do?
The most obvious defense is to move your Amazon Echo, Google Home or whatever comparable tech you have out of the line of sight of any window, said Professor Alan Woodward, a security expert at the University of Surrey. “Or you could draw the curtains permanently. The former is a bit more practical,” he quipped.
“It’s just the sort of vulnerability that designers, even those with great threat models, don’t think about. It just goes to show that the threat can evolve and so should your threat model.”
Speaker Recognition Feature
Turning on speaker recognition features could also help, Professor Woodward said, echoing what the researchers found. This limits access to legitimate users who have registered their voices with the device.
There’s a limit to that protection too, though, as the researchers noted: “Even if enabled, speaker recognition only verifies that the wake-up words … are said in the owner’s voice, and not the rest of the command. This means that one ‘OK Google’ or ‘Alexa’ spoken by the owner can be used to compromise all the commands.”
There’s one more possible cause for concern: The research was funded by the Pentagon’s research arm, the Defense Advanced Research Projects Agency (DARPA). It’s feasible then that such attacks could be a feature of powerful surveillance tools.
How to Protect Yourself From Attack
The researchers suggest adding an extra layer of authentication, which can partially mitigate the attack. Alternatively, because an attacker generally cannot hear the device’s response, having the device ask the user a simple randomized question before executing a command can be an effective way of preventing the attacker from getting a command executed.
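As a rough, hypothetical sketch of that randomized-question idea (nothing here reflects Amazon’s, Google’s or Apple’s actual software), a device-side check might look like this in Python:

```python
import random

# Hedged sketch: a randomized spoken challenge the device asks before
# executing a sensitive command. An attacker injecting commands by laser
# cannot hear the question, so they cannot supply the matching answer.
CHALLENGES = [
    ("What is 3 plus 4?", "7"),
    ("Say the word 'purple'.", "purple"),
    ("What is 10 minus 6?", "4"),
]

def confirm_command(ask, listen):
    """ask() speaks a prompt; listen() returns the transcribed reply."""
    question, expected = random.choice(CHALLENGES)
    ask(question)
    reply = listen()
    return reply.strip().lower() == expected.lower()

# Example wiring with stand-in I/O functions:
if confirm_command(ask=print, listen=lambda: input("answer: ")):
    print("Challenge passed - executing command")
else:
    print("Challenge failed - command rejected")
```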
Manufacturers can also employ sensor fusion techniques, such as acquiring audio from multiple microphones. When an attacker uses a single laser, only one microphone receives a signal while the others receive nothing, so manufacturers can attempt to detect this anomaly and ignore the injected command.
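Here is a hedged Python sketch of that multi-microphone check; the energy-ratio threshold, array sizes and function name are illustrative assumptions, not values from the research:

```python
import numpy as np

# Hedged sketch of the sensor-fusion idea: a real voice excites every
# microphone in the array with comparable energy, while a single laser
# spot drives only one. Flag commands whose energy is concentrated in
# one channel. The threshold below is illustrative.

def looks_like_laser_injection(channels, ratio_threshold=10.0):
    """channels: 2-D array of shape (num_mics, num_samples)."""
    energy = np.mean(np.square(channels), axis=1)   # per-microphone energy
    loudest = np.max(energy)
    others = np.mean(np.delete(energy, np.argmax(energy)))
    # Suspicious if one microphone carries vastly more energy than the rest.
    return others == 0 or loudest / max(others, 1e-12) > ratio_threshold

# Example: four mics, a strong signal only on channel 0 (laser-like),
# faint noise elsewhere.
rng = np.random.default_rng(0)
mics = rng.normal(0, 1e-4, size=(4, 16000))
mics[0] += 0.5 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
print(looks_like_laser_injection(mics))  # True -> ignore the command
```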
Another approach is to reduce the amount of light reaching the microphone’s diaphragm, either with a barrier that physically blocks straight light beams and eliminates the line of sight to the diaphragm, or with an opaque cover over the microphone hole that attenuates the light hitting it.
However, the researchers note that such physical barriers are only effective up to a point, since an attacker can always increase the laser power to compensate for the cover’s attenuation, or even burn through the barrier and create a new light path.
Pretty much any voice-enabled device you can imagine is vulnerable to this kind of attack, but the authors have tested and confirmed vulnerabilities in the following: