More and more devices in the smart home, as well as smartphones and other wearables, are equipped with services for recognizing and evaluating human speech. Smart speakers, smart TVs, thermostats, security systems and doorbells carry microphones for constantly listening voice assistants, which can undermine privacy. A team of cybersecurity researchers from TU Darmstadt has now helped develop a technical solution to prevent unwanted conversations from being recorded.
Voice assistants are usually activated with a wake word; Amazon Echo speakers, for example, are typically addressed with "Alexa, …" followed by the desired command. The group of scientists, which also includes partners from the French University of Paris-Saclay and North Carolina State University in the USA, has now carried out extensive experiments with Amazon Echo (Alexa), Google Home (Assistant), Apple HomePod (Siri) and the audio-based intrusion detection system Hive Hub 360.
Assistants awakened unintentionally
According to their report, recently published on the preprint server Arxiv.org, the researchers discovered numerous English terms that the assistants incorrectly interpreted as wake words. For Alexa alone, they identified 89 such expressions, including "Letter" and "Mixer". The service responded regardless of whether the words were spoken by an artificial robot voice or a human. The unintentionally recorded audio data is then uploaded to the cloud and analyzed by Amazon.
The analysis also showed that the examined devices generally send little data traffic in normal standby mode. Audio transmissions can therefore be recognized by the increase in transfer rate they cause. According to the scientists, the approach they developed works for any assistant that transmits audio; however, they limited themselves to the four device types because these are widespread and cover a broad range of applications.
Monitoring via network traffic
To measure sudden increases in data traffic and determine the relevant parameters, the team monitored the devices' network traffic and observed how it reacted to conversation samples played into the vicinity of the microphones. Typically, the rate drops again once no more voices can be heard, so the traffic could be divided into individual time windows.
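The time-window idea lends itself to a simple burst detector. The following Python sketch only illustrates the principle; the window length, baseline size and threshold factor are illustrative assumptions, not values from the paper. Traffic is summed per window, and a window is flagged when its byte count jumps well above the device's idle baseline.

```python
from collections import deque

WINDOW_SECONDS = 1.0      # assumed length of one time window
BASELINE_WINDOWS = 30     # windows used to estimate the idle rate
THRESHOLD_FACTOR = 5.0    # assumed: a burst exceeds 5x the idle rate

def detect_audio_bursts(window_bytes):
    """Yield indices of time windows whose byte count rises far enough
    above the device's idle baseline to suggest an audio upload.

    window_bytes: per-window byte totals captured from the device.
    """
    baseline = deque(maxlen=BASELINE_WINDOWS)
    for i, count in enumerate(window_bytes):
        if len(baseline) == baseline.maxlen:
            idle_rate = sum(baseline) / len(baseline)
            if count > THRESHOLD_FACTOR * max(idle_rate, 1.0):
                # Sudden jump in traffic: likely a recording being uploaded.
                yield i
                continue  # keep the burst out of the idle baseline
        baseline.append(count)

# Example: quiet standby traffic followed by a burst in windows 31-33.
traffic = [200] * 31 + [9000, 12000, 8000] + [200] * 5
print(list(detect_audio_bursts(traffic)))  # -> [31, 32, 33]
```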
Based on these findings, the researchers built a "counter-espionage" device and christened it "LeakyPick". According to the article, it can be placed in a user's home, where it periodically probes the voice assistants in its vicinity with audio commands. The subsequent network traffic is monitored for the identified statistical patterns that indicate an audio transmission. LeakyPick then flags devices that have become active unexpectedly.
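A single probing round might look roughly like the following sketch. The helper functions, the device list and the fixed threshold are hypothetical stand-ins; the actual prototype relies on the statistical traffic patterns described above rather than a simple factor.

```python
import time

def leakypick_cycle(devices, play_probe, upstream_bytes, interval=600):
    """One simplified LeakyPick round: record each device's idle traffic,
    play an audio probe into the room, then flag devices whose upstream
    traffic jumps afterwards.

    devices:        identifiers of the monitored devices (e.g. MAC addresses)
    play_probe():   plays a spoken test command aloud (hypothetical helper)
    upstream_bytes(device, seconds): bytes the device sent in that span
                    (hypothetical helper wrapping a packet capture)
    """
    while True:
        idle = {d: upstream_bytes(d, 5) for d in devices}    # idle baseline
        play_probe()                                         # emit test audio
        active = {d: upstream_bytes(d, 5) for d in devices}  # reaction
        for d in devices:
            if active[d] > 5 * max(idle[d], 1):              # assumed threshold
                print(f"{d} uploaded audio after the probe")
        time.sleep(interval)  # wait before the next periodic test
```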
“Not yet available in stores”
The control unit currently exists as a prototype and, according to TU Darmstadt, "is not yet commercially available". It is based on a Raspberry Pi 3B and is said to achieve a detection accuracy of 94 percent when recognizing audio transmissions from up to eight devices with voice assistants.
LeakyPick could also help against sophisticated acoustic man-in-the-middle attacks on Alexa & Co., the experts write. In such attacks, wake words and commands are transmitted in the ultrasonic range, which is inaudible to humans but picked up by the assistant, and can be used to place orders with Amazon, for example. If the monitoring device detects traffic activity even though no audible command has been issued, this could indicate such an attack.
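Condensed to its core, this indicator is a simple conjunction, sketched below with hypothetical helper predicates: a device uploaded audio, yet the room was acoustically silent.

```python
def check_silent_activation(device, traffic_spiked, room_was_silent):
    """Warn when a device transmits audio although the monitor's own
    microphone heard nothing audible. Both predicates are hypothetical
    helpers (e.g. wrapping the burst detector and a sound-level meter).
    """
    if traffic_spiked(device) and room_was_silent():
        print(f"Warning: {device} sent audio without an audible command "
              f"- possible ultrasonic injection attack")
```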
(axk)