According to this article, there are two identified types of attacks that can be effective against Amazon Alexa devices, at least in theory. One is called "skill squatting"; the other exploits words and commands that Alexa commonly misinterprets in order to trigger malicious activity.
https://arstechnica.com/information-...oice-commands/
Quote:
...Thanks to the way Alexa handles requests for new "skills"—the cloud applications that register with Amazon—it's possible to create malicious skills that are named with homophones for existing legitimate applications. Amazon made all skills in its library available by voice command by default in 2017, and skills can be "installed" into a customer's library by voice. "Either way, there's a voice-only attack for people who are selectively registering skill names," said Bates, who leads UIUC's Secure and Transparent Systems Laboratory.
This sort of thing offers all kinds of potential for malicious developers. They could build skills that intercept requests for legitimate skills in order to drive user interactions that steal personal and financial information. These would essentially use Alexa to deliver phishing attacks...
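To see why homophone-named skills are dangerous, here's a toy sketch of the idea. This is emphatically NOT Amazon's actual skill-matching logic, and the skill names are made up; it just uses a crude phonetic normalization to show how two differently spelled names can "sound" identical to a voice pipeline:

```python
# Toy illustration of "skill squatting": two differently spelled skill
# names can normalize to the same rough pronunciation. This is NOT
# Amazon's real matching logic -- just a crude, illustrative sketch.

import re

def rough_phonetic(name: str) -> str:
    """Very crude phonetic normalization: lowercase, fold a few
    homophone spellings, collapse doubled letters, drop non-initial
    vowels."""
    s = name.lower()
    # fold spellings that sound alike ("ph" -> "f", etc.)
    for pat, repl in [("ph", "f"), ("ck", "k"), ("wh", "w")]:
        s = s.replace(pat, repl)
    # collapse doubled letters: "ss" -> "s"
    s = re.sub(r"(.)\1+", r"\1", s)
    # drop vowels except at the start of a word
    s = re.sub(r"\B[aeiou]", "", s)
    return s

legit = "Fish Facts"       # hypothetical legitimate skill
squatter = "Phish Facts"   # hypothetical malicious homophone

print(rough_phonetic(legit))     # "fsh fcts"
print(rough_phonetic(squatter))  # "fsh fcts"
print(rough_phonetic(legit) == rough_phonetic(squatter))  # True
```

Both names collapse to the same string, so a voice-only request for the legitimate skill could just as easily resolve to the squatter's. That's the whole attack in a nutshell.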
I think we should assume that wherever a cloud exists, hackers are likely already lurking in it. I personally don't use many 3rd-party skills, but I will definitely be careful in the future and periodically check to see whether skills I didn't install somehow got installed. Currently I use only a few skills, which come either from the manufacturer of my LED lightbulbs or from Amazon's website. It pays to be cautious, though...
Apple takes a lot of flak from MS and Android fans, but their proactive hunt for malicious apps should be commended. Amazon and others should do that too, but I doubt they ever will. I'll take the walled garden any day over the "you're on your own" attitude of most other companies.