Talking about artificial intelligence (AI) can easily bring to mind images of human-like robots carrying out advanced tasks, à la the androids in TV shows like “Star Trek: The Next Generation” or “The Orville.” In reality, AI is much more mundane, but still very powerful.
What AI looks like in real life
Have you ever shopped for something on Amazon or watched a show on Netflix? If so, you have interacted with AI-driven functionality. The recommendations engines of both platforms autonomously adapt to your activity to make what they “think” are relevant suggestions for what to buy or view.
AI is also the brains behind every major voice-operated digital assistant. Apple Siri, Amazon Alexa and Google Assistant all rely on AI to parse spoken commands and return the most accurate possible response, given their limitations.
The meteoric rise of these intelligent virtual assistants – one firm estimated 74 million Americans, or a quarter of the country’s online population, would own a smart speaker in 2019 – has spurred some new security concerns. Let’s look at what you should know about voice security and how to stay safe.
Do as I say: Why you should be mindful of how your voice assistant works
AI has made voice recognition technology much better with time. But voice assistants do not yet possess human-level capabilities, as anyone who has had Siri show them search engine results instead of answering a question can attest.
Some assistants paper over such limitations by constantly listening for input, which lets them respond instantly once triggered by a wake word or phrase, making them seem more intelligent. This always-on functionality is one of two overarching security concerns in voice technology, along with the many connections these assistants can form with other devices in a smart home.
What can go wrong? You might have heard about how Alexa can interpret something a kid says as a command to order something from Amazon, since it’s always listening and doesn’t need to be deliberately activated. That’s relatively innocuous compared to the other possibilities, though.
One risk is from imposter apps that can be activated via voice command. A group of researchers from the U.S. and China provided the example of a connected utility called “Capital Won” that would trigger upon the homophonic command “Capital One.” Similar issues have been discovered when using Microsoft Cortana to visit non-secure versions of websites that would in most cases default to HTTPS encryption, according to CNET.
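The “Capital Won” example works because voice platforms match invocation phrases by how they sound, not how they are spelled. A minimal sketch of that collision, using hand-written ARPAbet-style transcriptions (the phoneme strings and function names here are purely illustrative; a real platform would consult a full pronunciation lexicon):

```python
# Hypothetical registry mapping skill invocation names to phoneme
# sequences (hand-transcribed for illustration, ARPAbet-style).
PRONUNCIATIONS = {
    "capital one": "K AE P IH T AH L . W AH N",
    "capital won": "K AE P IH T AH L . W AH N",  # identical sound
    "capital own": "K AE P IH T AH L . OW N",    # close, but distinct
}

def homophones(name, registry):
    """Return other registered names that sound identical to `name`."""
    target = registry[name]
    return [other for other, phones in registry.items()
            if other != name and phones == target]

print(homophones("capital one", PRONUNCIATIONS))
# → ['capital won']
```

Because the spoken forms are indistinguishable, a platform that routes purely on recognized audio can hand the user’s request to the squatting app instead of the legitimate one.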
Voice commands can also do additional damage if their respective assistants are integrated with other software and hardware across the Internet of Things. For example, a command to “open the door” might work if an assistant is connected to a smart lock system. As the IoT expands, the potential for security incidents involving either deliberate or accidental commands only grows.
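One common defensive pattern for voice-to-IoT integrations is to gate sensitive intents (unlocking doors, placing orders) behind an extra confirmation step rather than executing them on first utterance. A rough sketch, where the intent names and PIN-confirmation flow are assumptions, not any particular platform’s API:

```python
# Hypothetical intent handler: sensitive actions require a second
# confirmation (e.g., a voice PIN) before they are carried out.
SENSITIVE_INTENTS = {"unlock_door", "place_order"}

def handle_intent(intent, confirmed=False):
    """Execute an intent, but hold sensitive ones until confirmed."""
    if intent in SENSITIVE_INTENTS and not confirmed:
        return "Please confirm with your voice PIN."
    return f"Executing: {intent}"

print(handle_intent("unlock_door"))                  # asks for confirmation
print(handle_intent("unlock_door", confirmed=True))  # proceeds
print(handle_intent("play_music"))                   # runs immediately
```

The point of the design is that an accidental or overheard “open the door” only gets as far as the confirmation prompt.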
Don’t lose your voice: Being smart about voice security
The most reliable tip for securing voice-operated assistants is perhaps the least satisfying – turning the device off when you aren’t using it. Doing so cuts down on convenience, but eliminates the possibility of something going wrong.
Another thing to try is deleting saved commands, words and recordings. Since these items are usually stored in the cloud, there’s a possibility they could be accessed by someone else. The method for deleting them varies by speaker/assistant type, but this guide covers all the major ones.
On a more basic level, ensure you have strong passwords and, ideally, two-factor authentication protecting all of the accounts associated with your voice assistants. Configure email alerts so that you know about any purchases or other unusual activity initiated through your devices.
Looking for even more protection for your online presence and connected devices? Consider Total Defense Ultimate Internet Security, which includes multiple lines of defense in one convenient package.