Cybercriminals are weaponizing sound to launch, support, or drive sophisticated cyberattacks. Many of these attacks are entirely imperceptible to humans, making them difficult to anticipate and detect. How can the average person defend against them?
Bad actors can play malicious audio files or hijack a device’s speaker in various ways to create backdoors or exploit vulnerabilities. Once they infiltrate a network or device, they can access the victim’s location, personally identifiable information, and login credentials. Most will sell those details on the dark web to the highest bidder.
Other attackers seek to cause damage, whether they're conducting corporate espionage, holding a grudge, or testing their capabilities. Some acoustic waves can damage storage systems. For instance, when hard disk drives are subjected to frequencies ranging from 300 Hertz to 1,300 Hertz, the resulting vibrations can disturb the read/write heads, degrading performance or corrupting data.
A few of these cyberattacks enable cybercriminals to trigger or manipulate internet-connected devices remotely. For example, they may force a voice assistant to unlock a smart lock while the homeowner is away, allowing them to break in unnoticed. While such blatant schemes are rare, they’re not impossible.
Documented sound-related cyberattacks may be relatively uncommon because detecting and defending against them is difficult. Generally, low-frequency sound waves pass through walls and other barriers, while high-frequency tones sit above the range of human hearing — either way, victims rarely notice anything is happening.
Hacking into a smart speaker to weaponize it is one of the simplest sound-related cyberattacks. Attackers can use vulnerabilities to create a backdoor. Alternatively, they can scan Wi-Fi and Bluetooth networks for vulnerable devices. Once they’re in, they can trigger inaudible, high-frequency tones to cause hearing loss, nausea, headaches, or dizziness.
If attackers inject a custom malicious script — which is unnervingly easy to do — the speaker they hijack will produce a high-frequency tone at an unsafe volume. However, extended use will cause irreparable damage, since the hardware isn't purpose-built for such output. The device becoming inoperable is bad for its owner but a relief for anyone afflicted by the noise.
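To illustrate how little code such a tone requires, here is a harmless, minimal sketch that synthesizes a near-ultrasonic sine wave and writes it to a standard WAV file using only Python's standard library. The 19 kHz frequency and half-scale amplitude are illustrative choices, not values from any documented attack:

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100   # CD-quality sampling rate (Hz)
FREQ_HZ = 19_000       # near the upper limit of adult human hearing
DURATION_S = 1.0
AMPLITUDE = 0.5        # fraction of full scale

def tone_samples(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Generate 16-bit PCM samples of a pure sine tone."""
    n = int(duration_s * sample_rate)
    return [int(AMPLITUDE * 32767 * math.sin(2 * math.pi * freq_hz * t / sample_rate))
            for t in range(n)]

samples = tone_samples(FREQ_HZ, DURATION_S)

# Write a standard mono WAV file that any speaker-capable device can play.
with wave.open("tone_19khz.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)  # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```

The point is not the handful of lines themselves, but that nothing here requires specialized hardware — any compromised device with a speaker can emit such a tone.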
Unfortunately, bad actors have found more than one use case for these inaudible tones. A near ultrasound inaudible trojan attack uses ultrasonic waves — which are imperceptible to humans but easily sent and received by speakers, microphones, and sensors — to command voice assistants silently and maliciously.
Someone can launch the attack by transmitting an ultrasonic carrier signal through a connected speaker. While the command length is limited to a fraction of a second, that is long enough to silently instruct a voice assistant to lower its own volume, open a malicious website, or unlock a door.
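Conceptually, the inaudible command is produced by amplitude-modulating a voice signal onto a near-ultrasonic carrier; the microphone's nonlinear response then demodulates it back into the audible band. The following sketch shows only the modulation step, with a pure tone standing in for speech — the sample rate, carrier frequency, and modulation depth here are illustrative assumptions, not the researchers' parameters:

```python
import math

SAMPLE_RATE = 48_000  # Hz; a common consumer-hardware sampling rate
CARRIER_HZ = 21_000   # above human hearing, below the Nyquist limit (24 kHz)

def amplitude_modulate(voice, carrier_hz=CARRIER_HZ,
                       sample_rate=SAMPLE_RATE, depth=0.8):
    """Shift a baseband 'voice' signal onto an inaudible carrier via AM."""
    return [(1.0 + depth * v) * math.cos(2 * math.pi * carrier_hz * n / sample_rate)
            for n, v in enumerate(voice)]

# Stand-in for a recorded command: a 400 Hz tone (a real attack would use speech).
voice = [math.sin(2 * math.pi * 400 * n / SAMPLE_RATE)
         for n in range(SAMPLE_RATE // 10)]
modulated = amplitude_modulate(voice)

# A microphone's nonlinearity demodulates the envelope back into the audible
# band: bystanders hear nothing, but the assistant "hears" the command.
```

Because all of the transmitted energy sits near 21 kHz, a human standing next to the speaker perceives silence while the target microphone recovers the embedded signal.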
People who hear about such cyberattacks may assume they're safe because they've set up voice recognition. Unfortunately, once the wake word activates the voice assistant, it will often accept follow-up commands regardless of who — or what — issues them.
These sound-related cyberattacks can even spoof environmental stimuli to disable or tamper with gyroscopes or accelerometers. Playing a malicious audio file close enough to a phone or Internet of Things wearable can cause it to stop working or behave unexpectedly. This attack may seem harmless, but it could affect medical implants or security systems.
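The physics behind attacks on gyroscopes and accelerometers is ordinary mechanical resonance: a MEMS sensor's proof mass swings dramatically when driven at its natural frequency. This simulation of a damped, driven oscillator shows the effect; the 19 kHz natural frequency, damping ratio, and time step are illustrative assumptions, not measurements of any real sensor:

```python
import math

def driven_oscillator(drive_hz, natural_hz=19_000.0, damping=0.02,
                      dt=1e-6, steps=20_000):
    """Simulate a damped MEMS proof mass driven by a unit-amplitude acoustic
    tone (semi-implicit Euler). Returns the peak displacement reached."""
    w0 = 2 * math.pi * natural_hz
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        force = math.sin(2 * math.pi * drive_hz * i * dt)
        a = force - 2 * damping * w0 * v - w0 * w0 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

off_resonance = driven_oscillator(5_000)
on_resonance = driven_oscillator(19_000)  # matches the natural frequency
```

Driving the sensor at its natural frequency produces displacements many times larger than an off-resonance tone of the same volume — large enough, in a real device, to corrupt the sensor's readings.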
The emergence of artificial intelligence has opened the door for various new sound-related cyberattacks. Deepfakes — synthetic images, videos, or audio recordings — are quickly becoming the most common. In fact, these synthetic voices have already been used to impersonate executives and family members in convincing fraud schemes.
These deepfakes are alarmingly easy to create. With just a short sample of someone's recorded voice and a freely available cloning tool, an attacker can generate convincing audio of that person saying almost anything.
Unfortunately, audio isn’t the only biometric sound-related cyberattacks can manipulate. One research group recently developed an identification system that leverages the audible friction created by swiping actions to extract fingerprint pattern features. Attackers can listen in through a device’s microphone or run their program in an app’s background.
Once cybercriminals use a collection of algorithms to process and clean the raw recording, eliminating unnecessary noise, their system is highly effective. According to the researchers, in a real-world scenario they could achieve a weighted attack success rate high enough to pose a genuine threat to fingerprint authentication.
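A heavily simplified sketch of that preprocessing stage might look like the following: frame the recording, keep only frames energetic enough to contain friction sound, and summarize each with crude spectral features. This is a hypothetical stand-in for the researchers' actual pipeline, which is far more sophisticated; the frame sizes and energy threshold are arbitrary illustrative values:

```python
import math

def frames(signal, size=1024, hop=512):
    """Split a raw audio signal into overlapping analysis frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def rms(frame):
    """Root-mean-square energy: high during a swipe, near zero in silence."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Crude spectral measure: friction noise crosses zero frequently."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def swipe_features(signal, energy_floor=0.01):
    """Keep only frames loud enough to contain friction sound, then
    summarize each as an (energy, zero-crossing-rate) pair."""
    return [(rms(f), zero_crossing_rate(f))
            for f in frames(signal) if rms(f) > energy_floor]

# Demo: 2048 samples of silence followed by a synthetic 5 kHz "friction" burst.
signal = [0.0] * 2048 + [0.5 * math.sin(2 * math.pi * 5000 * n / 44100)
                         for n in range(2048)]
feats = swipe_features(signal)  # only the noisy frames survive
```

The silent frames are discarded automatically, leaving the attacker with a compact feature sequence describing only the swipe itself.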
AI can also track the audible feedback a keyboard makes to figure out exactly what people are typing, potentially exposing their habits, personal information, and passwords. One research group used an advanced deep-learning model to capture and classify keystrokes, demonstrating this tactic’s effectiveness.
Using a hijacked smartphone microphone, the model identified which keys were pressed with striking accuracy — enough to reconstruct passwords typed nearby — and performed nearly as well when listening in over a video call.
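As a toy illustration of the general approach — not the researchers' deep-learning model — the following classifies a keystroke recording by comparing its magnitude spectrum against labelled examples with a nearest-neighbor rule. All signals and "keys" here are synthetic:

```python
import math

def spectrum(frame):
    """Naive DFT magnitude spectrum -- a stand-in for the spectrogram
    features a real deep-learning classifier would consume."""
    n = len(frame)
    return [abs(sum(frame[t] * complex(math.cos(2 * math.pi * k * t / n),
                                       -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

def nearest_key(sample, labelled):
    """Classify a keystroke recording by its closest labelled spectrum
    (Euclidean distance) -- a toy substitute for a trained model."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    spec = spectrum(sample)
    return min(labelled, key=lambda item: dist(spectrum(item[1]), spec))[0]

# Toy training data: pretend each key press rings at a characteristic frequency.
def click(freq, n=64):
    return [math.sin(2 * math.pi * freq * t / 8000) for t in range(n)]

labelled = [("a", click(900)), ("b", click(2200))]
guess = nearest_key(click(2150), labelled)  # a slightly "noisy" press of "b"
```

Real keystrokes differ far more subtly than these pure tones, which is why the researchers needed a deep model and careful training data rather than a simple distance rule.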
Many sound-related cyberattacks leverage inaudible cues or last only a few milliseconds at a time, making detecting and responding to them challenging. That said, defending against them is still possible — and effective — with the right strategies.
Soundproofing a room — or using specialized panels to deflect sound outward — can protect electronics from malicious external stimuli. This way, smart devices won’t be affected by any nearby hacked speakers.
Disabling microphones, sensors, voice assistants, and speakers when not in use can prevent bad actors from hijacking them for malicious purposes. If features can’t be turned off, users should consider setting strict access permissions to prevent unauthorized tampering.
Apps, smart devices, phones, and speakers become increasingly vulnerable to hacking the longer they go between updates. Individuals should ensure they keep everything up to date to prevent attackers from exploiting known vulnerabilities or creating backdoors.
No detection tool is 100% accurate. Listening for a robotic tone or subtle audio inconsistencies may help people identify deepfakes, but it isn’t reliable, either. Instead, they should use non-audio-based authentication controls to prevent unauthorized access.
Although acoustic attacks are uncommon today, AI’s emergence may make them more prevalent. People should monitor their microphones, speakers, and sound-sensitive sensors to prevent bad actors from hijacking their electronics for malicious ends.