
How Cybercriminals Are Weaponizing Sound Waves

by Zac Amos, August 6th, 2024

Too Long; Didn't Read

Attackers can use sound waves to hijack devices, create backdoors, and manipulate interconnected devices. Similarly, deepfake audio creates a new avenue for sound-related cyberattacks. To defend against these attacks, soundproof sensitive electronics, disable audio input and output, keep devices up to date, and utilize authentication controls.

Cybercriminals are weaponizing sound to launch, support, or drive sophisticated cyberattacks. Many of these attacks rely on signals that are entirely imperceptible to humans, making them challenging to anticipate and detect. How can the average person defend against them?

Why Attackers Are Weaponizing Sound Waves

Bad actors can play malicious audio files or hijack a device’s speaker in various ways to create backdoors or exploit vulnerabilities. Once they infiltrate a network or device, they can access the victim’s location, personally identifiable information, and login credentials. Most will sell those details on the dark web to the highest bidder.


Other attackers seek to cause damage, whether they’re conducting corporate espionage, holding a grudge, or testing their capabilities. Some acoustic waves can damage storage systems. For instance, when hard disk drives are exposed to frequencies ranging from 300 Hertz to 1,300 Hertz, the result can be up to 100% data packet loss and application crashes.


A few of these cyberattacks enable cybercriminals to trigger or manipulate internet-connected devices remotely. For example, they may force a voice assistant to unlock a smart lock while the homeowner is away, allowing them to break in unnoticed. While such blatant schemes are rare, they’re not impossible.


Documented sound-related cyberattacks may be relatively uncommon because detecting and defending against them is difficult. Generally, low-frequency sound waves are the most challenging to regulate because they are particularly long and powerful. However, high frequencies are just as concerning because they’re inaudible and can cause physical harm.

Hacking into a smart speaker to weaponize it is one of the simplest sound-related cyberattacks. Attackers can use vulnerabilities to create a backdoor. Alternatively, they can scan Wi-Fi and Bluetooth networks for vulnerable devices. Once they’re in, they can trigger inaudible, high-frequency tones to cause hearing loss, nausea, headaches, or dizziness.


If attackers inject a custom malicious script, which is unnervingly easy to do, the speaker they hijack will produce a high-frequency tone at an unsafe volume. However, extended use causes irreparable damage because the hardware isn’t purpose-built for such output. The device becoming inoperable is bad for its owner but a relief for anyone afflicted by the noise.


Unfortunately, bad actors have found more than one use case for these inaudible tones. A near ultrasound inaudible trojan attack uses ultrasonic waves — which are imperceptible to humans but easily sent and received by speakers, microphones, and sensors — to command voice assistants silently and maliciously.


Someone can launch the attack by transmitting an ultrasonic carrier signal through a connected speaker. While the command length can’t exceed 0.77 seconds, they can direct the voice assistant to reduce its volume so their tampering goes undetected for as long as they need. They can force it to open a malicious website, spy on the user, or overload the microphone.
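Both the weaponized speaker tones and NUIT-style carriers sit in the near-ultrasound range, so one practical check is simply measuring how much of a recording’s energy lies above the audible band. Below is a minimal Python sketch of that idea; the 17 kHz cutoff, 10% threshold, and file name are illustrative assumptions, not values from any of the research described here, and it assumes a 16-bit PCM WAV recording captured at 44.1 kHz or higher.

```python
# Minimal sketch: flag recordings with unusually strong near-ultrasound energy.
# The 17 kHz cutoff and 10% threshold are illustrative assumptions.
import wave
import numpy as np

def near_ultrasound_ratio(path: str, cutoff_hz: float = 17_000.0) -> float:
    """Return the fraction of spectral energy above cutoff_hz in a 16-bit PCM WAV file."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        channels = wav.getnchannels()
        frames = wav.readframes(wav.getnframes())

    samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
    if channels > 1:
        samples = samples[::channels]  # keep the first channel only

    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)   # bin frequencies in Hz

    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff_hz].sum() / total) if total > 0 else 0.0

if __name__ == "__main__":
    ratio = near_ultrasound_ratio("room_recording.wav")
    if ratio > 0.10:
        print(f"Warning: {ratio:.1%} of the energy is above 17 kHz - possible inaudible signal.")
    else:
        print(f"Near-ultrasound energy ratio: {ratio:.1%}")
```

A microphone sampling at 44.1 kHz only captures content up to about 22 kHz, so a check like this can catch near-ultrasound carriers but not true ultrasonic ones.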


People who hear about such cyberattacks may assume they’re safe because they’ve set up voice recognition. Unfortunately, once the wake word activates the voice assistant, it will listen to anyone’s command, regardless of whether the speaker’s voice matches the enrolled user’s. Besides, determined attackers can use exploits or audio splicing to bypass authentication mechanisms.


These sound-related cyberattacks can even spoof environmental stimuli to disable or tamper with gyroscopes or accelerometers. Playing a malicious audio file close enough to a phone or Internet of Things wearable can cause it to stop working or behave unexpectedly. This attack may seem harmless, but it could affect medical implants or security systems.

The emergence of artificial intelligence has opened the door for various new sound-related cyberattacks. Deepfakes — synthetic images, videos, or audio recordings — are quickly becoming the most common. In fact, these fraud attempts increased by 3,000% from 2022 to 2023, largely because advanced AI became more accessible.


These deepfakes are alarmingly easy to create. With as little as one minute of audio — which can come from social media, phone calls, or man-in-the-middle attacks — bad actors can generate a realistic sound file. This way, they can impersonate individuals, bypass biometric security measures, or commit fraud.


Unfortunately, voice isn’t the only biometric that sound-related cyberattacks can manipulate. One research group recently developed an identification system that extracts fingerprint pattern features from the faint friction sounds created when a finger swipes across a screen. Attackers can capture those sounds through a device’s microphone or by running their program in the background of an app with audio access.


Once cybercriminals use a series of algorithms to process and clean the raw recording, removing unnecessary noise, their system is highly effective. According to the researchers, in a real-world scenario they could achieve a weighted attack success rate of 27.9% on average for partial fingerprints and between 33% and 37.7% for complete ones.
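The article doesn’t detail those algorithms, but the cleanup step generally amounts to some form of spectral noise reduction. The sketch below shows one generic approach, spectral gating with SciPy, purely to illustrate the kind of processing involved; it is not the researchers’ pipeline, and the parameters are assumptions.

```python
# Generic spectral-gating denoiser, shown only to illustrate the kind of audio
# cleanup an acoustic side-channel pipeline might perform. Not the researchers' code.
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio: np.ndarray, rate: int, noise_seconds: float = 0.5,
                  threshold: float = 1.5) -> np.ndarray:
    """Suppress time-frequency bins weaker than `threshold` times the noise floor.

    Assumes the first `noise_seconds` of the clip contain only background noise.
    """
    _, _, spec = stft(audio, fs=rate, nperseg=1024)  # hop size defaults to 512

    # Estimate a per-frequency noise floor from the assumed noise-only segment.
    noise_frames = max(1, int(noise_seconds * rate / 512))
    noise_floor = np.abs(spec[:, :noise_frames]).mean(axis=1, keepdims=True)

    # Keep only bins that rise well above the noise floor, then resynthesize.
    mask = np.abs(spec) > threshold * noise_floor
    _, cleaned = istft(spec * mask, fs=rate, nperseg=1024)
    return cleaned
```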


AI can also track the audible feedback a keyboard makes to figure out exactly what people are typing, potentially exposing their habits, personal information, and passwords. One research group used an advanced deep-learning model to capture and classify keystrokes, demonstrating this tactic’s effectiveness.


Using a hijacked smartphone microphone, the researchers achieved 95% accuracy on average. Accuracy was still 93% when capturing audio over a video call, highlighting that attackers don’t need to be near their victims to decipher keystrokes. Unfortunately, this side-channel attack relies on out-of-the-box equipment, meaning it’s accessible to even low-level hackers.
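The researchers’ model is more sophisticated than anything that fits here, but the general pipeline, converting each isolated keystroke into a spectrogram and classifying it with a neural network, can be sketched briefly. The layer sizes, 36-key class count, and clip length below are illustrative assumptions, not details from the paper.

```python
# Sketch of the general keystroke-classification approach: mel-spectrogram features
# fed into a small CNN. Sizes and the 36-key class count are illustrative only.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 44_100
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class KeystrokeClassifier(nn.Module):
    def __init__(self, num_keys: int = 36):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, num_keys)

    def forward(self, waveforms: torch.Tensor) -> torch.Tensor:
        # waveforms: (batch, samples), one isolated keystroke per row
        spec = mel(waveforms).unsqueeze(1)   # (batch, 1, n_mels, frames)
        features = self.conv(spec).flatten(1)
        return self.head(features)           # per-key logits

# Example: classify a batch of eight 200 ms keystroke clips (random data here).
clips = torch.randn(8, int(0.2 * SAMPLE_RATE))
predicted_keys = KeystrokeClassifier()(clips).argmax(dim=1)
```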

How to Defend Against These Acoustic Attacks

Many sound-related cyberattacks leverage inaudible cues or last only a few milliseconds at a time, making detecting and responding to them challenging. That said, defending against them is still possible — and effective — with the right strategies.

1. Soundproof Sensitive Electronics

Soundproofing a room — or using specialized panels to deflect sound outward — can protect electronics from malicious external stimuli. This way, smart devices won’t be affected by any nearby hacked speakers.

2. Disable Audio Input and Output

Disabling microphones, sensors, voice assistants, and speakers when not in use can prevent bad actors from hijacking them for malicious purposes. If features can’t be turned off, users should consider setting strict access permissions to prevent unauthorized tampering.

3. Keep Devices Up to Date

Apps, smart devices, phones, and speakers become increasingly vulnerable to hacking the longer they go between updates. Individuals should ensure they keep everything up to date to prevent attackers from exploiting known vulnerabilities or creating backdoors.

4. Utilize Authentication Controls

No detection tool is 100% accurate. Listening for a robotic tone or subtle audio inconsistencies may help people identify deepfakes, but it isn’t reliable either. Instead, they should use non-audio-based authentication controls to prevent unauthorized access.
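One concrete option is a time-based one-time password (TOTP) generated on a separate device, which proves a requester’s identity without depending on how they sound. A minimal sketch using the pyotp library follows; the enrollment flow and prompts are illustrative, not a production design.

```python
# Minimal non-audio authentication sketch using time-based one-time passwords (TOTP).
# Enrollment flow and prompts are illustrative, not a production design.
import pyotp

# Generated once per user and shared out-of-band (e.g., enrolled in an authenticator app).
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)
print("Enroll this secret in an authenticator app:", shared_secret)

# Later, before acting on a sensitive voice request, demand the current six-digit code.
code = input("Enter the one-time code from your authenticator app: ")
if totp.verify(code):
    print("Code accepted: identity confirmed without relying on voice.")
else:
    print("Code rejected: do not trust the request.")
```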

Keep an Ear Out to Prevent These Cyberattacks

Although acoustic attacks are relatively rare today, the rise of AI may make them far more common. People should monitor their microphones, speakers, and sound-sensitive sensors to prevent bad actors from hijacking their electronics for malicious purposes.