How Biomimicry Is Expanding the Sensitivity of Sound

Written by andreafrancesb | Published 2022/01/10
Tech Story Tags: biotechnology | sound | technology | tech-for-good | sound-tech | tech-trends | tech-trends-2022 | biomimicry

TLDR By mimicking how the human ear works, scientists have extended the limits of hearing, not only for humans but also for what machines are capable of picking up and detecting. A new kind of intelligence based on heightened sound sensitivity will open up new innovations, and a deeper understanding of the world is a critical first step to recognizing its potential threats.

In 1966, when the iconic show Star Trek debuted, humans could hardly imagine a day when we would communicate with machines the way Captain Kirk talks to the ship’s computer. Yet today we have Alexa and Siri to help us with our day-to-day tasks. Back then, it was also a dream to understand the sounds of our natural environments, such as what space or the deep ocean sounds like.
The curiosity to learn more has driven humans to achieve impressive feats, many of which are inspired by nature. Biomimicry takes these lessons from nature and applies them to machines. New advances in technology have ushered us into a new era of machines that can fly like birds, swim like fish, and move like, or better than, humans.

Biomimicry to Advance Sound Capture

Even one small yet important function such as hearing becomes incredibly complex when you consider how many sounds the human ear processes daily, and its ability to recognize voices or pick up the slightest nuances of musical instruments in a song.
While biomimicry has come far in robotic movement and other scientific fields, one area where innovation has fallen short is audio technology. Most audio hardware today still relies on single-membrane MEMS (microelectromechanical systems) sensors, which have been around for decades. They are found in almost any device with a microphone: voice assistants, speakers, cell phones, AirPods.
The sounds we hear are essentially vibrations that come into contact with our ears, highly complex machines optimized to process those vibrations and give them meaning. With single-membrane sensors like MEMS, the technology cannot improve the signal-to-noise ratio (SNR): it picks up just as much background noise as nearby voices. This matters for processing true intent as we adopt and increasingly rely on voice-activated technologies. If you’ve ever had Siri misdial a friend named Ron instead of calling your Mom, you understand how much room there is for improvement.
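To make the SNR point concrete, here is a minimal sketch using the standard decibel formula; the power values are hypothetical:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A voice captured at the same power as the background noise gives 0 dB SNR:
# the microphone "hears" the noise just as loudly as the speaker.
print(snr_db(1.0, 1.0))    # 0.0
# A voice 100x stronger than the noise gives about 20 dB: clearly intelligible.
print(snr_db(100.0, 1.0))
```

The higher the SNR, the easier it is for a voice assistant to separate intent from clutter, which is why sensors that raise SNR at the hardware level matter.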
A new innovation out of Korea promises to be that solution, with technology that mimics the cochlea of the human ear and allows for the reception of varying resonances. In essence, this is the world’s first audio sensor that enables machines to hear better than humans.
Fronics was born from a corporate and academic partnership coming out of Korea’s technical institute, KAIST (similar to M.I.T. in the US), where it has been in development for over 10 years and is now making its way into commercial markets. The core innovation is an ultra-sensitive resonance sensor that mimics the basilar membrane in the human ear. 
“Voice user interfaces will be how we communicate with our smart technology in the future, so we took inspiration from one of the most complex yet vital human functions: hearing,” explains Ki-Soo Kim, CEO at Fronics.

Biomimicry Of The Ear

The basilar membrane of the ear plays a key function in our hearing as it is lined with tiny hair cells that transform vibrational energy into electrical signals that are interpreted by our brains as sounds, music, or human voices.
Fronics’ sensor is equipped with seven different membranes embedded with nanoparticles that act like the tiny hairs of the ear, creating small electrical currents with each microscopic movement and enabling Fronics to build a highly detailed voice ID, similar to facial recognition but for voice.
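One illustrative way to model multiple membranes, each resonating at its own frequency, is a bank of single-frequency detectors. The sketch below uses the standard Goertzel algorithm as a stand-in for one resonant membrane; the seven frequencies are hypothetical choices for illustration, not Fronics’ actual design:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Energy of `samples` near `target_freq` via the Goertzel algorithm,
    a cheap single-frequency resonator standing in for one membrane."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Seven hypothetical "membranes", each tuned to its own frequency (Hz).
MEMBRANE_FREQS = [125, 250, 500, 1000, 2000, 4000, 8000]

rate, n = 16000, 1600
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(n)]  # 1 kHz tone
powers = {f: goertzel_power(tone, rate, f) for f in MEMBRANE_FREQS}
# The 1000 Hz "membrane" responds most strongly to a 1 kHz input.
print(max(powers, key=powers.get))
```

Each detector responds strongly only near its own resonance, which is the same principle that lets an array of physical membranes decompose incoming sound by frequency.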
Fronics’ flexible audio sensors are more than 22x more sensitive than anything that exists today and expand the spectrum of what can be heard across a wide band of frequencies. In mimicking how the human ear works, Fronics has extended the limits of hearing, not only for human hearing but also what machines are capable of picking up and detecting.
Humans cannot hear sounds at every frequency, but what if we were able to access a wider range of sound? In a world more sensitive to sound, what new insights become possible?
A new kind of intelligence built on heightened sound sensitivity will open up innovations based on what that deeper listening reveals: a future grounded in a more sensitive understanding of each other and the world around us.
A deeper understanding of our natural world is a critical first step to knowing its potential threats.
In 2018, Carnegie Mellon published a set of annotated birdsong recordings to train and test machine learning models that identify bird species. Such models can listen to birdsong in various settings and ID the birds without a human listener, and through those sounds flag struggling species to support conservation efforts and preserve their songs for generations to come.

Revealing Life Underwater

We know that low-frequency military sonar harms marine mammals, which use low-frequency sounds, clicks, whistles, and pulsed signals to communicate. With Fronics, it could be possible to know which cetaceans are in an area before testing and avoid many of the senseless, tragic beaching events of recent years.
But even beyond that, what if we could use this technology to understand what whales are communicating?  
Building off the work of Dr. David Gruber of Project CETI, which is capturing the sounds of sperm whales engaged in different behaviors, could mean we are able to understand their language. Combining AI with Fronics’ extended sound capture and sensitivity could even move us from translating whale sounds to communicating with whales directly.
There’s so much life underwater that we don’t understand, and a deeper ability to listen could reveal everything from fish populations to coral health to the sounds of the deep ocean and seismic activity.

Sound Sensitivity For Human Safety

Seismic activity tracking is another area where Fronics’ hearing innovation can help, as infrasound is also used to monitor earthquakes and volcanoes. Being able to “hear” the slightest change in sound could mean giving a village in the path of an upcoming eruption days of warning instead of a few hours or minutes.
Another application of Fronics’ technology is identifying and diagnosing problems in large infrastructure, such as bridges in need of maintenance. The field of predictive maintenance often uses sensors to pick up small sounds or vibrations that are inaudible to the human ear. By expanding that range, significant safety improvements can be made and catastrophic issues identified well ahead of time to safeguard human life.
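As a rough illustration of the predictive-maintenance idea (not Fronics’ actual method), a monitor can flag windows of vibration data whose energy jumps well above a quiet baseline; all readings here are hypothetical:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of vibration samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def flag_anomalies(windows, baseline_rms, factor=3.0):
    """Return indices of windows whose vibration energy exceeds
    `factor` x the quiet baseline: a crude maintenance-alert threshold."""
    return [i for i, w in enumerate(windows) if rms(w) > factor * baseline_rms]

# Hypothetical bridge sensor: three quiet windows, then a sharp vibration spike.
quiet = [[0.01, -0.02, 0.015, -0.01]] * 3
spike = [[0.3, -0.25, 0.28, -0.31]]
readings = quiet + spike
baseline = rms(quiet[0])
print(flag_anomalies(readings, baseline))  # the spike window is flagged
```

Real systems add frequency analysis and trend tracking on top of this, but the core loop is the same: establish a healthy baseline, then alert when the structure starts “sounding” different.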

Technology To Assist Us

As artificial intelligence (AI) and Internet of Things (IoT) technologies become more embedded in our day-to-day technologies, we will look to voice user interfaces as easier and more seamless ways to interact with our technology.
We see this happening already with voice-activated smart TVs and remote controls, and we can see a future where the technology is widely applied to smart home automation, AI assistants, self-driving cars, and biometric authentication. In a hyper-connected society, this intuitive human-to-machine interaction will only grow in usage and improve the lives of those who need it.

Annie Brown is the founder of Reliabl, an inclusive content moderation solution for community platforms.