I turned my iPhone into an “audio notification radio” for 6 months, and it convinced me that AirPods-like devices are the wearable of the future. More on that soon; first, a bit of background.
The evolution of media has been influenced by the hardware used to consume them. Visual media (text, images and videos) took over almost completely thanks to the many displays we use and carry around.
Visual media can be very fast to consume, but they often require a lot of attention and, many times, expensive “context switches” too.
One has to invest a burst of attention in a piece of visual content just to decide whether it’s important enough to consume at that precise moment.
Push notifications are the perfect example. It takes the same amount of attention to check an important WhatsApp notification from your partner or a friend as it does to check a notification reminding you to use an app you’d forgotten you had. Those notifications were clearly not created equal and differ immensely in priority, yet they get the same resource allocation while you check them.
With audio, things work a bit differently. Humans have the ability to filter and prioritize audio information from multiple sources, and even to switch between them (see the cocktail party effect). I have often found myself responding to someone who spoke to me during an important task only after I had finished the task. Our brain is capable of processing audio information, prioritizing it, and storing it for later action.
So a few months ago, I decided to run a little experiment to see whether those natural capabilities could be used to consume information more efficiently and protect our attention.
For this little side project, I chose to focus on two huge attention drainers: news and messaging.
The idea with news was to see whether I could absorb and filter information on the go through audio, without having to look at a feed or at notifications.
With messaging, I just wanted to get one step closer to “hands-free communication” using voice messaging, and see how it would feel.
On a side note: I’ve been more focused on the product side in the past few years, but I still really enjoy coding side projects in my spare time. This was the perfect occasion, so here it is.
To be able to “hear” notifications, I used a nice little hack: I replaced the default iOS notification sound with a spoken version of the notification’s text, read by a synthesized voice. That was it; I could then start hearing my notifications.
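For the curious, here is a minimal sketch of how such a hack could be wired up with a Notification Service Extension. This is my illustration, not the project’s actual code: the audio_url payload key, the class name, and the file handling are all assumptions.

```swift
import UserNotifications

// Illustrative sketch: a Notification Service Extension that downloads a
// server-synthesized audio clip and uses it as the notification's sound.
// The "audio_url" payload key is a hypothetical convention, not iOS API.
class SpokenNotificationService: UNNotificationServiceExtension {

    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        guard let content = request.content.mutableCopy() as? UNMutableNotificationContent,
              let urlString = content.userInfo["audio_url"] as? String,
              let audioURL = URL(string: urlString) else {
            contentHandler(request.content) // nothing to do, deliver as-is
            return
        }

        // Custom notification sounds must live in Library/Sounds and be
        // short (Apple caps them at 30 seconds).
        let soundsDir = FileManager.default
            .urls(for: .libraryDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("Sounds")
        try? FileManager.default.createDirectory(at: soundsDir,
                                                 withIntermediateDirectories: true)
        let fileName = "\(request.identifier).caf"
        let destination = soundsDir.appendingPathComponent(fileName)

        // Download the spoken clip, then point the notification's sound at it.
        URLSession.shared.downloadTask(with: audioURL) { tempURL, _, _ in
            if let tempURL = tempURL,
               (try? FileManager.default.moveItem(at: tempURL, to: destination)) != nil {
                content.sound = UNNotificationSound(named: UNNotificationSoundName(fileName))
            }
            contentHandler(content)
        }.resume()
    }
}
```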
Then came content. My friend Shay Erlichmen decided to help me build the server part, which would fetch headlines from RSS feeds, synthesize them, and send audio push notifications to the iOS devices. For the messaging part, the server would do much the same, except that it would simply pass along the audio from users’ voice messages. Everything was quickly up and running. See how it sounds in the video below.
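To give a feel for that flow, here is a rough server-side sketch, again in Swift and again only an approximation: fetchHeadlines, synthesize, the provider JWT, and the audio_url key are hypothetical stand-ins for whatever the real server did.

```swift
import Foundation

struct Headline { let title: String }

// Hypothetical helpers, stubbed for illustration.
func fetchHeadlines(from feedURL: URL) -> [Headline] {
    // A real implementation would download and parse the RSS XML here.
    return []
}

func synthesize(_ text: String) -> URL {
    // A real implementation would call a TTS service and upload the clip.
    return URL(string: "https://example.com/audio/clip.caf")!
}

// Sends one spoken headline to a device via the APNs HTTP/2 provider API.
func pushSpokenHeadline(_ headline: Headline, to deviceToken: String, jwt: String) {
    var request = URLRequest(
        url: URL(string: "https://api.push.apple.com/3/device/\(deviceToken)")!)
    request.httpMethod = "POST"
    request.setValue("bearer \(jwt)", forHTTPHeaderField: "authorization")
    request.setValue("com.example.audioradio", forHTTPHeaderField: "apns-topic") // assumed bundle id

    let audioURL = synthesize(headline.title)
    let payload: [String: Any] = [
        "aps": [
            "alert": headline.title,
            "mutable-content": 1         // lets the service extension swap in the audio
        ],
        "audio_url": audioURL.absoluteString // custom key read by the extension above
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: payload)
    URLSession.shared.dataTask(with: request).resume()
}

// Example wiring: push every headline from one feed.
for headline in fetchHeadlines(from: URL(string: "https://example.com/rss")!) {
    pushSpokenHeadline(headline, to: "<device-token>", jwt: "<provider-jwt>")
}
```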
After 6 months of using the app, here is what happened, starting with the most interesting part:
I totally managed to ignore notifications that weren’t interesting to me or came in at a bad time. I even subscribed to the TMZ feed for extreme testing. Content I didn’t care about didn’t bother me at all; my brain naturally filtered it out whenever it arrived in the middle of something more important. It simply stayed in the background. This is probably the most interesting and surprising finding.
This little project confirmed that audio is a great, lightweight way of absorbing short pieces of information with minimal disturbance to our focus.
In recent years, speech has become a much more reliable input, and it can finally free us from being tied to our displays.
Based on the above, we could create a new, more efficient way of consuming incoming information: audio first, display on demand.
“Pushed” information would come in as audio first and pass through our native cognitive filters, instead of “stealing” our attention at will.
Should the pushed information need visual attention, it could be vocally summoned to the display in use. Using a display is an expensive task that should be initiated by the user, on a “pull” basis only.
Audio and speech would become the “master”; displays would become “slaves” whose purpose is to help us consume “more expensive” media intentionally.
AirPods to rule them all
Wireless earbuds, combined with a strong AI assistant (such as AirPods + Siri), are the device that could help us regain our freedom of movement and control over our attention.
I strongly believe that wireless earbuds are the wearable of the future, the one that will be used to control other devices.
And once they’re able to connect to every display around us (oh wait… Apple…), we’ll just tell our AI assistant to transfer visual content onto them…
Ultimately, wireless earbuds might be the only wearable we need.
Hear you soon.