I have heard the future

by Ron Wiesengrun, September 1st, 2016

And it sounds very much like AirPods

I turned my iPhone into an “audio notification radio” for 6 months, and it convinced me that AirPods-like devices are the wearable of the future. More on that soon; first, a bit of background.

The evolution of media has been influenced by the hardware used to consume them. Visual media (text, images and videos) took over almost completely thanks to the many displays we use and carry around.

Visual media can be very fast to consume, but they often demand a lot of attention and, many times, expensive “context switches” too.

One has to invest a burst of attention into a visual piece of content just to decide whether it’s important enough to be consumed at that precise moment.

Push Notifications are the perfect example. It takes the same amount of attention to check an important WhatsApp notification from your partner or friend, as it takes to check a notification reminding you to use an app you’ve forgotten you had. Clearly those notifications were not created equal and differ immensely in priority, but they will still get the same resource allocation while you check them.

With audio it works a bit differently. Humans have the ability to filter and prioritize audio information from multiple sources, and even to switch between them (see the cocktail party effect). Many times I have found myself responding to someone who spoke to me during an important task only after I had finished the task. Our brain is capable of processing audio information, prioritizing it, and storing it for later action.

So a few months ago, I decided to run a little experiment to check if those natural capabilities could be used to consume information more efficiently and protect our attention.

6 months with audio notifications

For this little side project, I chose to focus on 2 huge attention drainers: news and messaging.

The idea with news was to check whether I could absorb and filter information on the go through audio, without having to look at a feed or at notifications.

With messaging, I just wanted to get one step closer to “hands-free communication” using voice messages, and see how it would feel.

On a side note, I’ve been more focused on the product side in the past few years, but I still really enjoy coding side projects in my spare time. This was the perfect occasion, so here it is.

In order to be able to “hear” notifications, I used a nice little hack: I replaced the default iOS notification sound with a spoken version of the notification’s text, read by a synthesized voice. That was it; I could then start hearing my notifications.
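The post doesn’t spell out the hack’s internals, but iOS will play a custom sound for a push whenever the APNs payload’s sound key names an audio file available to the app (in its bundle or in its Library/Sounds directory), so a freshly synthesized reading of the text can stand in for the default chime. Here is a minimal sketch of such a payload; the helper and the clip name are hypothetical, not taken from the article:

```python
import json

def spoken_notification_payload(text: str, clip_name: str) -> bytes:
    """Build an APNs payload whose notification sound is a synthesized
    reading of `text`. The clip (e.g. "headline-42.caf") is assumed to have
    been synthesized server-side and made available to the app beforehand."""
    payload = {
        "aps": {
            "alert": text,       # the text is still shown on screen as usual
            "sound": clip_name,  # iOS plays this clip instead of the default tone
        }
    }
    return json.dumps(payload).encode("utf-8")
```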

Then came content. My friend Shay Erlichmen decided to help me build the server part that would fetch headlines from RSS feeds, synthesize them, and send audio push notifications to the iOS devices. For the messaging part, the server would do just about the same, except that it would simply pass along the audio from users’ voice messages. Quickly, all was up and running. See how it sounds in the video below.
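The article doesn’t publish the server code, but a minimal sketch of the news pipeline it describes might look like the following. feedparser is a common RSS library, and synthesize and send_push are stand-ins for whatever text-to-speech and push services were actually used:

```python
import time

import feedparser  # widely used RSS parser; the article doesn't name its stack


def synthesize(text: str) -> bytes:
    """Stand-in for a text-to-speech call; the article doesn't say which
    synthesizer was used."""
    raise NotImplementedError


def send_push(device_token: str, text: str, clip: bytes) -> None:
    """Stand-in for the delivery step: store `clip` where the app can play it,
    then send a push notification whose sound references it."""
    raise NotImplementedError


def poll_feed(url: str, device_token: str, interval: int = 300) -> None:
    """Poll an RSS feed and push every new headline as an audio notification."""
    seen: set = set()
    while True:
        for entry in feedparser.parse(url).entries:
            if entry.title not in seen:
                seen.add(entry.title)
                send_push(device_token, entry.title, synthesize(entry.title))
        time.sleep(interval)
```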

After 6 months of using the app, here is what happened in increasing order of interest:

  • I stopped using the messaging part after about a month. Listening to incoming messages without having to touch my phone was a true pleasure, but it became annoying to pick up my phone and record a reply; this non-hands-free step felt terrible. It will soon be a problem of the past, though; more on that later.
  • A device that spontaneously starts talking can scare the hell out of you when you’re working late, alone in your silent office. That problem could easily be solved by fading in the sound or playing a short entrance tone.
  • It wasn’t easy to shake the habit of instinctively looking at my phone every time a notification arrived. But when I finally managed, I looked at my phone only when I was really interested in the news I had just heard about, and when I actually had the time to read.
  • When I was listening to music while running, I didn’t mind having news notifications blend in. In fact, many times I just wanted to tell my non-existent assistant, “Read me more.”
  • When I wanted peace and quiet, I’d just remove the earbuds… easy
  • I was embarrassed more than once when a loud notification came in a public place and everyone could hear it. This happened when I didn’t have my earbuds plugged in or forgot to switch silent mode on.
  • I felt really connected to what was happening in the world without having to open a feed, at least at the superficial headline level; kind of like hearing news flashes on the radio.

  • I totally managed to ignore notifications that didn’t interest me or that came at a bad time. I even subscribed to the TMZ feed for extreme testing. I was not bothered at all by content I didn’t care about; my brain would naturally filter it out whenever something more important was going on, and it simply stayed in the background. This is probably the most interesting and surprising point.

Master: Audio - Slave: Display

This little project confirmed that audio is a great lightweight way of absorbing short pieces of information with minimal disturbance to our focus.

In recent years, speech has become a much more reliable input, and it can finally free us from being tied to our displays.

Based on the above, we could create a new efficient way of consuming incoming information: audio first, display on demand.

“Pushed” information would come in as audio first and pass through our native cognitive filters, instead of “stealing” our attention at will.

Should the pushed information need visual attention, it could be vocally summoned to the display in use. Using displays is an expensive task that should be initiated by the user on a “pull” basis only.

Audio and speech would become the “master”; displays would become “slaves” whose purpose is to help us consume “more expensive” media intentionally.
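To make that flow concrete, here is a toy model of “audio first, display on demand”; the class and method names are my own, not from the article. Incoming items are announced as audio immediately, while their visual payload waits in a queue until the user explicitly pulls it to a screen:

```python
from collections import deque
from typing import Callable, Optional


class AudioFirstInbox:
    """Toy model of "audio first, display on demand": every incoming item is
    announced as audio right away; the display is used only when the user
    explicitly asks for more."""

    def __init__(self, speak: Callable[[str], None]):
        self.speak = speak             # e.g. a text-to-speech callback
        self.pending: deque = deque()  # visual payloads awaiting a "pull"

    def push(self, headline: str, full_content: str) -> None:
        self.speak(headline)           # audio first: no screen time required
        self.pending.append(full_content)

    def read_me_more(self) -> Optional[str]:
        # Display on demand: summoning content is an explicit user action.
        return self.pending.popleft() if self.pending else None
```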

AirPods to rule them all

Wireless earbuds, in combination with a strong AI assistant (such as AirPods + Siri), could be the device that helps us regain freedom of movement and control of our attention.

I strongly believe that wireless earbuds are the wearable of the future, the one that will be used to control other devices.

And when they’re able to connect to every display around us (oh wait… Apple…), we’ll just tell our AI assistant to transfer visual content onto them…

Ultimately, wireless earbuds might be the only wearable we need.

Hear you soon.