Now that, after a long wait, the new iPhone 7 is finally out of the bag, many of us were surprised to see that Apple really did ditch the headphone jack.
Chaos ensued. Blog posts were written.
Apple has a great track record in dropping obsolete technologies just at the right time. This time, it might have been the right step, but a bit premature.
Why? In my opinion, we can now clearly see the vision Apple is working towards for the next generations of mobile devices in the Apple Watch, iPhone, and headset ecosystem, even though they might not quite be there yet.
Axing the jack paves the way for discreet, bean-sized earbuds that can simultaneously translate, filter out unwanted noise, or let us control other devices by voice. That is really, really awesome!
It is awesome because hearables are the new frontier in mobile UX design.
The market for these so-called hearables is estimated to reach around $16 billion by 2021. Naturally, it is becoming a crowded space.
Samsung launched the IconX wireless earbuds, while Sony is actively working on Project N in their Futurelab with the following vision:
“With N, ears are uncovered so we are able to listen to music while being open to feel the lively soundscape around us. We can access local information without having to constantly scan the display on a mobile device.”
Obviously, we need to link drop!in’s (http://www.idrop.in) “Nearby Events Happening Now” with the “N”.
Further, Sony will soon release the much-heralded Xperia Ear, which promises to deliver weather and message notifications via voice and to recognise input either by voice or head movements.
To complete the Asian drive for aural entertainment, Korea’s LG Electronics announced that it is planning to include Amazon’s Alexa in its SmartThinQ Hub, a device used to connect home appliances over the Internet.
Great UX design is invisible: the user intuitively knows what to do. And for humans, the most intuitive form of communication is… you guessed right.
Now, how would that work in practice? Siri works OK-ish on the mobile phone, but speech recognition absolutely sucks on the iMac and MacBooks.
So I am not getting it. If the quality of speech recognition still sucks, why is Apple doing this?
Because it is not the recognition part they are after, but the speech synthesis. Even now, having your iPhone read a web article to you “just works”.
Now, how would that translate into other mobile features, namely navigation and games?
Let me give you an example.
The first iteration of “In Shadows” (http://www.inshadows.asia) was an iBeacon-enabled running game. A game of tag, to be precise. During the many, many test runs, we noticed that it is very disruptive to the overall game experience to constantly re-open an app that is strategically placed on your upper arm.
Since it is much easier to deliver game messages to the player via earphones, it is possible to reduce the artwork in the game overall and move most of the fun-relevant elements to an aural-only experience.
And actually, that worked really well. When the iBeacon registered a Shadow player, the app would trigger the prompt “A Shadow is near”; once you started running towards the Shadow, it would say “Shadow is getting closer”; and if you were in real danger, the app would say “Danger! Shadow is closing in”.
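To make the mechanic concrete, here is a minimal sketch (not the actual In Shadows code) of how beacon proximity could map to spoken prompts. The tier names mirror the far/near/immediate buckets that iBeacon ranging typically reports; the function name and structure are my own assumptions.

```python
from typing import Optional

# Hypothetical mapping from iBeacon proximity tier to voice prompt.
# The prompt texts are the ones described above; the tier names follow
# the usual far/near/immediate buckets reported by beacon ranging.
PROMPTS = {
    "far": "A Shadow is near",
    "near": "Shadow is getting closer",
    "immediate": "Danger! Shadow is closing in",
}

def prompt_for(proximity: str) -> Optional[str]:
    """Return the voice prompt for a detected Shadow, or None if the
    proximity tier is unknown (e.g. the beacon signal was lost)."""
    return PROMPTS.get(proximity)
```

On iOS, the returned string would then be handed to the system's speech synthesis so the player never has to look at the screen.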
So that does sound like a lot of fun. But this is where the real fun begins: if, in the near future, apps respond properly to voice recognition, the burst functionality, a way to temporarily stun a Shadow player, could be triggered via the voice command ‘burst’. And that would be AMAZING!
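A hypothetical sketch of how that voice trigger could be dispatched: the speech-recognition layer is assumed to hand us plain text, and the command name (‘burst’), the game-state keys, and the stun duration are all made up for illustration.

```python
import time

STUN_SECONDS = 5  # hypothetical stun duration

def handle_command(text, game_state, now=None):
    """If the player said 'burst' while a Shadow is nearby, stun it
    for STUN_SECONDS and return a confirmation prompt to speak back."""
    now = now if now is not None else time.time()
    if text.strip().lower() == "burst" and game_state.get("shadow_nearby"):
        game_state["shadow_stunned_until"] = now + STUN_SECONDS
        return "Shadow stunned!"
    return None  # unrecognised command, or no Shadow in range
```

The nice property of this shape is that the recogniser only needs to spot a single short keyword, which is a much easier problem than free-form dictation while running.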
…well, now I would love to play a Harry Potter game with that functionality.
We are on the verge of an aural augmented reality revolution and don’t even know it.
What a time to be alive.