The future of our household (and my agency)
Q: How do you know if one of your friends owns an Amazon Echo?
A: Don’t worry, they’ll tell you.
I’m one of the many folks who have come to rely on the Echo on a regular basis. Alexa, the Echo’s omnipresent avatar, is a veritable member of our family — our young girls (ages 3 and 1) don’t know a world without her. She’s not a novelty. For them, Alexa is the way you make stuff happen in our home. For a parent neck-deep in muddy shoes and bath toys, she’s a virtual lifesaver.
I’ll refrain from telling you the depth of my love for Alexa and my many reasons for it, because you probably know someone who already tells you every chance they get. I want to jump directly to how these experiences with the Echo and similar technology are changing homes, cars, and offices — and how they will very quickly change Critical Mass’s craft and expertise.
In the past 20 years we’ve become adept at navigating information with swipes, clicks, and gestures. And we’ve built entire industries, criteria, and processes to refine these interactions for the sake of business revenue and general end-user sanity. But then one day you wake up and find a better way to access information and make s#!t happen. Or better put, you rediscover the way. It’s by asking or commanding with your voice. It’s brilliant really — you should try it. Maybe practice on a loved one and see how it works. If you’re polite, they tend to respond in kind.
The one massive problem with this lost art of talking is that our newest conversation partners (the machines) don’t share a well-known language that we’ve grown up using. Each new voice technology entrant brings its own set of terms, lexicons, and keywords. And even as these machines approach conversational-level artificial intelligence, we lack the meta-controls to keep them engaged — or, in some cases, at bay.
Ten years ago I had the good fortune of working on the launch of Ford’s Sync technology, which allows drivers to give voice commands to their car or their car-connected smartphone. It was, and still is, a brilliant way to keep drivers focused on actually driving. But during that process I recognized a very real disconnect between this type of technology and the end-user. They didn’t know what to say. In many cases I watched them overthink it — attempting to conjure a word that the machine would unfailingly understand. In other cases they stumbled once and just gave up forever — leaving this valuable technology mute for the life of the car.
These challenges still exist today. Users issue a command with about 4% confidence that they’ve triggered the correct event or will get their desired information in return. Over time some consumers will settle into a comfortable groove of 2–3 actions that work as advertised and make a habit of them, but so much remains untapped. You wouldn’t say the same for most web, app, and physical experiences. Pull up any random website or app right now and you’ll find important waypoints and mnemonics that set you on your way to exploring the entire experience with limited stumbling. We call these things “standards” because they are, well… standard, and they work over and over again for billions of people every day.
Voice interactions are in desperate need of the same set of predictable interactions. What does the discipline of User Experience or Customer Experience mean for voice interactions? What does content strategy mean when the content lacks a typeface and can’t be picked from an RGB swatch? How do you construct information architectures when the core tools (i.e. boxes and arrows) no longer work?
Hell if I know! But our Critical Mass teams are figuring it out.
We’re diving in, so stay tuned. Or join us if you want to have some fun and define the world that my 3-year-old will inhabit — using her “inside voice” of course.
This article originally appeared in Shots http://www.shots.net/features/article/90785/why-amazon-echo-will-revolutionise-domestic-life