Multi-device is hard, and not even Apple gets it right

Written by jorge.serna | Published 2017/11/08
Tech Story Tags: apple-watch | iphone | airpods | multi-device | ux


To be able to disintegrate the smartphone, new devices need to better coexist with it first

One of the topics I frequently cover in my posts is 'smartphone disintegration': the trend, driven by technology evolution and market dynamics rather than by customer demand, of new personal devices providing functionality that overlaps with the smartphone's, and that will eventually replace it.

There are recent articles and news that keep pointing in this direction:

One recent report on Apple's rumored AR headset notes that, unlike the current generation of virtual reality headsets that use a smartphone as the engine and screen, Apple's device will have its own display and run on a new chip and operating system.

That covers display and processing, but what about connectivity? Particularly for use cases that require cellular, I can see the headset using the connection available in your phone… or your watch.

This is all pointing to the same strategic direction of new devices (speakers, watches, glasses) taking up pieces of the smartphone role today. But the path to that clear goal already shows that the transition will create new problems for the customer experience that need to be addressed.

The accessory/independent device duality

As I have discussed at length before, Apple's strategy for the Watch is creating a path in which the phone itself becomes less relevant. Some even see the Watch as a transitional step toward that AR headset, which would eventually make the Watch itself redundant.

This became clearer with the LTE-enabled Apple Watch Series 3, which was expected to evolve the Watch from an iPhone accessory into a completely independent device.

But the released product turned out to be something else: a hybrid between accessory and self-sufficient device. Two main reasons drive this:

  • User expectations. While the cellular Watch can free the user from carrying their phone, this will be a casual replacement, never a full one: a (big) screen will still be preferred for many use cases, and the camera is increasingly important, to mention just two examples. Most of the time the user will still carry both devices, so rather than expecting purely independent use, it is important that they coexist gracefully. More on that later.
  • Technology limitations. This is probably the most important driver: battery life under cellular usage cannot yet sustain full-replacement behavior. The recent watchOS 4.1 release, which enabled Apple Music streaming on the Watch, highlights this further.

So in the end we have a new device that *can* work independently from the phone, but that will act as an accessory and rely on the phone's connectivity whenever possible. And this brings us to the multi-device user experience problem: when you are carrying two devices with overlapping functions, deciding which device does what becomes critical.

Coexistence creates new problems

Let me give you an example of a problem present today in the coexistence of Apple devices on the path to smartphone disintegration.

This is the scenario: I am walking with my iPhone and my Apple Watch, listening to music via AirPods.


The music is playing from my phone to the AirPods; I don't want to waste precious Watch battery streaming it from there instead. But interestingly, since the Apple Watch *is* an iPhone accessory, it shows the "Now Playing" face, which lets me control volume and skip songs without taking my phone out of my pocket. This is a great design decision from Apple, which has 'disintegrated' the Music experience across three devices (iPhone, Watch and AirPods), giving me a convenient way of using each of them for part of the experience.

And if I receive a text (or a WhatsApp message) on my phone, I get the notification on my wrist, so I can check it while keeping my phone in my pocket. Convenient indeed.

Then I want to reply to that text, and here is where the coexistence shows its seams. The Apple Watch lets me reply by voice, using Siri's speech-to-text capabilities to transcribe my words into a message, which works quite well. But in this case, the Watch decides to act as an independent device instead of an accessory and ignores that I am currently listening to music via AirPods. It does not pause the music, it does not give me the audio cue that voice recognition has started, and it does not take the voice input from the AirPods but from the mic inside the Watch itself. It behaves as if no other devices were present.

What does what

This is a really minor inconvenience, but it shows a gap in the smartphone disintegration experience when functions are distributed across devices:

  • Data connection from phone
  • Audio interaction in a headset
  • Control, notifications and UI in a Watch

Coordinating them to work together in the most reasonable way is not simple, sometimes due to technology limitations, sometimes due to product design ones.

From the technology point of view we have to wonder what the best approach is: should the Bluetooth connection switch from the phone to the Watch, or should the Watch in this case trigger voice recognition on the phone while giving visual feedback through the Watch display? Can either of those happen fast enough to feel responsive?

And is that always the expected behavior, or are there cases in which it makes more sense for the Watch to use its own mic even when the user is wearing AirPods? Think of song recognition with Shazam, for instance.
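The trade-off above can be sketched as a simple routing policy. This is purely illustrative Python; the device names, actions and rules are my own assumptions for the sake of the argument, not Apple's actual implementation:

```python
# Illustrative sketch of a multi-device voice-input routing policy.
# Device names, actions and rules are hypothetical, not Apple's logic.

def route_voice_input(devices_present, action):
    """Pick which device captures the mic and which gives feedback.

    devices_present: set of connected devices, e.g. {"iphone", "watch", "airpods"}
    action: what the user is doing, e.g. "dictate_reply" or "identify_song"
    """
    # Song recognition wants ambient sound, so the Watch's own mic
    # is arguably right even when AirPods are connected.
    if action == "identify_song":
        return {"mic": "watch", "feedback": "watch"}

    # For dictation, prefer the headset the user is already listening
    # on: pause the music, play an audio cue, capture from the AirPods.
    if action == "dictate_reply" and "airpods" in devices_present:
        return {"mic": "airpods", "feedback": "airpods+watch"}

    # Fallback: the Watch acts as an independent device.
    return {"mic": "watch", "feedback": "watch"}
```

For instance, `route_voice_input({"iphone", "watch", "airpods"}, "dictate_reply")` would route the mic to the AirPods, which is exactly what the Watch fails to do in the scenario described above.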

Handover makes this worse

The previous situation required coordination between devices when all of them are present. But there are even more complex scenarios if we consider dynamic changes:

  • You select some music on your iPhone, because its UI is more convenient, put on your AirPods and start listening. Then you go out for a run, leaving your phone behind since you have an LTE Apple Watch. The natural expectation would be for your AirPods to switch from the phone to the Watch, which should continue playing the song you were listening to, and for this to happen without a glitch: continuous music, always in your ears.
  • You are on a call on your iPhone, using your AirPods so you can conveniently charge the phone on your AirPower mat. Suddenly you have to leave, but your phone is still not charged enough, so taking it means the call will surely drop. No worries: as you leave (or as the iPhone detects that the battery level is low), the call seamlessly moves to your Watch and your AirPods connect to it, so you can continue your conversation without a problem.

Neither of those scenarios works today. Transition from the phone to the Watch is not supported, and just the time it takes for the Watch to start its LTE radio when it detects disconnection from the phone would mean a significant pause in the music in the first scenario and, more critically, a dropped call in the second. But the customer expectation will be that they should work.
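One way to close that gap would be to prepare the handover before the disconnection happens, rather than reacting to it. The sketch below is my own illustration of that idea, with hypothetical class and method names; it is not how watchOS actually behaves:

```python
# Hypothetical handover coordinator: pre-warm the Watch's LTE radio
# *before* the phone link drops, so playback (or a call) can move over
# without a gap. Purely illustrative; not Apple's implementation.

class HandoverCoordinator:
    def __init__(self):
        self.active_device = "iphone"   # device currently driving the AirPods
        self.watch_lte_ready = False

    def on_signal_weakening(self):
        """Phone link degrading (user walking away, battery low):
        start the Watch's LTE radio now, while audio still flows."""
        self.watch_lte_ready = True

    def on_phone_disconnected(self, playback_position):
        """If the radio was pre-warmed, resume on the Watch at the same
        position; otherwise the user hears a gap (or the call drops)."""
        if self.watch_lte_ready:
            self.active_device = "watch"
            return ("resume", playback_position)
        return ("gap", playback_position)
```

The design point is the ordering: by the time the disconnection event arrives, the slow step (bringing up the radio) has already happened, which is what "seamless" would require in both scenarios above.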

As new devices are incorporated into the ecosystem, and as new applications and functions work across several of them, the balance between coexistence and independence will depend more on context, and design decisions will have to take complex multi-device scenarios into account.


Published by HackerNoon on 2017/11/08