Apple will hold its developer conference this year during the week of June 4th. There are several rumors around new device announcements, but for me the relevant aspect of this event is actually in its name: developers.
Apple will talk in the different sessions about new capabilities for its operating systems (iOS, macOS, watchOS, tvOS) to be released later this year. They will also show developers how to use them to extract more value from Apple’s platforms, which in turn will increase the value of Apple devices to their customers.
This is the strength of Apple’s ecosystem: get developers onto its platforms so that more consumers (and additional developers) are drawn to them, creating a self-reinforcing virtuous cycle.
Currently, Apple’s main revenue source and main ecosystem are built around the iPhone and its operating system, iOS. The iPhone’s pull is being leveraged by newer devices that are currently mostly accessories: the Apple Watch, the AirPods, and even the HomePod smart speaker.
But smartphone sales, including the iPhone’s, are reaching a point of saturation, driven more by replacement cycles than by first-time purchases, as Tim Cook himself recognizes:
So it is becoming important for Apple to make the jump into a new device cycle, less driven by the phone. I see two relevant pieces here: the Apple Watch and Siri.
As I have discussed in other posts, it is difficult for the Apple Watch to become an ecosystem because of its status as an iPhone accessory. Having the iPhone around reduces the incentive for developers to create Watch-specific experiences. This is why many developers, including Google, Amazon, and even Slack, have discontinued their Watch applications, and many others never even tried.
The cellular version of the Apple Watch may drive phone-less usage, and thus create the right incentives for developers to turn it into an ecosystem of its own. But there are currently two main limitations:
Both limitations are actually related. For instance, the advantage of being able to stream music on the cellular Watch is diminished when streaming is restricted to Apple Music, as is currently the case. But Spotify cannot provide a similar experience to its users because watchOS does not expose streaming capabilities to developers.
Another relevant limitation is the ability to make and receive calls on the Watch in phone-less mode. The cellular Watch supports calls, but only through the regular calling service (provided by operators) and Apple’s FaceTime Audio. Applications like WhatsApp or Skype cannot provide that same function. In fact, due to limitations in Apple’s CallKit offering, you cannot pick up a WhatsApp call on your Watch even when your phone is nearby.
Calling may sound like a very specific use case, perhaps not that relevant for many users and certainly only for a small number of developers. But Apple has centered so much of its marketing for the cellular version on calls that this is also affecting the device’s penetration: few operators are currently able to offer the cellular Apple Watch.
I covered the complexities of providing regular calls on a cellular Watch in a previous post (even if I failed in some specific predictions), so I will not go into those details here. The impact is that, outside the US, cellular Watch availability is very limited.
But if Apple allowed third-party services to provide calls on a phone-less Watch, it could keep pushing that use case (and the music streaming one) in its marketing while making the device simpler for operators to support, increasing its potential reach and so its attractiveness to developers.
Of course, one reason Apple is not pushing phone-less usage, and is therefore restricting streaming and communication services, may be that the Watch’s battery life only allows limited phone-less use. While independent usage is critical for the Watch to become a real ecosystem, the reality today is that it is not practical to leave the phone at home for a long period and rely on this device alone. The Apple Watch’s cellular option is just for casual use while running errands, going to the gym, or taking a short walk.
If Apple’s announcements at WWDC around watchOS give developers streaming and voice communication capabilities, or even other options that would drive more engaged interaction with users when a phone is not present, this will point to two things:
The other ecosystem that Apple should improve at WWDC 2018 is the one around Siri. This week, an article from The Information highlighted not only Siri’s struggles but also a certain lack of strategic direction at the heart of many of its problems.
Some of these problems show up in HomePod reviews, which highlight that while the device’s audio quality is amazing, its performance as a smart speaker is far behind what Amazon offers with Alexa on its Echo line.
This is to some extent related to the “schizophrenic” behavior Siri shows across devices, which demonstrates that multi-device ecosystems are still a hard problem to tackle. For instance, one of the HomePod’s highlighted features is that it does not conflict with other Apple devices (for instance, an iPhone) when you say “Hey Siri”: only the closest device processes the voice interaction, which in theory sounds like a great option. But since the functionality Siri offers on each device is somewhat different, this becomes a user experience problem. If I try to call an Uber via Siri expecting the iPhone to pick up the request, but the request is picked up by the HomePod, which does not support ride booking, the feature has not been particularly helpful. I have also discussed in another post the issues I see with the HomePod depending on a nearby iPhone for iCloud functionality, especially its limitation of supporting only a single user. All of these issues make Siri’s experience across devices unpredictable and confusing.
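The mismatch described above can be sketched in a few lines of Swift. This is a toy model with hypothetical names (`Device`, `arbitrateHeySiri`), not Apple’s actual arbitration logic: it just shows how a “closest device wins” rule can route a request to a device that lacks the capability the user needed.

```swift
// Toy sketch of "Hey Siri" device arbitration. Hypothetical model only:
// the nearest device wins, regardless of what it can actually do.
struct Device {
    let name: String
    let distanceMeters: Double
    let capabilities: Set<String>
}

/// Picks the device closest to the speaker, ignoring capabilities.
func arbitrateHeySiri(_ devices: [Device]) -> Device? {
    devices.min { $0.distanceMeters < $1.distanceMeters }
}

let iphone = Device(name: "iPhone", distanceMeters: 3.0,
                    capabilities: ["rideBooking", "calls", "musicPlayback"])
let homePod = Device(name: "HomePod", distanceMeters: 1.0,
                     capabilities: ["musicPlayback"])

if let winner = arbitrateHeySiri([iphone, homePod]) {
    print(winner.name)                                  // "HomePod"
    print(winner.capabilities.contains("rideBooking"))  // false: the Uber request goes nowhere
}
```

Arbitration by proximity alone is reasonable when every device is equally capable; the user experience problem appears precisely because they are not.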
Last, but not least, the ability developers have to extend Siri’s capabilities is very limited. Apple provides SiriKit on the iPhone for applications that want to expose some of their functionality via voice, but the use cases are quite restricted (calling and messaging, ride booking, restaurant reservations, note dictation, and a couple more) and not available on the HomePod at all.
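The restriction works roughly like a whitelist of intent domains. The following is a simplified, hypothetical Swift model of that idea (the names `SiriDomain`, `VoiceApp`, and `siriCanRoute` are illustrative, not the real Intents API): a request only reaches a third-party app when it maps onto a predefined domain the app has declared.

```swift
// Hypothetical, simplified model of SiriKit's domain whitelist.
// Not the real Intents framework.
enum SiriDomain {
    case messaging              // "Send a message with WhatsApp"
    case voipCalling            // "Call Alice on Skype"
    case rideBooking            // "Get me an Uber"
    case restaurantReservations
    case notes
    case payments
    // Anything else (e.g. music streaming, as of 2018) has no domain at all.
}

struct VoiceApp {
    let name: String
    let supportedDomains: Set<SiriDomain>
}

/// A request is routed to the app only when it carries a SiriKit domain
/// that the app supports; everything else never gets through.
func siriCanRoute(_ request: SiriDomain?, to app: VoiceApp) -> Bool {
    guard let domain = request else { return false }
    return app.supportedDomains.contains(domain)
}

let uber = VoiceApp(name: "Uber", supportedDomains: [.rideBooking])
print(siriCanRoute(.rideBooking, to: uber))  // true
print(siriCanRoute(nil, to: uber))           // false: outside SiriKit's domains
```

Contrast this with Alexa’s skill model, where a developer defines the domain itself; under SiriKit, a use case Apple has not whitelisted simply has no way in.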
All this adds up to Siri not really being a sustainable ecosystem for developers today, and so not being as valuable to Apple as it could be.
There are many things Apple could announce for Siri during WWDC 2018 that would help solve this and make it a more viable platform both for users and for developers. My favorites are:
If Apple really takes the chance to move into the next wave after the iPhone by empowering the (currently just potential) ecosystems around the Apple Watch and Siri, we could start seeing developers create amazing new things pretty soon. I hope we see some of that at WWDC 2018.