One of the most interesting strategic and technical challenges I faced while leading Office development was cross-platform development. The web and then the mobile wave created a new set of target platforms and dominant operating systems. iOS and Android devices captured the phone and tablet markets. Windows responded with a largely new Windows Runtime API for building Windows Store apps. Web-based delivery of complex applications became possible given improvements in the underlying browser platform and better connectivity. We needed to figure out how all these new devices and our shift to services changed what Office should be and how we should participate on these platforms. Our thinking was greatly complicated by trying to model and control how the choices Office made would impact the fortunes of our Windows phone and tablet efforts. This post explores how we navigated both the strategic and technical challenges we faced during this period.
Cross-platform development has been a challenge in the computing industry from the very earliest days. It rises to special prominence during periods of major platform shifts (e.g. from character-based operating systems to graphical operating systems, or from PC to mobile). During these transitional periods it can be unclear whether the shift is even going to happen, how it will happen for a particular app category, how fast it will happen, and which operating systems and platforms will emerge as dominant on the other side.
Platform shifts can be the triggers for changing the fortunes of the applications built on top of them and applications can help determine the outcome of these platform contests. Excel was able to carve out a niche in graphical spreadsheets by focusing on the Mac in its battle against Lotus 1–2–3 and then transitioned this to overall dominance when Lotus was slow to provide a Windows version. Microsoft Word also made a heavy bet on Windows ahead of the leading WordPerfect application and rode this to dominance in word processing. This “Microsoft origin story” where Windows was critical to Office success and Office was critical to Windows success would impact how we navigated this most recent platform transition.
More recently, Google was able to grow their productivity share by betting on the web as a new platform. This is an interesting example because they combined the client benefits of very low-friction application install through a browser-based interface with a differentiated feature set. This feature set built on a cloud-based backend infrastructure to support anywhere access, easy sharing and collaborative authoring. This is an example where a new platform required a significant rethinking of architecture and functionality rather than “simply porting” to the new platform. In this case a new platform also stressed existing business models which are especially resistant to change. I wrote about this dynamic before in my Complexity and Strategy and What Really Happened in Vista posts.
The Office team was no stranger to cross-platform development. Word shipped first on Xenix (a Unix variant) and Excel was initially developed for the Mac. I like the chart below because it captures both these early unsettled periods as well as the incredible dominance of Windows during the 1997–2007 period. It does miss the growth of the web as a distinct target platform during this decade.
Mac Context
When Steve Jobs returned to Apple in 1997, Microsoft famously made a $150M investment in Apple and committed to continue developing Office for the Mac. This was at the nadir of Mac relevancy and Microsoft’s action was viewed as critical to credibility for the Mac as a productivity device. Microsoft set up a distinct “Macintosh Business Unit” (MacBU) to build these versions.
For the previous versions (up to Office 97) Office had been doing more and more to unify internal development across the two PC platforms. A relatively thick platform abstraction layer (or “PAL”, which included non-trivial amounts of actual Windows code for text processing, graphics and object embedding and linking) was used so that Office developers could mostly target a single API. Individual developers were directly responsible for making their features work on both platforms.
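To make the shape of that approach concrete, here is a minimal and heavily simplified sketch of the PAL idea (hypothetical names and toy functionality, not the actual Office PAL, which covered text processing, graphics and OLE rather than anything this small). Application code targets a single Windows-flavored API; on Windows the layer is a thin passthrough, while on the Mac the same contract is reimplemented on top of native Mac facilities.

```cpp
// Hypothetical PAL-style sketch (illustrative only, not Office code).
#include <cstdio>

// ---- pal.h: the one API that application code sees on both platforms ----
struct PalPoint { int x, y; };
void PalTextOut(PalPoint where, const char* text);

// ---- In a real layout this implementation would live in two files:
// pal_win.cpp, a thin forward to the native Windows text-drawing call, and
// pal_mac.cpp, a reimplementation of the same contract on the Mac's
// graphics APIs. A single stub stands in for both in this sketch. ----
void PalTextOut(PalPoint where, const char* text)
{
    std::printf("drawing \"%s\" at (%d,%d)\n", text, where.x, where.y);
}

// ---- Application code: identical on Windows and Mac ----
int main()
{
    PalTextOut({10, 20}, "Hello from shared application code");
    return 0;
}
```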
This approach was viewed skeptically both internally and externally. The overhead for individual developers was high (and in fact the requirement was abandoned in order to get a late Office 97 for Windows release out the door). Externally, the approach was blamed for performance problems in the latest versions of Mac Word and Excel (although in fact the most egregious issues were the result of some relatively simple bugs that had been missed in internal testing rather than a consequence of the overall development approach). Additionally, Microsoft had in a number of places elected to port an existing Windows capability rather than support a competing Mac one (e.g. Windows Object Linking and Embedding (OLE) rather than the Mac’s Publish and Subscribe).
As part of the deal with Apple, a separate team would be devoted to producing “Mac-native” versions. An important consequence was that the main Office team building Office for Windows would be unencumbered by the constraint of supporting the Mac. This approach appeared to work well during the following decade as the main Office team focused on features for large organizations and connecting with SharePoint. The Mac team focused on special features for their Mac users and released very successful versions of Office for Mac. They spent a relatively small amount of time tracking changes in the Windows product.
This strategy ran headlong into Office 2007. Office 2007 had a long three-and-a-half-year development cycle, but at the end Office on Windows had a new ribbon UI, new OpenXML file formats and significant internal changes to the major applications (Excel in particular had a massive number of enhancements, including a radical expansion in the size limits for spreadsheets). The idea that the Mac team could track core changes with a relatively minimal investment was blown apart. The Mac team had to scramble to support the new capabilities, especially the new file formats. I have written before about how critical the network effects around the file formats have been for the Office business. We were shooting ourselves in the foot by not supporting our newest formats in a product that was becoming more important in both critical educational and high tech markets as Mac market share grew.
Mobile Context
Microsoft produced mobile versions of Office (originally “Pocket Office”) for Windows phones starting in 1996. These were completely separate code bases with limited functionality — essentially viewers. They were built in the Windows phone organization and were effectively disconnected from the core Office product and team. They grew in functionality as the phone platform became more capable and the team would opportunistically adopt bits and pieces of Office code to implement specific functionality but the product remained quite distinct and severely limited. This was true even as they were extended to support Symbian (in an early joint venture with Nokia) and then were ported to both iOS and Android as placeholders to have some Office-branded product available for those devices. During this period, the team building these products was moved out of the Windows phone organization and into the Office organization. They would end up playing a critical role in producing the first “real” versions of Office for Android.
As Android grew in relevance, developers of desktop Office clones like Polaris and Kingsoft aggressively moved to provide phone versions, especially for the Asian market. Concern grew that phone could serve as a growth opportunity for competitors that could then move back to challenge the profitable desktop business.
Web
The Office apps started supporting HTML as an input and output format very early on and made a major investment in saving and reading HTML starting in Office 2000. Excel led the way with actual functional web UI in a somewhat abortive approach released as the Office Web Components. These were Windows-only ActiveX controls that required client installation; they did not give us any significant experience in building “real” web apps on top of standard web technologies, nor did they require us to build experience with the service-side challenges. The Excel team then switched to building a real Excel Web Service for the 2007 product cycle. The target customers were heavy enterprise users of Excel looking to run Excel in a SharePoint service infrastructure. These types of Excel users have business-critical spreadsheets that can take hours to recalculate and do not want to run these in unstable desktop environments. In fact, there are many enterprise programmability scenarios that want to leverage the Office clients in a service-based workflow and Microsoft now ships service versions of Excel, Word and PowerPoint just for these scenarios.
These enterprise scenarios have a very different design point from typically lighter-weight consumer scenarios targeted at interactive editing but provided good early learning. As I discussed in my Complexity and Strategy post, we started building real web apps for Word, PowerPoint, Excel and OneNote in 2007 for release as part of the Office 2010 product cycle. The key decision we made as we planned those products was to have them fully support the Office file formats and be full participants in the Office file ecosystem. This is one of those “obvious” decisions that was not so obvious at the time!
Windows
When the iPhone was released in 2007, it was a “Windows Phone problem”. Windows Phone was in a separate organization from Windows and they were on the front line for responding. When the iPhone opened up to third-party apps in 2008 it more clearly started to be a Windows problem. The explosive growth of the Apple App Store could not have provided a clearer contrast to the moribund Windows app ecosystem and its litany of failures. A compelling new device opened up a whole new set of application scenarios. An excited and growing user base was a rich and attractive target for app developers who flocked to the platform. Distribution and monetization (and at least in the early days, discovery) were radically simplified through the App Store. Store approval processes and application sandboxing on the device made it safe for users to try new applications and easy to delete unwanted ones.
In the Windows consumer app ecosystem, the top downloadable consumer apps were all anti-malware, anti-virus and anti-spam utilities. Many of these applications were so invasive, poorly written and aggressive in attempting to monetize post-install that they caused more problems than they were trying to solve. This was also the peak of the “crapware” period where a typical consumer Windows PC would arrive from the factory heavily loaded with unwanted software that killed overall performance and battery life. This problem was directly related to the fact that PC manufacturers made so little profit in the overall PC ecosystem that they were willing to take pennies from these software companies to pre-install their software even at the expense of their customers’ experience. Apple’s business model of direct monetization from sale of the device created none of these conflicting incentives.
As I discussed in What Really Happened in Vista, Windows 7 radically reorganized the Windows engineering team and produced a version of Windows that enabled a wave of enterprise adoption. But Windows 7 did nothing about this overall app ecosystem problem. Shortly after the release of Windows 7 in the fall of 2009, a more direct challenge arrived with the release of the iPad in the early spring of 2010.
Tablet computers have a long history in fiction, prediction and actual production. Apple had previously built the failed Newton and Microsoft had created a reference tablet design and special operating system software for tablet PCs around 2001. Windows tablet PCs had been shipping since 2002 but were clunky and were more an alternate form factor for laptops than a separate device category. The iPad blew these away. The iPad combined light weight, long battery life, instant-on, and a touch interface with sufficient processing power, memory and storage for a new wave of applications addressing new scenarios. The key decision Apple made with the iPad was to scale up from the iPhone rather than down from the Mac. This enabled them to instantly access the large set of iPhone applications as well as leverage the store and operating system infrastructure that prevented applications from killing battery life, a critical differentiator for these devices. The iPad was also built on power-efficient ARM processors rather than Intel’s less power-friendly designs.
Crucially, building up from the phone also meant that Apple would be reinforcing the iOS API moat they were creating with the iPhone. There would be no confusion about which API developers should be targeting.
The Windows team made a number of key decisions as they planned Windows 8. To target tablets, they would move down from their point of strength on the desktop rather than up from the phone. They would port Windows to ARM in order to leverage the power efficiency of that ecosystem and be able to match the size, weight and battery life of the iPad. They would build their own devices in order to demonstrate what was possible on Windows and build that initial ARM-based system. And finally, they would design a new Windows Runtime API (WinRT) with an associated app store that would allow them to curate applications and create the sandboxed environment necessary to deliver the safety and power performance required for a consumer store and device experience.
The decision to build a new API was based on a number of factors. A significant factor was that Windows had lost control of its API evolution with the whole C# managed code diversion of the previous decade as I talked about in the Vista post. A new API could address critical pervasive flaws around threading, performance and responsiveness and provide a new consistent native Windows API. A new API could also define new rules around process lifetime and security necessary to deliver these new classes of sandboxed applications. As planning started, it was believed that it was impossible to lock down the existing Windows Win32 API in order to deliver the level of performance and security required to be competitive. I had been a loud voice against the managed code diversion and pushed hard to have Windows address their issues in a pervasive and consistent way.
Another key decision was to leave Windows Phone separate and largely focus on the tablet threat. It would not be until Windows 10, eight years after the introduction of the iPhone, that Windows would have a consistent API strategy across its own device platforms.
If you look at the Windows ecosystem, at its core is the Windows API. The Windows API is the defensible moat around which the whole structure is built. There is a lot of other structure! There is the Windows experience itself, there are the billion users who are familiar with it, there are all the applications, tools, services and companies that build their businesses around it. But the core is the Windows API.
Actually, that’s not quite right. The moat is not the API, the moat is all the millions of programmers and millions of programs that have built to that API. With a new API, that moat had to be recreated.
This was most transparent for the ARM-based Windows RT devices because Windows made the decision not to allow any existing Win32-based applications on these devices except Office. The only applications that could be installed were new WinRT apps. Office was viewed as a critical differentiator for these tablets, so we signed up to do the work to produce an ARM-based version of our desktop product. In fact, because Windows had done a very complete job of moving their Win32 APIs to ARM (as part of getting all of Windows working) our job was relatively straightforward — much easier than moving to Intel 64-bit processors, which we had done a couple of years earlier. Most of our work was performance and power-use changes to improve battery life that accrued to the Intel-based versions as well. There were tremendous engineering and testing challenges because the OS, tool set and devices were all rapidly moving targets during our development. There were a few areas where we did more extensive development, especially adopting new media APIs that ensured efficient use of hardware media support (image, sound, video encoding and decoding). There was also significant shared work between our Intel and ARM versions to better support highly responsive touch interfaces (a deep topic probably worth a separate post).
Microsoft could have allowed third-party Win32 apps on these ARM machines — the decision to exclude them was based on the desire to bootstrap the WinRT app marketplace and ensure they could maintain the performance and stability required for a device competitive with the iPad. The Windows team was also concerned that the temptation for developers to do “the easy thing” and port existing Win32 applications would result in a mush of loosely adapted applications that worked poorly on these touch devices. Essentially, the dilemma was: do you try to bootstrap a consistent overall user experience and app ecosystem from zero, or do you leverage your existing installed base and accept a compromised user experience that might persist for years? In fact the risks of this hybrid model could be seen in the final product, as even Windows was unable to move all the basic system settings and controls over to a new touch-friendly interface. A user could also find themselves having to navigate from a new touch-friendly application into a set of dialogs and controls designed years earlier for a desktop environment.
Windows was eager for Office to aggressively adopt the new WinRT APIs and build “modern” applications (naming for these applications went through a confusing and endless series of revisions from Metro, Immersive, Modern, WinRT, Universal, Store). As I mentioned, a key part of the Microsoft origin story is around the virtuous cycle of Windows pulling up Office and Office pulling up Windows. Often the key question a potential developer would ask about a new Windows API was “does Office use it?”. The assumption was that if Office used it, then it was good enough for them.
As we looked at the work involved in moving to the new APIs, we struggled with how to rationalize the cost. Normally when you look at writing to a platform API, the outcome is clear — you have created a version that runs on devices that use that platform (although the longer term cost/benefit analysis is quite a bit more complex). That kind of analysis would apply for iOS and Android. In the case where the platform runs side-by-side on the same device with an alternate platform you already support, the arguments are murkier. The same confusion played out in the early days with DOS and Windows running side-by-side. In fact, there were legendary stories of Word developers who quit the company arguing that we should just continue extending the DOS-based versions (which already had mouse and graphics support) rather than do all the work to move to Windows. The slow decisions by Lotus and WordPerfect to move to Windows from DOS played a key role in their downfall.
The most straightforward argument was that there were going to be “Windows” machines that only supported the new API. Phone would eventually fall in this class (with Windows 10) and there were frequent hints of a small form-factor tablet (that never materialized) with the same constraint. Of course Windows Phone’s continued market share collapse in the face of iPhone and Android’s explosive growth made this argument weaker and weaker over time. But this argument resembles the one that could be made for any other platform.
Another argument was that touch was a fundamentally new paradigm that could not be “patched on” to existing desktop apps but required significant re-thinking. Windows was making a significant bet on that point of view with a major redesign of the start screen and the way that touch applications interacted with system services like file dialogs and sharing. Related to this argument was one that said that WinRT was where Windows engineering was going to be investing all new innovation. Hard-won experience over the previous two decades had made it clear that there was much pain involved in being slow to move away from APIs that Windows was looking to obsolete or supersede with new designs. In fact, Windows efforts to keep the new touch world and the old desktop Win32 world separate at both the UI and API level would end up being one of the critical mistakes of the Windows 8 release.
OneNote
I want to call out OneNote here since our early learning from OneNote’s cross-platform experience ended up driving much thinking on our overall Office strategy. OneNote’s chief competitor, Evernote, was very aggressive about jumping on new platforms and leveraged their iPhone and iPad versions for significant user growth after several years of only limping along as a Windows desktop competitor. We had done an early version of OneNote for Windows Phone that initially had severe limitations (e.g. no support for tables in the content of pages) and was missing other features like the ability to open password-protected notebooks. One thing this made clear is that to the extent mobile versions are highly valued as part of a cross-platform workflow, the product feature set ends up looking like the intersection of all the supported platforms rather than the union. The absence of a feature on one platform actually makes the application on other platforms less useful because users will stay away from features that they cannot use consistently everywhere.
The development manager for OneNote at the time, a brilliant engineer and now VP in Office development, Benoit Barabe, pushed hard for the teams working on mobile versions of OneNote to move away from simplified ports and forks to a consistent sharable code base with the main Windows version, sharing code from the storage model up through the content surface. For many of our apps, the detailed implementation of content surface behavior is some of the trickiest and most valuable code, so sharing this across platforms would be highly leveraged.
Around this same time, another future Office VP, Igor Zaika, was leading the development of OneNote for Windows 8 on the WinRT API. In this role he served as the focal point for all Office input into the new evolving WinRT APIs. Igor established a mantra for our internal work that would end up driving much of our work across all the applications — “No #ifdefs!”. This is one of those simple messages that are key to managing large efforts.
This deserves a small explanatory diversion. For non-C/C++ coders, an #ifdef in code is used to conditionally include some piece of code depending on a compile-time setting, e.g. whether you are compiling for the Mac or for Windows. When porting code, an #ifdef is often used to change behavior at a very granular level (e.g. for a single line of code in a larger function).
Although we used platform abstraction layers (PALs) to allow much code to be the same across platforms, in practice our Mac/Windows code was sprinkled with literally thousands of #ifdefs. Our experience over many years of Mac/Windows development was that this made it virtually impossible for a developer focusing on one platform to keep from breaking things on other platforms. The “No #ifdefs” mantra said we would rigorously refactor code into separate platform-specific and platform-agnostic components. This might result in more short-term work but proved to be vastly simpler and more robust overall as we started supporting more platforms.
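As a small illustration of the contrast (hypothetical names, not actual Office code), the commented-out version below switches behavior inline with an #ifdef, while the refactored version keeps the shared code platform-agnostic and pushes the platform-specific piece behind a small interface that each platform implements in its own source file.

```cpp
// Minimal sketch of the "No #ifdefs" refactoring (hypothetical names, not Office code).
//
// Before: a shared function with platform behavior switched inline. Every
// developer touching this function has to reason about every platform.
//
//   std::string TempNotePath(const std::string& name)
//   {
//   #ifdef _WIN32
//       return "C:\\Temp\\" + name;
//   #else
//       return "/tmp/" + name;
//   #endif
//   }
//
// After: shared, platform-agnostic code depends only on a small interface;
// each platform supplies its own implementation in a separate file.

#include <iostream>
#include <memory>
#include <string>

// Platform-agnostic contract -- the only thing shared code sees.
class IPlatformPaths {
public:
    virtual ~IPlatformPaths() = default;
    virtual std::string TempDirectory() const = 0;
};

// Would live in a Windows-only source file in the real layout.
class WindowsPaths : public IPlatformPaths {
public:
    std::string TempDirectory() const override { return "C:\\Temp\\"; }
};

// Would live in a Mac-only source file in the real layout.
class MacPaths : public IPlatformPaths {
public:
    std::string TempDirectory() const override { return "/tmp/"; }
};

// Shared code: compiles identically on every platform, no #ifdefs.
std::string TempNotePath(const IPlatformPaths& paths, const std::string& name)
{
    return paths.TempDirectory() + name;
}

int main()
{
    // In a real build, a per-platform factory would pick the implementation.
    std::unique_ptr<IPlatformPaths> paths = std::make_unique<WindowsPaths>();
    std::cout << TempNotePath(*paths, "draft.one") << "\n";
    return 0;
}
```

The short-term cost is the extra interface and factory plumbing; the payoff is that a developer working in the shared code cannot silently break another platform's build or behavior.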
The OneNote experience also showed that we could use C++ as our common language across platforms for the large bulk of the application. There were non-trivial compiler and language nits that we needed to avoid, but overall it enabled us to share highly performant native code across all platforms. This great talk by Igor goes into much more detail as well as providing additional background to this overall problem space.
Office 365
A key strategic thread during this time was the move to a service model with Office 365. I discussed this in Taking Office Agile. For much of the prior two decades, the Microsoft business strategy was straightforward. Sell lots of Windows machines and include Office on a high percentage of them. From this perspective, the success of Office was wholly dependent on the success of Windows. Anything Office could do to enhance Windows strength was worth doing. That included holding off on a real version of Office for iOS (especially the iPad) and Android in order to support differentiation of Windows tablet and phone efforts. This was a point in time when various schools and businesses were experimenting with iPads as the core productivity device for their students or employees. A world where much of the laptop market evaporated was not inconceivable. The laptop form factor and the desktop interface (including hybrid devices like Surface Pro) have proved to be much more robust than was feared at the time. Mobile has come to dominate overall computing but has added to existing scenarios rather than simply replacing them.
This focus on Windows-only was in direct tension with the direction we were heading with a service model for Office 365. As the model of “every employee has a PC” changed to “every employee is a user of the service and accesses the service from a range of devices”, being able to access the service on any device you own became a key requirement. From the Office perspective, we were going through a generational inflection point to a service-based model. Our whole focus should be on ensuring that our dominant position in document editing, document storage and email was preserved and enhanced through this inflection point. Customers were directly questioning why they should take a dependency on our service if they could not access it from the devices they owned. With the Bring Your Own Device (BYOD) shift, where companies did not even get to specify what device an employee used, broad device access was a requirement. Certainly it looked like Office had a much stronger opportunity than Windows to emerge from this new world in a dominant, defensible position. We were risking that transition by failing to support other platforms. Windows still could play a key role in customer acquisition — our ability to acquire an Office customer who was buying a Windows laptop was vastly higher than our ability to acquire that customer if they were buying a Chromebook or Android phone, for example. In fact, the growth of Chromebooks during this time, and Google’s ability to attach Google Apps to their buyers, reinforced how strongly a device choice could drive the choice of the applications or services attached to it.
The service model changed how we thought about cross-platform support. We were not trying to cover a particular device with our software, we were trying to attract and retain a subscriber to our service. That subscriber might access the service from multiple devices, often simultaneously. We no longer had a Mac user or a Windows user, we had an Office user. We still had the standard cross-platform challenge of how much to invest in each platform, but it was less about trying to reach a specific pool of customers on a particular device and more about serving a broader set of customers who use multiple devices.
In fact a similar dynamic plays out for any business with network effects. Expanding that network (e.g. the network effects of our Office file formats enhanced by coverage on the Mac platform) can enhance its value on other platforms. Coverage can also prevent those platforms from becoming a base for competitors to grow from (as we were seeing to some degree with web and mobile-based competitors). Determining precisely how to value the costs and benefits can be tricky — although in practice our Mac Office business was tremendously profitable independently in any case.
Stepping Back
OK, that was a lot of context. Seems pretty chaotic, huh? It was chaotic! That’s the nature of these things when the industry is getting flipped upside down. There were a lot of things swirling around.
Key Decisions
In the spring of 2010, as Windows started on what would become Windows 8 and we launched the Office 2013 product cycle, it was clear cross-platform was going to play a bigger and bigger role in our core development. Independent of when (not if) we decided to ship on iOS and Android, we already had major investments on the web, on Mac, on Windows desktop, Windows phone and now the new WinRT API. WinRT was shaping up to be a radical departure from Win32.
We made a number of directional decisions to guide our development. These were driven from a few fundamental premises.
This led to a number of key decisions, described in the sections that follow.
As we were working through this plan, I had a series of private and then public conversations about cross-platform development (through internal blog posts) with Steven Sinofsky, head of Windows at the time and formerly my boss in Office. He’s also published these thoughts outside Microsoft (here and an update here).
His reasoning starts from a simple premise — “Win the review.” I learned this from Chris Peters, one of the key early Office leaders, when I joined Microsoft. That is, as an app on a particular device, you want to win the competitive review — you want to be the best app in your category for that device. For Steven, the best app is exploitive of that device and integrates tightly with OS APIs and capabilities. This is a continually moving target since OS manufacturers are incented to differentiate. This implies an app will need to continually invest to stay the best on that platform. OS providers will often have special tools or languages that help developers build the best app for their devices, so you want to be in a position to leverage these.
A cross-platform focus inherently makes that harder in both direct and indirect ways. It diverts resources away from that single platform. It focuses design energy on how to abstract platform differences instead of how to exploit a particular platform. In the worst case it leads to focusing on lowest-common-denominator functionality. If you use a third-party technology that tries to abstract OS differences for you, you introduce inevitable latency and friction in your ability to respond to OS innovation. Even in the best case, the team finds themselves essentially trying to duplicate the features of one OS on another — with a team that is significantly smaller than the OS team. Alternatively, you look at the cost of implementing a feature across all platforms — easy on one and hard on the others — and lots of little cost/benefit analyses across the team drive you unintentionally towards that lowest common denominator. Individual developers need to be experts across a range of platforms instead of focusing on one. Inevitably it is way more costly over time than you originally expect. Many of these issues only emerge over time and are messy, subjective and hard to measure, so they are also difficult to recognize.
I had deep experience with cross-platform development over almost 30 years and found that many of these arguments resonated. At the same time, I had a somewhat different perspective, especially for long-lived applications with complex data models. Most complex apps want to draw clear boundaries between OS functionality and app logic for good architectural reasons. This is deeper than “model/view” since view logic is frequently an important part of the app experience and app complexity. Not the least of the reasons to draw these boundaries is precisely the point Steven made — OS APIs evolve. An application that does not draw clear boundaries between app logic and OS functionality will end up having a hard time evolving with the OS because the dependency on the OS has not been kept clear and isolated. I talked about these layering issues more deeply in Leaky by Design. We had lots of (painful) experience with exactly this issue when looking at how our interaction with Windows evolved over time.
Our experience across Windows, iOS and Android was that C++ was an effective way of writing high-performance native code on all these platforms despite each system having a preferred language. This was not least because all three OS’s leverage C/C++ internally for much of their core functionality. This looked like a good bet at the time (and continues to look like one). We also benefited because after a period where innovation in C++ stagnated in preference to managed languages like Java and C#, this period represented something of a renaissance in work on C++ with the adoption of the C++11 and C++14 standards. There was also a renewed investment in C++ tools, libraries and open source. This enabled us to both leverage C++ as well as modernize and standardize our usage of it.
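As a rough sketch of how that plays out in practice (hypothetical names, not actual Office code), the bulk of the logic can live in standard C++ with a thin C-style boundary that each platform binds to from its preferred language: Objective-C on iOS can call it directly, Java or Kotlin on Android can reach it through a small JNI shim, and Windows code can wrap or P/Invoke it.

```cpp
// Sketch of a shared C++ core with a C-style boundary that each platform's
// preferred language can bind to (hypothetical names, illustrative only).

#include <string>

// Shared core: plain modern C++, identical on every platform. A tiny word
// counter stands in for the real (much larger) shared model and layout code.
namespace core {
int CountWords(const std::string& text)
{
    bool inWord = false;
    int count = 0;
    for (char c : text) {
        bool isSpace = (c == ' ' || c == '\n' || c == '\t');
        if (!isSpace && !inWord) { ++count; inWord = true; }
        else if (isSpace)        { inWord = false; }
    }
    return count;
}
}  // namespace core

// Thin C boundary: Objective-C calls this directly, Java/Kotlin reaches it
// through a small JNI shim, and Windows code can wrap or P/Invoke it.
extern "C" int OfficeCore_CountWords(const char* utf8Text)
{
    return core::CountWords(utf8Text ? std::string(utf8Text) : std::string());
}

int main()
{
    // Quick sanity check when built as a standalone program.
    return OfficeCore_CountWords("hello cross platform world") == 4 ? 0 : 1;
}
```

The interesting code stays shared; only the last-mile bindings and the UI are written per platform.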
While OSs continue to differentiate, form factors also reach levels of basic scenario sufficiency, which means that over time much of the cost of development is spent integrating with and extending existing app functionality rather than being dominated by OS integration. This obviously depends on the complexity of the application but seems true for many high-value applications.
Another perspective that we were seeing play out on both web and mobile is that a new platform enables whole new scenarios. Hesitancy in supporting a new platform prevents you from exploiting those scenarios. Our early experience with GUI on the Mac, Evernote’s growth on the iPhone, and Google’s growth on the web all showed the risks of ignoring new emerging platforms: missing the chance to exploit new scenarios, failing to gain an early understanding of new opportunities, and giving competitors room to emerge.
Ultimately the primary driving factor in our cross-platform decision process was the move to a service model with Office 365. That was (and is) the primary strategic imperative. Our client applications were the projection of the service onto the devices our customers owned. The best projection of that service required a client experience.
Evolving the WinRT Plan
As Windows 8 development proceeded, it became clear that WinRT was going to be a major departure from Win32 — essentially a new OS target. We made the decision not to try to build some kind of adaptation layer — the changes were ones we had been pushing for and in fact enabled significant architectural consistency across the other mobile platforms as well. As the amount of work involved became clear, we decided to continue much of the underlying architectural work, including key changes in how our rendering and UI layers were architected on all platforms and how we integrated with the Windows DirectX graphics layers. We would defer shipping full WinRT versions of Word, Excel and PowerPoint but focus on finishing OneNote so we could still maintain full engagement with Windows on how WinRT was evolving and how it met (or didn’t meet) our requirements.
Office for iPad
Our Mac Office team had been eager to explore building on the iPad soon after its release. During 2012 they proceeded on active development of an iPad version and made enough progress that it was clear we could ship within a few months. At that point, Ballmer and staff made the decision to hold off on shipping Office on the iPad until we had had a chance to see how our own tablet efforts evolved with the release of Windows 8 and the Surface line and what role Office exclusivity would play. It would be a year before the go-ahead was given to ship on the iPad. This decision happened under Ballmer in 2013 although the release was not announced until shortly after Satya Nadella took over as CEO in early 2014.
We took a shortcut on the iPad work by primarily leveraging the adaptation layers we were using for our Mac product. The iPad work could leverage the fact that at the lowest levels of the OS and core graphics, rendering and animation layers, the iPad and the Mac shared core APIs and models. Ultimately (by 2016) we would go back and leverage the more rigorous re-architecture work we were doing for the Win32/WinRT split rather than continuing on this divergent fork. The motivation was to get onto a sustainable architectural path with a continuously stable main branch for all development.
The speed of standing up the iPad fork accentuated the risk Windows was taking with the divergence of WinRT and Win32. Even basic memory, threading and file APIs were changing in Windows. This not only meant much work for any developer looking to adapt an existing application, it also meant that any developer leveraging some open source library (essentially all modern developers) would probably have to wait for that library to be adapted as well, further complicating the timing and decision to move to WinRT for third parties.
Office for Android
Once we had broken through the strategic question of shipping on other platforms by shipping on the iPad, we needed to address when to ship on Android. We had shipped a version of our mobile code base for Android but these apps were severely limited in both functionality and compatibility. Our original plan had been to first finish the WinRT version before moving on to Android. With the incredibly rapid rise of Android, especially in Asia, we decided to invert the order: ship on Android first and then follow with WinRT apps synchronized with the release of Windows 10. Windows 10 would be the first release to unify phone and tablet, so it would give us a clear device focus for these versions rather than the continued scenario and device confusion between our desktop apps and WinRT apps.
We stepped back and considered some different approaches for Android — further expand the functionality of the mobile apps, use the web app architecture or bet fully on the re-architecture we had embarked on with WinRT. I pushed hard for this final approach since it was the only strategy that would both deliver fully functional native versions and be aligned with our long-term technical approach and engineering strategy. Other approaches would be costly diversions.
Our Hyderabad, India team signed up to deliver these versions by the end of 2014, only nine months later. They had gained valuable experience with Android by shipping the mobile versions and went on a six-day-work-week push to get them done. Just three months later, in June, it was clear that the core architectural refactoring work we had been doing was paying off as they stood up working versions with major functionality that “just worked” using core shared code. We were heavily leveraging our shift to agile since this shared code was at ship quality rather than in the mid-release destabilized state that had been typical for all releases prior to the Office 2016 product cycle.
Our core Office for Windows teams had been skeptical of the initial schedule for releasing the Android product and remained skeptical for most of the fall, although they fully engaged as partners to get it done. Ultimately it took actually shipping in early January for them to really become believers. The Android tablet release was followed by the Android phone release later that spring.
WinRT Thoughts
I am not a fan of “alternative history” — of claiming some specific different outcome would have occurred if some other path had been followed. What is clear is that there was no great outpouring of WinRT apps. This says basically nothing about the quality of the design and implementation of the API and much more about the collapse of Windows Phone market share and the lack of a distinct device category and set of scenarios to target. The decision to keep Windows Phone separate from Windows Desktop through the Windows 7 and 8 product cycles, only unifying with Windows 10 eight years after the iPhone was released, resulted in the Windows Phone API strategy changing chaotically every release and meant that there was no distinct set of devices to target with WinRT until very late. The Windows Phone team built some great devices but they were not building up an API strategy and body of applications with which to compete.
Even devices of the premier Surface Pro line are primarily used in a laptop form factor where Win32 is generally just fine for app developers. The work in Windows 10 to leverage application virtualization (Project Centennial), along with OS improvements to resource-constrain Win32 apps, allowed Win32 apps into the store while still giving the OS the control needed to deliver a consumer app experience.
Developers will walk over hot coals to access a large target of users and devices that represent new scenarios and markets. Likewise, even the best API will not attract developers if the market is not there.
Overall
For a post so long, I know it can be hard to believe I’ve omitted anything. But there were many other players and threads over these years. I have omitted a discussion of Outlook or Skype, which are separate parallel stories. There were also many who believed, for several years, that HTML5 would be the salvation. The sharing of code between our native clients and web clients is also a separate complex story.
Ultimately, my main focus was on ensuring that we were on a path that recognized what a large inflection the cloud represented and a technical path that would not constrain us from building the leading apps on these devices. We had seen shortcuts lead to dead ends before. Picking the path required listening closely to what some brilliant engineers were learning as they built real solutions down in the trenches. Most of the hardest decisions had passionate advocates with direct concrete experience that helped lead the way.