Companies that have pioneered the application of AI at scale (Uber, LinkedIn, Facebook, Airbnb) did so using their own in-house ML platforms. Many vendors now offer these capabilities as off-the-shelf products, and there's also a range of open-source tools addressing MLOps. The rush to the space has created a new problem - too much choice. There are now hundreds of tools and at least 40 platforms available.
This is a very difficult landscape to navigate. But organisations have to figure it out because there is an imperative to get value from ML. Let's understand the big challenges, and then we'll introduce some new free material that aims to address them.
Chip Huyen gathered data on the ML tool scene in 2020, finding 284 tools, and the number keeps on growing. Evaluations don't have to consider all of these tools provided we can narrow them down to the ones that are most relevant to us... but that's not easy to do.
Typically we get a picture of which software does what by putting products in categories. There are attempts to do this with the LFAI Landscape diagram, the MAD landscape, and GitHub lists. But often ML software does more than one thing and could fit into any of several categories. Platforms are especially hard to categorise as they explicitly aim to do more than one thing.
Because software that does several things is hard to categorise, ML platforms all tend to end up in a single category of 'platform'. This obscures the particular emphasis of each platform and loses the nuances of how different platforms approach similar problems in different ways.
ML categories are hard to keep track of in part because new categories keep appearing. Feature stores, for example, haven't been around very long but are now a significant category. This affects platforms too, as they introduce big new features and shift their emphasis (partly in response to the new tools that appear).
ML is complex, and it's a big field. Platforms cover a wide range of use cases, so evaluating them means understanding regression, classification, NLP, image processing, reinforcement learning, explainability, AutoML and a lot more.
Not only are the problems varied and complex, there's also a range of roles involved in the ML lifecycle: Data Scientists, ML Engineers, Data Engineers, SREs, Product Managers, Architects, Application Developers, Platform Developers, End Users and so on. Different roles have different points of interaction with the software and different needs from it.
There's a lot of discussion of build vs buy trade-offs in the ML space. How much control do organisations need over their stack? How much does that control cost? Build vs buy is often presented as an either-or, but it is more of a spectrum. And this is just one of the confusing debates in the ML space (consider how controversial AutoML is).
We've launched two new resources to help. For understanding the landscape and weighing the trade-offs, there's our Guide to Evaluating MLOps Platforms:
https://www.thoughtworks.com/what-we-do/data-and-ai/cd4ml/guide-to-evaluating-mlops-platforms
This is available for free without any sign-up. It addresses how to distinguish MLOps platforms and how to structure an evaluation to suit the needs of your organisation.
We also need to apply this knowledge to compare platforms against each other. For this we've released an open-source comparison matrix:
https://github.com/thoughtworks/mlops-platforms
The matrix is structured to highlight how vendors do things in their own ways, and it points to more detail in the product documentation. We've also included in the repository a series of profiles that concisely describe the product directions of popular platforms in a marketing-free way.
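To make the idea of such a matrix concrete, here's a minimal sketch in Python of how a comparison entry could be represented and queried. Everything in it (the PlatformEntry type, the field names, the example platform and URL) is hypothetical and for illustration only; the actual matrix in the repository defines its own categories and format.

```python
# Hypothetical sketch of a comparison-matrix entry. Field names and values
# are illustrative only; the real matrix in thoughtworks/mlops-platforms
# defines its own capability categories.
from dataclasses import dataclass, field


@dataclass
class PlatformEntry:
    name: str
    # For each capability: how the platform approaches it, in its own terms,
    # plus a link into the vendor's documentation for more detail.
    capabilities: dict[str, dict[str, str]] = field(default_factory=dict)


entries = [
    PlatformEntry(
        name="ExamplePlatformA",  # hypothetical vendor
        capabilities={
            "model serving": {
                "approach": "managed endpoints",
                "docs": "https://example.com/docs/serving",  # placeholder URL
            },
        },
    ),
]

# Narrow the field: keep only platforms covering the capabilities we need.
required = {"model serving"}
relevant = [e for e in entries if required.issubset(e.capabilities)]
print([e.name for e in relevant])  # ['ExamplePlatformA']
```

Capturing each vendor's approach as free text alongside a documentation link, rather than as a simple yes/no, is what lets a matrix like this surface the nuances discussed above instead of flattening every platform into the same checklist.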
We hope you find this material helpful and welcome contributions in GitHub. Feel free to ask any questions, either on GitHub or to me directly on Twitter or LinkedIn.
Title image by chenspec on Pixabay.