The observability industry is projected to grow about 8% in 2023, continuing a strong growth trend that dates back to the 1990s. It’s a projection that’s easy to agree with from the inside looking out. Established players continue to scale, new players crop up and move into fresh niches, and there’s no horizon in sight. The continued growth is good news for investors, who pour billions more into ventures new and old, year after year. But what’s the driver behind the unending expansion? The following are my predictions for observability in 2023, focused on new trends, capabilities, technologies, and the fundamental drivers of value across a complex and evolving landscape.
Observability is all about data. Specifically, it’s about machine and network data. As our machines and networks grow in size and complexity, so does the data we collect. What began as an industry for collecting and managing data in broad strokes has branched into a complex entanglement of increasingly differentiated technologies handling increasingly niche data and use cases. For example, some are focused on real-time metric data across global networks to optimize performance, while others are concerned with the long-term, meticulous retention of log data at a single source for security and compliance, yet both are considered to sit on the foundation of “observability.” The problem is that this “foundation” exists in name only. Nothing material standardizes technology across the industry – not yet – but a few key players are working on it, and 2023 will be the year the industry takes a definitive step in one direction or the other. It will be the year that collaboration supersedes competition, or the year the industry splits into sub-categories that further obscure the landscape.
Industry standards shared between competitors are a strong driver of value and innovation across dozens of industries. Telecom companies share common protocols, auto manufacturers share global standards and infrastructure, and so on. The rising tide lifts all boats. In observability, that shared foundation emerged in 2019 as a fledgling CNCF project called OpenTelemetry. It aims to provide the technological backbone for the observability space through new protocols and vendor-agnostic technologies that break down the barriers between the aforementioned niches, so that better overall solutions reach end-users. Some of the biggest players, otherwise locked in fierce competition, are heavily invested in the OpenTelemetry project: Splunk, Datadog, and dozens more all contribute to the collaborative, open source foundation. But not everyone is on board. A line is drawn in the sand between those who believe now is the time for the industry to unify around common infrastructure before it fragments permanently, and those who prefer to keep innovating independently.
It’s nice to think we live in a world where collaboration always wins, but that isn’t the case. Building on the same foundation as your competitors means you do some of their legwork for them, they learn a lot about how your technology works, and switching between solutions becomes easier for customers. As in the telecom and automotive examples, it’s relatively easy for consumers to change cell providers or buy a different car. Despite those downsides, it’s hard to argue that telecom companies would be more valuable if you couldn’t call a friend on a different network, or that cars would have the same appeal if each make required different roads, tires, and gas stations (a contemporary problem for EVs). So the question for observability is really one of timing and details.
My prediction is that 2023 is the year OpenTelemetry becomes the common foundation that all future observability technologies are built on. In OpenTelemetry’s own words, it’s “a collection of tools, APIs, and SDKs [used] to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help analyze the performance and behavior of software [and hardware].” OpenTelemetry is not a platform, but it can be assembled into one and adapted to specific needs. With OpenTelemetry as a common foundation across the industry, competition moves to a more nuanced level: unique methods of data parsing and analysis, the design and implementation of telemetry tools in specific environments, or the classic arenas of price and support. The big appeal is that a common foundation means any tool can reliably integrate with any other. The biggest players will lose some business to smaller ones that can service the most niche use cases more easily, but the growth that follows will far outweigh the losses, because customers who would otherwise opt out of the market will buy in, especially as the effects expand horizontally into other industries.

Take networked medical devices: a niche use case that must maintain HIPAA compliance as the devices record and transmit medical information. Some amount of monitoring is required to maintain that compliance, adding operating costs. Using OpenTelemetry as a pre-built foundation, a new player could create a platform specifically for that niche, bringing new customers into the observability market and creating more data and value for everyone.
The losers in this transition will be those who don’t share the open source vision. That doesn’t mean every firm has to open source its entire product line, but it does mean the common foundation needs to be known to all, customers and competitors alike. It’s a scary move because it can feel like giving away the secret sauce, but the result, in my opinion, will be an explosion of new value that unifies the observability space and unlocks new frontiers.