The last decade has seen information technology enjoy a breathtaking period of innovation. As a result of this ongoing evolution, our industry has developed a complicated relationship with obsolescence. Often we keep old tech around, even after a much-improved alternative becomes available.
However, this legacy technology adds complexity, slows down innovation, and has a negative impact on operational costs and information security.
Today, we run virtual machines (VMs) on top of bare metal Linux, with containers inside, managed by Kubernetes — which, despite often being described as an “operating system for data centers,” did not replace Linux, an operating system born in simpler times. In fact, the number of technology layers that our applications depend on just keeps growing.
Sometimes there is a valid reason to keep the old stuff around. Containers don’t provide secure process isolation like VMs do, for example. But we don’t pay enough attention to retiring obsolete tech that creates complexity and security gaps.
Complex systems are harder to secure than simple ones. Human error underlies most successful hacking attempts, and the probability of human error grows as complexity in the tech stack increases.
To put it simply, every existing technology in your computing environment:
In almost every use case, configuring all of these components properly is crucial to infrastructure security and to meeting compliance requirements. As engineering teams and computing infrastructure grow over time, properly configuring every computing resource and maintaining secure configurations have proven to be an insurmountable task for most DevOps teams.
But it’s not just the teams that grow. The surface area needing protection continues to increase as well. The same valuable data can be stolen via multiple attack vectors, including but not limited to:
The difficulty of managing secure infrastructure access across all of these “access doors” is compounded by the fragmentation of the information security industry. Different vendors offer point solutions for network perimeter security, operating systems, databases, software supply chain, and so on. This leads to “infrastructure access silos.”
As recently as 10 years ago, servers weren’t as numerous. Each piece of middleware installed on a server had its own remote access interface. Fast forward to today’s cloud-native reality and servers come and go by the thousands. For that reason alone, the number of remote access interfaces that exist at any given time is hard to keep track of.
The obvious answer is to consolidate all access into a single, easy-to-manage component. In other words, an organization should consider having a single source of truth for remotely accessing everything, one which consolidates the following elements:
If we could consolidate all four pillars of remote access with a single simple solution, that would finally create a single “door” for accessing all components of computing infrastructure. Can we securely consolidate all access? With a not-so-secret ingredient called “identity,” the answer is a resounding yes.
For this consolidation to work, every DevOps engineer, every software developer, and every server, database, and microservice must be issued an identity by the same authority, which also stores access policy and maintains a unified audit log.
Having a single source of truth for the identities of everyone and everything allows stakeholders to consolidate authentication, authorization and audit. In turn, this dramatically simplifies the configuration burden and reduces the attack surface to the bare minimum. When each access event is processed according to the identities of the hardware, software and humans involved, security-focused teams can implement a highly granular policy with minimal required privileges. This dramatically reduces the blast radius of a compromised account.
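To make this concrete, here is a minimal sketch of the idea: a single policy store, keyed by identity, drives both the authorization decision and a unified audit log. All names (`Identity`, `authorize`, the roles and resources) are hypothetical, not any particular product’s API:

```go
package main

import "fmt"

// Identity is a hypothetical record issued by a central authority
// for a person, server, or service.
type Identity struct {
	Name string
	Role string
}

// AccessRequest ties together the identity involved in a single
// access event with the resource and action it targets.
type AccessRequest struct {
	User     Identity
	Resource string
	Action   string
}

// policy maps a role to the minimal set of actions it may perform on
// each resource. In a real platform this lives alongside the identity
// authority as the single source of truth.
var policy = map[string]map[string][]string{
	"dev":   {"staging-db": {"read"}},
	"admin": {"staging-db": {"read", "write"}},
}

var auditLog []string

// authorize evaluates the request against the central policy and
// appends the outcome to a unified audit log, so every decision —
// allowed or denied — is recorded in one place.
func authorize(req AccessRequest) bool {
	allowed := false
	for _, a := range policy[req.User.Role][req.Resource] {
		if a == req.Action {
			allowed = true
			break
		}
	}
	auditLog = append(auditLog, fmt.Sprintf(
		"user=%s role=%s resource=%s action=%s allowed=%v",
		req.User.Name, req.User.Role, req.Resource, req.Action, allowed))
	return allowed
}

func main() {
	fmt.Println(authorize(AccessRequest{Identity{"alice", "dev"}, "staging-db", "read"}))  // true
	fmt.Println(authorize(AccessRequest{Identity{"alice", "dev"}, "staging-db", "write"})) // false
	fmt.Println(len(auditLog))                                                            // 2
}
```

Because the default for an unknown role or resource is an empty action list, the sketch denies by default — the “minimal required privileges” property falls out of the data structure itself.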
Consolidation through Certificates
The prime technology candidate for consolidating identity-based access is certificates issued by a central certificate authority (CA): they are portable, remove the need for passwords, can carry arbitrary metadata about their owner, and expire automatically.
Historically, there were several primary factors that hindered the widespread adoption of certificates:
When combined with identity-aware reverse proxies, certificates can be used to access legacy resources that do not support certificates natively. The key is to connect to an identity platform that issues certificates, which the proxy can then use to establish connections to legacy resources that only listen on a localhost socket.
By supporting certificates and nothing else, an identity management platform eliminates the complexity that has historically been associated with using certificates in practice.
This level of simplification of access, and reduction of the components involved, is a welcome development. More importantly, it provides a single source of truth across all existing tech stacks, treating the ones that should have been made obsolete in the same manner as the next-generation compute systems that developers and engineers need to do their jobs.