Hey there. I'm Mrinal Wadhwa, CTO at Ockam. I met some lovely old friends and many amazing new people at the RSA Conference last week.
On Tuesday afternoon, I walked through the bustling expo floor with hallway after hallway of products that keenly monitor, scan, and scrutinize every event, every log message, and every asset to detect and thwart any suspicious behavior as soon as possible.
It was a stark reminder that the cybersecurity industry has devolved into a game that is impossible to win. We’re about to see major disruption.
One booth had an arcade-like whack-a-mole game (see image above) to help visualize what they do. This is a pretty good illustration of what most (maybe greater than 90%) of cybersecurity products do; they scan some part of your vulnerability surface to detect anomalous behavior.
Some scan network traffic, some monitor endpoints, and others inspect API requests. They’re all looking to whack that next mole - as fast as possible.
There are thousands of products that specialize in scanning specific corners of your vulnerability surface. Some specialize within a specific industry, others are tuned for logs from specific popular tools. Many are incredibly sophisticated and use advanced machine learning and AI to detect bad behavior. They all highlight that they can whack-a-mole with higher accuracy than their competitors.
Most scanning products do the job as advertised. Hey, it is incredibly satisfying to whack moles. But, if we take a step back to look at the trajectory, we notice that our vulnerability surface is growing way faster than we can build scanners to monitor every part of it. Our personal and business data is, paradoxically, becoming less secure over time.
Like Sisyphus in Greek mythology, our industry endlessly rolls a big boulder up a steep hill only to watch it roll back down.
To develop an intuition, stay with me as I do some napkin math about an example application and the vulnerability surface of the data it manages for users.
For the purpose of illustration, assume that the odds of any one person making a bad security decision are a fixed constant, so each person who can ship a mistake contributes one unit of surface area. With a development team of 10 people, the vulnerability surface created by the application's source code would have an area of 10.
If this app depends on say 100 libraries and each library has 5 developers, then the area of the vulnerability surface created by these libraries is 100 * 5 = 500. Assuming the app runs in a container that has 100 operating system packages and each package has 5 devs, then the area of the vulnerability surface created by OS packages is also 100 * 5 = 500. Together, dependencies create a vulnerability surface of 500 + 500 = 1000.
This example app would operate in some network, and that network, let’s say, has 9 other microservices in it. If each service has a similar-sized team and dependency tree (a surface of roughly 1000 each), then the total vulnerability surface area of our app’s data becomes 10 * 1000 = 10000.
Our app also depends on 9 cloud-based services (databases, event streaming, gateways, etc.), and each third-party service is similarly made of about 10 microservices operating in a network. Including these third-party services, the total vulnerability surface area of our app’s data becomes 10 * 10000 = 100000.
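If it helps to see the arithmetic in one place, here is the whole estimate as a tiny Rust program. Every number in it is an illustrative assumption from above, not a measurement.

```rust
// Napkin math from above: each person who can ship a mistake adds one
// unit of vulnerability surface. All counts are illustrative assumptions.
fn main() {
    let app_team = 10; // developers on the app itself
    let libraries = 100 * 5; // 100 libraries x 5 devs each = 500
    let os_packages = 100 * 5; // 100 OS packages x 5 devs each = 500

    // One service: its own team plus its dependency tree.
    let one_service = app_team + libraries + os_packages; // 1010, call it ~1000

    // Our app plus 9 similar microservices sharing its network.
    let network = 10 * 1000; // = 10000

    // That network plus 9 third-party cloud services of similar shape.
    let total = 10 * network; // = 100000

    println!("service ~{one_service}, network ~{network}, total ~{total}");
}
```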
This napkin math, while it ignores overlaps across surfaces, is a good mental model for the exponential scale of the problem facing typical modern applications: each added layer of dependencies multiplies the surface by another factor of ten, and tens of thousands of developers, platform engineers, and system administrators must never ship a mistake for our example application’s data to remain secure.
This is untenable; we’ll never build enough scanners to catch all possible mistakes that can be exploited by attackers. Vulnerability surfaces are too big and are growing too fast.
To make matters even more challenging, a typical application development team has limited control over most of the vulnerability surface described above. They can’t realistically choose to forgo open-source libraries or cloud services.
Building everything from scratch would take too much time, cost too much money, and would likely result in poorer-quality solutions.
Instead of attempting to scan every part of an exponentially expanding surface, the only tenable approach is to make design choices that completely eliminate large portions of our vulnerability surface. We have to make entire classes of attacks impossible.
We must build software in ways that drastically reduce the size of potential targets and limit the blast radius. Our software must become private and secure by design.
In the past, this was challenging and costly. The following tools are changing the game:
Strongly typed languages like Rust and TypeScript turn violated invariants into compile-time errors. This reduces the set of possible mistakes that can be shipped to production by catching them at build time.
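To make that concrete, here is a small hypothetical Rust example (mine, not from any particular codebase): a newtype that makes "this email was validated" a fact the compiler enforces, so forgetting to validate is a build failure rather than a production incident.

```rust
// Hypothetical example: the only way to obtain a ValidatedEmail is through
// `parse`, so every function that accepts one can rely on the invariant
// without re-checking it at runtime.
pub struct ValidatedEmail(String);

impl ValidatedEmail {
    pub fn parse(raw: &str) -> Result<Self, String> {
        if !raw.trim().is_empty() && raw.contains('@') {
            Ok(ValidatedEmail(raw.trim().to_string()))
        } else {
            Err(format!("not a valid email: {raw}"))
        }
    }
}

// Accepts only emails that have already been validated. Passing a raw
// `&str` here is a compile-time type error, not a runtime surprise.
fn send_welcome_message(to: &ValidatedEmail) {
    println!("sending welcome message to {}", to.0);
}

fn main() {
    match ValidatedEmail::parse("alice@example.com") {
        Ok(email) => send_welcome_message(&email),
        Err(e) => eprintln!("{e}"),
    }
    // send_welcome_message("not-an-email"); // <- would not compile
}
```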
Memory-safe languages eliminate buffer overflows, use-after-free bugs, and other memory safety errors, an attack vector known to cause 60-70% of high-severity vulnerabilities in large C and C++ codebases. Rust provides this safety without the runtime performance costs of garbage collection.
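A tiny sketch of what that means in practice. The C version of the commented-out function below compiles happily and returns a pointer to freed stack memory; in Rust, the same shape of code is a compile-time error, so this whole class of bugs can never reach production.

```rust
// The commented-out function is the Rust equivalent of a classic C bug:
// returning a pointer to a stack local. Rust refuses to compile it.
//
// fn dangling() -> &'static i32 {
//     let n = 42;
//     &n // error[E0515]: cannot return reference to local variable `n`
// }

// Returning the value by copy is fine (and costs nothing here).
fn not_dangling() -> i32 {
    let n = 42;
    n
}

fn main() {
    println!("{}", not_dangling());
}
```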
Supply chain security practices described in emerging standards like SLSA help us build controls that guarantee artifact integrity within our dependency trees. This diminishes the possibility of malicious libraries, packages, and container images exploiting developer workstations, build pipelines, and runtime environments.
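SLSA is a set of requirements rather than a library, but one of its basic building blocks is easy to sketch: record the exact digest of an artifact you reviewed, and refuse anything that doesn’t match. A minimal Rust illustration, assuming the sha2 and hex crates; real SLSA controls go much further, with signed provenance describing how the artifact was built.

```rust
// Artifact pinning: compute the SHA-256 digest of a downloaded artifact
// and refuse it unless it matches the digest that was reviewed and pinned.
use sha2::{Digest, Sha256};

// Hypothetical pinned digest, recorded when the artifact was reviewed.
// (This is the well-known SHA-256 of the bytes "test".)
const PINNED_SHA256: &str =
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

fn verify_artifact(bytes: &[u8]) -> Result<(), String> {
    let digest = hex::encode(Sha256::digest(bytes));
    if digest == PINNED_SHA256 {
        Ok(())
    } else {
        Err(format!("digest mismatch: got {digest}"))
    }
}

fn main() {
    // "test" stands in for a real downloaded library or container image.
    match verify_artifact(b"test") {
        Ok(()) => println!("artifact matches the pinned digest"),
        Err(e) => eprintln!("rejecting artifact: {e}"),
    }
}
```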
Cryptographic keys, stored in secure hardware, combined with passwordless and tokenless approaches eliminate the possibility of attacks using stolen passwords and access tokens.
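The shape of that protocol is worth sketching. The trait below is hypothetical, a stand-in for a real TPM, YubiKey, or Secure Enclave API, but it shows why there is nothing left to steal: the server verifies a signature over a fresh random challenge, and the private key never leaves the hardware.

```rust
// Conceptual sketch only: a hypothetical trait standing in for real
// secure-element APIs (PKCS#11, WebAuthn, platform keystores). There is
// no password or bearer token to phish, replay, or steal from a database.
trait SecureElement {
    /// Signing happens inside the hardware; the private key never leaves it.
    fn sign(&self, challenge: &[u8]) -> Vec<u8>;
}

/// One authentication round: nothing long-lived ever crosses the wire.
fn respond_to_challenge(device: &dyn SecureElement, challenge: &[u8]) -> Vec<u8> {
    device.sign(challenge)
}

// Stand-in implementation so the sketch compiles; a real one would call
// into hardware rather than compute anything here.
struct MockElement;
impl SecureElement for MockElement {
    fn sign(&self, challenge: &[u8]) -> Vec<u8> {
        challenge.iter().rev().copied().collect() // not real cryptography
    }
}

fn main() {
    let device = MockElement;
    let challenge = b"random-nonce-from-server";
    let signature = respond_to_challenge(&device, challenge);
    println!("signature bytes: {signature:?}");
}
```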
Mutual authentication and granular authorization at the application level, using tools like Ockam, let applications place zero trust in the networks, VPNs, and VPCs they operate in. This removes other applications within the same network from an application’s vulnerability surface.
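The pattern, sketched in Rust (an illustration of the idea, not Ockam’s actual API): every authorization decision keys off a cryptographically authenticated identity and its attested attributes, never off an IP address or network location.

```rust
// Hypothetical sketch of identity-based authorization. Sharing a network,
// VPN, or VPC with another app grants that app nothing.
use std::collections::HashSet;

/// Identity established by mutual authentication over a secure channel,
/// not claimed in a header or inferred from a source IP.
struct AuthenticatedPeer {
    identifier: String,          // stable, key-backed identity
    attributes: HashSet<String>, // attested attributes, e.g. "billing-reader"
}

/// Granular: each operation names the attribute it requires.
fn authorize(peer: &AuthenticatedPeer, required_attribute: &str) -> bool {
    peer.attributes.contains(required_attribute)
}

fn main() {
    let peer = AuthenticatedPeer {
        identifier: "identity-abc123".into(), // hypothetical identifier
        attributes: ["billing-reader".to_string()].into_iter().collect(),
    };
    assert!(authorize(&peer, "billing-reader"));
    assert!(!authorize(&peer, "billing-writer"));
    println!("peer {} authorized for read, denied write", peer.identifier);
}
```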
Application-layer, end-to-end encryption of all data, using tools like Ockam, removes every intermediary service and network from the set of systems that must be trusted with plaintext data.
All these approaches shift security left and allow an application’s development team to be in control of the security and privacy properties of their application. This team no longer has to cross their fingers and hope a third-party service won’t be compromised; they can simply end-to-end encrypt data as it passes through that service.
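Here is a minimal sketch of that idea, assuming the aes-gcm crate: the third-party service only ever handles ciphertext, so compromising it no longer compromises the data. A real deployment also needs key agreement, rotation, and a unique nonce per message; Ockam’s secure channels exist to handle exactly that.

```rust
// End-to-end encryption sketch: the two endpoints share a key; every
// intermediary (queue, database, gateway) moves and stores only ciphertext.
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

fn main() -> Result<(), aes_gcm::Error> {
    // Key held only by the two endpoints, never by the intermediary.
    let key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&key);
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // unique per message

    // What the third-party service sees: opaque bytes.
    let ciphertext = cipher.encrypt(&nonce, b"customer record".as_ref())?;

    // What the receiving endpoint recovers with the same key and nonce.
    let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref())?;
    assert_eq!(&plaintext, b"customer record");
    println!("round-tripped {} ciphertext bytes", ciphertext.len());
    Ok(())
}
```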
Such design decisions turn security and privacy into problems that can be methodically solved instead of endlessly rolling a big boulder up a steep hill.
Another interesting effect of relying on scanners for security is that it shifts the costs of securing software away from vendors and onto the customers of that software.
In the past, software vendors were able to require that customers operate their software within tightly controlled network boundaries like VPNs. They were able to specify environmental configurations that had to be set up just right for their products to be secure. They were able to pass the costs and liabilities of security on to their customers. As security threats escalate and vulnerability surfaces become unmanageable, both customers and regulators have realized that this shifting of burden is ineffective and too expensive.
The chief executive at insurer Zurich recently told the Financial Times that cyber attacks are set to become “uninsurable.”
This shift of accountability to senior leadership at technology vendors is reflected in new regulations across countries. For instance, the SEC has proposed rules that would require public companies to disclose material cybersecurity incidents and to describe how their boards oversee cybersecurity risk.
In this turbulent landscape, savvy businesses are turning security and privacy into a competitive advantage. Software products that are designed to be secure are winning customers because they result in much safer systems that remove risks, ease compliance hurdles, and are significantly cheaper to operate.
As I walked out of the expo halls at RSAC, it was obvious to me that the cybersecurity industry is about to see some disruptive change, and the only companies that will survive the next few years are the ones that embrace this new reality.