Co-founder of www.env0.com
The Rise of the Virtual Machines
I remember the first time I heard about VMware. It was 2002, and we needed a way to run Linux on a Windows OS. I was blown away to see it working for the first time, on what was then VMware Workstation. Those years marked the beginning of VMware ESX and its competitor Xen (later acquired by Citrix).
Back then, if you ran a company you needed a dedicated room for the physical servers, switches, air-conditioning, and everything else that surrounded the computers.
It wasn’t clear at first just how much virtualization would change the IT market.
A few years later, in 2007, I was lucky enough to work at a startup named B-hive Networks. Why lucky? Because in 2008 we were acquired by VMware, and we saw first-hand how everybody was starting to talk about “the cloud”.
However, VMware struggled to build its own cloud and instead focused on working with other companies to build cloud datacenters on top of VMware’s technology.
VMware, with its ESX and vCenter products (later vSphere), partnered with Terremark, all while watching AWS grow bigger and bigger (starting with its EC2 and S3 services).
Around 2010 it was amazing to see how many new solutions were being introduced, and at such a rapid pace. Engineers started to think more about the operational side of their software.
Where will my code run? On an EC2 instance (IaaS), where I manage my own server? Or would it be better to run it on a Platform-as-a-Service (PaaS)? Heroku, AWS Elastic Beanstalk, and later Azure PaaS were great choices.
Or, even better, I could choose not to write any code at all and use a Software-as-a-Service (SaaS) product that already solved my problem. New Relic, and then SendGrid, Stripe, and Auth0 were (and still are) great choices.
At the same time, teams wrote code (mainly scripts in languages like Bash, Perl, and Python) in order to manage different environments.
It was becoming impossible to manage these larger and more complex environments without automation. Chef, Puppet, and later Ansible became the standard way to manage different environments, each environment with its different configuration.
Environment creation was still a largely manual and infrequent exercise, with configuration management scripts bringing the otherwise empty infrastructure to life.
Around 2014, Docker made containers easy to use, and developers everywhere enthusiastically embraced the power of containers, seemingly overnight. Developers could write their own Dockerfile and have exactly what they needed (and nothing more) running in the container.
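As an illustration of that idea, a minimal Dockerfile might look like the sketch below. The base image, file names, and port are assumptions for the example, not details from any particular project:

```dockerfile
# Illustrative sketch: a small image for a Python service.
# Base image, file names, and port are assumptions.
FROM python:3.11-slim

WORKDIR /app

# Install only what the service needs -- and nothing more.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

EXPOSE 8000
CMD ["python", "app.py"]
```

Because the file declares every layer explicitly, the resulting container holds exactly what the developer asked for, and the same image runs identically on a laptop and in production.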
A quick deployment of software into production multiple times a day became common practice at leading companies.
The challenge of orchestrating thousands of containers, and managing things like networking and service discovery, led Google to release Kubernetes and ushered in the era of cloud-native computing.
In 2014, AWS launched its Lambda service as an alternative to having any infrastructure whatsoever. From then on, there was no need to pay for compute resources until they were actually being used.
Pay only for what you consume. Your entire system would run purely on demand.
Lambda was initially used for isolated, specific tasks. Nowadays, we see more and more systems that are built using serverless for the system as a whole, not merely using it for small parts of the system.
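At its core, a Lambda function is just a handler that the platform invokes on demand, so you pay only while it runs. A minimal Python sketch (the event fields and names are illustrative, not from any real system):

```python
import json


def handler(event, context):
    """A minimal AWS Lambda-style handler: invoked on demand,
    billed only while it runs. The 'name' field is illustrative."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Locally we can call the handler directly; in AWS, the Lambda
# runtime invokes it in response to events (HTTP, S3, queues, ...).
print(handler({"name": "env0"}, None))
```

There is no server to provision or patch here; the cloud provider spins up the execution environment only when an event arrives.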
The transition from a few monolithic systems to hundreds or thousands of microservices composed into cloud-native applications has made production environments much more complicated.
Clicking buttons in the AWS/GCP/Azure web console does not scale if you want to track those changes, reproduce them in a similar but slightly different environment, and keep your ops organization aligned with your developers.
Several tools were created to help tackle this problem, including HashiCorp Terraform, AWS CloudFormation, and Pulumi. They all strive to enable automated, reproducible, testable and self-documented infrastructure. More and more companies of all sizes and types are using Infrastructure as Code to manage their cloud resources.
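As an illustration, a minimal Terraform configuration describes a cloud resource declaratively, so it can be versioned, code-reviewed, and reproduced. The provider, region, and resource names below are hypothetical:

```hcl
# Illustrative Terraform sketch -- region and names are assumptions.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declaring the bucket in code means it can be reviewed in a pull
# request, versioned in git, and recreated identically elsewhere.
resource "aws_s3_bucket" "app_assets" {
  bucket = "example-app-assets" # hypothetical name
}
```

Running `terraform plan` shows the intended changes before anything is created, which is exactly the reproducibility and self-documentation those tools strive for.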
If I were to predict, I’d claim that the use of Infrastructure as Code will continue to grow. Companies will increase their Infrastructure as Code usage, and use it in a more dynamic way, as well as for additional scenarios.
The challenge will shift. The main question will be how to work well in an organization that embraces Infrastructure as Code. A new set of questions will arise: How do you synchronize the work of different Infrastructure as Code developers, as well as their deployment runs?
How do you enable self-service throughout the organization for people who are not Infrastructure as Code experts? How do you manage different users and permissions? How do you ensure that nobody abuses the access and triggers huge cloud provider costs?
How do you proactively reduce these costs, delegating responsibility to the different R&D teams? How do you provide the management, governance, and visibility your organization needs?
We believe that solutions like env0 will become vital for managing Infrastructure as Code work within organizations. The complexity and scale of modern software environments are simply too big for human operators to manage without extensive automated help.
In the same way virtual machines became an industry standard a few years ago, we expect such platforms to become the new standard very soon.
(Disclaimer: The author is the CEO and co-founder of env0)