Can Multi-Cloud Kubernetes Platforms Make Infrastructure Prices Comparable?
VC @ Runa Capital
Kubernetes enables you to deploy, scale, and manage your container-based applications, and multi-cloud management tools simplify the use of different clouds. Together, this might finally help us understand and optimize the pricing of our infrastructure.
In the spring of 2014, I founded the advertising technology company realzeit together with my two friends Heiko and Christian. While technically adept, we had no money and plenty of naivety. Instead of embracing cloud services, we used the cheapest infrastructure we could think of: dedicated servers run by Hetzner.
On top of them, we built a complex infrastructure designed to handle around 150,000 ad requests per second. It was not our salaries but our server instances that were the biggest cost factor of our early startup.
Fortunately, we were supported by Microsoft’s BizSpark Plus program, which granted us around €60,000 in Azure cloud-service credits. We immediately moved most of our infrastructure to their cloud. Boy, was I surprised to see that for the same system, our server costs had more than tripled!
You may attribute my surprise to the above-described naivety, at least until you try to calculate the costs of a cloud setup yourself. First of all, cloud providers use varying terminology and a bunch of different variables to measure usage, making it really hard to see what you are actually spending. As if this were not enough, prices change frequently.
Trust me, navigating the pricing of AWS, Google Cloud or any other cloud computing platform is utterly confusing. For reference, have a look at the complex price comparison performed by the online magazine InfoWorld
. Only in very specific cases is it possible (though not easy) to compare your setup costs. Had it not been for our unplanned experiment, we would never have found out about the huge difference in costs.
I understand that setting up your complete system on different cloud platforms is a hassle and may not be practical in all cases, but a simple way for all of us to perform such pricing experiments may be slowly emerging. Let me take a step back.
Cloud-native applications have been a hot topic for a while now. Making use of container-based environments, microservices and continuous delivery, they aim to leverage cloud infrastructure to the fullest. Let's take a quick look at the history of containers to understand this emerging paradigm.
- Fig. 1: Traditional vs virtualized vs container (Source: kubernetes.io)
The first computers most of us came into contact with were probably traditional PCs. They have specific hardware and operating systems (OSs), and, back in the day, all of our applications were installed on such machines. Interestingly, the idea of virtualizing computer systems pre-dates the PC by decades.
In the late 1960s, IBM created a new mainframe that supported CP/CMS (Control Program/Cambridge Monitor System) developed earlier at MIT
. This was the first OS based on a virtual machine (VM).
VMs are an emulation of a complete computer system, and multiple of them can run on a single physical server’s CPU. This efficient approach is now used at massive scale by cloud providers, which use VMs to separate distinct applications. As with actual hardware, an operating system has to be installed in each of these machines.
I am working at Runa Capital
, where virtualization technology is, in a sense, deeply ingrained in our DNA. Our founding partners Ilya and Serguei made heavy use of this technology to build their company Parallels
, known e.g. for its powerful technology to run Windows on your Mac.
A step even further is OS-level virtualization using so-called containers. Early approaches were LXC containers
or Parallels' OpenVZ (Open Virtuozzo)
technology (fun fact: one of the brains behind this container technology, Kirill Korotaev, is one of Runa's longstanding technical advisors). Nowadays, technologies like Docker
have become more popular.
Containers are very similar to VMs, but they share the host's operating system, which makes them much more lightweight. Note that containers need a Linux kernel but are in principle decoupled from the underlying infrastructure, which makes them very fast to spin up and recycle.
A nice side effect is that they can be ported to other systems fairly easily. See Fig. 1 to understand the different approaches.
- Fig. 2: Containers are great and practical.
With containers, developers have found a good way to abstract away the hardware and to run applications. However, your system still needs to be managed. This can be done with orchestration software. The winning platform is clearly Kubernetes
(K8s), designed by Google and subsequently open-sourced in 2014. However, there are quite a few competing cluster-management tools like Red Hat's OpenShift
or Apache Mesos
. You may find an exhaustive list of container orchestration software here.
These platforms take care of the things few people enjoy dealing with: service discovery, load balancing, storage orchestration, automated rollouts/rollbacks, automatic bin packing, self-healing and secret/configuration management.
Sounds like a lot? It's good that such platforms make sure everything is launched and keeps running properly!
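To give a flavor of how declarative this is, here is a minimal sketch of a Kubernetes Deployment manifest; the name, labels, image and port below are all hypothetical placeholders, not taken from any real setup. Declaring three replicas is enough for Kubernetes to handle the rollout and replace any instance that fails (self-healing):

```yaml
# Minimal sketch: Kubernetes keeps three replicas of this container
# running and restarts any that crash. All names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ad-server            # hypothetical application name
spec:
  replicas: 3                # desired number of identical instances
  selector:
    matchLabels:
      app: ad-server
  template:
    metadata:
      labels:
        app: ad-server
    spec:
      containers:
        - name: ad-server
          image: example.com/ad-server:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Because the manifest describes the desired state rather than the machines, the same file can in principle be applied to any conforming cluster, whichever cloud happens to host it.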
Cloud providers like AWS constantly update their services and develop new ones in order to make their platforms even more attractive. This may lead to increased switching costs and a kind of vendor lock-in for their clients. Cloud-native applications may liberate developers once again.
Containers and orchestration software help to split large applications into small and independent microservices.
While you are able to set up such systems from scratch, there are some third-party tools like Weaveworks
which automate deployment and remove manual processes. A side effect of abstracting away the hardware is that switching between data centres becomes easier.
There are some interesting companies out there that unify the user interfaces of the different cloud providers and allow you to seamlessly move cloud-native applications between clouds, on-premises or to the edge.
What is important in our context is that containerization combined with orchestration tools like Kubernetes and multi-cloud management platforms allow for flexibility previously unheard of.
I want to remind the reader that we started out shocked by the costs of cloud services. I really used to believe the repeated promise of savings in the cloud
. Finally, a combination of multi-cloud platforms and cloud-native approaches may help me quantify them.
Fig. 3: Japanese Maneki-neko raising her right paw to wish you lower cloud-infrastructure costs.
To measure cloud costs, you can switch your complete setup between different providers and let the service run for a while. In the past, this approach would have been prohibitively complex. Nowadays, multi-cloud orchestration platforms enable you to do it with only a few clicks.
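The arithmetic behind such a comparison is simple once the workload is identical everywhere; the hard part is normalizing prices. As a minimal sketch, the snippet below compares the monthly compute cost of the same cluster across providers. All hourly prices are made-up placeholders, not real quotes — real prices change frequently and vary by region and instance type.

```python
# Illustrative sketch: compare the monthly cost of one identical
# containerized workload across providers. The prices below are
# invented placeholders, not actual provider quotes.

# Hypothetical hourly price per node (USD) for a comparable instance type.
HOURLY_NODE_PRICE = {
    "provider_a": 0.096,   # placeholder
    "provider_b": 0.085,   # placeholder
    "bare_metal": 0.055,   # placeholder
}

HOURS_PER_MONTH = 730  # average number of hours in a month


def monthly_cost(provider: str, nodes: int) -> float:
    """Monthly compute cost of a cluster of identical nodes."""
    return HOURLY_NODE_PRICE[provider] * nodes * HOURS_PER_MONTH


def cheapest(nodes: int) -> str:
    """Provider with the lowest monthly cost for this cluster size."""
    return min(HOURLY_NODE_PRICE, key=lambda p: monthly_cost(p, nodes))


if __name__ == "__main__":
    for provider in HOURLY_NODE_PRICE:
        print(f"{provider}: ${monthly_cost(provider, nodes=10):,.2f}/month")
    print("cheapest:", cheapest(nodes=10))
```

In practice you would also have to fold in egress traffic, storage and managed-service fees, which is exactly where the provider-specific terminology makes things murky — but running the same cluster on each platform for a month gives you the real number directly.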
Maybe you will even find that bare-metal offerings or smaller cloud platforms standing in the overwhelming shadow of giants like AWS, Google Cloud or Azure are a cost-efficient alternative for your applications.