Can you run serverless on kubernetes?
Can you run your own private serverless?
There is a lot of conversation about containers at the moment with regard to serverless, and whether some form of containers can be described as serverless or not. It’s mainly Kubernetes people who are suggesting that because they can build a “Function as a Service” (FaaS) platform with Kubernetes, you can therefore build a serverless solution with Kubernetes.
I have a very strong opinion on this, not because of who I work for and their opinion (Full Disclosure: I work for AWS as a Senior Developer Advocate for Serverless), but because of my background as a CTO.
My take? If you’re at any point responsible for running containers, even if that’s on a managed Kubernetes service, you’re not serverless.
In fact, I’d say this:
Serverless FaaS on Kubernetes is an oxymoron
Here are a few arguments I hear for why people think that FaaS on Kubernetes is serverless, and my responses.
So this is the first argument I get: if you’re building a serverless solution, and you’re building functions, it doesn’t matter “where” the code runs; the only thing the developer cares about is whether their code runs correctly.
Well, that’s absolutely right, if you only care about the code and unit tests.
If you care about system tests and have any kind of integrations with other systems, then you might want to worry about whether it works elsewhere.
Or you might have a different set of constraints than you thought. You’ve been expecting it to run on AWS Lambda, and all of a sudden your engineering lead walks up and says “Oh, somebody over there decided we’re running our own private FaaS on Kubernetes. That’s OK, right? It’s just code”.
Your expectation is that your teams are siloed. Somebody writes the code. Somebody else is responsible for deploying and managing it.
Thinking as a CTO, I would want my developers to understand the environment they are deploying into, even if they don’t have responsibility for that environment in the end. It does matter to me, because the environment will have advantages and constraints that a developer needs to understand and build for.
So the experience is not the same, simply because it is important to know the environment you are deploying into.
What I think is meant is “We can provide a way for developers to deploy functions, and we can run those functions on demand”.
What you probably can’t do is provide the same level of scalability, reliability, support and maintenance as a major cloud provider.
Well, you might be able to, with massive amounts of people and money and time.
In fact, I’m pretty sure that if you went to a CTO with that argument, they’d either look at you with confusion or laugh at you for a long time. Then, they might ask you to write up a business case for it against AWS as an example over the lifetime of the project, say three years, and then come back with the overall cost-benefit analysis.
Without some serious resources, big data centres, and some serious people behind you, I reckon it’s going to be difficult to compete with AWS in a cost-benefit analysis with this argument.
It’s not the same.
This argument I have some sympathy for; it is a relatively good one. A developer needs to be able to develop solutions in an environment as close as possible to the actual environment being delivered into. This makes sense.
This argument harks back to a time when it was quite hard to get a development environment that looked like the servers we used to run things on. Development happened on machines that looked nothing like the machines we were going to deploy on. In fact, deployment was a nightmare (FTP aaaargh).
However, let’s switch the argument around a little. If you are deploying into a cloud environment, and are using some form of infrastructure as code or templating — e.g. AWS CloudFormation — then you can generate a copy of your environment in another account.
So give a developer an AWS account, give them the CloudFormation Stack, run the Stack, and you have a working copy of the environment.
And the advantage of this is that it’s actually running in the same environment as the production environment. It’s not “local”.
It’s different, certainly, but it’s not “wrong”.
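As a rough sketch of what “run the Stack” can look like: everything below (the resource, runtime, file names, and deploy command) is illustrative, not from any particular project, but it shows how the same template that defines production can stand up a developer’s own copy in a separate AWS account.

```yaml
# environment.yaml — a minimal, hypothetical CloudFormation/SAM template.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # SAM shorthand for serverless resources

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: ./src

# Give a developer their own AWS account and run the same template there
# (after packaging local code with `aws cloudformation package`):
#   aws cloudformation deploy --template-file environment.yaml \
#     --stack-name dev-copy --capabilities CAPABILITY_IAM
```

The developer’s copy is then running on the same platform as production, not an approximation of it.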
I don’t think this is true. We should define something as serverless, or at most “mostly serverless”, and not just encompass everything under an umbrella and say “it’s all serverless, but this is more serverless than that”.
I believe we need some consensus in terms of a definition or everybody will just create something and call it “serverless” (which is what is happening now).
My personal definition from last year is this:
A Serverless solution is one that costs you nothing to run if nobody is using it (excluding data storage)
AWS has a more complete definition on the AWS Serverless page.
I think these definitions are pretty strong and solid.
These definitions exclude a Kubernetes solution from being serverless unless it is run by a provider who can deliver that scaling and high availability.
But you still have the “server management” problem. It’s still containers. You still have to manage and maintain the container.
You have to manage the code that runs the logic (as you do in the AWS Lambda environment too), but on Kubernetes you also have to manage and update the container itself.
You’re still managing the container. It’s therefore not serverless in my book.
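To make that maintenance burden concrete, here is a hypothetical, minimal function container. Every name in it (base image, file names, runtime version) is illustrative; the point is that each line is something you, not the platform, are on the hook for keeping patched and current.

```dockerfile
# Hypothetical function image. Each line below is yours to maintain:
FROM python:3.12-slim        # you chose this base image; you must track its CVEs
                             # and rebuild/redeploy when it is patched
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependency updates are on you
COPY handler.py .
CMD ["python", "handler.py"]                         # runtime version upgrades too
```

On a managed FaaS such as Lambda, the runtime and underlying image patching are the provider’s responsibility; here, the moment a base-image vulnerability is announced, the rebuild is your job.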
I don’t think serverless is a continuum. It’s either serverless or it isn’t.
There may be really good and valid reasons to run a FaaS on Kubernetes for your use case and organisation.
But it isn’t serverless. It’s simply FaaS.
Don’t believe the container crowd when they say that serverless is just a part of their world. It isn’t.
Serverless is a different paradigm. It shouldn’t be seen as the same, and I for one don’t think there is an obvious equivalence.
Opinions expressed in this blog are mine and may or may not reflect the opinions of my employer.