Serverless provides benefits far beyond ease of management: it strongly encourages “useful” engineering practices. Here’s how.
It’s hard to determine what can be considered a “good” or “bad” engineering practice. We often hear about best practices, but everything really boils down to a specific use case. Therefore, I deliberately chose the word “useful” rather than “good” in the title.
The modern DevOps culture introduced several paradigms that are useful regardless of the circumstances: building infrastructure in a declarative and repeatable way, leveraging automation to facilitate seamless IT operations, and developing in an agile way to keep improving our end results over time. I would argue that serverless can be considered an enabler for many of those useful practices.
1. It forces you to keep your components small

I don’t want to argue whether microservices are better than monolithic applications. It all depends on your use cases. But we can certainly agree that it’s beneficial to build individual software components in such a way that each is responsible for only one thing. Examples of those benefits:
1. They are easier to change. After reading the book “The Pragmatic Programmer”, I realized that making your software easy to change is THE principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output, so modifying your code is simple. Written properly, serverless functions encourage exactly this kind of stateless, easy-to-change code.
2. They are easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one of the main reasons why many decide to split their Git repositories from a “monorepo” to one repository per service.
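The “pure function” benefit described above can be sketched in a few lines of Python. This is only an illustration; the handler name and the event shape are made up for the example, not taken from any real service:

```python
def handler(event, context=None):
    """A pure, stateless handler: the output depends only on the input event.

    No global state is read or mutated, so calling it twice with the same
    event always yields the same result, which makes the function easy to
    test, change, and redeploy in isolation.
    """
    items = event.get("items", [])
    total = sum(item["price"] * item["quantity"] for item in items)
    return {"order_id": event.get("order_id"), "total": total}
```

Because the handler causes no side effects and reads no hidden state, its behavior is fully determined by its input, which is exactly what makes it safe to modify and redeploy on its own.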
With serverless, you are forced to make your components small. For instance, you cannot run long-running processes on AWS Lambda (at least for now): at the time of writing, the maximum timeout configuration doesn’t allow any process that takes longer than 15 minutes. You could switch to serverless containers with a service such as ECS, but the point stands: you need to break larger functionality into smaller components.
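To make the 15-minute constraint concrete, here is a back-of-the-envelope sketch of splitting a large batch job into invocations that each fit inside the timeout. The function name, the safety margin, and the per-record cost are all hypothetical; a real per-record cost would have to be measured:

```python
LAMBDA_MAX_TIMEOUT = 15 * 60  # seconds, the limit mentioned above

def plan_batches(record_count, seconds_per_record, safety_margin=0.8):
    """Split a long-running job into invocations that each fit
    comfortably inside a single Lambda timeout.

    Returns (records_per_invocation, number_of_invocations).
    Purely illustrative arithmetic.
    """
    budget = LAMBDA_MAX_TIMEOUT * safety_margin
    per_batch = max(1, int(budget // seconds_per_record))
    invocations = -(-record_count // per_batch)  # ceiling division
    return per_batch, invocations
```

For example, a job of 10,000 records at half a second each would be planned as 7 invocations of 1,440 records rather than one 83-minute process.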
How small should your serverless components be? There is no single answer. It’s something that can only be learned through experience. In this article, you can find out what to consider when deciding about the scope of your serverless microservices.
When we talk about serverless, we are not limited to execution environments such as AWS Lambda or ECS. When you use other serverless components, you will notice that they are designed to do ONE thing really well (again using AWS examples, but the same applies to other cloud vendors):
SQS — simple yet highly effective message queuing service,
SNS — as the name suggests, a simple yet powerful notification service,
SES — the same but for sending emails,
S3 — I can’t think of a simpler service for storing data; the same is true for GCP’s Cloud Storage and Azure’s Blob Storage.
There are many more services that demonstrate this paradigm of doing one thing well in a serverless world, but you get the idea.
2. It forces you to make your components self-contained

Serverless not only forces you to make your components small; it also requires that you define all resources needed for the execution of your function or container.
This means that you cannot rely on any pre-configured state — you need to specify all package dependencies, environment variables, and any configuration you need to run your application. Regardless of whether you use FaaS or a serverless container — your environment must remain self-contained since your code can be executed on an entirely different server any time you run it.
TL;DR: You are forced to build reproducible code.
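One way to make this self-containedness explicit is to read every piece of configuration from environment variables and fail fast when something is missing. A minimal sketch, assuming a handful of illustrative variable names:

```python
import os

REQUIRED_VARS = ("TABLE_NAME", "QUEUE_URL", "STAGE")  # illustrative names

def load_config(env=None):
    """Read every piece of runtime configuration explicitly from
    environment variables, failing fast if anything is missing.

    Nothing is assumed to be pre-configured on the host, so the same
    package behaves identically on whichever server runs it.
    """
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        raise RuntimeError("Missing configuration: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup with a clear error beats discovering a missing setting halfway through an invocation on a fresh execution environment.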
3. It encourages more frequent deployments
If your components are small, self-contained, and can be executed independently of each other, nothing stops you from deploying more frequently. The need to coordinate functionality across individual components still exists (especially when it comes to the underlying data!), but individual deployments inherently become more independent.
4. It encourages the least privilege principle

In theory, your serverless components may still use an admin user with permission to access and do everything. However, serverless compute platforms, such as AWS Lambda, encourage you to grant the function permissions only to the services strictly needed for its execution, effectively applying the least privilege principle. On top of that, by using IAM roles, you can avoid hard-coding credentials or relying on secrets stored in external services or environment variables.
With small serverless components, you are encouraged to grant permissions on a per-service or even per-function level.
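As an illustration, a least-privilege policy for a function that only reads and writes a single DynamoDB table might look like the sketch below; the table name, region, and account id are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders"
    }
  ]
}
```

Nothing else is allowed: if the function is ever compromised, the blast radius is limited to those two actions on that one table.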
5. It allows you to achieve high availability and fault tolerance easily
Most serverless components are designed to offer high availability (HA) out of the box. For instance, by default, AWS Lambda is deployed to multiple availability zones and automatically retries failed asynchronous invocations up to two times. Achieving the same with non-serverless resources is feasible but far from trivial.
Similarly, your containerized ECS tasks, your DynamoDB tables, and your S3 objects are, or can easily be, deployed to multiple availability zones (or subnets) for resilience.
6. It lets you treat your servers like cattle, not pets

There is great merit in treating your servers like cattle rather than pets. Most DevOps engineers who leverage the “Infrastructure as Code” paradigm would agree with that.
You’ve probably experienced this at some point in your IT career: you meticulously took care of installing everything on your compute instance and building all resources in such a way that this server (your “pet”) is configured perfectly. Then, one day you come to the office, and you notice that your server is down.
You have no backup, and you didn’t store the code you used to configure the entire system. And it turns out that you had some environment variables that were responsible for defining user access to various resources. Now all that is gone, and you need to start entirely from scratch.
We don’t have to look only at such extreme failure scenarios to see the danger in treating servers like pets. Imagine that you simply need a copy of the same server and resource configuration to create a development or user-acceptance-test environment. Perhaps you want to create a new instance of the same server for scaling or to provide high availability.
With manual configuration, you always risk that the environments end up being different.
The serverless approach forces you to take a completely different perspective on defining the resources needed for your application. You are required to build a self-contained code package that can run on any server in an environment-agnostic way. If a server dies, you don’t lose anything: simply rerunning the serverless application provisions new resources (i.e., cattle) for it to run.
Is it more difficult? Of course, it is! But once you’ve built this repeatable process, you gain so many benefits, as discussed in this article.
7. It encourages using standardized building blocks

If you decide to build a serverless architecture, it’s quite unlikely that you would end up building your own message queuing system or notification service. Instead, you would rely on common, well-known services offered by your cloud provider, such as SQS for queuing and SNS for notifications on AWS.
Why is that beneficial? The reality is that many software engineering projects are not particularly challenging to developers, especially very experienced programmers who have repeatedly tackled similar problems in the past. Given that software engineers are incredibly smart and talented people, they may start building their own, sometimes overly complex and hard-to-maintain, solutions when they get bored.
Offering them a platform that provides standardized, well-known, and well-documented building blocks (such as SQS, SNS, IAM, S3, …) that are fully managed by the cloud provider can greatly improve the maintainability of the entire architecture. These services allow us to build various types of projects in a resilient and decoupled way.
As with anything that comprises many small individual components, it’s often hard to see the bigger picture. It may become more difficult to see relationships between individual elements of a system. This is where observability platforms such as Dashbird shine. You can build a dashboard with all services belonging to a specific application and see, among other things, which components succeeded and how long they ran.
You also gain additional insights into the overall health of your system. For instance, in the image below, you can see a confirmation of point #5 — AWS Lambda gives you high availability and resilience out of the box. Within the dashboard, you can immediately see when your function was retried and why (here: due to a timeout error).
Dashbird observability platform demonstrating the resilience of AWS Lambda by providing insights about retries — Source: courtesy of Dashbird
In this article, we investigated seven reasons why serverless platforms encourage useful engineering practices. Among them, we saw that serverless encourages small, self-contained components that can be deployed independently of each other.
We noticed that it also helps with the security and high availability of the overall infrastructure. Finally, we looked at different serverless building blocks that allow us to build resilient and cost-effective architectures, and how observability platforms such as Dashbird can help us gain additional insights into them.
Thank you for reading!
Previously published at https://dashbird.io/blog/serverless-enforces-useful-engineering-practices/