Serverless provides benefits far beyond the ease of management… it strongly encourages "useful" engineering practices. Here's how.

It's hard to determine what can be considered a "good" or "bad" engineering practice. We often hear about best practices, but everything really boils down to a specific use case. Therefore, I deliberately chose the word "useful" rather than "good" in the title.

The modern DevOps culture introduced several paradigms that are useful regardless of the circumstances: building infrastructure in a declarative and repeatable way, leveraging automation to facilitate seamless IT operations, and developing in an agile way to keep improving our end results over time. I would argue that serverless can be considered an enabler for many of those useful practices.

1. It encourages components that do ONE thing

I don't want to argue whether microservices are better than monolithic applications. It all depends on your use cases. But we can certainly agree that it's beneficial to build individual software components in such a way that they are responsible for only one thing. Examples of those benefits:

1. They are easier to change. After reading the book "The Pragmatic Programmer", I realized that making your software easy to change is THE de-facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output. Thus, modifying your code is simple. If written properly, serverless functions encourage code that is easy to change and stateless.

2. They are easier to deploy — if the changes you made to an individual service don't affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one of the main reasons why many decide to split their Git repositories from a "monorepo" to one repository per service.

With serverless, you are literally forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda (at least for now). At the time of writing, the maximum timeout configuration doesn't allow for any process that takes longer than 15 minutes. You could switch to a serverless container with services such as ECS, but the point is, you need to break larger functionality into smaller components.

How small should your serverless components be?

There is no single answer. It's something that can only be learned through experience. In this article, you can find out what to consider when deciding about the scope of your serverless microservices.

When we talk about serverless, we are not limited to execution environments such as AWS Lambda or ECS. When you use other serverless components, you will notice that they are designed to do ONE thing really well (again, giving AWS examples, but the same relates to other cloud vendors):

SQS — a simple yet highly effective message queuing service,
SNS — as the name suggests, a simple yet powerful notification service,
SES — the same, but for sending emails,
S3 — I can't think of any simpler service for storing data; the same is true for GCP's Cloud Storage and Azure's Blob Storage.

There are many more services we could talk about to demonstrate this paradigm of doing one thing well in the serverless world, but you get the idea.
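To make the "one component, one job" idea concrete, here is a minimal sketch of what such a function could look like. The scenario is purely illustrative and not taken from the article: a hypothetical handler that copies newly uploaded S3 objects into a second bucket, with the destination injected through a hypothetical TARGET_BUCKET environment variable.

```python
import os
import urllib.parse

import boto3  # AWS SDK for Python; bundled in the Lambda runtime

s3 = boto3.client("s3")
TARGET_BUCKET = os.environ["TARGET_BUCKET"]  # hypothetical configuration value


def handler(event, context):
    """Do ONE thing: copy every newly uploaded object into a target bucket.

    The function is stateless and idempotent: copying the same key twice
    produces the same result, so an automatic Lambda retry does no harm.
    """
    records = event.get("Records", [])
    for record in records:
        source_bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded (spaces arrive as '+'), so decode first
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            Bucket=TARGET_BUCKET,
            Key=key,
            CopySource={"Bucket": source_bucket, "Key": key},
        )
    return {"processed": len(records)}
```

Because the handler is stateless and the copy is idempotent, it stays easy to change and safe to retry, which is exactly the property described above.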
2. It enforces self-contained execution environments

Serverless doesn't only force you to make your components small, but it also requires that you define all resources needed for the execution of your function or container.

This means that you cannot rely on any pre-configured state — you need to specify all package dependencies, environment variables, and any configuration you need to run your application. Regardless of whether you use FaaS or a serverless container, your environment must remain self-contained, since your code can be executed on an entirely different server any time you run it.

TL;DR: You are forced to build reproducible code.

3. It encourages more frequent deployments

If your components are small, self-contained, and can be executed independently from each other, nothing stops you from deploying more frequently. The need to consolidate functionality across single components still exists (especially when it comes to the underlying data!), but the individual deployments inherently become more independent.

4. It encourages the least-privilege security principle

In theory, your serverless components may still use an admin user with permission to access and do everything. However, serverless compute platforms such as AWS Lambda encourage you to grant the function permissions to only the services strictly needed for its execution, effectively leveraging the least-privilege principle. On top of that, by using IAM roles, you can avoid hard-coding credentials or relying on secrets stored in external services or environment variables.

With small serverless components, you are encouraged to grant permissions on a per-service or even per-function level.

5. It allows you to achieve high availability and fault tolerance easily

Most serverless components are designed in such a way that they offer high availability (HA). For instance, by default, AWS Lambda is deployed to multiple availability zones and retries two times in case of a failure of any asynchronous invocation. Achieving the same with non-serverless resources is feasible but far from trivial.

Similarly, your containerized ECS tasks, your DynamoDB tables, and your S3 objects are, or can easily be, deployed to multiple availability zones (or subnets) for resilience.

6. It enforces Infrastructure as Code

There is great merit in treating your servers like cattle rather than pets. Most DevOps engineers who leverage the "Infrastructure as Code" paradigm would agree with that.

You've probably experienced this at some point in your IT career: you meticulously took care of installing everything on your compute instance and building all resources in such a way that this server (your "pet") is configured perfectly. Then, one day you come to the office and notice that your server is down.

You have no backup, and you didn't store the code you used to configure the entire system. And it turns out that you had some environment variables that were responsible for defining user access to various resources. Now all of that is gone, and you need to start entirely from scratch.

We don't have to look only at such extreme failure scenarios to see the danger in treating servers like pets. Imagine that you simply need a copy of the same server and resource configuration to create a development or user-acceptance-test environment. Perhaps you want to create a new instance of the same server for scale or to provide high availability. With a manual configuration, you always risk that the environments end up being different.

The serverless approach forces you to take a completely different perspective on defining the resources needed for your application. You are required to build a self-contained code package that can run on any server in an environment-agnostic way. If this server dies, you don't lose anything, since simply rerunning the serverless application provisions all new resources (i.e., cattle) needed for it to run.

Is it more difficult? Of course it is! But once you've built this repeatable process, you gain many benefits, as discussed in this article.
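As an illustration of this repeatable setup, below is a minimal Infrastructure-as-Code sketch using the AWS CDK in Python. The article doesn't prescribe a particular tool; CloudFormation, SAM, Terraform, or the Serverless Framework would do equally well, and every name in the sketch (the stack, the handler module, the src/ directory, the environment variable) is a hypothetical placeholder.

```python
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_lambda as lambda_
from constructs import Construct


class ThumbnailStack(Stack):
    """Declares the function and its configuration as code, so the whole
    environment can be recreated from the repository at any time."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        lambda_.Function(
            self,
            "ResizeImageFn",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="handler.resize_image",       # hypothetical module and function
            code=lambda_.Code.from_asset("src"),  # packaged code and dependencies
            timeout=Duration.seconds(30),
            memory_size=256,
            environment={"TARGET_BUCKET": "example-thumbnails"},  # hypothetical value
        )


app = App()
ThumbnailStack(app, "ThumbnailStack")
app.synth()
```

Running `cdk deploy` against a definition like this recreates the same function, with the same configuration, in any account or region; the server it happens to run on is disposable cattle, not a pet.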
7. It encourages using existing battle-tested components

If you decide on building a serverless architecture, it's quite unlikely that you would end up building your own message queuing system or notification service. You would rather rely on common, well-known services offered by your cloud provider. Some examples based on AWS:

Do you need a message queue? Use SQS.
Do you need to send notifications? Use SNS.
Do you need to handle secrets? Use Secrets Manager.
Do you need to build a REST API? Use API Gateway.
Do you need to manage permissions or user access? Use IAM or Cognito.
Do you need to store some key-value pairs or data objects? Use DynamoDB or simply dump data to S3.

Why is that beneficial? The reality is that many software engineering projects are often not particularly challenging to developers, especially for very experienced programmers who have repeatedly tackled similar problems in the past. Given that software engineers are incredibly smart and talented people, they often start building their own, sometimes overly complex and difficult-to-maintain solutions when they get bored. Offering them a platform that provides standardized, well-known, and well-documented building blocks (such as SQS, SNS, IAM, S3, …) that are fully managed by the cloud provider can greatly improve the maintainability of the entire architecture. And the above-mentioned services allow us to build various types of projects in a resilient and decoupled way.
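To show how little glue code these building blocks typically require, here is a short sketch (my own illustration, not from the article) that hands work off to SQS and reads database credentials from Secrets Manager via boto3. The queue URL and secret name are hypothetical values expected as environment variables.

```python
import json
import os

import boto3

# Hypothetical configuration, injected by your IaC tool or environment
QUEUE_URL = os.environ["ORDER_QUEUE_URL"]
DB_SECRET_NAME = os.environ["DB_SECRET_NAME"]

sqs = boto3.client("sqs")
secrets = boto3.client("secretsmanager")


def enqueue_order(order: dict) -> str:
    """Hand the order off to SQS instead of running a home-grown queue."""
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
    )
    return response["MessageId"]


def database_credentials() -> dict:
    """Read credentials from Secrets Manager instead of hard-coding them."""
    response = secrets.get_secret_value(SecretId=DB_SECRET_NAME)
    return json.loads(response["SecretString"])
```

Each call is a single SDK invocation against a fully managed, battle-tested service; there is nothing to host, patch, or scale yourself.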
Things that are harder to accomplish with serverless

As with anything that comprises many small individual components, it's often hard to see the bigger picture. It may become more difficult to see relationships between individual elements of a system. This is where observability platforms such as Dashbird shine. You can build a dashboard with all services belonging to a specific application and see (among other things) which components were successful and how long they ran.

You also gain additional insights into the overall health of your system. For instance, in the image below, you can see a confirmation of point #5 — AWS Lambda gives you high availability and resilience out of the box. Within the dashboard, you can immediately see when your function was retried and why (here: due to a timeout error).

Dashbird observability platform demonstrating the resilience of AWS Lambda by providing insights about retries — Source: courtesy of Dashbird

Conclusion

In this article, we investigated seven reasons why serverless platforms encourage useful engineering practices. Among them, we could see that serverless encourages small, self-contained components that can be deployed independently of each other.

We noticed that it also helps with the security and high availability of the overall infrastructure. Finally, we looked at different serverless building blocks that allow us to build resilient and cost-effective architectures, and at how observability platforms such as Dashbird can help us gain additional insights about them.

Thank you for reading!

Resources:

[1] AWS Whitepaper on serverless architectures with Lambda

Previously published at https://dashbird.io/blog/serverless-enforces-useful-engineering-practices/