AWS Lambda — should you have few monolithic functions or many single-purposed functions?

by Yan Cui, January 8th, 2018

A funny moment (at 38:50) happened during Tim Bray’s session (SRV306, https://www.youtube.com/watch?v=sMaqd5J69Ns) at re:invent 2017, when he asked the audience if we should have many simple, single-purposed functions, or fewer monolithic functions, and the room was pretty much split in half.

Having been brought up on the SOLID principles, especially the single responsibility principle (SRP), I found this a moment that challenged my belief that following the SRP in the serverless world is a no-brainer.

That prompted this closer examination of the arguments from both sides.

Full disclosure, I am biased in this debate. If you find flaws in my thinking, or simply disagree with my views, please point them out in the comments.

Update 28/01/18: as Quentin Ventura pointed out in the comments, it wasn’t immediately obvious how I organize my codebase, and therefore the context in which I’m discussing these two approaches, so I’m laying that out here.

I use the Serverless framework, and all the functions for a service would be organized into the same repo, so sharing code within the repo is done via shared modules.

And for every repo, there’s a serverless.yml which configures all the functions with the relevant event sources, etc., and all the functions are deployed together when we run sls deploy.

Every repo also has a build.sh script that encapsulates the different build steps, to keep CI/CD simple and to make the builds executable locally, independent of the CI tool — each CI tool has its own requirements for a yml file, but ours are simple one-liners that invoke our build.sh script.

So, a typical repo would look something like this:
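
Here’s a sketch, using the fictional user-api that appears later in this post — the file names are illustrative, not the actual repo:

user-api/
├── build.sh          # encapsulates the build steps, invoked by the CI tool
├── package.json
├── serverless.yml    # configures all the functions and their event sources
├── functions/
│   ├── get-user.js
│   ├── create-user.js
│   └── delete-user.js
└── lib/              # shared modules used by the functions in this repo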

What is a monolithic function?

By “monolithic functions”, I mean functions that have internal branching logic based on the invocation event and can do one of several things.

For example, you can have one function handle several HTTP endpoints and methods, and perform different actions based on the path and method.

module.exports.handler = (event, context, cb) => {
  const path = event.path;
  const method = event.httpMethod;

  if (path === '/user' && method === 'GET') {
    .. // get user
  } else if (path === '/user' && method === 'DELETE') {
    .. // delete user
  } else if (path === '/user' && method === 'POST') {
    .. // create user
  } else if ... // other endpoints & methods
}
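
For contrast, the single-purposed version gives each of those branches its own function and handler. A minimal sketch — the getUser helper is hypothetical, standing in for a shared module:

// functions/get-user.js — handles GET /user and nothing else
const { getUser } = require('../lib/user'); // hypothetical shared module

module.exports.handler = (event, context, cb) => {
  getUser(event.queryStringParameters.id)
    .then(user => cb(null, { statusCode: 200, body: JSON.stringify(user) }))
    .catch(err => cb(err));
};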

What is the real problem?

One can’t rationally reason about and compare solutions without first understanding the problem and what qualities are most desired in a solution.

And when I hear complaints such as:

having so many functions is hard to manage

I immediately wonder what manage entails. Is it to find specific functions you’re looking for? Is it to discover what functions you have? Does this become a problem when you have 10 functions, or 100? Or does it only become a problem when you have more developers working on them than you’re able to keep track of?

Drawing from my own experiences, the problem we’re dealing with has less to do with what functions we have, and more to do with what features and capabilities we possess through these functions.

After all, a Lambda function, like a Docker container, or an EC2 server, is just a conduit to deliver some business feature or capability you require.

You wouldn’t ask “Do we have a get-user-by-facebook-id function?”, since that requires knowing what the function is called before you even know whether the capability exists, let alone whether it’s captured by a Lambda function. Instead, you would probably ask “Do we have a Lambda function that can find a user based on his/her facebook ID?”.

So the real problem is this: given a complex system that consists of many features and capabilities, and that is maintained by many teams of developers, how do we organize these features and capabilities into Lambda functions so that they’re optimised towards:

  • discoverability: how do I find out what features and capabilities exist in our system already, and through which functions?
  • debugging: how do I quickly identify and locate the code I need to look at to debug a problem? e.g. there are errors in system X’s logs, where do I find the relevant code to start debugging the system?
  • scaling the team: how do I minimise friction and make it possible to grow the engineering team?

These are the qualities that are most important to me. With this knowledge, I can compare the two approaches and see which is better suited for me.

You might care about different qualities, for example, you might not care about scaling the team, but you really worry about the cost for running your serverless architecture. Whatever it might be, I think it’s always helpful to make those design goals explicit, and make sure they’re shared with and understood (maybe even agreed upon!) by your team.

Discoverability

Discoverability is by no means a new problem. According to Simon Wardley, it’s rampant in both government and the private sector, with most organisations lacking a systematic way for teams to share and discover each other’s work.

courtesy of Simon Wardley’s posts on Twitter

As mentioned earlier, what’s important here is the ability to find out what capabilities are available through your functions, rather than what functions are there.

Ask not what functions you have; ask what your functions can do.

An argument I often hear for monolithic functions, is that it reduces the no. of functions, which makes them easier to manage.

On the surface, this seems to make sense. But the more I think about it, the more it strikes me that the no. of functions would only be an impediment to our ability to manage our Lambda functions IF we try to manage them by hand rather than using the tools already available to us.

After all, if we are able to locate books by their content (“find me books on the subject of X”) in a huge physical space with tens of thousands of books, how can we struggle to find Lambda functions when there are so many tools available to us?

Libraries, yup, they still exist!

With a simple naming convention, like the one that the Serverless framework enforces, we can quickly find related functions by prefix.

For example, if I want to find all the functions that are part of our user API, I can do that by searching for user-api.
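
You can do the same programmatically. A sketch using the AWS SDK for Node.js — listFunctions pages through all the functions, and we filter client-side by name prefix:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// list the names of all functions whose names start with the given prefix,
// following the pagination marker until we've seen every function
const listByPrefix = async (prefix) => {
  const names = [];
  let marker;
  do {
    const resp = await lambda.listFunctions({ Marker: marker }).promise();
    names.push(
      ...resp.Functions
        .filter(f => f.FunctionName.startsWith(prefix))
        .map(f => f.FunctionName));
    marker = resp.NextMarker;
  } while (marker);
  return names;
};

listByPrefix('user-api').then(console.log);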

With tagging, we can also catalogue functions across multiple dimensions, such as environment, feature name, what type of event source, the name of the author, and so on.

By default, the Serverless framework adds the STAGE tag to all of your functions, and you can add your own tags as well — see the documentation on how to add tags.
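
In serverless.yml, that might look something like the sketch below — the tag keys (feature, author) are examples of my own, not something the framework prescribes:

functions:
  get-user:
    handler: functions/get-user.handler
    tags:
      feature: user-api
      author: yan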

The Lambda management console also gives you a handy dropdown list of the available values when you try to search by a tag.

If you have a rough idea of what you’re looking for, then the no. of functions is not going to be an impediment to your ability to discover what’s there.

On the other hand, the capabilities of the user-api are immediately obvious with single-purposed functions: I can see from the relevant functions that I have the basic CRUD capabilities, because there is a corresponding function for each.

I can see what capabilities I have as part of the suite of functions that make up the user-api feature.

With a monolithic function, however, it’s not immediately obvious, and I’ll have to either look at the code myself or consult the author of the function, which for me makes for pretty poor discoverability.

Because of this, I will mark the monolithic approach down on discoverability.

Having more functions though, means there are more pages for you to scroll through if you just want to explore what functions are there rather than looking for anything specific.

Although, in my experience, with all the functions nicely clustered together by name prefix thanks to the naming convention the Serverless framework enforces, it’s actually quite nice to see what each group of functions can do rather than having to guess what goes on inside a monolithic function.

But, I guess it can be a pain to scroll through everything when you have thousands of functions. So, I’m going to mark single-purposed functions down only slightly for that. I think at that level of complexity, even if you reduce the no. of functions by packing more capabilities into each function, you will still suffer more from not being able to know the true capabilities of those monolithic functions at a glance.

Debugging

In terms of debugging, the relevant question here is whether or not having fewer functions makes it easier to quickly identify and locate the code you need to look at to debug a problem.

Based on my experience, the trail of breadcrumbs that leads you from, say, an HTTP error or an error stack trace in the logs, to the relevant function and then the repo, is the same regardless of whether the function does one thing or many different things.

What will be different, is how you’d find the relevant code inside the repo for the problems you’re investigating.

A monolithic function that has more branching and in general does more things, would understandably take more cognitive effort to comprehend and follow through to the code that is relevant to the problem at hand.

For that, I’ll mark monolithic functions down slightly as well.

Scaling

One of the early arguments thrown around for microservices is that they make scaling easier, but that’s just not the case — if you know how to scale a system, then you can scale a monolith just as easily as you can scale a microservice.

I say that as someone who has built monolithic backend systems for games with a million daily active users. Supercell, the parent company of my current employer and the creator of top-grossing games like Clash of Clans and Clash Royale, has well over 100 million daily active users on its games, and its backend systems for these games are all monoliths as well.

Instead, what we have learnt from tech giants like the Amazons, Netflixes and Googles of this world is that a service-oriented style of architecture makes it easier to scale in a different dimension — our engineering team.

This style of architecture allows us to create boundaries within our system, around features and capabilities. In doing so it also allows our engineering teams to scale the complexity of what they build as they can more easily build on top of the work that others have created before them.

Take Google’s Cloud Datastore for example: the engineers working on that service were able to produce a highly sophisticated service by building on top of many layers of services, each providing a powerful layer of abstraction.

http://bit.ly/2CQx3C4

These service boundaries are what gives us that greater division of labour, which allows more engineers to work on the system by giving them areas where they can work in relative isolation. This way, they don’t constantly trip over each other with merge conflicts, and integration problems, and so on.

Michael Nygard also wrote a nice article recently that explains this benefit of boundaries and isolation in terms of how it helps to reduce the overhead of sharing mental models.

“if you have a high coherence penalty and too many people, then the team as a whole moves slower… It’s about reducing the overhead of sharing mental models.” — Michael Nygard

Having lots of single-purposed functions is perhaps the pinnacle of that division of labour, and something you lose a little of when you move to monolithic functions. Although in practice, you probably won’t end up having so many developers working on the same project that you feel the pain — unless you really pack them in with those monolithic functions!

Also, restricting a function to doing just one thing helps limit how complex it can become. To make something more complex, you would instead compose these simple functions together via other means, such as with AWS Step Functions.

Once again, I’m going to mark monolithic functions down for losing some of that division of labour, and raising the complexity ceiling of a function.

Update 09/02/2018: several folks have asked about cold starts in the context of monolithic vs single-purposed functions, so here are my thoughts.

As Kostas Bariotis asked:

Monolithic functions have also the benefits of being used more frequently thus they are less likely to be in cold state, while single-purposed functions that are not being used frequently may always be cold state, don’t you think?

That seems like a fair assumption, but the actual behaviour of cold starts is a more nuanced discussion and can have drastically different results depending on the rate the requests come in. Check out my other post that goes into this behaviour in more detail.

The effect of consolidation into monolithic functions (on the no. of cold starts experienced) quickly diminishes with load

To simplify things, let’s consider “the number of cold starts you’ll have experienced as you ramp up to X req/s”. Assuming that:

  • the ramp-up was gradual, so there were no massive spikes (which could trigger a lot more cold starts)
  • each request’s duration is short, say, 100ms

At a small scale, say, 1 req/s per endpoint, and a total of 10 endpoints (i.e. 1 monolithic function vs 10 single-purposed functions), we’ll have a total of 10 req/s. Given the 100ms execution time, that is just within what one concurrent execution is able to handle.

To reach 1 req/s per endpoint, you will have experienced:

  • monolithic: 1 cold start
  • single-purposed: 10 cold starts

As the load goes up to 100 req/s per endpoint (a total of 1000 req/s), you’ll need at least 100 concurrent executions of the monolithic function: at 100ms per request, the throughput per concurrent execution is 10 req/s, hence concurrent executions = 1000 / 10 = 100. To reach this level of concurrency, you will have experienced:

  • monolithic: 100 cold starts

At this point, 100 req/s per endpoint = 10 concurrent executions for each of the single-purposed functions. To reach that level of concurrency, you will also have experienced:

  • single-purposed: 10 concurrent execs * 10 functions = 100 cold starts

So, monolithic functions don’t help you with the no. of cold starts you’ll experience even at a moderate amount of load.
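
To put the arithmetic above into code, here’s a minimal sketch of the back-of-envelope model — assuming a gradual ramp-up, 100ms executions, and that the no. of cold starts equals the peak no. of concurrent executions each function needs:

// one concurrent execution can serve 1000 / durationMs requests per second
const coldStarts = (reqPerSecPerEndpoint, endpoints, durationMs, monolithic) => {
  const perExec = 1000 / durationMs; // e.g. 10 req/s at 100ms per request
  return monolithic
    // one function absorbs the combined load with a single shared pool
    ? Math.ceil((reqPerSecPerEndpoint * endpoints) / perExec)
    // every single-purposed function needs its own pool of warm containers
    : endpoints * Math.ceil(reqPerSecPerEndpoint / perExec);
};

coldStarts(1, 10, 100, true);    // monolithic, low load:        1
coldStarts(1, 10, 100, false);   // single-purposed, low load:  10
coldStarts(100, 10, 100, true);  // monolithic, high load:     100
coldStarts(100, 10, 100, false); // single-purposed, high load: 100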

Also, when the load is low, there are simple things you can do to mitigate cold starts by pre-warming your functions (as discussed in the other post). You can even use the serverless-plugin-warmup to do that for you, and it even comes with the option to do a pre-warmup run after a deployment.
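
For reference, enabling the plugin in serverless.yml looks something like the sketch below; the option names have changed between plugin versions, so treat the keys under warmup as assumptions and check the plugin’s README:

plugins:
  - serverless-plugin-warmup

custom:
  warmup:
    default: true  # assumption: opt every function in the service in
    prewarm: true  # assumption: run a warm-up invocation right after each deployment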

However, this practice stops being effective when you have even a moderate amount of concurrency. At which point, monolithic functions would incur just as many cold starts as single-purposed functions.

Consolidating into monolithic functions can increase initialization time, which increases the duration of cold start

By packing more “actions” into one function, we also increase the no. of modules that need to be initialized during the cold start of that function, and are therefore highly likely to experience longer cold starts as a result. (Basically, anything outside of the exported handler function is initialized during the Bootstrap runtime phase of the cold start — see below.)

from Ajay Nair’s talk at re:invent 2017 — https://www.youtube.com/watch?v=oQFORsso2go

Imagine the monolithic version of the fictional user-api I used to illustrate the point in this post: our handler module would need to require all the dependencies used by all the endpoints.

const depA = require('lodash');
const depB = require('facebook-node-sdk');
const depC = require('aws-sdk');
...

Whereas in the single-purposed version of the user-api, only the get-user-by-facebook-id endpoint’s handler function would need to incur the extra overhead of initializing the facebook-node-sdk dependency during cold start.
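
In other words, something like this hypothetical handler module, where only the one function that needs facebook-node-sdk initializes it:

// functions/get-user-by-facebook-id.js — only this function pays the
// initialization cost of facebook-node-sdk during its cold start
const Facebook = require('facebook-node-sdk');
const AWS = require('aws-sdk');

module.exports.handler = (event, context, cb) => {
  ... // look up the user by his/her facebook ID
};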

You also have to factor in any other modules in the same project, and their dependencies, and any code that will be run during those modules’ initialization, and so on.

Wrong place to optimize cold start

So, contrary to one’s intuition, monolithic functions don’t offer any benefit towards cold starts outside what basic prewarming can achieve already, and can quite likely extend the duration of cold starts.

Since cold starts affect you wildly differently depending on language, memory and how much initialization you’re doing in your code, I’ll argue that, if cold starts are a concern for you, then you’re far better off switching to another language (e.g. Go, Node.js or Python) and investing effort into optimizing your code so it suffers shorter cold starts.

Also, keep in mind that this is something that AWS and other providers are actively working on and I suspect the situation will be vastly improved in the future by the platform.

All in all, I think changing the deployment units (one big function vs many small functions) is not the right way to address cold starts.

Conclusion

As you can see, based on the criteria that are important to me, having many single-purposed functions is clearly the better way to go.

Like everyone else, I come preloaded with a set of predispositions and biases formed from my experiences, which quite likely do not reflect yours. I’m not asking you to agree with me, but to simply appreciate the process of working out the things that are important to you and your organization, and how to go about finding the right approach for you.

However, if you disagree with my line of thinking and the arguments I put forward for my selection criteria — discoverability, debugging, and scaling the team & complexity of the system — then please let me know via comments.

Hi, my name is Yan Cui. I’m an AWS Serverless Hero and the author of Production-Ready Serverless. I have run production workloads at scale on AWS for nearly 10 years, and I have been an architect or principal engineer in a variety of industries, ranging from banking and e-commerce to sports streaming and mobile gaming. I currently work as an independent consultant focused on AWS and serverless.

You can contact me via Email, Twitter and LinkedIn.

Check out my new course, Complete Guide to AWS Step Functions.

In this course, we’ll cover everything you need to know to use the AWS Step Functions service effectively, including basic concepts, HTTP and event triggers, activities, design patterns and best practices.

Get your copy here.

Come learn about operational BEST PRACTICES for AWS Lambda: CI/CD, testing & debugging functions locally, logging, monitoring, distributed tracing, canary deployments, config management, authentication & authorization, VPC, security, error handling, and more.

You can also get 40% off the face price with the code ytcui.

Get your copy here.