Serverless computing is on the verge of exploding, and everybody wants a piece. Amazon, Google, and Microsoft (among others) are competing to capture this lucrative market. Who will win this war, though? Many have bet on Amazon, which introduced AWS Lambda in 2014. Since then, Microsoft and Google have launched their own offerings, Azure Functions and Google Cloud Functions, respectively. The Function-as-a-Service (FaaS) market is forecast to grow to $7.72 billion by 2021. This is a rapidly growing sector that nobody can afford to ignore.
What is serverless computing, though? It certainly isn’t ‘serverless’, despite what the name implies: there are still servers, you just don’t manage them. Serverless computing is an event-driven application architecture in which resources are consumed only when certain events occur. This ephemeral model of resource allocation is highly valued because it scales so well: compute is used only at the moment an event fires. In practice, this means you create functions in a service like AWS Lambda that can be invoked on demand.
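To make this concrete, here is a minimal sketch of such a function in Python, following AWS Lambda’s handler convention. The event shape (a JSON object with a "name" key) is purely hypothetical; in a real deployment the event would be defined by whatever triggers the function.

```python
# A minimal AWS Lambda-style handler: the platform invokes this function
# only when an event arrives, and no compute is consumed in between.
# The event shape (a dict with a "name" key) is a hypothetical example.

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```

Because the handler is just a plain function, you can exercise it locally, e.g. `lambda_handler({"name": "Ada"}, None)`, before wiring it up to an event source.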
This type of thinking flies directly in the face of the traditional monolithic approach to application building. Recently, the term ‘monolith’ has become synonymous with ‘uncool’, but in actuality it is just another way to describe an application architecture. A monolithic application is one in which all of the application logic is contained within a single service, or ‘monolith’, so that all of that logic is tightly coupled. Instead of independent services reacting to one another through messaging interfaces, a monolith contains all the logic necessary to drive the application. Every bit of that logic exists at all times inside the monolith, regardless of whether it is needed.
Serverless computing sits at the other end of the spectrum. In a serverless architecture, functions exist independently of one another. To create an application, these functions are called in concert, each one invoked only as needed. For all intents and purposes, these functions only ‘exist’, from a compute perspective, when they are triggered by an event.
Ephemeral functions are only one part of a serverless architecture, however. Once you introduce multiple functions, you also introduce more complexity; namely, the complexity of facilitating communication among all of those functions. As mentioned before, they must work in concert with one another. To keep functions from becoming tightly coupled (one function invoking another, which invokes another, and so on), it is necessary to create an orchestration layer between them. Let’s return to orchestration in a minute and first focus on why you would want a serverless approach at all.
A serverless architecture, first and foremost, is designed to be highly scalable. Each function is called only as needed, so if you deploy on a service such as AWS Lambda, you pay for exactly what you use. This is the power of an event-driven approach: the cost of ‘hosting’ your application is directly proportional to how often its functions are invoked. Scalability isn’t only about cost, either; a service such as AWS Lambda also helps ensure application uptime during large spikes in traffic.
Serverless also eliminates the complex deployment dance that has become increasingly common. Gone are the days of instance and container management, messy configs, and piping between various services and processes. Serverless simplifies the DevOps process considerably; in fact, it transforms it into something entirely different. This brings us back to the idea of orchestration.
Services such as AWS Step Functions have attempted to address this orchestration problem. However, they remain limited and struggle with the complexity that comes with coordinating distributed components. It is also often the case that, while these services claim to offer a visual method for orchestration, the workflows themselves are still defined in text, as with AWS Step Functions.
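As an illustration of what ‘defined in text’ means, here is a sketch of a two-step workflow in the Amazon States Language, the JSON format AWS Step Functions uses. The step names, account ID, and function ARNs are placeholders, not a real deployment:

```json
{
  "Comment": "A hypothetical two-step workflow: validate an order, then charge it.",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
      "Next": "ChargeCard"
    },
    "ChargeCard": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargeCard",
      "End": true
    }
  }
}
```

Even this trivial pipeline is authored as JSON; the visual diagram is generated from the text, not the other way around.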
Broadly, the features that truly enable serverless orchestration are: transparency, build capability, orchestration capability, and error handling.
An orchestration layer should not only let you orchestrate effectively between functions but also provide transparency into the overall process. As each function is invoked, its data should be tracked and its accuracy verified. This prevents malformed or inaccurate data from getting too far downstream in your application.
Features such as real-time and historical execution history allow you to determine if your application is performing as expected. The ability to examine the output of each function, and map that output to another function from inside the orchestration layer, is immensely powerful.
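To make these ideas concrete, here is a toy sketch of such an orchestration layer (illustrative only, not any particular product’s API): it invokes functions in sequence, records an execution history, validates each output, and maps that output to the next function’s input. The pipeline steps and functions below are invented for the example.

```python
# A toy orchestration layer (illustrative only, not a real product's API).
# It invokes functions in sequence, records each step in an execution
# history, and validates every output before passing it downstream.

class Orchestrator:
    def __init__(self, steps):
        # steps: list of (name, function, validator) tuples
        self.steps = steps
        self.history = []

    def run(self, payload):
        for name, fn, validate in self.steps:
            result = fn(payload)
            self.history.append({"step": name, "input": payload, "output": result})
            if not validate(result):
                raise ValueError(f"Step {name!r} produced malformed output: {result!r}")
            payload = result  # map this output to the next function's input
        return payload

# Hypothetical functions in a pricing pipeline:
double = lambda x: x * 2
add_tax = lambda x: round(x * 1.1, 2)

pipeline = Orchestrator([
    ("double", double, lambda r: isinstance(r, (int, float))),
    ("add_tax", add_tax, lambda r: r >= 0),
])
print(pipeline.run(10))  # → 22.0
```

The `history` list is the toy equivalent of the execution history described above: every invocation is recorded with its input and output, so you can inspect exactly where bad data entered the pipeline.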
There are a number of different approaches you can take to serverless computing, but the key concepts stay the same: functionality lives where it belongs, inside a function; orchestration binds those functions together, providing state in an otherwise stateless architecture; and scalability comes from the use of distributed components. However you approach the problem, these are the concepts to keep in mind when thinking about it architecturally.
Serverless computing is growing. Expect a wealth of new information and scenarios to emerge day after day as people discover the power of serverless, highly distributed, scalable systems.