Cons of Serverless Architectures

by Gary Arora, October 22nd, 2018

A critical look at some of the potential drawbacks of serverless architecture that often get overlooked amidst the hype.

The excitement and fanfare around ‘serverless’ — one of the industry’s favorite buzzwords — continues to grow. Numerous articles, books, and conferences have spun up a hype around serverless talking mostly about the benefits and innovative use cases that serverless enables.

Source: Google Trends for Serverless 2016-2018

But what about the downsides of serverless? What are the current limitations that could require complex workarounds, sometimes outweighing the benefits? Amidst all the excitement and the lowered barriers to entry, many are quick to take the plunge without understanding the trade-offs that need careful consideration if you want to reap the full benefits.

Having built serverless solutions for over three years, I've attempted to capture some of the most common cons of serverless architectures:

1. It’s Expensive!

This probably goes against everything you heard about serverless, so let me qualify this brash claim before the pitchforks come out. Depending on the workload, serverless can quickly become expensive. Here’s an example:

Let’s assume you are running a simple serverless application with 1 Lambda function and 1 API Gateway that needs to sustain 100 API requests per second 24x7. This gives us:


  • API Gateway: $917/month [$3.50 per million API calls × 262 million API requests/month ≈ $917]
  • Lambda: $1,300/month [$0.00001667 per GB-second × (262 million requests × 0.3 seconds per execution × 1 GB memory − 400K free-tier GB-seconds) ≈ $1,308]
  • Total: $2,217/month

$2,217 is a LOT!

Consider what you can get by running the same application on servers in the cloud:


  • 3 highly available EC2 servers: $416/month [general-purpose m5.xlarge: 16.0 GiB RAM, 4 vCPUs @ $0.19 per hour × 730 hours in a month × 3 load-balanced instances for high availability]
  • Application Load Balancer: $39/month
  • Total: $455/month
  • Or $308/month if you use a reserved instance type

That’s ~80% cheaper than Serverless!
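
A quick back-of-the-envelope sketch in Python reproduces the comparison above and makes it easy to plug in your own workload. The prices and rates are the ones assumed in this example (2018 list prices), not a general pricing model.

```python
# Back-of-the-envelope comparison for the workload above:
# 100 requests/second, 0.3 s per execution, 1 GB of memory.
REQUESTS_PER_MONTH = 100 * 60 * 60 * 24 * 30.4            # ~262 million requests

# Serverless: API Gateway + Lambda (2018 list prices)
API_GW_PER_MILLION = 3.50                                  # $ per million API calls
LAMBDA_PER_GB_SECOND = 0.00001667                          # $ per GB-second
FREE_TIER_GB_SECONDS = 400_000

api_gateway = REQUESTS_PER_MONTH / 1_000_000 * API_GW_PER_MILLION
gb_seconds = REQUESTS_PER_MONTH * 0.3 * 1.0 - FREE_TIER_GB_SECONDS
serverless_total = api_gateway + gb_seconds * LAMBDA_PER_GB_SECOND
# Note: Lambda's separate per-request charge ($0.20 per million) is
# omitted here, matching the estimate above.

# Server-based: 3 x m5.xlarge behind an Application Load Balancer
EC2_HOURLY = 0.19                                          # $ per m5.xlarge hour
server_total = EC2_HOURLY * 730 * 3 + 39                   # instances + ALB

print(f"Serverless:   ${serverless_total:,.0f}/month")     # roughly $2,200
print(f"Server-based: ${server_total:,.0f}/month")         # roughly $455
print(f"Server-based is {1 - server_total / serverless_total:.0%} cheaper")  # ~80%
```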

Now, serverless architectures can abstract away a lot of the expensive overhead of operating and maintaining the underlying infrastructure, which is a huge part of the appeal. But once you are in the cloud, there are several PaaS offerings (e.g. AWS Elastic Beanstalk) that can manage a whole lot of that infrastructural overhead for you, including security patches, health checks, auto-scaling, monitoring, etc. This closes the gap quite a bit when comparing the capabilities of serverless against server-based managed offerings.

Cost comparison: serverless vs. server-based application. This example compares AWS Lambda with a highly available AWS Elastic Beanstalk application.

2. Unicorn Solutions That Only Work in One Cloud, a.k.a. Vendor Lock-In

Integration reduces portability

Serverless offerings have been evolving at an unprecedented rate with single-purpose services that can be glued together natively as building blocks to create a holistic solution. For example, AWS Lambda can be integrated with AWS Kinesis for data streaming triggers, AWS SNS for notifications, and AWS Step Functions for microservices choreography to create an end-to-end serverless solution. Though the basic FaaS capability is available across clouds, you lose portability as soon as you integrate with other native services.
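
As a concrete (hypothetical) illustration of how integration erodes portability, here is a minimal Python Lambda handler wired to a Kinesis trigger. The handler signature, the event structure, and the base64-encoded record payloads are all AWS-specific, so even this tiny function would not move to another cloud unchanged.

```python
import base64
import json

def handler(event, context):
    # The "Records" / "kinesis" / "data" event shape is specific to AWS Lambda's
    # Kinesis integration; Kinesis delivers each record's data base64-encoded.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        process(payload)

def process(payload):
    # Hypothetical placeholder for the actual business logic
    print("processing", payload)
```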

Proprietary services that are unique to specific cloud providers

Many of the serverless offerings are proprietary, with unique features that simply cannot be transported. For example, AWS DynamoDB and Azure CosmosDB are both serverless NoSQL databases, but their indexing structures, nesting, and limitations are so different that you're pretty much locked in.
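
To get a feel for how deep that lock-in goes, here is a hypothetical data-access snippet against DynamoDB using boto3. The partition/sort-key model, the KeyConditionExpression syntax, and the secondary-index concept have no direct CosmosDB equivalent, so this layer would need a rewrite rather than a port (the table and index names here are made up).

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table of orders, keyed by customer_id (partition) and order_date (sort)
table = boto3.resource("dynamodb").Table("orders")

# Query a hypothetical global secondary index; both the index concept and the
# key-condition syntax are DynamoDB-specific.
response = table.query(
    IndexName="status-order_date-index",
    KeyConditionExpression=Key("status").eq("SHIPPED")
    & Key("order_date").begins_with("2018-10"),
)
for item in response["Items"]:
    print(item["order_id"], item["total"])
```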

FaaS lacks consistency & flexibility of runtime across cloud providers

Even within the FaaS offerings across clouds, there is still no consistency in platform choices. If you are a Java shop using AWS Lambda, you cannot move your FaaS to Azure or Google without rewriting the entire application, since Java support there is still not production-ready. Node.js is the most common FaaS runtime across clouds, but runtime choices are still limited and often a couple of versions behind the latest. By contrast, a server-based application allows flexibility of language, OS, and runtime version.

AWS Lambda vs. Azure Functions vs. Google Cloud Functions, as of November 2018
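
Even for a runtime that all the major providers do support, the programming models differ. As a rough sketch (not a complete, deployable example), compare the Python handler shapes for AWS Lambda and Azure Functions: the entry-point signature, the request object, and the response type are all different.

```python
# AWS Lambda (Python): plain dicts in, plain dicts out
def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}

# Azure Functions (Python): typed HttpRequest/HttpResponse objects
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}", status_code=200)
```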

3. Limitations Spawning Multiple Workarounds

Once you get down to refactoring an on-premise application to run serverless, you'll likely discover multiple limitations. Some of these are good, as they lead to a better design, but they are limitations nonetheless, especially if that means more refactoring. The limitations alone could fill their own article, so I'll list a few of the most common ones to keep this brief:

  • Hard limits on execution time (5–15 minutes, depending on the provider); a chunking workaround is sketched after this list
  • No support for stateful applications
  • No local storage
  • Hard limit on invocation payload size (e.g. 128 KB for AWS Lambda)
  • Cold starts due to instantiating new containers during scaling — potentially leading to latency
  • Lack of local testing options
  • Tooling limitations for deployment, management, and development
  • Orchestrating deployments of large-scale serverless applications atomically
  • Concurrency & account-wide platform limits
  • Security limited to platform-specific, non-portable security features rather than operating-system-level controls
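
As one example of the workarounds these limits force, here is a hypothetical chunking pattern for the execution-time cap: the function watches the remaining time on the AWS Lambda context object and re-invokes itself asynchronously with a checkpoint before it gets cut off.

```python
import json
import boto3

lambda_client = boto3.client("lambda")
SAFETY_MARGIN_MS = 30_000  # stop well before the hard timeout

def handler(event, context):
    items = event["items"]                 # hypothetical list of work items
    cursor = event.get("cursor", 0)

    while cursor < len(items):
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            # Running out of time: re-invoke this function asynchronously
            # with a checkpoint so the next invocation can resume.
            lambda_client.invoke(
                FunctionName=context.function_name,
                InvocationType="Event",
                Payload=json.dumps({"items": items, "cursor": cursor}),
            )
            return {"status": "continued", "cursor": cursor}
        process(items[cursor])             # hypothetical unit of work
        cursor += 1

    return {"status": "done"}

def process(item):
    print("processing", item)
```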

4. Troubleshooting is Painful

As a serverless application grows, the complexity to troubleshoot explodes because of the way FaaS applications are designed to work.

Distributed Monitoring

Serverless allows decomposing an application into smaller modules, but this can lead to a new problem: distributed monitoring. With a bunch of serverless components chained together, the ability to trace a request/response end-to-end becomes critical, yet it tends to be very cumbersome with legacy monitoring tools.
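
Purpose-built tracing tools help, but they have to be wired in deliberately. A minimal sketch, assuming the AWS X-Ray SDK for Python, might look like this: patch_all() instruments supported libraries (such as boto3) so downstream calls show up as subsegments of the same trace.

```python
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-instrument supported libraries (boto3, requests, ...) for tracing

@xray_recorder.capture("enrich_order")   # record this step as its own subsegment
def enrich_order(order):
    # Hypothetical enrichment step; AWS calls made here are traced automatically
    return {**order, "enriched": True}

def handler(event, context):
    return enrich_order(event)
```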

Debugging

Distributed applications mean that you need to rely much more on log traces to reverse-engineer the root cause of an issue. The classic runtime debugger that allows introspection and line-by-line stepping is not possible for serverless applications.
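
In practice that means investing in structured, correlated logs up front. A minimal sketch, assuming a hypothetical correlation_id carried in the event, could look like this:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Emit structured JSON so log queries can stitch a request's path back together
    base = {
        "request_id": context.aws_request_id,           # unique to this invocation
        "correlation_id": event.get("correlation_id"),  # hypothetical end-to-end ID
    }
    logger.info(json.dumps({**base, "msg": "received"}))
    try:
        result = do_work(event)                         # hypothetical business logic
        logger.info(json.dumps({**base, "msg": "succeeded"}))
        return result
    except Exception:
        logger.exception(json.dumps({**base, "msg": "failed"}))
        raise

def do_work(event):
    return {"ok": True}
```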

Local & Remote Testing

Local testing requires replicating all the serverless limitations locally. There has been a lot of growth in this area, making it relatively easy to test locally. But the progress has mostly been at the component level (for example, an individual function), not at the level of the whole serverless application. In several cases, a parallel serverless stack needs to be spun up in a separate account to ensure that testing does not exceed account-wide platform limits.
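
Component-level testing really is the easy part: you can call the handler directly with a hand-built event, as in this hypothetical pytest sketch. What it does not exercise are the triggers, IAM permissions, service limits, and inter-service wiring, which is exactly where the application-level gaps show up.

```python
# test_handler.py -- hypothetical component-level test; run with `pytest`
from my_function import handler  # hypothetical module containing the Lambda handler

class FakeContext:
    # Only the attributes the handler actually touches
    aws_request_id = "test-request-id"
    function_name = "my-function"

def test_handler_returns_ok():
    event = {"correlation_id": "abc-123"}
    result = handler(event, FakeContext())
    # Triggers, IAM policies, and account-wide limits are NOT covered by this test
    assert result is not None
```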

5. Needs Significant Mindset Shift

An area that may be lagging the most is the mindset shift of ITOps teams that are still operating in a VM-provisioning model. I've seen a few organizations where creating a single AWS Lambda function would kick off an archaic approval process, complete with requests for irrelevant infrastructure details and a 1–2 week wait for approval. ITOps teams that haven't adapted to the new serverless world can be a significant bottleneck.

Starting a new project with a serverless architecture while using outdated processes and resources is doomed from the outset. Training options are available, but they require an investment of time, money, and open-minded participants who can unlearn old habits and learn new ways of working.

Summary

Knowing some of the potential drawbacks of serverless architectures can help you make informed decisions. As with any new technology, carefully evaluate your application's needs and weigh the pros and cons before committing to serverless offerings.

Don’t just blindly follow the serverless hype!

Having said that, I'm a firm believer that the future is largely going to be serverless, with mature toolsets and frameworks that address many of the limitations mentioned above. As the wider serverless community gains experience with these new technologies, the serverless ecosystem will only keep improving.


Endnotes: The cons related to cost comparison and vendor lock-in are inspired by a discussion thread on the AWS subreddit by u/daveinsurgent.