
Serverless Benefits And Challenges: 2020 Edition

by Taavi Rehemägi, July 22nd, 2020

Too Long; Didn't Read

Most teams know the benefits of going serverless: pay-per-use pricing, less operational overhead, and instant scalability. The challenges get far less attention: security risks from misconfiguration and premature deployment, reduced observability, reliance on vendors, and the complexity of distributed computing. This article walks through each of these concerns and offers practical advice to minimise their impact.

While we know the many benefits of going serverless - reduced costs via pay-per-use pricing models, less operational burden/overhead, instant scalability, increased automation - the challenges are often not addressed as comprehensively. The understandable concerns over migrating can stop any architectural decisions and actions being made for fear of getting it wrong and not having the right resources. This article discusses the common concerns around going serverless and our advice to minimise their impact.

If you’d like to learn more about the challenges of going serverless and other concepts in more depth, make sure to check out our Knowledge Base. We also wrote about the threats and opportunities of going serverless a while back, but as serverless is evolving at the speed of light, we thought we’d put together a little refresher.

Security Risks Caused By Misconfiguration and Premature Deployment


Misconfiguration, and the premature deployment that often follows it, is a very real issue in technology. Even though serverless is a managed service and there are usually fewer configuration concerns to take into account, you are still in charge of making your application secure, just as in a traditional server-based infrastructure.

As teams start to migrate, using new cloud applications without full insight into the deployment until it’s too late, their infrastructure is at risk of exposure to data leaks, Distributed Denial of Service (DDoS) attacks and Man-in-the-Middle (MiTM) attacks to name a few.

There have been plenty of stories over the years of well-known organizations suffering data breaches, leaks, and successful hacks into their infrastructure, leading their customer base to question their reputation and security, not to mention the huge financial repercussions. Serverless infrastructures, on the other hand, have so far proved comparatively resilient to this kind of breach.


Learning any new language or skill requires mistakes to be made, but the key is to stop those mistakes from having any real impact. There are plenty of resources and platforms available that check that your infrastructure follows security best practices.

A simple method for checking configurations is to deploy small and often into a test environment, letting each change run there for some time while using one of these platforms to cross-check it. Once everything proves successful and safe, you can deploy to production with confidence.
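To make the idea concrete, here is a minimal sketch of the kind of automated configuration check such platforms run before a change is promoted out of the test environment. The rule thresholds and the shape of the `config` dict are our own illustrative assumptions, not any specific provider's API.

```python
# A hypothetical pre-deployment check for a single serverless function's
# configuration. Real platforms apply many more rules; these three are
# common best-practice examples.

def check_function_config(config: dict) -> list:
    """Return a list of warnings for a function's configuration."""
    warnings = []
    if config.get("timeout_seconds", 0) > 300:
        warnings.append("timeout is unusually long; runaway invocations get costly")
    if "*" in config.get("iam_actions", []):
        warnings.append("wildcard IAM action violates least privilege")
    if not config.get("env_vars_encrypted", False):
        warnings.append("environment variables are not encrypted at rest")
    return warnings

# A deliberately sloppy test-environment config trips all three rules.
warnings = check_function_config({
    "timeout_seconds": 900,
    "iam_actions": ["dynamodb:Query", "*"],
    "env_vars_encrypted": False,
})
```

Running checks like this on every small deployment means a misconfiguration is caught in the test environment, long before it can expose production to the attacks described above.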

Reduced Observability


As we already know, insight is key and the main driver for architectural changes and improvements. A common stumbling block for anyone new to serverless infrastructure is the lack of visibility, or rather the seemingly reduced visibility in comparison to what they were used to.

Serverless, by design, encourages event-based architecture and is often stateless so having access to logs and application traces is the only way to understand any gaps in your infrastructure.
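Since logs are that primary window into a stateless function, it pays to make them machine-parsable. Below is a minimal sketch of structured logging in a Lambda-style handler; the handler follows AWS Lambda's `(event, context)` convention, but the JSON field names are our own choice, not a standard.

```python
import json
import time
import uuid

# Emitting one JSON object per log line lets CloudWatch Logs Insights or a
# third-party monitoring platform filter by request_id and trace a single
# invocation across an event-based, stateless architecture.

def handler(event, context=None):
    request_id = event.get("request_id", str(uuid.uuid4()))
    start = time.monotonic()

    # Placeholder business logic: count the records in the incoming event.
    result = {"status": "ok", "items": len(event.get("records", []))}

    log_line = json.dumps({
        "level": "INFO",
        "request_id": request_id,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
        "result": result["status"],
    })
    print(log_line)  # stdout is captured by the platform's log service
    return result
```

With every function logging the same fields, a query like "all invocations over 500 ms for request X" becomes trivial, which is exactly the gap-finding the paragraph above describes.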


All public cloud platforms offer services to increase the visibility and observability of your infrastructure; however, specialist monitoring platforms such as Dashbird can give further insight. Such services make observability easier by providing intuitive dashboards that drill down into detail should you need it, offering third-party integrations for automated alerts, and staying seamlessly up to date with any infrastructure changes.

These features offer full and comprehensive observability to a level that would be difficult to have in a default cloud-provider monitoring service such as AWS CloudWatch.

Reliance on Vendors


There is often a fear of losing a certain amount of control when it comes to serverless, as management and application specifics are determined by the vendor. The very conveniences of the cloud, such as hardware choices and upgrades, runtimes, and resource parameters, can also be read as over-reliance and inflexibility.

In addition, once infrastructure has been deployed and is fully functioning, concerns arise around vendor lock-in and the limitations users would face should they want to migrate later down the line.


As developers working within agile organizations, architectural adaptability is crucial in order to meet the needs of the business. While hardware choices are no longer down to the business, public cloud platforms and ways to work have come a long way to enable greater infrastructure autonomy.

A good example here is to “program to an interface, not to an implementation”. Consider an application using AWS Lambda and AWS DynamoDB (DDB): there can be hundreds of Lambda functions interacting with just a few DDB tables. If every Lambda function issues queries in DDB’s native syntax (programming to an implementation), any database move requires an arduous change to each individual function. A useful workaround is instead to create an interface that translates general requests from the Lambda functions into DDB queries (programming to an interface).

This change in programming will allow developers to simply write a new interface that still understands the requests and is capable of translating to the new database query language, when they need to move out of AWS DDB. The interface can be deployed as a Lambda layer, for an even greater decoupling level.
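The pattern can be sketched in a few lines. This is a minimal illustration, not an AWS SDK: the class and method names are hypothetical, and an in-memory store stands in for the database so the sketch is self-contained. A real `DynamoDBUserStore` would translate the same two calls into boto3 `get_item`/`put_item` requests.

```python
from abc import ABC, abstractmethod
from typing import Optional

# Programming to an interface: handlers depend on this abstract store,
# never on DynamoDB's query syntax.

class UserStore(ABC):
    @abstractmethod
    def get_user(self, user_id: str) -> Optional[dict]: ...

    @abstractmethod
    def put_user(self, user: dict) -> None: ...

class InMemoryUserStore(UserStore):
    """Stand-in implementation; a DynamoDB-backed subclass would wrap
    boto3 calls behind the same two methods."""
    def __init__(self):
        self._items = {}

    def get_user(self, user_id):
        return self._items.get(user_id)

    def put_user(self, user):
        self._items[user["id"]] = user

# Handler code only ever sees the interface, so moving off DynamoDB means
# writing one new subclass rather than touching hundreds of functions.
def handler(event, store: UserStore):
    store.put_user({"id": event["id"], "name": event["name"]})
    return store.get_user(event["id"])
```

Because the interface is the only contract the handlers know about, it is also the natural thing to package as a Lambda layer for the extra level of decoupling mentioned above.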

Distributed Computing


With serverless comes the design for distributed computing, where components are shared among multiple computers for greater efficiency and performance. The challenge is striking a balance: functions granular enough for high performance, but not so numerous that the system becomes unmanageable in the long term. Another consideration is to ensure the functions aren’t so high-level or broad that their very benefit is eliminated and you simply have multiple mini monoliths to contend with.


When looking at examples of large enterprises taking advantage of serverless and the distributed computing model, after the “I want that too” thought comes the “but how?!”. It’s important to remember that every one of those organizations started small and scaled. This sounds like very simple advice, but it can easily get lost in the noise when starting out or migrating entire builds into the cloud.

The second thing to keep in mind when considering how each function will communicate with another is the actor model. Limiting actors to a small set of behaviours, such as creating other actors, sending messages, or deciding what to do with the next message, helps avoid overwhelming complexity and encourages a communicative environment.
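Those limited behaviours can be sketched in a few lines. This is a minimal, single-threaded illustration of the actor model under our own naming assumptions; real actor systems run mailboxes concurrently, but the essential constraint is the same: an actor only ever reacts to one message at a time.

```python
from collections import deque

# Each actor owns a private mailbox and, per message, either handles it
# locally or forwards it to another actor -- nothing else.

class Actor:
    def __init__(self, name):
        self.name = name
        self.mailbox = deque()
        self.log = []  # messages this actor handled itself

    def send(self, message):
        self.mailbox.append(message)

    def process_one(self, system):
        """Decide what to do with the next message: forward or handle."""
        if not self.mailbox:
            return
        message = self.mailbox.popleft()
        if message.get("forward_to"):
            system[message["forward_to"]].send({"body": message["body"]})
        else:
            self.log.append(message["body"])

# Two functions communicating only through messages.
system = {"orders": Actor("orders"), "billing": Actor("billing")}
system["orders"].send({"body": "invoice #1", "forward_to": "billing"})
system["orders"].process_one(system)
system["billing"].process_one(system)
```

Because no actor reaches into another's state, adding or replacing a function means wiring up a new mailbox, not untangling shared data, which is what keeps a growing distributed system manageable.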

Round Up

To say going serverless is without its challenges and causes for confusion would be untruthful, as there are plenty of unknowns in this now very accessible, fast-moving technology. With so many large organizations using it today that have the budget for vast resources and teams, yet are still subject to security breaches and architectural failures, the new serverless world can seem daunting and not worth the risk.

However, in equal measure to the many questions and concerns are their solutions. The most prominent advice we can give from our own experience is to start and continue small for configurations and deployment, make use of monitoring platforms such as Dashbird to expand visibility, increase insights and continuously encourage best practice, and keep simplicity throughout to avoid overly complex systems.

It won’t be perfect straight away, but Rome wasn’t built in a day either! At its core, the benefits of serverless are so vast that the operational and financial rewards can significantly outweigh the older, clunkier alternatives, making the switch highly valuable and well worth the effort.
