My previous article on “How Serverless Computing will Change the World in 2018” has seen an incredible amount of traffic and provoked a massive number of comments across a number of major platforms. I was shocked to see it was one of the most popular articles on Medium on the 1st of January! You can check the post out here:
Whilst it has been very well received by the community, it did highlight a number of pain points that people are still facing with popular services such as AWS Lambda, Azure Functions and Google Cloud Functions, which apparently only launched in November of last year! (Thanks Stephen Taylor for pointing that out!)
In this article I wanted to highlight some of the key challenges that Serverless computing in general still has to overcome in order for it to truly make its mark on the software development world.
Serverless offerings don’t typically keep copies of your functions running on standby, which means that when someone hits a function it will be a cold hit: your code needs to be started up before it can run, as opposed to a warm hit, where your code is already running when the request arrives.
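The distinction is easy to see in a Python handler: module-level state survives between warm hits on the same container, so initialisation cost is only paid on the cold hit. A minimal illustrative sketch (the handler and counter are my own, not a real service):

```python
# Illustrative sketch of cold vs. warm hits in a Python Lambda-style
# handler. Module-level code runs once, when a fresh container cold
# starts; the handler function runs on every invocation after that.
INVOCATION_COUNT = 0  # reset only when a new container spins up

def handler(event, context=None):
    """Report whether this invocation was the container's cold hit."""
    global INVOCATION_COUNT
    cold_hit = INVOCATION_COUNT == 0  # first call in this container
    INVOCATION_COUNT += 1
    return {"cold_hit": cold_hit, "invocation": INVOCATION_COUNT}
```

Call it twice in the same process and the second call reports a warm hit; in production, the expensive work (imports, config loading, database connections) sits at module level, which is exactly the cost a cold hit pays and a warm hit skips.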
Serverless providers may start offering paid services that allow you to keep a select number of your functions warmed up, helping to ensure that the vast majority of requests made to your services are met with a lower-latency response.
You may also find yourself creating timed lambdas that call your other lambdas just to keep them warm. The issue with this is that your logs in the likes of CloudWatch will fill up with thousands of dummy requests, and you’ll have to somehow filter these out, which adds complexity.
Not only does this pad out your logging systems, it also impacts costs: if the average container lasts 15 minutes before being killed, you’ll find yourself making 24 × 4 = 96 API calls per day just to keep one container alive for a single lambda function. This quickly scales up as you add more and more services, and you may find costs climbing sharply once you burst out of that free tier.
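If you do go down the timed-ping route, one common mitigation is to tag the warm-up invocations so the handler can short-circuit them, keeping both execution time and business-log noise down. A hedged sketch, where the `keep_warm` event key is my own assumption rather than any official convention:

```python
import json

def handler(event, context=None):
    # Scheduled warm-up ping: bail out immediately so the dummy
    # request adds almost no execution time and no business logs.
    if isinstance(event, dict) and event.get("keep_warm"):
        return {"statusCode": 200, "body": "warm"}

    # ... real request handling would go here ...
    return {"statusCode": 200, "body": json.dumps({"message": "hello"})}
```

The scheduled trigger then sends `{"keep_warm": true}` as its payload, and your log filters only need to match that one marker instead of thousands of lookalike requests.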
Another potential solution to this issue is a load-prediction system, powered by some fancy AI, that analyzes your traffic and anticipates when your service is about to come under heavy load.
This could work for a wide variety of different services and could help address the cold-hit issue if it’s able to accurately predict these spikes. At the very least it could help minimize the number of cold hits you ultimately end up receiving.
The issue with this, however, is that it won’t come cheap. You will either end up paying extra for idle containers warmed up by an over-optimistic algorithm, or you will still receive “lump-latency” should your algorithm prove too conservative. (Thanks Jacques for this point!)
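To make the trade-off concrete, here’s a deliberately naive sketch of such a predictor. A real system would use far richer signals; the window size and spike factor below are made-up parameters, and they are exactly the knobs that decide whether you over-pay (too eager) or still see lump-latency (too conservative):

```python
from collections import deque

class LoadPredictor:
    """Naive spike detector over a sliding window of request rates."""

    def __init__(self, window=10, spike_factor=2.0):
        self.history = deque(maxlen=window)  # recent requests-per-minute samples
        self.spike_factor = spike_factor     # how far above average counts as a spike

    def observe(self, requests_per_minute):
        """Record one sample of observed load."""
        self.history.append(requests_per_minute)

    def expects_spike(self, latest):
        """Flag a spike when the latest sample exceeds the recent average."""
        if not self.history:
            return False
        avg = sum(self.history) / len(self.history)
        return latest > avg * self.spike_factor
```

A pre-warmer would poll this and spin up containers when `expects_spike` fires; lower the spike factor and you warm more idle containers, raise it and more genuine spikes arrive cold.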
As the complexity of your serverless architecture grows, so too does the task of managing your countless endpoints and various environments. This is also a very real issue facing microservice-based architectures, and it’s a big reason for the adoption of tools like Kubernetes, which orchestrate the management of everything via code.
Projects like https://github.com/apex/apex are starting to take off thanks to the way they help you build, deploy and manage AWS Lambda functions. These will continue to develop in 2018 and continue to make the lives of developers working with Lambdas simpler.
However, as time goes by, the providers themselves need to make large steps forward to improve this process natively if adoption is to skyrocket.
One of the biggest complaints I have seen so far is that testing your lambda functions locally is incredibly difficult. If you look at AWS’ official documentation, you’ll see they recommend testing your functions by invoking them manually.
As we move to more efficient development streams, the inability to automatically test something you are building is painful. If you have to maintain thousands of different Lambda functions, this becomes increasingly complex, and there needs to be a solution that allows you to test them automatically with minimal manual input.
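One partial workaround available today is to keep your handler a plain function and unit-test its logic locally with a hand-built event, accepting that this doesn’t emulate the real platform. The trivial handler below is a stand-in for your own:

```python
import json

def handler(event, context=None):
    """A trivial stand-in Lambda handler, used to show local testing."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}

def test_greets_by_name():
    # No AWS involved: we call the handler directly with a fake event.
    resp = handler({"name": "Elliot"})
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["greeting"] == "hello Elliot"
```

Running pytest against a file like this exercises the handler’s logic without deploying anything, though it tells you nothing about IAM permissions, timeouts or the actual runtime environment, which is exactly the gap described below.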
The main issue when it comes to implementing a local testing platform is that it’s near impossible to emulate the underlying platforms these functions run atop.
I fear the only way this will improve is if the Serverless providers implement a decent native solution that allows developers to very quickly iterate through changes to their codebase.
The more time you spend on admin tasks such as pushing and manually triggering your lambda functions, the less time you spend actually delivering value to your company.
When it comes to imagining the perfect solution, I like to picture something like the way we test our Angular projects: you run the angular-cli command ng test and it watches your Angular project for changes, re-running the unit tests any time it detects one.
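No serverless provider offers that loop natively yet, but the watch-and-rerun mechanic itself is simple. A standard-library sketch, where the directory names and the pytest command are assumptions about your project layout:

```python
import os
import subprocess
import time

def snapshot(dirs):
    """Map every .py file under the given directories to its mtime."""
    mtimes = {}
    for d in dirs:
        for root, _, files in os.walk(d):
            for name in files:
                if name.endswith(".py"):
                    path = os.path.join(root, name)
                    mtimes[path] = os.path.getmtime(path)
    return mtimes

def watch(dirs, interval=1.0):
    """Re-run the test suite whenever a watched file changes."""
    last = snapshot(dirs)
    while True:
        time.sleep(interval)
        current = snapshot(dirs)
        if current != last:  # a file was added, removed or edited
            last = current
            subprocess.run(["python", "-m", "pytest", "-q"])
```

Calling watch(["src", "tests"]) loops forever, shelling out to pytest on every change — a crude cousin of ng test, and still no substitute for the provider actually running your function the way production would.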
This was a point often brought up in the comments, and I feel it’s slightly unfair, as serverless providers such as AWS are starting to release tools like CodePipeline and CodeDeploy. These tools can be used in conjunction with your standard git repos and can deploy any changes into dev/test/production every time a commit is pushed up.
This represents a first step towards a smoother CI/CD experience and there is still a fair bit of work to be done to make this a far better experience for developers and devops pros alike.
Hopefully you found this article insightful! If you have any comments feel free to leave them in the comments section below or tweet me them: Elliot Forbes. I’m also on LinkedIn should you wish to connect!
In 2018 I’m also going to be pushing a hell of a lot of new video content to my YouTube channel: https://www.youtube.com/tutorialedge. Support me by subscribing and checking out my videos!