My previous article on “How [Serverless](https://hackernoon.com/tagged/serverless) Computing will Change the World in 2018” has seen an incredible amount of traffic and provoked a huge number of comments across several major platforms. I was shocked to see it was one of the most popular articles on Medium on the [1st of January](https://medium.com/browse/top/january-01-2018)! You can check the post out here:

[**How Serverless Computing will Change the World in 2018**: _Serverless computing is a fairly new concept that has somewhat exploded in terms of popularity. This is in-part due to…_ (hackernoon.com)](https://hackernoon.com/how-serverless-computing-will-change-the-world-in-2018-7818fc06b447)

Whilst it has been very well received by the community, it did highlight a number of pain points that people are still facing with popular services such as AWS Lambda, Azure Functions and [Google Cloud Functions](https://cloud.google.com/functions/), which was apparently only released in November of last year! (Thanks [Stephen Taylor](https://medium.com/@stephen_taylor) for pointing that out!)

In this article I want to highlight some of the key challenges that Serverless computing still has to overcome before it can truly make its mark on the software development world.

#### Latency Issues — Cold vs Warm

Serverless offerings don’t typically keep copies of your functions running on standby, which means that the first request to hit a function is a cold hit: your code has to be started up before it can run, as opposed to a warm hit, where your code is already running when the request arrives.

Serverless providers may start offering paid services that allow you to keep a select number of your functions warmed up.
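Until providers offer something like that natively, the usual DIY approach is to fire a scheduled event at your function every few minutes. Below is a minimal sketch of a handler that recognises these pings and short-circuits them; the `keep_warm` marker in the event payload is an illustrative convention of my own, not an official AWS one:

```python
import json

def handler(event, context):
    # A scheduled CloudWatch Events rule can invoke this function every
    # ~15 minutes with {"keep_warm": true} so the container never goes cold.
    if event.get("keep_warm"):
        # Do no real work; the distinctive log line lets you filter these
        # dummy invocations out of your CloudWatch logs later.
        print("KEEP_WARM ping received, skipping")
        return {"statusCode": 204, "body": ""}

    # Normal request path.
    return {"statusCode": 200, "body": json.dumps({"message": "did real work"})}
```

One ping every 15 minutes works out at 24 \* 4 = 96 invocations per day for a single function, so the dummy traffic adds up quickly.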
This would help ensure that the vast majority of requests made to your services are met with a lower-latency response.

You may also find yourself creating timed lambdas that periodically call your other lambdas in order to keep them warm. The issue with this, however, is that your logs in the likes of CloudWatch will fill up with thousands of dummy requests, and having to somehow filter these out adds complexity.

Not only does this pad out your logging systems, it also impacts costs: if the average container lasts 15 minutes before being killed, you’ll find yourself making **(24 \* 4) = 96** API calls per day just to keep **one** container alive for **one** lambda function. This quickly scales up as you add more and more services, and you may find costs climbing rapidly once you burst out of the free tier.

#### Load Prediction

Another potential solution to this issue is some fancy AI: a load-prediction system that analyzes traffic patterns and works out when your service is about to come under heavy load.

This could work for a wide variety of different services and could help address the cold-hit issue if it is able to accurately predict these spikes. At the very least it could help minimize the number of cold hits you do ultimately end up receiving.

The issue with this, however, is that it won’t come cheap. You will either end up paying extra for idle containers warmed up by an over-optimistic algorithm, or you will still receive “lump-latency” should your algorithm prove too conservative. (Thanks Jacques for this point!)

#### Estate Management

As the complexity of your Serverless architecture grows, so too does the task of managing your countless endpoints and environments.
This also happens to be a very real issue facing microservice-based architectures, and it’s the reason for the adoption of tools like Kubernetes, which let you orchestrate the management of everything via code.

Projects like [apex](https://github.com/apex/apex) are starting to take off due to the way they help you build, deploy and manage [AWS](https://hackernoon.com/tagged/aws) Lambda functions. These will continue to develop in 2018 and keep making life simpler for developers working with Lambdas.

However, as time goes by, the providers themselves need to make large steps forward in improving this process natively in order for adoption to skyrocket.

#### Local Testability

One of the biggest complaints I have seen so far is that testing your lambda functions locally is incredibly difficult. If you look at AWS’ [official documentation](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-upload-deployment-pkg.html) you’ll see that they recommend you test things by invoking the function manually.

As we move to more efficient development workflows, the inability to automatically test something you are building is painful.
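In the meantime, one pragmatic option is to invoke the handler function directly from an ordinary unit test, with no AWS emulation involved. A minimal pytest-style sketch, where the handler is a hypothetical stand-in for your real function:

```python
import json

# Hypothetical handler: a stand-in for whatever your real Lambda does.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello {name}"})}

# A plain pytest-style test that calls the handler directly. The event is
# just a dict and the context goes unused here, so nothing AWS-specific is
# needed to run this on your own machine.
def test_greets_by_name():
    resp = handler({"name": "Elliot"}, context=None)
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["greeting"] == "hello Elliot"
```

Run it with `pytest`, optionally under a file watcher such as `pytest-watch`, to get at least part of the fast feedback loop that native tooling still lacks. This only exercises your own logic, of course, not the platform underneath it.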
If you have to maintain thousands of different Lambda functions, this becomes increasingly complex, and there needs to be a solution that allows you to test them automatically with minimal manual input.

The main issue when it comes to implementing a local testing platform is that it’s near impossible to emulate the underlying platform these functions will be running atop of.

I fear the only way this will improve is if the Serverless providers implement a decent native solution that allows developers to iterate through changes to their codebase very quickly.

> The more time you spend on admin tasks such as pushing and manually triggering your lambda functions, the less time you spend actually delivering value to your company.

When it comes to imagining the perfect solution, I picture something like the way we test our Angular projects: you run the angular-cli command `ng test` and it watches your Angular project for changes, re-running your unit tests every time it detects one.

#### CI/CD Pipeline To Production

This was a point brought up in a number of comments, and I feel it’s slightly unfair, as Serverless providers such as AWS are starting to release tools like CodePipeline and CodeDeploy. These can be used in conjunction with your standard git repos and can subsequently deploy any changes into dev/test/production every time a commit is pushed up.

![](https://hackernoon.com/hn-images/1*E_Cw-b_CjDysts2Q99g5KQ.png)

This represents a first step towards a smoother CI/CD experience, but there is still a fair bit of work to be done to make it a far better experience for developers and devops pros alike.

#### Conclusion

Hopefully you found this article insightful! If you have any comments, feel free to leave them in the comments section below or tweet them to me: [Elliot Forbes](https://medium.com/@elliot_f).
I’m also on [LinkedIn](https://www.linkedin.com/in/elliotforbes/) should you wish to connect!

> In 2018 I’m also going to be pushing a hell of a lot of new video content to my YouTube channel: [https://www.youtube.com/tutorialedge](https://www.youtube.com/tutorialedge). Support me by subscribing and checking out my videos!