How AWS Lambda made me a better cloud developer

Written by engind | Published 2016/06/26
Tech Story Tags: aws | cloud-computing | aws-lambda | ami


AWS Lambda is a pretty big deal both for devops and REST APIs. The promise is simple: instead of running a machine at all times just in case you need to compute something, start a machine (fast!) only when needed and release it once the computation is done. Health checks, monitoring, scaling, and in some cases even security become problems of the past, leaving you more resources to focus on your value proposition.
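To make the model concrete, here is a minimal sketch of what a Lambda function can look like in Python. The handler name and the event field are illustrative assumptions, not anything prescribed by Lambda itself:

```python
# A minimal AWS Lambda handler sketch (Python runtime).
# The code only runs when an event arrives; there is no server
# for you to provision, patch, or keep warm yourself.
def handler(event, context):
    # 'event' carries the request payload (e.g. from API Gateway),
    # 'context' carries runtime metadata such as the remaining time.
    name = event.get("name", "world")  # hypothetical input field
    return {"message": "Hello, {}!".format(name)}
```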

After spending some time with Lambda I’ve realized that its on-demand nature enforces good cloud development habits and saves developers from some common pitfalls.

Everything is volatile

People like hacks, especially when things are on fire. When your servers start to go down, “I’ll connect to the server and change/install/stop ____” will be one of the first responses you hear. You can fill the blank with a configuration change, a monitoring tool, or even a tiny local database. You should say NO. None of these changes will be carried over to new instances, so existing and future instances will drift apart from one another. Worst of all, if your setup includes an auto-scaling group, AWS may choose to terminate your oldest instance, the one most likely to have accumulated extra stuff, when it is time to scale down.

If you want something to run on your servers, it should be part of your AWS AMI, and all of your active instances should be started from the same AMI. Your instances should NEVER handle persistence, not even for small static stuff, unless they are specifically designed to do that. I also like to prohibit key-based SSH access to the Elastic Beanstalk environments I manage.
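If a change really is needed, bake it into a new image and roll the fleet instead of hand-editing boxes. A rough sketch with boto3; the instance ID and image name are placeholders I made up:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new AMI from an instance that was configured by your build
# process (not by hand), then launch every new instance from that image.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical builder instance
    Name="my-app-2016-06-26",           # hypothetical image name
)
print("New AMI:", response["ImageId"])
```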

With Lambda, you don’t have access to the rest of the system. You can’t save stuff locally, and since you don’t know where your code is running, you can’t reach the underlying machine. You have to work within your function’s scope and persist everything outside of Lambda.
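For example, instead of writing to the local filesystem, a handler can hand its output to a durable store such as S3. A minimal sketch; the bucket and key layout are made-up placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    result = {"status": "processed", "input": event}
    # Persist the result outside of Lambda; local disk (only /tmp is
    # writable) disappears along with the container.
    s3.put_object(
        Bucket="my-results-bucket",  # hypothetical bucket
        Key="results/{}.json".format(context.aws_request_id),
        Body=json.dumps(result),
    )
    return result
```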

Single responsibility

When you have a machine running at all times and not a great deal of traffic, it is tempting to deploy more stuff on the same machine. I am not talking about improving existing functionality, but about deploying other features and even unrelated systems alongside it. The result: monolithic applications and/or multiple applications on the same machine, a coupling and debugging hell.

With Lambda, your code is decoupled from everything else; it only exists at the time of execution.
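As an illustration, a deployment can have one tiny function per concern, each deployed, scaled, and monitored on its own. The function names and triggers below are purely hypothetical:

```python
# One small, single-purpose function per concern instead of one
# service that does everything.

def resize_image(event, context):
    # Triggered by an S3 upload; produces a thumbnail and nothing else.
    pass

def send_welcome_email(event, context):
    # Triggered by a new-user event; sends one email and nothing else.
    pass
```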

Optimization matters

Lambda is priced by the number of executions, reserved memory, and execution duration. You need to make sure your Lambda code runs in the shortest time possible and stays memory-light. With Lambda, pretty much no optimization is premature.
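A rough back-of-the-envelope sketch shows why duration and memory matter. The request and GB-second rates below are the publicly listed on-demand Lambda prices around the time of writing; they ignore the free tier and may differ by region:

```python
# Rough Lambda cost estimate: requests + compute (GB-seconds).
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, published rate circa 2016
PRICE_PER_GB_SECOND = 0.00001667    # USD, published rate circa 2016

def monthly_cost(invocations, memory_mb, avg_duration_ms):
    request_cost = invocations / 1000000.0 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (memory_mb / 1024.0) * (avg_duration_ms / 1000.0)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Halving duration or memory roughly halves the compute portion of the bill.
print(monthly_cost(invocations=10000000, memory_mb=256, avg_duration_ms=200))
```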

In the early days of your newly built backend, demand might not be high and you may skip optimizing the number of calls from clients to the backend. In addition to creating scaling problems, unoptimized mobile clients bring hidden battery and network costs to your users. To keep the number of Lambda invocations low, you might want to start with caching.
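One way to start is to let clients and any CDN in front of your API cache responses, so repeated requests never reach Lambda at all. A sketch, assuming an HTTP-style integration; the max-age value and payload are illustrative assumptions:

```python
import json

def handler(event, context):
    body = {"exchange_rates": {"USD_EUR": 0.90}}  # hypothetical payload
    return {
        "statusCode": 200,
        # Clients and intermediaries may reuse this response for 5 minutes
        # instead of invoking the function again.
        "headers": {"Cache-Control": "public, max-age=300"},
        "body": json.dumps(body),
    }
```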

Never assume anything

You may not be looking for global domination in the near future. Your servers might not be distributed and you may not be using multiple availability zones. Maybe you are focusing on a single geographic location, or network performance is not at the top of your small team’s list. But most of these will be your top concerns someday. Your code should be completely independent of the execution environment and deployable anywhere.
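One concrete habit is to pull region, table names, and endpoints from configuration instead of hardcoding them, so the same code can be deployed anywhere. A minimal sketch; the TABLE_NAME variable and table layout are assumptions of mine:

```python
import os
import boto3

# Nothing about the environment is baked into the code; the same
# function can run in any region or account.
TABLE_NAME = os.environ.get("TABLE_NAME", "users")   # hypothetical config
REGION = os.environ.get("AWS_REGION", "us-east-1")   # set by the runtime

dynamodb = boto3.resource("dynamodb", region_name=REGION)
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    table.put_item(Item={"id": event["id"], "payload": event.get("payload")})
    return {"ok": True}
```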

Don’t assume anything; with Lambda, you can’t.

