Serverless, because we like to sleep at night

At Torii, we decided to go the no-ops path as much as possible, meaning we'll focus all our efforts on our product and not on operations. While we enjoy doing devops, that's not our main focus as a company.

We can break our application into three parts:

1. **Static websites.** Frontend websites, written in React and statically generated at build time.
2. **Background jobs.** Jobs that are scheduled or triggered by events such as file uploads, webhooks or any other asynchronous event.
3. **API server.** A REST API server interacting with our databases and serving all client requests.

Lessons Learned

#1. Static websites

Static websites are fast, easy to scale and simple to distribute. We use React to build our frontend, and the code is packaged as a simple HTML/JS/resources bundle ready for distribution. We use Netlify to host these static assets on a CDN and get fast loading times from anywhere in the world.

No Nginx/Apache servers to configure here 👍

#2. API Server on Serverless

The basic idea is that an API server is a function: the input is an HTTP request and the output is an HTTP response. It's perfect for FaaS (Function as a Service), where each HTTP request gets its own server instance handling it.

This setup also makes things simpler since there are fewer moving parts: no servers, no load balancers, no auto-scaling groups. All of these are abstracted away, and all we care about is one function. It leads to automatic scalability, high availability and dramatically reduced costs.

We take an entire Node.js app and package it as a single AWS Lambda function. An API Gateway routes all traffic to it, and the Node.js app sees it as a regular HTTP request.

We picked [apex/up](https://github.com/apex/up) for setting up the stack, updating it and deploying our functions. It's really as simple as writing `up` in your terminal.
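up is configured with an `up.json` file in the project root. As a minimal sketch (`name`, `regions` and `lambda.memory` are fields from up's documentation; the values here are hypothetical, so check the docs for your version):

```json
{
  "name": "api-server",
  "regions": ["us-east-1"],
  "lambda": {
    "memory": 512
  }
}
```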
It is highly configurable, so you can customize the deployment for your needs, but if you have no special requirements, the default is good to go.

Zero servers to provision, configure or apply security patches to 👏

#3. Packing for Serverless

Deploying a Lambda function comes with a 52MB size limitation on the function including all of its dependencies. If you've coded a decent Node.js project recently, you'll know it's easy to blow past this limit. Note: there's a way to deploy from S3 which allows you to bypass this limitation; we haven't tried that yet.

To mitigate this, we're including only the required dependencies and trimming their size by excluding unused files like READMEs, package history, tests, documentation and examples.

We published a package named [lambdapack](https://www.npmjs.com/package/lambdapack) that helps do this. It packs your code with webpack to provide the latest Node.js and JavaScript features, while keeping your node_modules as small as possible. lambdapack fully integrates with apex/up, so the build process is optimized and packed efficiently.

Read more about [lambdapack](https://www.npmjs.com/package/lambdapack) on GitHub.

#4. Deployments

AWS allows you to keep multiple versions of each Lambda and have aliases pointing to versions. Popular aliases include: test, staging and production. A new deployment means uploading a new version of the Lambda and pointing the production alias to it. Fortunately, up does this automatically with `up deploy production`. Rollbacks are just pointing the alias back to the required version. This works amazingly well, since each deployment creates a new version of the Lambda.

#5. Local testing/development

Since we are using a regular Node.js server, running locally just means running your server as usual.
However, this doesn't mimic the AWS infrastructure, with all its important differences: enforcing the same Node.js version, API Gateway timeouts, Lambda timeouts, communicating with other AWS resources and more. Unfortunately, the best way to test is on the AWS infrastructure itself.

#6. Background jobs

For background jobs such as file processing or syncing with 3rd-party APIs, we keep a set of dedicated Lambda functions that are not part of the API server. These jobs are scheduled to run by CloudWatch or as a response to events in our system.

Currently we use a "sibling" project to handle these background-job Lambdas, using the open source [apex/apex](https://github.com/apex/apex). These functions only run when needed, and there's no need to keep servers up to process these jobs.

Another win for the Serverless approach 🚀

#7. Logging

AWS services come with the built-in CloudWatch Logs service, which has awful UI, UX and DX. While the up CLI has a `log` feature to view the logs, there's still much more to ask for: alerts, aggregated logs, etc.

Our first solution was logging directly from the API server to a 3rd-party logging service (we use papertrail), but this kept the Lambda functions always up. A better approach is to stream the Lambda logs into a dedicated Lambda that is responsible for sending them to the 3rd-party logging service. We used an updated version of cloudwatch-to-papertrail. I also suggest streaming the API Gateway logs to get the full picture.

#8. Environment variables and secrets

Don't commit your secrets to source control. Now that we got that out of the way, we should store them encrypted somewhere. AWS has a solution exactly for this, called AWS Parameter Store. You add your parameters, choose whether to encrypt them or not, and then choose who can read these secrets. We allow our Lambda function to read these secrets as soon as it starts running. Since Lambda containers are re-used, this happens only on the first invocation of the Lambda (the first API call).
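A sketch of that first-invocation pattern (the SSM client is injected so the function is easy to test; in the Lambda you'd pass in `new AWS.SSM()` from the aws-sdk that ships in the runtime, and pagination is ignored for brevity):

```javascript
// Cache secrets in memory so Parameter Store is only hit on the cold start;
// warm invocations of the same container reuse the cache.
let cachedSecrets = null;

// `ssm` is an AWS.SSM client; `path` is the parameter hierarchy to read.
async function loadSecrets(ssm, path) {
  if (cachedSecrets) return cachedSecrets; // warm invocation: no SSM call
  const { Parameters } = await ssm
    .getParametersByPath({
      Path: path,
      Recursive: true,
      WithDecryption: true, // decrypt SecureString parameters
    })
    .promise();
  cachedSecrets = {};
  for (const p of Parameters) {
    // '/production/MYSQL_PASSWORD' -> 'MYSQL_PASSWORD'
    cachedSecrets[p.Name.split('/').pop()] = p.Value;
  }
  return cachedSecrets;
}

module.exports = { loadSecrets };
```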
To set this up, we add the parameters with a hierarchy of /{env}/env_variable, for example /production/MYSQL_PASSWORD. Now we can read all /production parameters and use them as environment variables, or just store them in memory.

#9. Performance and Cold starts

When a Lambda hasn't been invoked in a while, it will freeze, and the next invocation will incur the time of launching a new instance of the server. This can take some time depending on the complexity of the app, sometimes between 600ms–2000ms. There's currently no real solution for this other than (1) keeping the Lambda warm (periodically calling it using a monitoring service, or just another scheduled Lambda invocation using CloudWatch) and (2) making your Node.js app load faster. Hopefully, AWS will find a way to reduce the cold start time in the future.

If your API server has to comply with an SLA, Serverless at this point might not be a great fit 😞

#10. No parallel requests

When building Node.js servers, we're used to handling multiple requests with the help of the event loop and asynchronous functions. However, when run inside an AWS Lambda, each Lambda container handles only one request at a time. This means that parallelism is achieved by the API Gateway spawning multiple Lambdas, vs. one Node.js app serving multiple requests.

Test your app and use cases to see if this model fits.

Conclusion

Is Serverless a step forward in the operations space? With devops we wanted to understand how ops work, while with Serverless we benefit from delegating the responsibility for operations to someone else (in this case AWS), and we can call it no-ops. While we lose flexibility, we gain a lot of features, peace of mind and the ability to focus our energy on our code and product.

Serverless will surely take over more ground in the next few years, including more specific serverless offerings like serverless databases, serverless streaming services and others.
For us developers, this is almost the holy grail. Build it, ship it, it works. If you liked this article, please Clap 👏 and share it with your network.