
Going Serverless with Amazon Web Services (AWS) — The Traditional Approach


Chathura Widanage

This is the first post in a two-part series, Going Serverless with Amazon Web Services (AWS):
1. The Traditional Approach 🚶
2. The Modern Approach 🚴

In this blog post, I am going to develop a very basic sample serverless application from scratch, using AWS serverless components. Before moving forward, let’s look at a few terms.

Cloud Computing

There is no Cloud, It’s just someone else’s computer!

This statement (a joke) is true to a certain extent. But cloud computing offers many advantages that you can’t expect from someone else’s computer, such as (source):

  1. Self-service provisioning: whenever you need resources, it is just a matter of a few mouse clicks
  2. Elasticity: scale your resources up or down ⇒ saves money and time
  3. Pay per use: you pay only for what you actually consume
  4. Workload resilience: you don’t have to worry about redundancy. Most of the time, service vendors take that burden for you!

In a nutshell, with cloud computing you don’t have to worry about infrastructure. You worry only about resource provisioning, your application logic, and your code.

Serverless Computing

Serverless computing can be considered a new variant, or a new execution model, of cloud computing where you get back-ends and processing units as a service (Backend as a Service, BaaS, and Function as a Service, FaaS).

Let’s take an example from the AWS world. With traditional cloud computing, when we wanted to do processing, we had to get an EC2 instance provisioned and keep that instance up and running 24x7. At the same time, we had to pay for that instance whether we used it or not. Apart from that, we had to apply security patches and update the OS ourselves to keep our instance safe from security threats. But with the introduction of Lambda, the new processing unit of the serverless family, things changed dramatically. With Lambda, we only have to worry about our code, and we can forget about load balancing, security patches, and even high availability.

So with serverless computing, we don’t even have to worry about resource provisioning; we only have to worry about our code and application logic.

Let’s go serverless with a sample use case

Backend for a Contact Us form

Static websites can be hosted on services like Google Drive, or even on GitHub, at no cost. But this advantage becomes useless when we want to add a simple yet essential dynamic component to the website, such as a Contact Us form. Either we have to run a server at the back-end to collect the form submissions, or we have to purchase a service that provides a back-end for the form. Either of these options costs us a fixed amount per month, regardless of the traffic our website gets, and regardless of the fraction of those visitors who actually want to contact us.

We need the following components in the back-end in order to support the above scenario.

  • A REST endpoint that accepts AJAX calls from the Contact Us page.
  • A processing unit to validate and accept incoming requests.
  • A database to persist responses.

In order to address the above requirements, I am going to choose the following AWS services to implement my serverless Contact Us form back-end: Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.

Due to the simplicity of the sample use case, we could even drop AWS Lambda and persist the REST payload accepted by API Gateway directly to DynamoDB. But for the sake of completeness, let’s assume that we have to perform complex validations on the request payload. (API Gateway can be configured to perform simple validations on the payload as well, but let’s assume my validation requirements are way beyond the capabilities of APIG 😸)

In this blog post, let me show you the traditional way of implementing this solution, which is also the slowest and toughest way.

🚶Traditional Approach— Using AWS Console

Step 1 — Creating the DynamoDB Table

You can start creating a table by clicking Services ⇒ DynamoDB

Creating the DynamoDB table

Then, AWS will take you to the DynamoDB dashboard where you will see an option to create a table.

Table name & Partition key are mandatory fields

Selecting Partition Key & Sort Key

In DynamoDB, when we create a new table, we can configure the read/write throughput that we expect from the table. In AWS DynamoDB world, we measure this throughput in read/write capacity units.

One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size.
One write capacity unit represents one write per second for items up to 1 KB in size.

Let’s assume one partition of DynamoDB is capable of providing a maximum of 3,000 read capacity units or 1,000 write capacity units. Then, if we request 5,000 read capacity units and 1,000 write capacity units per table, the AWS back-end is going to create DynamoDB partitions based on the following formula: partitions = ceil(read capacity units / 3,000 + write capacity units / 1,000), which here gives ceil(5,000 / 3,000 + 1,000 / 1,000) = 3 partitions. (This is just an assumption based on the DynamoDB spec; the actual partitions created are not visible to users.)
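That assumed formula can be sketched as a quick calculation:

```javascript
// Rough estimate of the number of partitions DynamoDB would create for a
// provisioned table, under the assumed per-partition limits of
// 3,000 read capacity units and 1,000 write capacity units.
function estimatePartitions(readCapacityUnits, writeCapacityUnits) {
  return Math.ceil(readCapacityUnits / 3000 + writeCapacityUnits / 1000);
}

// 5,000 RCU and 1,000 WCU → ceil(1.67 + 1) = 3 partitions
console.log(estimatePartitions(5000, 1000));
```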

This is similar to having 3 buckets that can hold data. Whenever we write an entry to the table, DynamoDB stores it in one of the above 3 partitions, based on the partition key we have selected. The partition key should be chosen carefully, so that our data gets evenly distributed across the partitions.

Within the partitions, entries are going to be ordered based on the Sort Key that we choose at the provisioning time.

So in my example use case, I am going to select email as my partition key, since it is unique for each person. Within the partitions, I am going to sort entries by date. (Having a sort key is, however, optional in DynamoDB.) I am going to keep the read/write capacity units for my table at their default values (5 & 5), so just one partition will be created on AWS servers to support my contact_us table.
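For reference, the same table settings expressed as CreateTable parameters for the AWS SDK would look roughly like this (a sketch; the table and attribute names match the example above):

```javascript
// CreateTable parameters equivalent to the console settings described above.
const createTableParams = {
  TableName: 'contact_us',
  KeySchema: [
    { AttributeName: 'email', KeyType: 'HASH' },  // partition key
    { AttributeName: 'date', KeyType: 'RANGE' }   // sort key
  ],
  AttributeDefinitions: [
    { AttributeName: 'email', AttributeType: 'S' },
    { AttributeName: 'date', AttributeType: 'S' }
  ],
  // Default read/write throughput (5 & 5) as used in this example
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

// Passed to the SDK like: new AWS.DynamoDB().createTable(createTableParams, cb)
```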

After setting all the necessary parameters, click on Create to start the table creation process.

Step 2 — Writing AWS Lambda Code

Let’s navigate to the Lambda console by clicking on Services ⇒ Lambda ⇒ Create Function

Creating a lambda Function

In order to create a Lambda function, we need to give it a unique name and specify a runtime environment. Since Lambda is going to access your other AWS resources on your behalf, you need to assign it a role with sufficient permissions to access the relevant AWS resources.

Name, Runtime & Role are mandatory to create a lambda

Best practice is to assign a role that grants only the necessary permissions to our Lambda function.

In my case, the function just needs permission to perform a PutItem operation on DynamoDB.

Creating a Role for Lambda Function

You can easily create a role (or use an existing one) with the following permissions by following the AWS documentation.

  • dynamodb:PutItem
  • logs:CreateLogGroup
  • logs:CreateLogStream
  • logs:PutLogEvents
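Assembled as an IAM policy document, those permissions could look like the following (the resource ARNs here are illustrative; scope them down to your actual table and log group):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:*:*:table/contact_us"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```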

Writing Lambda Code

Initially, I am going to write my function without any validations or extra logic. Since I am not going to use any third-party libraries at first, I can write the code directly in the AWS Lambda console.

The Lambda service has the AWS SDK for Node.js preinstalled.

Now, let’s assume I want to do some validations on the request before persisting them. (As stated initially, that is the whole purpose of using a lambda in this simple use case, instead of calling DynamoDB directly from API Gateway).

Since the built-in editor of the Lambda console doesn’t support adding third-party libraries, I now have to write my Lambda function locally on my computer and upload the code as a .zip file (bundled with the necessary dependencies) via the Lambda console.

Creating Lambda bundle locally

I am going to start by initializing a new Node.js project and adding the necessary dependencies via the node package manager (npm).

Even in this case, it is not necessary to add aws-sdk as a dependency of my Node project, since it is preinstalled in the Lambda runtime. I am using validate.js to validate the fields of the incoming request with minimum effort.

Now my code, including request validation, looks as follows.

Now my local directory structure looks as follows.

Directory structure of Lambda Node project
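A typical layout for such a bundle (the file and folder names here are assumptions) would be:

```
contact-us-lambda/
├── index.js          (Lambda handler)
├── package.json
└── node_modules/
    └── validate.js/  (bundled dependency)
```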

In order to deploy this as Lambda code, I have to make a zip file including everything in this folder and upload it via the Lambda console. To do that, the code entry type of my existing Lambda function should be switched from Edit code inline to Upload a .ZIP file.

Switching the Code Entry type of lambda function

At this point, we have a fully functioning back-end, and the only task left is to expose it through an API so that my Lambda function can be triggered externally from the Contact Us HTML form.

Let’s go ahead and create an API Gateway resource to trigger this lambda function.

Step 3 — Creating API Gateway trigger

You can easily create an API from the Lambda console itself without having to navigate to the API Gateway console.

However, in my case, there are a few tweaks that need to be made to the generated API, and for that I have to jump quickly to the API Gateway console by clicking on the newly created API.

Tweaking generated API

Here, I am turning off Lambda proxy integration (you may read about Lambda integration types here) and enabling CORS for my API endpoint.

After applying these changes, I have to redeploy my API to apply the changes.

Deploying API

Now you can test your API using any HTTP client of your choice and integrate this backend with your HTML contact us form.
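For instance, the Contact Us page could call the endpoint with a browser-side snippet like this (the invoke URL is a placeholder for your deployed stage):

```javascript
// Placeholder invoke URL; substitute the URL shown after deploying your stage.
const API_URL = 'https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/contact';

// POST the form payload as JSON and resolve with the parsed response body.
function submitContactForm(payload) {
  return fetch(API_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  }).then((res) => {
    if (!res.ok) throw new Error('Request failed: ' + res.status);
    return res.json();
  });
}

// Usage: submitContactForm({ email, name, message }).then(...).catch(...)
```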

This is the traditional approach to developing a serverless application, and I hope you have realized that, with serverless computing, you can forget about resource provisioning and just focus on your code and application logic.

In the next blog post, let’s see how we can develop a serverless application with the modern approach, where we forget about both resource provisioning and code (yes! 😇) and just focus on the application logic.

Call To Action

  • Clap. Appreciate and let others find this article.
  • Comment. Share your views on this article.
  • Follow me. Chathura Widanage to receive updates on articles like this.
  • Keep in touch. LinkedIn, Twitter

