AWS Application Load Balancer + WAF - Just Want to See It Work

by Jack Mocha, August 7th, 2023

When learning a new technology, sometimes I just want to see it work. It gives me a baseline to extend my ideas, to see what is possible, and to imagine what it can become.


This series aims at minimizing the possibility of having a missing link and encourages you to build your next innovative solution based on what you learned here.


Today, we are building a virtual coffee shop that serves cappuccino. Through this project, we are exploring the benefit of using AWS Application Load Balancer and WAF.

Why Use a Load Balancer?

Imagine owning a coffee shop with a fantastic cappuccino that attracts customers in droves. Initially, your trusty coffee machine effortlessly manages the influx of orders, serving a cappuccino per minute.


However, success comes knocking, and soon a single customer per minute evolves into a queue of two. A dilemma arises: the second customer is left waiting, and impatience could cost you their loyalty. But fear not, for there's a solution: the load balancer.


The Coffee Machine Analogy:

Picture your coffee machine as your API backend. As demand surges, the analogy becomes clear. Vertical scaling, akin to replacing your current machine with a pricier one capable of handling two cappuccinos per minute, has its limits.


Costs skyrocket as performance gains taper off, creating an unsustainable trajectory.


Enter Horizontal Scaling:

A more sensible approach is horizontal scaling. Just as you'd introduce more coffee machines to meet growing demand, horizontal scaling involves adding more servers to distribute the load.


This strategy accommodates increasing traffic gracefully, keeping performance steady without exponential cost escalation.


The Load Balancer's Role:

Here's where the load balancer steps in: it ensures an even distribution of incoming requests across multiple servers. Think of it as your coffee shop's efficient manager, guiding each customer to the next available machine.


Customizable Traffic Distribution:

Tailoring your load balancer's configuration to suit your needs is key. Whether you choose round-robin distribution, favoring fairness, or opt for more nuanced algorithms based on server health, the load balancer's flexibility adapts to your traffic patterns.
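As a sketch of the simplest strategy, round-robin just cycles through the list of backends; here is a minimal bash illustration with hypothetical backend addresses:

```shell
#!/usr/bin/env bash
# Minimal round-robin sketch (illustration only): each incoming request
# goes to the next backend in the list, wrapping around at the end.
# The backend addresses are hypothetical placeholders.
backends=("10.0.1.10" "10.0.2.10")
i=0
for request in order-1 order-2 order-3 order-4; do
  target=${backends[$((i % ${#backends[@]}))]}
  echo "$request -> $target"
  i=$((i + 1))
done
```

With two backends, the four requests alternate between them, which is exactly the behavior we will observe later with the real load balancer.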


Enhanced Performance and Resilience:

Load balancers don't just prevent bottlenecks. They also bolster fault tolerance. If a server falters, the load balancer redirects traffic, averting disruptions. This ensures uninterrupted service, akin to customers always getting their cappuccinos.

Why Use a Web Application Firewall (WAF)?

Owning a coffee shop is an honor, and naturally, safeguarding your valuable resources becomes paramount. This is where AWS WAF comes into play, offering a seamless method to place a protective firewall in front of your load balancer.


The nuances of WAF can involve a steep learning curve.


However, with the array of managed rule groups offered by AWS, configuring a WAF can take mere minutes.

What Do We Want to Achieve?

  • Set up 2 EC2 instances, each running a Coffee-Api Node.js Express application.


  • Set up an Application Load Balancer.


  • Set up and attach a WAF to the load balancer.


  • Observe how the traffic is distributed over both EC2 instances.


  • Here is the high-level diagram of the infrastructure.


Setting Up the Environment

Assuming you are familiar with AWS, follow the Cornell VPC guide to set up the infrastructure below.

  • Create a VPC
  • Set up 2 public subnets
  • Configure the route table to connect to an Internet Gateway
  • Launch 2 Ubuntu EC2 instances
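If you prefer scripting the setup, the same steps can be sketched with the AWS CLI. Treat this as a rough sketch: the CIDR blocks and availability zones are assumptions, and the placeholder IDs (vpc-xxxx, igw-xxxx, rtb-xxxx) must be replaced with the values each command returns.

```shell
# Sketch only: substitute the real IDs returned by each command.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
aws ec2 create-route --route-table-id rtb-xxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
```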

Inside the Coffee-API

Coffee-api is a simple Node.js Express project that does one thing - serves cappuccino.

index.js

const express = require('express');
const config = require('config');
const coffee = require('./routes/coffee')
const app = express();
const PORT = 3000;

app.use(express.json());
app.use('/api/coffee', coffee);

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

console.log('Application Name: ' + config.get('name'));
  • The application listens on port 3000


  • The config package loads different JSON files according to the environment variable, NODE_ENV.
    • NODE_ENV=development, development.json is used

    • NODE_ENV=production, production.json is used


  • We have one route, /api/coffee
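For reference, a production config file might look like the following. The exact contents ship with the coffee-api project, so treat the "name" value as an assumption; the "host" value matches the MK-001 naming used later, and development.json mirrors this structure.

```json
{
    "name": "coffee-api-production",
    "host": "MK-001"
}
```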

coffee.js

const express = require('express');
const config = require('config');
const router = express.Router();
const HOST = config.get('host');
const CAPPUCCINO = 'cappuccino';

router.post('/make', (req, res) => {
    //Validate the request
    const { type } = req.body;
    if (!type || type.toLowerCase() !== CAPPUCCINO) {
      res.status(400).send(`(${HOST}) No coffee for you.`);
      return;
    }
  
    //Make coffee
    const coffee = MakeCappuccino();

    //Serve coffee
    res.send(`(${HOST}) Here's your ${coffee}. Enjoy!`);
});

function MakeCappuccino() {
    return CAPPUCCINO;
}

module.exports = router;

In the /make endpoint,


  • We first validate the request, making sure the type of coffee is included in the body of the request, and the requested type is cappuccino; otherwise, you don’t get coffee!


  • Then, we start making a cappuccino.


  • Finally, we serve the cappuccino!

Package the Project

  • Navigate to the coffee-api directory.
npm pack
  • The result is coffee-api-1.0.0.tgz.

Setting up the EC2 Instance to Serve Coffee

Setup Apache As a Reverse Proxy

  • SSH to your Ubuntu server.


  • Install necessary packages. (Accept all default settings)
sudo apt update
sudo apt install nodejs npm apache2


  • Enable the Apache proxy modules, then restart Apache.
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo systemctl restart apache2


  • Set up a new Apache virtual host configuration file.
cd /etc/apache2/sites-available/
sudo nano ./api.conf


  • Enter the following into api.conf.


  • Make sure to replace SERVER_PUBLIC_DNS with the public DNS of the server.

<VirtualHost *:80>
    ServerName SERVER_PUBLIC_DNS
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>


  • Enable Apache virtual host configuration
sudo a2ensite api.conf
sudo systemctl reload apache2

Install the Coffee-API

  • Transfer coffee-api-1.0.0.tgz to the server.


  • Extract the package and move it to the projects directory.
tar -xvzf coffee-api-1.0.0.tgz
mkdir ~/projects
mv package ~/projects/coffee-api


  • Update the name of the host to ‘MK-001’
nano coffee-api/config/production.json


  • Install the application
cd ~/projects/coffee-api
npm install --production


  • Set up pm2 to start the application
sudo npm install -g pm2
export NODE_ENV=production


  • Start the coffee-api
pm2 start index.js


You should see the application running.

  • View the pm2 log to confirm that production.json is loaded
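A couple of commands for this check (pm2 names the process after the script file, so it shows up as index):

```shell
# List managed processes and their status
pm2 list
# Tail the last 20 log lines; the "Application Name" printed at startup
# confirms which config file was loaded
pm2 logs index --lines 20
```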

Setup the Second EC2 Instance

  • Follow the same steps as above.


  • Make sure you are launching the second EC2 instance in a different public subnet.


  • Before installing coffee-api, update the host to ‘MK-002’ in the production.json.

Testing

Let’s use Postman to test the API endpoints.

  • Set the request URL to http://your-public-dns/api/coffee/make


  • Set the Content-Type header to application/json


  • Here is the request body
{
    "type": "cappuccino"
}


In the response, you should get your cappuccino!

(MK-001) Here's your cappuccino. Enjoy!
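If you prefer the command line over Postman, the same request can be sent with curl (replace your-public-dns with your instance's public DNS):

```shell
curl -X POST http://your-public-dns/api/coffee/make \
  -H 'Content-Type: application/json' \
  -d '{"type": "cappuccino"}'
```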

Setting Up an Application Load Balancer

Finally, we can set up the load balancer! We need to first create a target group and forward traffic to this group in the load balancer.

Create a Target Group

  • Go to EC2 > Target groups > Create target group.
  • Give it a name.
  • Make sure the Protocol is HTTP, and the port is set to 80.
  • Select the VPC you created.
  • Choose HTTP1 in the Protocol version.
  • Click Next.
  • Select those 2 EC2 instances you created.
  • Click on Include as pending below.
  • Click on Create target group.
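The same target group can be sketched with the AWS CLI; the name, VPC ID, target group ARN, and instance IDs below are placeholders to substitute with your own values.

```shell
# Sketch only: placeholder name, VPC ID, ARN, and instance IDs.
aws elbv2 create-target-group --name coffee-targets \
  --protocol HTTP --port 80 --protocol-version HTTP1 \
  --target-type instance --vpc-id vpc-xxxx
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:... \
  --targets Id=i-aaaa Id=i-bbbb
```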

Create a Load Balancer

  • Go to EC2 > Load balancers > Create Load balancer.
  • Select Application Load Balancer.
  • Give it a name.
  • In Network mapping, select the VPC you created.
  • Select those 2 public subnets you created.
  • Select the security group that controls the traffic to your load balancer.
  • In Listeners and routing, set Protocol to HTTP, and Port to 80.
  • Select the target group you just created.
  • Leave the rest as default.
  • Click on Create load balancer.
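For reference, the equivalent AWS CLI sketch looks roughly like this; the name, subnet and security group IDs, and ARNs are placeholders for your own values.

```shell
# Sketch only: placeholder name, subnet/security-group IDs, and ARNs.
aws elbv2 create-load-balancer --name coffee-alb --type application \
  --subnets subnet-aaaa subnet-bbbb --security-groups sg-xxxx
aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:... \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...
```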


You are done setting up the load balancer!

Additional Steps

Recall that when we set up the reverse proxy, we set the ServerName to the public DNS of the EC2 instance. We verified that the API endpoint works by testing it with Postman.


Now, we want the reverse proxy to also accept traffic arriving through the load balancer and forward it to the API endpoint.
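One way to do this (an assumed sketch; ServerAlias is a standard Apache directive, but the article does not show the exact change) is to add the load balancer's public DNS as a ServerAlias in api.conf:

```apache
<VirtualHost *:80>
    ServerName SERVER_PUBLIC_DNS
    # Also accept requests addressed to the load balancer's DNS name.
    ServerAlias LOAD_BALANCER_PUBLIC_DNS
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```

After editing, reload Apache with sudo systemctl reload apache2.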

Observe the Traffic

At this point, you are ready to observe the magic of load balancing. Recall that you have one endpoint setup as MK-001, and the other as MK-002. In Postman, update the URL in your request to point to the public DNS of the load balancer.


Send the request multiple times, and you should see the host name in the response alternate like the following.

(MK-001) Here's your cappuccino. Enjoy!
(MK-002) Here's your cappuccino. Enjoy!
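To watch the alternation from the command line, you can loop a few requests against the load balancer (replace your-load-balancer-dns with its DNS name):

```shell
for i in 1 2 3 4; do
  curl -s -X POST http://your-load-balancer-dns/api/coffee/make \
    -H 'Content-Type: application/json' \
    -d '{"type": "cappuccino"}'
  echo    # newline between responses
done
```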


There you go. That’s a victory! Take a break and enjoy your coffee. Once you are ready, let’s add a WAF in front of the load balancer.

Setting Up WAF

  • Go to AWS WAF > Web ACLs > Create web ACL
  • Give it a name.
  • Select the region where your load balancer is located.
  • Add AWS resources.
  • Select your load balancer.
  • Click Next.
  • Click Add rules.
  • Click Add managed rule groups.
  • Expand AWS-managed rule groups.
    • Pick those from the Free rule groups.
    • You can read more about what those rule groups do in the descriptions.
  • Click Add Rules.
  • Leave the rest as default.
  • Click Next until you see Create web ACL.
  • Click Create web ACL.


The process will take a few minutes. Then you should see the success badge at the top of the screen.


Once the Web ACL is created, click on it, and you will be able to see all the metrics you set up for this ACL. Test the coffee-api using Postman again. You should get the same result.


In the Overview of your web ACL, observe the diagram of requests flowing through your coffee shop, like the following. Ask your friends to hit your coffee-api, and see your coffee shop grow!




There you have it. Enjoy your cappuccino!

Conclusion

As the aroma of success wafts through our virtual coffee shop, we have experienced the power of load balancers and WAF through the following steps:


  • Set up a VPC for the virtual coffee shop.
  • Hosted the coffee-api on 2 EC2 instances.
  • Set up a load balancer to distribute requests.
  • Set up a WAF to protect our resources.
  • Finally, made requests through Postman and got our coffee.


What is your use case for the load balancer and WAF? Let me know by leaving a comment below. Thanks for joining me on this journey. See you next time!