What Are Load balancers And How Do They Work?

by divyesh.aegis, July 29th, 2019
In traditional client-server architecture, the web components are deployed on a server and the client accesses that server to get the web page. A single-server deployment poses many challenges. What if this server goes down?

What if there is a very high volume of incoming requests to this server? Obviously, site performance will suffer, and the user experience along with it. These situations lead to a distributed system architecture, wherein the server components are distributed across many servers, called a server farm or server pool. These servers work in tandem to deliver the content.

Image credit: www.educative.io

Distributed systems have their own challenges. We have to route incoming traffic to the servers in such a way that no single server gets overloaded with requests.

Also, we should be able to add new servers, and traffic needs to be routed to the new servers as well. Load balancers do this job of routing traffic to servers in a distributed system.

Load balancers sit between the client and the servers. They spread traffic across the servers to improve the responsiveness and availability of applications, Java websites, or databases.

Load balancer

By routing traffic among the servers, the load balancer makes sure there is no single point of application failure. While sending requests to the servers, it monitors server health as well. The server will respond directly to the client. In this scenario, the single point of failure would be the load balancer itself; we can configure backup load balancers to mitigate that risk. Load balancers can handle HTTP, HTTPS, TCP, and UDP traffic.
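As a rough illustration of this routing role, here is a minimal Python sketch of a balancer that cycles through a server pool and skips any server marked unhealthy. The class and server names are hypothetical, not a real product's API:

```python
import itertools

class LoadBalancer:
    """Minimal sketch: route each request to the next healthy server."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)          # assume all healthy at start
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def route(self, request):
        # Try each server at most once per request; skip unhealthy ones.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server, request
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["app1", "app2", "app3"])
lb.mark_down("app2")
print([lb.route(f"req{i}")[0] for i in range(4)])  # ['app1', 'app3', 'app1', 'app3']
```

Note the fallback behavior: when every server is down, the balancer surfaces an error instead of queuing forever, which is where a backup load balancer (discussed above) would step in.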

Load Balancing Types

A load balancer can be hardware-based, software-based, or a cloud-based service. Hardware-based solutions come as machines with specialized processors, loaded with proprietary software. To cope with increasing traffic, you have to buy additional machines from the vendor. Software solutions are less expensive and more flexible: we can install the software on hardware of our choice or in cloud environments like AWS EC2. Cloud-based solutions are managed services available from cloud providers like AWS, Google Cloud, and Azure. These services follow a pay-per-usage model, which makes them the lowest-cost option of the three.

Popular Load Balancers

Below are some of the popular load balancers available on the market.

  • Cisco
  • TP-Link
  • Barracuda
  • Nginx
  • Elastic Load Balancing from AWS

How does it work?

The load balancer redirects each request to an available server in the pool, choosing the server according to a load balancing algorithm. Load balancers can also shoulder responsibility for the performance, security, and resiliency of the applications on the servers.

Load balancers should also handle the situations where we need to add new servers or remove existing ones. They monitor the health of the servers, running health checks to ascertain whether the underlying server can take traffic.
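A basic health check can be as simple as attempting a TCP connection to each server. Here is a sketch, assuming each backend exposes a TCP port (the helper names are made up for illustration):

```python
import socket

def tcp_health_check(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def filter_healthy(servers):
    """Keep only the (host, port) pairs that pass the health check."""
    return [(h, p) for h, p in servers if tcp_health_check(h, p)]
```

Real load balancers usually go further, e.g. issuing an HTTP request to a dedicated health endpoint and requiring several consecutive failures before removing a server from rotation, to avoid flapping.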

Load balancers also maintain the user session across requests; this process is called session persistence. The client application usually stores the user's session information, which is shared with the server. The load balancer needs to ensure that all requests from a client within a session are sent to the same server.

This is particularly useful for shopping-cart-style applications, where sending requests to multiple servers can cause performance issues or even transaction failures. In enterprise applications, servers usually cache responses to improve performance, so shifting servers between repeated calls could result in cache misses and degrade performance. It is the responsibility of the load balancer to handle such scenarios so that fewer cache misses occur.
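One common way to implement session persistence is to hash a session identifier so that every request carrying the same identifier maps to the same server. A minimal sketch, with placeholder server names:

```python
import hashlib

SERVERS = ["app1", "app2", "app3"]

def sticky_server(session_id, servers=SERVERS):
    """Map a session id to a server deterministically via hashing."""
    digest = hashlib.sha256(session_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]
```

Because the mapping is deterministic, `sticky_server("user-42")` returns the same server on every call. One caveat: a plain modulo mapping reshuffles most sessions whenever the pool changes, which is why production load balancers often use cookies or consistent hashing instead.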

Suppose the load balancer itself fails; what happens? A single instance of the load balancing server would cause an outage of the entire system. This is where distributed load balancers come in: we can have multiple load balancers that manage the traffic. There are several ways to achieve this.

  1. Active/Passive load balancers - One load balancer handles the traffic for a site; if it goes down, a passive standby load balancer takes charge and handles the incoming requests. This setup is not a good fit for sites with very large traffic, since the standby capacity sits idle.
  2. Active/Active load balancers - Traffic is distributed across multiple active load balancers. An algorithm (for example, DNS round robin) chooses the load balancer to which each incoming request is sent.
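The active/passive scheme above can be sketched in a few lines of Python. This is illustrative only; real failover is triggered by heartbeats or health checks between the two balancers, and the names here are placeholders:

```python
class FailoverBalancer:
    """Active/passive sketch: the passive balancer takes over on failure."""

    def __init__(self, active, passive):
        self.active = active
        self.passive = passive
        self.active_ok = True

    def fail_active(self):
        # In practice this would be detected via a missed heartbeat.
        self.active_ok = False

    def handle(self, request):
        balancer = self.active if self.active_ok else self.passive
        return f"{balancer} handled {request}"

fb = FailoverBalancer("lb-primary", "lb-standby")
print(fb.handle("req1"))   # lb-primary handled req1
fb.fail_active()
print(fb.handle("req2"))   # lb-standby handled req2
```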

Load Balancing Algorithms

As mentioned above, load balancers use different types of algorithms. Some are as follows.

  • The Least Connection method - As the name suggests, the load balancer picks the server with the least number of active connections.
  • The Least Response Time method - The server with the lowest response time is selected.
  • Round Robin method - Requests are allocated to servers in round-robin fashion. The servers form a circular list; when a request comes in, it is given to the next server in the list, and that server's position moves to the end. All servers get an equal chance to serve requests.
  • The Least Packets method - The server that received the least number of packets over a specified period of time is picked to handle a new incoming request.
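The Least Connection and Round Robin methods above can be sketched in a few lines of Python. This is illustrative only; a real load balancer derives connection counts from live traffic rather than explicit `pick`/`release` calls:

```python
import itertools

class RoundRobin:
    """Hand out servers in a fixed cyclic order."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Hand out the server currently holding the fewest active connections."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1        # a new connection opens
        return server

    def release(self, server):
        self.active[server] -= 1        # the connection closes
```

Round robin assumes all requests cost roughly the same; least connections adapts better when some requests (say, large report queries) hold a connection far longer than others.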

Conclusion

In a large-scale enterprise application, we need to route traffic across all server nodes, forwarding requests in a way that achieves optimal system performance. As we have seen in this tutorial, load balancers come to our rescue.

They work on different methods or algorithms that route traffic to the downstream servers. Load balancers can be applied to web servers, database servers, and more. We may have to go for a set of load balancers in order to avoid a single point of failure.

Efficient use of load balancers can boost the system performance as well.