CLB vs. ALB vs. NLB—Which AWS load balancer is right for you?

Load distribution between resources in AWS is a common challenge. Organizations are tasked with choosing how to solve this issue, including which technologies to use.

In this article, we’ll go through how to pick the right AWS load balancer to best solve your needs.

What is Elastic Load Balancer?

Fig. 1: Load workflow, from client to load balancer to AWS resources

Elastic Load Balancer (ELB) is an AWS service that enables you to distribute load across AWS resources, such as EC2 instances. Because ELB is an AWS-managed service, it is highly available.

The instances you want to distribute the load across can be in different Availability Zones within the same region. ELB auto-scales to handle the load as it increases or decreases; most importantly, it distributes the load only to healthy instances, as determined by health checks.

You can have public (internet-facing) load balancers, which accept traffic from the internet, and private (internal) load balancers, which are reachable only from within a specific AWS network.

Network protocols

Computers communicate across network layers using protocols. A significant part of choosing the best load balancer is understanding which network layer each load balancer acts on, together with the related protocols.

Fig. 2: Different network layers and their composition

The important communication layers that we need to understand here are the network, transport, and application layers. Most applications communicate at the application layer, and each layer uses the layers beneath it, as described in the above image.

Network layer

The network layer is where the Internet Protocol (IP) operates, transferring raw packets of bytes; it is less reliable than the layers above it, as data might get lost in transit.

Transport layer

The transport layer contains the Transmission Control Protocol (TCP), which prioritizes data reliability over performance. Transport Layer Security (TLS) runs on top of TCP and adds encryption, making it more secure. Meanwhile, the User Datagram Protocol (UDP) is an alternative to TCP that prioritizes performance over reliability; video streaming applications commonly use it.

Application layer

The application layer includes HTTP and HTTPS, which web apps, REST APIs, email servers, and file transfer services use to communicate with each other. These applications rely on TCP/TLS in the transport layer beneath them for data reliability.

Some applications that need high performance, such as gaming applications and live video streaming, communicate directly in the transport layer via UDP, sacrificing data reliability for performance.

Elastic Load Balancer listeners

Listeners monitor for client connection requests. Each one has a protocol, a port, and a set of rules, so it knows how to route requests to targets.

For each listener, the load balancer's security group needs an inbound rule that allows traffic on the listener's port from the intended sources.
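To make this concrete, here is a minimal sketch using boto3 (the AWS SDK for Python); the security group ID and ARNs are placeholders, and an existing load balancer and target group are assumed. It opens the listener's port on the security group and creates an HTTP listener that forwards to the target group by default.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allow inbound traffic on the listener's port (80) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the load balancer's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Create an HTTP listener on port 80 whose default action forwards to a target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder ARN
    }],
)
```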

Target groups

Fig. 3: Load workflow from the load balancer to the target groups

In the above workflow, each EC2 instance is a target and each group of EC2 instances that you want to distribute the load among is a target group.

Target groups, made up of EC2 instances, IP addresses, or Lambda functions, typically implement a few configurations, as discussed below.
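As a brief illustration, here is a boto3 sketch (the target group name and VPC ID are placeholders) that creates a target group for EC2 instances:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create a target group for EC2 instances serving HTTP on port 80.
response = elbv2.create_target_group(
    Name="my-web-targets",          # placeholder name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    TargetType="instance",          # could also be "ip" or "lambda"
)
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
```

The returned ARN is what listeners, rules, and Auto Scaling groups reference later.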

Deregistration delay

Here we can define how long Elastic Load Balancer will wait before deregistering a target. This has to do with connection draining: imagine you have a target group with four instances, and due to a health check failure, you want to deregister a specific instance. You should wait until the requests already in progress with that specific target are completed. Configuring a deregistration delay makes your requests more resilient.

Slow-start duration

When a new target is added to the target group, this configuration gives it time to warm up before being ready to receive requests.

Load balancer algorithm

Here we can define how the load is balanced between the targets. With the Round Robin algorithm, the LB distributes requests to targets in sequence. With the Least Outstanding Requests algorithm, the LB sends each request to the target with the fewest outstanding requests.

Stickiness

This is the configuration you will look for when you want to maintain a user session, that is, when you want requests from a specific user to always be routed to the same instance. Stickiness uses cookies and is supported by CLB and ALB.
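Deregistration delay, slow start, the load balancing algorithm, and stickiness are all target group attributes. Below is a minimal boto3 sketch, assuming an existing target group (the ARN and the attribute values are placeholders), that sets them in one call:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Tune deregistration delay, slow start, routing algorithm, and stickiness
# on an existing target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    Attributes=[
        {"Key": "deregistration_delay.timeout_seconds", "Value": "120"},
        {"Key": "slow_start.duration_seconds", "Value": "60"},
        # Note: slow start cannot be combined with least_outstanding_requests.
        {"Key": "load_balancing.algorithm.type", "Value": "round_robin"},
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```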

Multiple target groups

You can use multiple target groups for your applications and microservices, depending on your needs, and use one load balancer for all of them. This way, you can balance the load between the groups, each having multiple targets or instances. However, CLB does not support multiple target groups.

Listener rules

A load balancer can contain multiple listeners, each a combination of a protocol and a port. Rules, based on the content of HTTP requests, dictate which target group a listener routes to.

You can also define rules so that the load balancer routes to multiple target groups. For example, a rule could route all requests whose path matches “/microservice-a” to “target-group-a.”

Listener rules are evaluated in priority order, and the default rule is always evaluated last. You can define rules based on HTTP headers, query strings, or source IP addresses.
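As an illustration of the path-based example above, here is a minimal boto3 sketch (the listener and target group ARNs are placeholders) that adds such a rule:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route requests whose path starts with /microservice-a to target-group-a.
# Lower priority numbers are evaluated first; the default rule runs last.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "Values": ["/microservice-a/*"]},
    ],
    Actions=[
        {"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:..."},  # target-group-a (placeholder ARN)
    ],
)
```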

Auto Scaling

Fig. 4: How Auto Scaling acts on the target from each target group

Auto Scaling helps you to scale in and out based on load.

  • Scaling in decreases the number of targets.
  • Scaling out increases the number of targets.

The Auto Scaling group is primarily responsible for maintaining a configured number of instances using a periodic health check. A health check is a request to see whether the instance is up or down based on specific criteria. You pre-define a health check path, and the load balancer checks it against an expected response.

Say you create an Auto Scaling group with a desired capacity of two instances and one of them goes down. The Auto Scaling group will bring another instance up to maintain the desired capacity defined in your infrastructure.

This way, the load balancer can distribute the load to active instances as the Auto Scaling group takes care of scaling them in and out based on the load.
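For illustration, here is a minimal boto3 sketch (the group name, launch template, subnet IDs, and target group ARN are placeholders) that creates an Auto Scaling group attached to the load balancer's target group and relying on its health checks:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group that registers its instances with a target group
# and uses the load balancer's health checks to decide when to replace instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-web-asg",  # placeholder name
    LaunchTemplate={"LaunchTemplateName": "my-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets
    TargetGroupARNs=["arn:aws:elasticloadbalancing:..."],  # placeholder ARN
    HealthCheckType="ELB",        # use the load balancer's health checks
    HealthCheckGracePeriod=300,   # give new instances time to boot before checking
)
```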

Types of Load Balancers

AWS provides different types of load balancers; we describe each one below.

Fig. 5: The types of load balancers

Classic Load Balancer (CLB)

CLB operates in both Layer 4 and Layer 7. Currently, AWS does not recommend CLB, having concluded that a load balancer acting on more than one layer simultaneously is not ideal.

CLB is part of what AWS calls the “previous generation.” When creating one, they point out that it has been deprecated.

Application Load Balancer (ALB)

ALB operates in Layer 7. It is the most-used load balancer in AWS and supports WebSockets, HTTP, and HTTPS; it can also scale automatically based on demand using Auto Scaling.

ALB can balance load between:

  • EC2 Instances
  • Containerized applications (ECS)
  • Web applications using their IP address
  • Lambda functions, that is, serverless

ALB can use advanced routing approaches, looking at the requests, paths, and hostnames. Then, based on the requirements, it can route to the appropriate targets.

A common use of ALB is in microservices architecture, where you must distribute traffic between different applications based on different rules.

To create an ALB, you must choose an IP address type (IPv4 or dual-stack) and set your LB protocol, that is, whether it will use HTTP or HTTPS. You will also have to define the virtual private cloud (VPC) your load balancer will be in, plus the Availability Zones of a given AWS Region (by default, cross-zone load balancing is enabled).

It is a good practice to create a security group for each LB; this gives you control of what comes in and out of a given LB.

The sources listed in the security group's inbound rules indicate where you want to allow traffic from. By default, ALB is set to 0.0.0.0/0 and ::/0, meaning traffic will be permitted from everywhere.

You can only link one LB with each target group, with each load balancer routing requests to targets in a given target group. The target type will be instance for EC2 and ECS, IP address for web applications, and Lambda function for serverless.

You will also need to configure health checks against each target group's health check path. Define a healthy threshold: how many consecutive successful checks are needed to consider a target healthy. Similarly, define an unhealthy threshold: how many consecutive failed health checks are needed to consider a target unhealthy.

Remember to also define a timeout, the amount of time with no response after which a health check counts as failed. Furthermore, the defined interval tells ALB how often to execute the health checks.

Lastly, once a target group is created, you must register targets (e.g., instances) to the given target group.
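Putting those steps together, here is a minimal boto3 sketch, assuming the target group from earlier already exists (the subnet, security group, and instance IDs and the ARNs are placeholders); it creates an internet-facing ALB, tunes the target group's health checks, and registers two instances:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing ALB spanning two Availability Zones.
lb = elbv2.create_load_balancer(
    Name="my-alb",                                   # placeholder name
    Type="application",
    Scheme="internet-facing",
    IpAddressType="ipv4",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
    SecurityGroups=["sg-0123456789abcdef0"],         # placeholder security group
)

# Configure the health check on the existing target group.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)

# Register EC2 instances as targets.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
    Targets=[{"Id": "i-0aaaa1111bbbb2222"}, {"Id": "i-0cccc3333dddd4444"}],
)
```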

Network Load Balancer (NLB)

NLB operates in Layer 4 (the transport layer). It is part of the new generation that supports TCP/TLS and UDP and is typically recommended for high-performance use cases.

NLB can balance load between:

  • EC2 Instances
  • Containerized applications (ECS)
  • Web applications using their IP address

NLB isn’t in the free tier, and it can’t route to Lambda functions. NLB doesn’t include a security group either. You can assign it a static IP or an Elastic IP address, and it preserves the source IP address for non-HTTP applications.
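As a brief sketch using boto3 (the subnet ID, Elastic IP allocation ID, and target group ARN are placeholders), creating an internet-facing NLB with an Elastic IP and a TCP listener might look like this:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing NLB and attach an Elastic IP in its subnet.
lb = elbv2.create_load_balancer(
    Name="my-nlb",  # placeholder name
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0123456789abcdef0"},
    ],
)

# Add a TCP listener on port 443 that forwards to a TCP target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[
        {"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:..."},  # placeholder ARN
    ],
)
```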

Features comparison

Application Load Balancer | Network Load Balancer
Operates at the application layer (Layer 7) | Operates at the transport layer (Layer 4)
HTTP and HTTPS support | TCP, TLS, and UDP support
VPC | VPC
Multiple ports | Multiple ports
Uses target groups to route requests to servers | Uses target groups to route requests to servers
Supports Lambda functions as targets | Not applicable
Routes based on path, host, HTTP header, query string, and source IP | Static IP, preserves the source IP, high throughput
HTTP/2 support | Not applicable

Each of these load balancers has its own features and is designed to handle specific workloads.

ALB is a load balancer that operates at the application layer (HTTP/HTTPS) and provides advanced traffic routing capabilities. It is a good choice for modern, microservice-based applications that require advanced routing and scaling capabilities.

NLB is a load balancer designed to handle high-throughput, low-latency workloads. It operates at the connection level (TCP) and is optimized for extreme performance, making it a good choice for high-performance workloads such as gaming, media streaming, and high-frequency trading.

So, which load balancer should you choose? If you have a modern, microservices-based application that requires advanced routing and scaling capabilities, ALB might be a good fit. And if you have a high-performance workload that requires extreme performance, NLB might be the best choice.

In summary, when deciding which AWS load balancer to choose, consider the specific requirements of your workload and the advanced features and capabilities that each load balancer offers. That way, you can choose the load balancer that best meets the needs of your application.

Cost Comparison

Each of these load balancers has its own pricing model, and the cost will depend on the specific needs of your workload. ALB and NLB are billed using the Load Balancer Capacity Unit (LCU), which measures new connections, active connections, and data processed by the load balancer; CLB is billed per hour plus the data it processes.

The CLB is the most affordable of the three load balancers. It is priced per hour of use, and it also charges for the data transferred through the load balancer.

The ALB is the next most expensive load balancer. It is also priced per hour, but the cost is based on the number of LCUs used per hour and the data transferred through the load balancer. The ALB includes a fixed number of LCUs and GB of data transfer for free each month, and additional usage is charged at a lower rate.

The NLB is the most expensive of the three load balancers. It is also priced per hour and charges for the number of LCUs used per hour and the data transferred through the load balancer. The NLB also includes a fixed number of LCUs and GB of data transfer for free each month, with additional usage being charged at a lower rate.

In summary, the CLB is the most affordable load balancer, followed by the ALB, with the NLB being the most expensive. When deciding which load balancer to choose, it is essential to consider the specific needs of your workload and the cost of each option. That way, you can select the load balancer that best meets the needs of your application while staying within your budget.

It is recommended to use the AWS Pricing Calculator or to get in touch with AWS support; both can greatly aid in estimating costs and supporting your decision.

Conclusion

Load balancers are all about listener rules, target groups, and autoscaling. Understanding how they work will help you see how they may improve your applications. Here are the key differences between each type of load balancer:

  • CLB
    • Layer 4 and Layer 7
    • Old generation, not recommended by AWS
  • ALB
    • Layer 7
    • Advanced routing approaches
  • NLB
    • Layer 4
    • High-performance use cases
    • Static or Elastic IP

It is important to know which network layer each AWS load balancer acts on to pick the one that best fits each situation.
