Virtualization on IBM Cloud


Join us to explore IBM’s next-generation virtualization strategy, connect with a community of IBM Cloud experts and practitioners, and learn how peers are modernizing VM workloads. Get practical guidance, real-world best practices, and the latest updates on product enhancements, plus access to regional user groups, technical deep dives, webinars, and hands-on how-to content designed to help you modernize with confidence.



Understanding the Working of NSX-T Native Load Balancer

By Vineesh V posted Thu December 25, 2025 12:24 AM

  

VMware NSX-T provides a software-defined networking platform that includes built-in load balancing capabilities. The Native Load Balancer is a core feature designed to distribute traffic across multiple servers, ensuring high availability and optimized performance for applications.

How the NSX-T Native Load Balancer Works

The NSX-T Native Load Balancer operates at Layer 4 (TCP/UDP) and Layer 7 (HTTP/HTTPS). It can be attached only to a Tier-1 gateway and runs on NSX Edge nodes.

Key components to plan and configure:

1.     Load Balancer: The load balancer service instance.

A load balancer instance is created on a Tier-1 gateway and runs alongside that gateway's Service Router. When creating one, you must choose the instance size and attach it to the right Tier-1 gateway.
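As a rough sketch of what this configuration looks like programmatically, the payload below mirrors the fields you set in the UI (size and the Tier-1 attachment). The manager URL, IDs, and names are hypothetical placeholders, not values from a live environment:

```python
# Sketch: build an LBService payload for the NSX-T Policy API.
# The Tier-1 ID and display name are illustrative placeholders.
import json

def build_lb_service(tier1_id: str, size: str = "SMALL") -> dict:
    """Build a load balancer service payload; the instance runs on
    the Service Router of the Tier-1 gateway it is attached to."""
    return {
        "display_name": f"lb-{tier1_id}",
        "size": size,                       # e.g. SMALL / MEDIUM / LARGE
        "enabled": True,
        "connectivity_path": f"/infra/tier-1s/{tier1_id}",
    }

payload = build_lb_service("t1-app")
# A real deployment would PATCH this to the NSX Manager, e.g.
#   https://<nsx-manager>/policy/api/v1/infra/lb-services/<lb-id>
print(json.dumps(payload, indent=2))
```

Sizing matters because the load balancer consumes resources on the Edge node hosting the Tier-1 Service Router.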


2.     Server Pools: The group of backend servers serving the application.

A server pool lists the application servers that provide the actual service. You can add servers individually, or create a group under Inventory and select it, which lets the pool membership stay dynamic. You also configure the algorithm the load balancer uses to route traffic to the servers in this pool.


| Option | Description |
| --- | --- |
| ROUND_ROBIN | Incoming client requests are cycled through the list of available servers capable of handling the request. Ignores the server pool member weights even if they are configured. |
| WEIGHTED_ROUND_ROBIN | Each server is assigned a weight value that signifies how that server performs relative to the other servers in the pool. The weight determines how many client requests are sent to a server compared to the others, distributing the load fairly across the available server resources. |
| LEAST_CONNECTION | Distributes client requests based on the number of connections already on each server: new connections go to the server with the fewest. Ignores the server pool member weights even if they are configured. |
| WEIGHTED_LEAST_CONNECTION | Combines connection counts with per-server weight values to distribute the load fairly among the available server resources. If no weight is configured, the default weight is 1 and slow start is enabled. |
| IP-HASH | Selects a server based on a hash of the source IP address and the total weight of all the running servers. |
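To make the behavioral differences concrete, here is a minimal sketch of the selection logic behind these algorithms. This is a simplified illustration of the concepts, not NSX code; for instance, the IP-hash sketch ignores member weights:

```python
# Simplified illustrations of the pool algorithms described above.
import hashlib
from itertools import cycle

def round_robin(servers):
    """Cycle through servers in order, ignoring any weights."""
    return cycle(servers)

def weighted_round_robin(weights):
    """Repeat each server according to its weight, then cycle.
    A server with weight 2 receives twice the requests of weight 1."""
    expanded = [s for s, w in weights.items() for _ in range(w)]
    return cycle(expanded)

def least_connection(conn_counts):
    """Send the new connection to the server with the fewest active ones."""
    return min(conn_counts, key=conn_counts.get)

def ip_hash(client_ip, servers):
    """Pin a client to a server via a hash of the source IP
    (unweighted here for simplicity)."""
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[h % len(servers)]
```

Note that IP-hash keeps a given client on the same server across requests, which gives a form of persistence the round-robin variants do not.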




3.     Monitors: Health checks to ensure backend servers are responsive.

Monitoring determines how many servers in the pool are functional and ready to process requests. Servers that fail their health checks are taken out of rotation until they recover.
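The idea behind an active HTTP monitor can be sketched as follows. The probe URL, path, and timeout are illustrative assumptions, not NSX defaults:

```python
# Sketch of an active HTTP health monitor, mirroring what the LB
# does periodically for each pool member. The /healthz path and
# thresholds are hypothetical examples.
import urllib.request

def check_member(url: str, timeout: float = 2.0) -> bool:
    """A member is healthy if the probe URL answers with HTTP 2xx/3xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        # Connection refused, timeout, or HTTP error: treat as unhealthy.
        return False

def healthy_members(members):
    """Return only the members whose health probe succeeds."""
    return [m for m in members if check_member(f"http://{m}/healthz")]
```

In NSX-T, monitors are configured separately and then associated with a server pool, so the same check can be reused across pools.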


4.     Virtual Servers: Represent the VIP (Virtual IP) that clients connect to for your application.

A virtual server is the front-end entity of the load balancer: clients connect to a virtual IP address rather than to a physical server. It supports both Layer 4 (TCP/UDP) and Layer 7 (HTTP) protocols. Its configuration specifies the virtual IP, the protocol layer, the TCP/UDP port of the service, and the destination server pool whose members handle the client requests according to the configured load-balancing algorithm.
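Pulling the pieces together, a virtual server configuration ties the VIP and port to a pool and a load balancer instance. The sketch below uses placeholder IDs and paths in the style of the NSX-T Policy API; treat the exact field names as assumptions to verify against your NSX version:

```python
# Sketch: a virtual server payload linking VIP -> LB service -> pool.
# All IDs and the VIP are illustrative placeholders.
def build_virtual_server(vip: str, port: int, lb_id: str, pool_id: str) -> dict:
    return {
        "display_name": f"vs-{vip}-{port}",
        "ip_address": vip,                            # the VIP clients hit
        "ports": [str(port)],                         # service port, e.g. 443
        "lb_service_path": f"/infra/lb-services/{lb_id}",
        "pool_path": f"/infra/lb-pools/{pool_id}",
    }
```

This ordering reflects the dependency chain: the pool and load balancer service must exist before the virtual server can reference them.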


Architecture

  • NSX Edge nodes (or an Edge cluster) host the Service Router component of the Tier-1 gateway where the load balancer resides.
  • The Tier-1 gateway should run in active-standby mode; the active gateway hosts the load balancer service.
  • Components: Load balancer > Virtual server IP (VIP) > Server pool > Health check monitor






Traffic flow:

  1. Client sends a request to the VIP.
  2. Load balancer selects a backend server based on the configured algorithm (round robin, least connections, etc.).
  3. Health monitors ensure only healthy servers receive traffic.
  4. SNAT/DNAT rules manage return traffic depending on topology.
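Steps 2 and 3 above can be combined into one small sketch: health results gate the pool, then the algorithm picks a member per new connection (SNAT/DNAT handling is omitted; this is a conceptual illustration only):

```python
# Conceptual sketch of the traffic flow: only healthy members
# receive traffic, distributed round-robin per new connection.
from itertools import cycle

def dispatch(request_count, members, health):
    """Assign each incoming connection to a healthy member."""
    alive = [m for m in members if health.get(m, False)]
    if not alive:
        return []            # no healthy members: nothing to route to
    rr = cycle(alive)
    return [next(rr) for _ in range(request_count)]
```

For example, with one failed member out of three, traffic alternates between the two survivors, which is exactly the high-availability behavior described in the use cases below.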




Native Load Balancer vs Advanced Load Balancer (Avi)

| Feature | Native LB | Advanced LB (Avi) |
| --- | --- | --- |
| Deployment | Attached to Tier-1 Gateway | Distributed architecture with Controller & Service Engines |
| Capabilities | Basic L4/L7 load balancing | Advanced features: WAF, GSLB, analytics, autoscaling |
| Automation | Limited | Full API-driven, CI/CD integration |
| Scalability | Single LB per Tier-1 | Elastic scaling across multi-cloud |
| Future | Deprecated in NSX 5.x | Strategic direction for VMware load balancing |

Use Cases

  • Basic application distribution: It distributes incoming network traffic across a pool of servers, either transparently or with SNAT, to prevent any single server from becoming overloaded.

  • High availability: By directing traffic away from servers that fail health checks, it ensures that applications remain available to users even if one or more backend servers go offline.

  • SSL offloading: It can offload SSL/TLS encryption and decryption from the application servers, freeing up their resources to focus on their primary tasks.

  • Simple load balancing algorithms: It supports basic algorithms like Round Robin, Weighted Round Robin, Least Connections, Weighted Least Connections, and IP Hash to distribute traffic based on different criteria.


More details
https://techdocs.broadcom.com/us/en/vmware-cis/nsx/nsxt-dc/3-2/administration-guide/load-balancer.html

Conclusion

The NSX native load balancer has served as a reliable solution for basic L4-L7 traffic distribution within NSX environments, offering simplicity and tight integration with Tier-1 gateways. It supports SNAT and SSL/TLS offloading along with different load-balancing algorithms to suit a range of applications. With its deprecation on the horizon, the Advanced Load Balancer (Avi) is the strategic path forward for new deployments.
