Azure Load Balancer – Managing Data in a Hybrid Network

Since we have been discussing network load balancing, I want to delve a bit deeper into Azure’s network load balancing tool called Azure Load Balancer. Azure Load Balancer has three different SKUs that you can choose from: Basic, Standard, and Gateway. Each is designed for specific scenarios and each has differences in scale, features, and pricing.

Azure Load Balancer operates at Layer 4 of the Open Systems Interconnection (OSI) model. It distributes inbound flows that arrive at the load balancer’s frontend to backend pool instances, and it supports both inbound and outbound scenarios. As with some other Azure tools, there is a cost associated with using Azure Load Balancer. For more information on pricing, check out Microsoft’s website at https://azure.microsoft.com/en-us/pricing/details/load-balancer/#purchase-options.

With Azure Load Balancer you can create either a public (external) load balancer or an internal (private) load balancer. A public load balancer provides outbound connections for VMs inside your virtual network and is used to load-balance Internet traffic to those VMs; these connections work by translating the VMs’ private IP addresses to public IP addresses. An internal (or private) load balancer is used to load-balance traffic inside a virtual network and routes traffic only between resources within that network. It can be accessed only from private resources that are internal to the network.

Azure Load Balancer works across virtual machines, virtual machine scale sets, and IP addresses. There are three SKUs that you can choose from:

Standard Load Balancer Designed for load-balancing network layer traffic when high performance and super-low latency are required. It routes traffic within and across regions, and to availability zones for high resiliency.

Basic Load Balancer Designed for small-scale applications that do not need high availability or redundancy. Not compatible with availability zones.

Gateway Load Balancer Designed to help deploy, scale, and manage third-party virtual appliances. Provides one gateway for distributing traffic across multiple virtual appliances. You can scale them up or down, depending on demand.

For step-by-step instructions on how to create a public (external) load balancer using the Azure portal, check out Microsoft’s website at https://learn.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-public-portal.
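If you prefer to script these steps rather than use the portal, the following is a rough sketch of how a Standard public load balancer might be created with the Azure SDK for Python (azure-identity and azure-mgmt-network). The subscription ID, resource group, and resource names are placeholders for your own environment; an internal load balancer would reference a subnet and a private IP in the frontend configuration instead of a public IP.

# Hedged sketch: create a Standard public load balancer with the Azure SDK for Python.
# Assumes azure-identity and azure-mgmt-network are installed; all names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"      # placeholder
rg, location = "rg-demo", "eastus"         # placeholders

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# A Standard public load balancer requires a Standard-SKU public IP for its frontend.
pip = client.public_ip_addresses.begin_create_or_update(
    rg, "lb-frontend-ip",
    {
        "location": location,
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
    },
).result()

# Create the load balancer with one frontend configuration and one backend pool.
lb = client.load_balancers.begin_create_or_update(
    rg, "lb-demo",
    {
        "location": location,
        "sku": {"name": "Standard"},
        "frontend_ip_configurations": [
            {"name": "frontend", "public_ip_address": {"id": pip.id}}
        ],
        "backend_address_pools": [{"name": "backend-pool"}],
    },
).result()

print(f"Created {lb.name} ({lb.sku.name} SKU)")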

Configure a Floating IP Address for the Cluster

Some application scenarios may require or suggest that the same port be used by several applications on a single VM in the backend pool. Common examples of port reuse are clustering for high availability and network virtual appliances. If you want to reuse a backend port across multiple rules, you must enable Floating IP in the rule definition. When it is enabled, Azure changes the IP address mapping to the frontend IP address of the load balancer instead of the backend instance’s IP address, which allows for greater flexibility.

You can configure a Floating IP on a load balancer rule by using a number of tools, such as the Azure portal, REST API, CLI, or PowerShell. You must also configure the virtual machine’s guest OS in order to use a Floating IP: to work properly, the guest OS must be configured to receive all traffic bound for the frontend IP and port of the load balancer.
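As an illustration, the sketch below uses the Azure SDK for Python (azure-mgmt-network) to add a load-balancing rule with Floating IP enabled to an existing load balancer. The rule name, ports, and resource names are placeholders; enable_floating_ip is the SDK property that corresponds to the Floating IP setting in the portal.

# Hedged sketch: add a rule that reuses port 1433 with Floating IP enabled.
# Assumes an existing load balancer "lb-demo" in resource group "rg-demo".
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, lb_name = "rg-demo", "lb-demo"

lb = client.load_balancers.get(rg, lb_name)

# Floating IP maps traffic to the load balancer's frontend IP rather than the
# backend instance's IP, so the same backend port can be reused across rules.
lb.load_balancing_rules.append(
    LoadBalancingRule(
        name="sql-listener",
        protocol="Tcp",
        frontend_port=1433,
        backend_port=1433,
        enable_floating_ip=True,
        frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
        backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    )
)

client.load_balancers.begin_create_or_update(rg, lb_name, lb).result()

Keep in mind that enabling Floating IP on the rule is only half the job; the guest OS must also be set up (typically with a loopback interface holding the frontend IP) to accept that traffic, as noted above.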

Achieving High Availability with Hyper-V

One of the nice advantages of using Hyper-V is the ability to run a server operating system inside another server. Virtualization allows you to run multiple servers on top of a single Hyper-V server, but we need to make sure that those servers stay up and running.

That is where Hyper-V high availability comes into play. Ensuring that your Hyper-V servers will continue to run even if there is a hardware issue is an important step in guaranteeing the success of your network. There are several ways to achieve that: one is to set up clustering, and another is to set up Hyper-V high availability without clustering. Setting up reliability without clustering requires that your Hyper-V servers have replica copies that can automatically start up if a virtual machine fails. This is where live migration and replica servers come in.

Implementing a Hyper-V Replica

Hyper-V Replica is an important part of the Hyper-V role. It asynchronously replicates Hyper-V virtual machines from a primary site to replica (secondary) sites.

Once you enable Hyper-V Replica for a particular virtual machine on the primary Hyper-V host server, Hyper-V Replica begins to create an exact copy of the virtual machine on the secondary site. After this initial replication, Hyper-V Replica creates a log file for the virtual machine’s VHDs. This log file is replayed in reverse order to the replica VHD, according to the replication frequency. The log files and reverse-order replay help ensure that the latest changes are stored and copied asynchronously. If there is an issue with the replication frequency, you will receive an alert.

On the virtual machine, you can establish resynchronization settings. Resynchronization can be performed manually, automatically, or automatically on an explicit schedule. If you run into recurring synchronization issues, you may choose to set up automatic resynchronization.
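As a rough illustration, the Python sketch below drives the Hyper-V PowerShell module’s cmdlets (Enable-VMReplication, Start-VMInitialReplication, and Set-VMReplication) from the primary host to enable replication and automatic resynchronization. The VM name, replica server, and maintenance window are placeholders.

# Hedged sketch: enable Hyper-V Replica and automatic resynchronization by invoking
# the Hyper-V PowerShell cmdlets from Python (run elevated on the primary host).
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise CalledProcessError if it fails."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Enable replication of VM01 to the replica host over Kerberos/HTTP (port 80),
# sending changes every 5 minutes (valid frequencies are 30, 300, or 900 seconds).
ps("Enable-VMReplication -VMName 'VM01' -ReplicaServerName 'replica01.contoso.com' "
   "-ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300")

# Send the initial copy of the virtual machine to the replica server.
ps("Start-VMInitialReplication -VMName 'VM01'")

# Resynchronize automatically, but only during a low-traffic window.
ps("Set-VMReplication -VMName 'VM01' -AutoResynchronizeEnabled $true "
   "-AutoResynchronizeIntervalStart 01:00:00 -AutoResynchronizeIntervalEnd 05:00:00")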

Hyper-V Replica aids in a disaster recovery strategy by replicating virtual machines from one host to another while keeping workloads accessible. Hyper-V Replica creates a copy of a running virtual machine as an offline replica virtual machine.

Hyper-V Hosts

Because replication takes place over a WAN link, the primary and secondary host servers can be located in the same physical location or in different geographical locations. Hyper-V hosts can be stand-alone, clustered, or a combination of both. Hyper-V hosts are not dependent on Active Directory, and they do not need to be domain members.

Replication and Change Tracking

When you enable Hyper-V Replica on a virtual machine, an identical copy of that VM is created on a secondary host server. Once this happens, Hyper-V Replica creates a log file that tracks changes made to the virtual machine’s VHD. The log file is replayed in reverse order to the replica VHD, based on the replication frequency settings, which ensures that the latest changes are tracked and replicated asynchronously. Replication can take place over HTTP or HTTPS.

Extended (Chained) Replication

Extended (Chained) Replication allows you to replicate a virtual machine from a primary host to a secondary host and then replicate from the secondary host to a third host. It is not possible to replicate from the primary host directly to both the secondary and the third host.

Extended (Chained) Replication aids in disaster recovery because you can recover from both the primary and the extended replica, and it also helps if both the primary and secondary locations go offline. Note that the extended replica does not support application-consistent replication and must use the same VHDs that the secondary replica uses.
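A hedged sketch of the same scripted approach for extended replication is shown below. The key point is that the command runs on the secondary (Replica) server, pointing its copy of the VM at the third host; the server and VM names are placeholders.

# Hedged sketch: extend (chain) replication from the Replica server to a third host.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Run on the secondary (Replica) server against its replica copy of VM01.
# Extended replication supports only 300- or 900-second replication frequencies.
ps("Enable-VMReplication -VMName 'VM01' -ReplicaServerName 'extended01.contoso.com' "
   "-ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300")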

Setting the Affinity

NLB allows you to configure three types of affinity settings to help manage response times for NLB clients. Each affinity setting determines a method of distributing NLB client requests; a scripted sketch follows the descriptions below. There are three different affinity settings:

No Affinity (None) If you set the affinity to No Affinity (None), NLB does not associate an NLB client with any specific member. When requests are sent to the NLB cluster, they are balanced among all of the nodes. No Affinity provides greater performance, but clients may have issues establishing sessions because successive requests may be load-balanced to different NLB nodes where the session information is not present.

Single Affinity Setting the cluster affinity to Single (the default setting) sends all traffic from a specific client IP address to a single cluster node. Because a client’s IP address always connects to the same NLB node, the client should not have to authenticate again; the trade-off is that the load is not distributed to other servers unless the initial server goes down. This setting gives clients on an intranet the best performance.

Class C Affinity When you set the affinity to Class C, NLB links clients to a specific member based on the Class C portion of the client’s IP address. This allows you to set up NLB so that clients from the same Class C address range access the same NLB member. This affinity is best for NLB clusters that are accessed from the Internet.
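As referenced above, here is a rough Python sketch that creates NLB port rules with each affinity mode by invoking the NetworkLoadBalancingClusters PowerShell cmdlets. The interface name and ports are placeholders, and the Network affinity value is the one that corresponds to the Class C setting.

# Hedged sketch: create NLB port rules with each affinity mode via the
# NetworkLoadBalancingClusters PowerShell module (run on a cluster host).
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# No Affinity: requests are balanced across all nodes with no client stickiness.
ps("New-NlbClusterPortRule -InterfaceName 'Ethernet' -Protocol Tcp "
   "-StartPort 80 -EndPort 80 -Affinity None")

# Single (the default): a given client IP address always connects to the same node.
ps("New-NlbClusterPortRule -InterfaceName 'Ethernet' -Protocol Tcp "
   "-StartPort 443 -EndPort 443 -Affinity Single")

# Network ('Class C'): clients in the same /24 range are directed to the same node.
ps("New-NlbClusterPortRule -InterfaceName 'Ethernet' -Protocol Tcp "
   "-StartPort 8080 -EndPort 8080 -Affinity Network")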

Failover

If the primary or the secondary (extended) host server location goes offline, you can manually initiate failover; failover is not automatic. There are several types of failover that you can initiate manually (a scripted sketch follows the descriptions below):

Test Failover Use Test Failover to verify that the replica virtual machine can start successfully in the secondary site. It creates a test copy of the virtual machine during the failover and does not affect standard replication. When you stop the test failover on the replica virtual machine, the test copy is deleted.

Planned Failover Use Planned Failover during scheduled downtime. You must turn off the primary virtual machine before performing a planned failover. Once the machine fails over, Hyper-V Replica starts replicating changes back to the primary server; the changes are tracked and sent to ensure that no data is lost. Once the planned failover is complete, reverse replication begins so that the primary virtual machine becomes the secondary, and vice versa. This ensures that the hosts stay synchronized.

Unplanned Failover Use Unplanned Failover during unforeseen outages. Unplanned failover is started on the replica virtual machine and should be used only if the primary machine goes offline. A check confirms whether the primary machine is still running. If you have recovery history enabled, it is possible to recover to an earlier point in time. During the failover, you should verify that the recovery point is acceptable and then complete the failover so that the recovery points are merged.
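For reference, the sketch below drives the three manual failover types through the Hyper-V PowerShell cmdlets from Python. 'VM01' is a placeholder, the comments note which host (primary or replica) each command must run on, and only the function for the scenario at hand should be called.

# Hedged sketch: the three manual failover types via the Hyper-V PowerShell cmdlets.
import subprocess

def ps(command: str) -> None:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

def test_failover(vm: str = "VM01") -> None:
    # Run on the replica server: starts a disposable test copy of the VM,
    # then removes it without affecting ongoing replication.
    ps(f"Start-VMFailover -VMName '{vm}' -AsTest")
    ps(f"Stop-VMFailover -VMName '{vm}'")

def planned_failover(vm: str = "VM01") -> None:
    # Requires the primary VM to be shut down first.
    ps(f"Start-VMFailover -VMName '{vm}' -Prepare")   # on the primary host
    ps(f"Start-VMFailover -VMName '{vm}'")            # on the replica host
    ps(f"Set-VMReplication -VMName '{vm}' -Reverse")  # replica becomes the new primary
    ps(f"Start-VM -Name '{vm}'")                      # on the replica host

def unplanned_failover(vm: str = "VM01") -> None:
    # Run on the replica server when the primary is offline; commit the
    # chosen recovery point once you have verified it.
    ps(f"Start-VMFailover -VMName '{vm}'")
    ps(f"Complete-VMFailover -VMName '{vm}'")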
