AHV Load Balancing

View the bond mode and active interface with the following command, run from any CVM:

nutanix@CVM$ hostssh ovs-appctl bond/show

In the default active-backup configuration, the output will be similar to the following, where eth2 is the active interface and eth3 is the backup:

Note: depending on your AOS version, the bond name may be bond0, br0_up, or br0-up.

---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth2: enabled
active slave
may_enable: true

slave eth3: enabled
may_enable: true
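
If you are unsure which bond names your hosts use, you can list the bonds first (bond/list is a standard ovs-appctl subcommand):

nutanix@CVM$ hostssh ovs-appctl bond/list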

  • Active-Backup

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup

  • Balance-SLB

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb

nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000

The rebalance interval is specified in milliseconds, so 60000 rebalances flows every 60 seconds.

  • LACP & Link-Aggregation NB!! LACP must be set upstream and you may want to do this one via each hosts SSH session

root@AHV-HOST1# ovs-vsctl set port br0_up lacp=active

root@AHV-HOST1# ovs-vsctl set port br0_up bond_mode=balance-tcp

root@AHV-HOST1# ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
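
After the upstream switch ports are configured for LACP, you can verify the negotiation from any CVM; lacp_status in the bond output should read negotiated (assuming the default bond name br0_up):

nutanix@CVM$ hostssh ovs-appctl lacp/show br0_up

nutanix@CVM$ hostssh ovs-appctl bond/show br0_up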

Nutanix Cluster Re-IP

Warning: This procedure affects the operation of the entire Nutanix cluster. Schedule downtime before performing it.

1. Log on to the hypervisor with SSH (vSphere or AHV), a remote desktop connection (Hyper-V), or the IPMI remote console

2. Log on to any Controller VM
→ vSphere or AHV: root@host# ssh nutanix@192.168.5.254
→ Hyper-V: > ssh nutanix@192.168.5.254

3. Change CVM IP
Stop the Nutanix cluster
→ nutanix@cvm$ cluster stop
→ nutanix@cvm$ cluster reconfig
→ nutanix@cvm$ external_ip_reconfig
Follow the prompts to enter the new netmask, gateway, and external IP addresses

4. Shut down each Controller VM in the cluster
→ nutanix@cvm$ cvm_shutdown -P now

5. Change hypervisor IP
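For AHV, one common approach (the exact file may vary by AOS version) is to edit the bridge interface configuration on each host and update the IPADDR, NETMASK, and GATEWAY values:
→ root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0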

6. Change IPMI IP
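On most platforms this can be done from the AHV host with ipmitool; the LAN channel is usually 1 but varies by hardware, and the addresses below are examples only:
→ root@ahv# ipmitool lan set 1 ipaddr 10.0.0.50
→ root@ahv# ipmitool lan set 1 netmask 255.255.255.0
→ root@ahv# ipmitool lan set 1 defgw ipaddr 10.0.0.1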

7. Reboot the hypervisor

8. Log on to any Controller VM
→ nutanix@cvm$ cluster start
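Once the cluster is started, verify that every CVM reports all services as UP:
→ nutanix@cvm$ cluster status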

Configure AHV 1GBe Network Links

If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign guest VM interfaces to the network.

To configure 1 GbE connectivity for guest VMs, do the following:

  1. Log on to the AHV host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Determine the uplinks configured on the host.
    nutanix@cvm$ allssh manage_ovs show_uplinks

    Output similar to the following is displayed:

    Executing manage_ovs show_uplinks on the cluster
    ================== 192.0.2.49 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.50 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.51 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    
  4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond name are br0 and br0-up, respectively.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

    The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.

  5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1 (bridge names must not exceed 10 characters).
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'
  6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to a bond named br1-up.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up update_uplinks'
  7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
    nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
  8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on the guest VMs to the network. For information about assigning guest VM interfaces to a network, see “Creating a VM” in the Prism Web Console Guide.
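
If you prefer the CLI over the web console, the same assignment can be made with acli; the VM name myvm below is only an example:

    nutanix@cvm$ acli vm.nic_create myvm network=vlan10.br1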

Rename AHV Hosts

1. Log on to the AHV host with SSH
2. sudo nano /etc/sysconfig/network
3. Edit the following line

HOSTNAME=new_hostname_here.mycompany.com

Tip: use the FQDN that matches your DNS entries to avoid the NCC “FQDN lookup error”
4. Restart the Acropolis host
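
After the reboot, confirm that the new name took effect:

root@ahv# hostname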

Shutting down an AHV node in a Cluster

Before you begin

Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.

Caution: Shut down only one node at a time in each cluster. If more than one node must be shut down, shut down the entire cluster instead.

  1. If the Controller VM is running, shut down the Controller VM.
    1. Log on to the Controller VM with SSH.
    2. Put the node into maintenance mode.
      nutanix@cvm$ acli host.enter_maintenance_mode host_ID [wait="{ true | false }" ]

      Replace host_ID with the host ID of the host (you can find it with acli host.list).

      Specify wait=true to wait for the host evacuation attempt to finish.

    3. Shut down the Controller VM.
      nutanix@cvm$ cvm_shutdown -P now
  2. Log on to the Acropolis host with SSH.
  3. Shut down the host.
    root@ahv# shutdown -h now
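
When you power the node back on, the Controller VM should start automatically; once it is up, log on to any Controller VM and take the host out of maintenance mode again (same host ID as before):

    nutanix@cvm$ acli host.exit_maintenance_mode host_ID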