Lost / Forgotten CVM Password

Don’t worry: if you have forgotten, changed, or lost the CVM password and can no longer access the CLI, you can recover access to the CVM “nutanix” account, provided you still have working Prism user credentials

Simply put, by generating an SSH key pair on your workstation and adding the public key to the cluster via Prism, you set up passwordless SSH and regain access to the CVM

Once complete, you can SSH to the CVM from that same workstation and will be logged in directly, without being prompted for a password

Follow this article on how to accomplish this: KB-1895
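
As a minimal sketch, assuming a Linux or macOS workstation and a CVM at a hypothetical address of 10.0.0.51: generate a key pair, paste the public key into Prism (Settings → Cluster Lockdown → New Public Key), then SSH in.

your-workstation$ ssh-keygen -t rsa          # generate a key pair (defaults are fine)
your-workstation$ cat ~/.ssh/id_rsa.pub      # paste this public key into Prism
your-workstation$ ssh nutanix@10.0.0.51      # logs in without a password prompt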

Good luck, and don’t forget to change your password again to something you can remember!

Nutanix Image Service Template

Once your “template” VM has been powered off following sysprep, you will want to push the associated disk image to the Image Service for your cluster

Using an SSH client, execute the following:

> acli
<acropolis> image.create <IMAGE-NAME> clone_from_vmdisk=vm:<VM-NAME>:scsi.0 image_type=kDiskImage annotation="<IMAGE-ANNOTATION>"
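
For example, with a hypothetical sysprepped VM named WIN2019-GOLD:

<acropolis> image.create Win2019-Template clone_from_vmdisk=vm:WIN2019-GOLD:scsi.0 image_type=kDiskImage annotation="Windows Server 2019 gold image"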

Verify in Prism that the image is available and in an Active state.

Nutanix Node Stuck in Phoenix Bootloop

After trying to upgrade the BIOS and BMC through LCM, a node may fail to boot up and instead keep booting into Phoenix

It is advised to open a support ticket to establish why it happened, but the following command will reboot the node back into the host:

python /phoenix/reboot_to_host.py

You may then need to run the following on the CVM to exit maintenance mode.

First, find the host ID:

ncli host ls

The affected host will show the following:

Under Maintenance Mode    : true (Hypervisor upgrade)

Run the following against the host ID found above:

ncli host edit id=<host_id> enable-maintenance-mode=false
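
To confirm the host has exited maintenance mode, re-run the listing and check that the field now reads false:

nutanix@cvm$ ncli host ls | grep "Under Maintenance Mode"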

Nutanix Cluster Commands

  • Manual Cluster Creation

nutanix@cvm$ cluster --cluster_name=<cluster_name> --cluster_external_ip=<cluster_external_ip> --dns_servers=<dns_servers> --ntp_servers=<ntp_servers> --redundancy_factor=2 -s cvm1_IP,cvm2_IP,cvm3_IP create
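
A worked example with hypothetical names and addresses:

nutanix@cvm$ cluster --cluster_name=NTNX-LAB --cluster_external_ip=10.0.0.50 --dns_servers=10.0.0.10 --ntp_servers=10.0.0.10 --redundancy_factor=2 -s 10.0.0.51,10.0.0.52,10.0.0.53 create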

  • Change Cluster TimeZone

nutanix@cvm$ ncli cluster set-timezone timezone=choose_timezone
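
For example, to set a UK timezone (an illustrative value; any valid tz database name works):

nutanix@cvm$ ncli cluster set-timezone timezone=Europe/London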

  • Run Health Checks

nutanix@cvm$ ncc health_checks run_all
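
NCC can also be scoped to a single check module rather than the full suite, for example:

nutanix@cvm$ ncc health_checks network_checks run_all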

  • To check Controller VM (CVM) IP addresses on the Nutanix cluster

nutanix@cvm$ svmips

  • To check hypervisor IP addresses on the Nutanix cluster

nutanix@cvm$ hostips

  • To check IPMI IP addresses on the Nutanix cluster

nutanix@cvm$ ipmiips

AHV Load Balancing

View the bond mode and active interface with the following command, run from any CVM:

nutanix@CVM$ hostssh ovs-appctl bond/show

In the default configuration of active-backup, output will be similar to the following, where eth2 is the active and eth3 is the backup interface:

Note: your bond name may be bond0, br0_up, or br0-up.

---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth2: enabled
active slave
may_enable: true

slave eth3: enabled
may_enable: true

  • Active-Backup

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup

  • Balance-SLB

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb

nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000
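
The rebalance interval is specified in milliseconds, so 60000 moves flows between uplinks at most once every 60 seconds.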

  • LACP & Link Aggregation. NB: LACP must be configured upstream on the physical switch, and you may want to run this one from each host’s own SSH session:

root@AHV-HOST1$ ovs-vsctl set port br0_up lacp=active

root@AHV-HOST1$ ovs-vsctl set port br0_up bond_mode=balance-tcp

root@AHV-HOST1$ ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
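
To verify negotiation once LACP is enabled, the standard ovs-appctl commands can be run on each host:

root@AHV-HOST1$ ovs-appctl lacp/show br0_up
root@AHV-HOST1$ ovs-appctl bond/show br0_up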

Reset/Clear Nutanix Licensing violation

Too many times I have gone to apply licensing on a newly created cluster (or logged on to a client’s Nutanix deployment) only to be faced with a licensing violation warning

The command below, run from any CVM in the cluster or from the Prism Central CLI, will clear it

→ SSH to the cluster or Prism Central IP (depending on which one shows the message)

→ ncli license reset-license

Log out of the GUI and back in again

Nutanix vSphere Cluster Settings

The following list provides an overview of the settings to configure as best practice on a Nutanix ESXi deployment

vSphere HA Settings

  • Enable host monitoring
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster
  • Set the VM Restart Priority of all Controller VMs to Disabled
  • Set the Host Isolation Response of the cluster to Power Off
  • Set the Host Isolation Response of all Controller VMs to Disabled
  • Set the VM Monitoring for all Controller VMs to Disabled
  • Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore. I normally create two small heartbeat datastores, advertised as 10 GB capacity, and use these. However, if the cluster has only one datastore, add an advanced option named das.ignoreInsufficientHbDatastore with a Value of true

vSphere DRS Settings

  • Set the Automation Level on all Controller VMs to Disabled
  • Leave power management disabled (set to Off)

Other Cluster Settings

  • Store VM swapfiles in the same directory as the virtual machine
  • Enable EVC in the cluster

Disable SIOC on a Container

It is recommended to disable SIOC on Nutanix containers because this setting can cause the following issues.

  • If SIOC (or SIOC in statistics mode) is enabled, storage might become unavailable.
  • If SIOC is enabled and you are using the Metro Availability feature, you may face issues with activate and restore operations.
  • If SIOC in statistics mode is enabled, the hosts might repeatedly create and delete the access and .lck-XXXXXXXX files in the .iorm.sf directory in the root directory of the container.

Perform the following procedure to disable storage I/O statistics collection.

  1. Log into the vSphere Web Client.
  2. Click Storage.
  3. Navigate to the container for your cluster.
  4. Right-click the container and select Configure Storage I/O Controller.
    The properties for the container are displayed. The Disable Storage I/O statistics collection option is unchecked, which means that SIOC is enabled by default.
  5. Select the Disable Storage I/O statistics collection option to disable SIOC, and click OK.
  6. Select the Exclude I/O Statistics from SDRS option, and click OK.

Nutanix Cluster Re-IP

Warning: This procedure affects the operation of the Nutanix cluster. Schedule downtime before performing it.

1. Log on to the hypervisor with SSH (vSphere or AHV) or a remote desktop connection (Hyper-V), or via the IPMI remote console

2. Log on to any Controller VM
→ vSphere or AHV root@host# ssh nutanix@192.168.5.254
→ Hyper-V > ssh nutanix@192.168.5.254

3. Change CVM IP
Stop the Nutanix cluster
→ nutanix@cvm$ cluster stop
→ nutanix@cvm$ cluster reconfig
→ nutanix@cvm$ external_ip_reconfig
Follow the prompts to type the new netmask, gateway, and external IP addresses

4. Shut down each Controller VM in the cluster
→ nutanix@cvm$ cvm_shutdown -P now

5. Change hypervisor IP

6. Change IPMI IP

7. Reboot Hypervisor

8. Log on to any Controller VM and start the cluster
→ nutanix@cvm$ cluster start
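
Once services are back up, a quick sanity check from any CVM (using the commands covered earlier in this post) confirms the new addressing:

nutanix@cvm$ cluster status
nutanix@cvm$ svmips
nutanix@cvm$ hostips
nutanix@cvm$ ipmiips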

Configure AHV 1 GbE Network Links

If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign guest VM interfaces to the network.

To configure 1 GbE connectivity for guest VMs, do the following:

  1. Log on to the AHV host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Determine the uplinks configured on the host.
    nutanix@cvm$ allssh manage_ovs show_uplinks

    Output similar to the following is displayed:

    Executing manage_ovs show_uplinks on the cluster
    ================== 192.0.2.49 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.50 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.51 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    
  4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond name are br0 and br0-up, respectively.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

    The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.

  5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1 (bridge names must not exceed 10 characters).
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'
  6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to a bond named br1-up.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up update_uplinks'
  7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
    nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
  8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on the guest VMs to the network. For information about assigning guest VM interfaces to a network, see “Creating a VM” in the Prism Web Console Guide.
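
As a final check, confirm the new network and uplinks from any CVM using the commands already shown above:

nutanix@cvm$ acli net.list
nutanix@cvm$ allssh manage_ovs show_uplinks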