If you somehow managed to lock a local Prism Element or Prism Central user account, do the following to unlock it
SSH to a CVM or PCVM with the "nutanix" credentials and enter the following
allssh sudo faillock --user <username_to_unlock> --reset
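If you want to confirm the lockout first, running faillock without --reset simply lists the recorded failures for that user (the "admin" account below is only an example):
allssh sudo faillock --user admin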
Don’t worry: if you have forgotten, changed or lost the CVM password and cannot access the CLI, you can recover/reset your CVM “nutanix” password, but you still need working Prism user credentials
Simply put, by generating SSH keys on your workstation and adding your public key to the cluster via Prism, you set up passwordless SSH and regain access to the CVM
Once complete, you can log in to the CVM from that same workstation and will be logged in directly without being prompted for a password
Follow this article on how to accomplish this: KB-1895
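As a rough sketch of the workstation side (the key type, file paths, and Prism menu location here are typical defaults, not something mandated by the KB):
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub
Paste the public key output into Prism under Settings > Cluster Lockdown > New Public Key, then test from the same workstation with:
ssh nutanix@<CVM_IP>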
Good luck & don’t forget to change your password again to something you can remember!
Once your “template” VM has been powered off following sysprep, you will want to push the associated disk image to the Image Service for your cluster
Using an SSH client, execute the following:
> acli
<acropolis> image.create <IMAGE-NAME> clone_from_vmdisk=vm:<VM-NAME>:scsi.0 image_type=kDiskImage annotation="<IMAGE-ANNOTATION>"
Verify in Prism that the image is available and in an Active state.
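If you would rather double-check from the CLI, the same acli shell can list the registered images:
<acropolis> image.list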
After trying to upgrade BIOS and BMC through LCM, a node is not booting up and keeps booting into Phoenix
It is advised to open a support ticket to determine why it happened, but you can run the following command to boot back into the host
Run the following command
python /phoenix/reboot_to_host.py
You may then need to run the following on the CVM to exit maintenance
First find the host ID
ncli host ls
You will see a host with the following:
Under Maintenance Mode : true (Hypervisor upgrade)
Run the following against the above host ID
ncli host edit id=<host_id> enable-maintenance-mode=false
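To confirm the host is out of maintenance, list the hosts again and check that the flag has flipped:
ncli host ls
Under Maintenance Mode : false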
nutanix@cvm$ cluster --cluster_name= --cluster_external_ip= --dns_servers= --ntp_servers= --redundancy_factor=2 -s cvm1_IP,cvm2_IP,cvm3_IP create
nutanix@cvm$ ncli cluster set-timezone timezone=choose_timezone
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ svmips
nutanix@cvm$ hostips
nutanix@cvm$ ipmiips
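For illustration only (every name and address below is made up), a three-node create might look like this:
nutanix@cvm$ cluster --cluster_name=DemoCluster --cluster_external_ip=10.0.0.10 --dns_servers=10.0.0.2 --ntp_servers=10.0.0.3 --redundancy_factor=2 -s 10.0.0.11,10.0.0.12,10.0.0.13 create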
View the bond mode and active interface with the following AHV command from any CVM
nutanix@CVM$ hostssh ovs-appctl bond/show
In the default configuration of active-backup, output will be similar to the following, where eth2 is the active and eth3 is the backup interface:
Note!! Your bond name may be bond0, br0_up or br0-up
---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
slave eth2: enabled
active slave
may_enable: true
slave eth3: enabled
may_enable: true
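If you also want to see which physical NICs make up each bridge and bond, the manage_ovs helper run from the CVM shows the uplink layout (output varies by AOS version):
nutanix@CVM$ allssh manage_ovs show_uplinks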
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb
nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000
nutanix@CVM$ hostssh ovs-vsctl set port br0_up lacp=active
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-tcp
nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
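After switching modes, re-check the bond, and for LACP you can also query the negotiation state (the bond name br0_up is assumed here, adjust to yours):
nutanix@CVM$ hostssh ovs-appctl bond/show br0_up
nutanix@CVM$ hostssh ovs-appctl lacp/show br0_up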
Too many times I have gone to do licensing in a newly created cluster (or logged onto a client’s Nutanix deployment) to be faced with the following
The command below, run from any CVM in the cluster or from the Prism Central CLI, will clear it
--> SSH to the cluster or Prism Central IP (depending on which one shows the message)
--> ncli license reset-license
Log out of the GUI and back in again
It is recommended to disable SIOC on Nutanix datastores because this setting can cause issues.
Perform the following procedure to disable storage I/O statistics collection.
Warning: This procedure affects the operation of the Nutanix cluster. Schedule downtime before performing it
1. Log on to the hypervisor with SSH (vSphere or AHV), a remote desktop connection (Hyper-V), or the IPMI remote console
2. Log on to any Controller VM
→ vSphere or AHV root@host# ssh nutanix@192.168.5.254
→ Hyper-V > ssh nutanix@192.168.5.254
3. Change CVM IP
Stop the Nutanix cluster
→ nutanix@cvm$ cluster stop
→ nutanix@cvm$ cluster reconfig
→ nutanix@cvm$ external_ip_reconfig
Follow the prompts to type the new netmask, gateway, and external IP addresses
4. Shut down each Controller VM in the cluster
→ nutanix@cvm$ cvm_shutdown -P now
5. Change hypervisor IP
6. Change IPMI IP
7. Reboot Hypervisor
8. Log on to any Controller VM and start the cluster
→ nutanix@cvm$ cluster start
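Once the cluster is back up, confirm that all services report UP on every CVM before handing the environment back:
→ nutanix@cvm$ cluster status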