If you somehow managed to lock a local Prism Element or Prism Central user account, do the following to unlock it.
SSH to a CVM or PCVM with the “nutanix” credentials and enter the following:
allssh sudo faillock --user <username_to_unlock> --reset
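If you first want to see the failed login attempts that caused the lockout before clearing them, you can run the same command without the reset flag (the username below is only an example):
allssh sudo faillock --user admin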
To obtain a good level of performance, we will install the Windows VirtIO Drivers during the Windows installation.
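One way to make the drivers available to the Windows installer on Proxmox (a sketch only; the VM ID 100, the “local” storage name and the ISO filename are assumptions) is to attach the virtio-win ISO as a second CD-ROM drive before starting the install:
qm set 100 -ide2 local:iso/virtio-win.iso,media=cdrom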
I need to pass a Google Coral mPCIe device through to my Ubuntu 20.04+ VM, which runs Frigate in a Docker container, for TensorFlow processing.
The motherboard I am using is Intel based and does not have an M.2 port, but it does have an mPCIe slot that held the WiFi/BT card. Since WiFi/BT is not used in my virtual setup, I swapped it out for the mPCIe Google Coral adapter.
Note: a USB version is available, but it has a high price tag, especially when ordering 4 units.
So, here is how I enabled the device and passed it through to my Ubuntu VM.
Enable IOMMU in Proxmox
Open your bootloader kernel command line config file
For GRUB & Intel CPUs:
nano /etc/default/grub
Replace the “GRUB_CMDLINE_LINUX_DEFAULT” line with:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
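If you are booting via GRUB, apply the change by regenerating the GRUB config before rebooting:
root@proxmox:~# update-grub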
Reboot
Verify IOMMU is enabled
Run
dmesg | grep -e DMAR -e IOMMU
There should be a line that looks like “DMAR: IOMMU enabled”. If there is no output, something is wrong
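As an optional extra check (not required for the rest of this guide), you can also confirm that interrupt remapping is enabled; you should see a line mentioning IRQ remapping:
dmesg | grep 'remapping'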
Check for PCI device being detected
root@proxmox:~# lspci -nnk | grep 089a
02:00.0 System peripheral [0880]: Device [1ac1:089a]
        Subsystem: Device [1ac1:089a]
The 02:00.0 is the important part. This will change based upon your system and/or the PCIe port used.
Make sure 02:00.0 is in its own IOMMU group. If nothing is returned, you may need to go back and check that IOMMU is enabled.
root@proxmox:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/34/devices/0000:7f:0a.1
/sys/kernel/iommu_groups/34/devices/0000:7f:0a.2
/sys/kernel/iommu_groups/6/devices/0000:00:04.0
/sys/kernel/iommu_groups/62/devices/0000:80:04.2
/sys/kernel/iommu_groups/90/devices/0000:ff:10.1
/sys/kernel/iommu_groups/24/devices/0000:02:00.0
This will show a long list, but you can grep for the device bus ID if you know it, and then grep again for the IOMMU group. Commands below:
root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep 02:
/sys/kernel/iommu_groups/24/devices/0000:02:00.0
root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep /24/
/sys/kernel/iommu_groups/24/devices/0000:02:00.0
Blacklist the Device
Create a new file at /etc/modprobe.d/blacklist-apex.conf and add these lines:
root@proxmox:~# nano /etc/modprobe.d/blacklist-apex.conf
blacklist gasket
blacklist apex
options vfio-pci ids=1ac1:089a
Save the file and then run the following to apply the changes:
root@proxmox:~# update-initramfs -u -k all
The command above will take a while to run. Once completed, restart Proxmox.
Verify that the apex and gasket modules did not load by running this:
root@proxmox:~# lsmod | grep apex
It should not return anything
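As a further sanity check (assuming the vfio-pci ids option has taken effect), you can look at which driver, if any, has claimed the device at 02:00.0:
root@proxmox:~# lspci -nnk -s 02:00.0
The “Kernel driver in use” line should no longer show apex or gasket; depending on when vfio-pci loads, it may show vfio-pci or nothing at all.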
Add PCIe Device to VM
Once rebooted and you have confirmed everything is in order as above,
stop the VM and add the PCI device to it (where 100 is the VM ID in Proxmox):
root@proxmox:~# qm set 100 -hostpci0 02:00.0
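Once the VM is started, a quick check from inside the Ubuntu guest (a sketch; the bus address in the guest will differ from the host's 02:00.0) is to look for the same 1ac1:089a IDs:
lspci -nn | grep 1ac1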
I decided to move away from my trusty XenServer homelab to Proxmox and will be documenting the various steps taken including some hints & tips I have picked up along the way. Check them out here
But first, here’s a little about my home lab setup.
I have recently come across a project (pimox) that is a port of Proxmox to support the Raspberry Pi 4, so I plan to build a single Proxmox node and, if it works, to try a 3-node Proxmox cluster configured with Ceph, with each node having the following spec
“Why?” I hear you ask.
First, because I can.
Second, for the fun of it and to tinker; it’s in my nature.
Third, I would like to run my Home Assistant and various home automation “services” in a cluster rather than on a single Raspberry Pi 4.
Again I hear you say, “why not use a Docker swarm?”
Well, that’s easy to answer. It’s been done and is well documented, so it’s not as much fun to learn and do.
Don’t worry: if you have forgotten, changed or lost the CVM password and are not able to access the CLI, you can recover/reset your CVM “nutanix” password, but you still need the Prism user & password credentials.
Simply put, by generating SSH keys on your workstation and adding your public key to the cluster via Prism (passwordless SSH), you can regain access to the CVM via SSH.
Once complete, you can log in to the CVM from the same workstation and will not be prompted for the password; you will be logged in directly.
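As a minimal sketch of the workflow (the key type, file paths and CVM IP below are just placeholders), from your workstation:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub
ssh nutanix@<CVM_IP>
Generate the key pair, add the contents of the public key to the cluster in Prism (typically under Cluster Lockdown), and the SSH login should then succeed without prompting for the password.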
Follow this article on how to accomplish this: KB-1895
Good luck, and don’t forget to change your password again to something you can remember!
Once your “template” VM has been powered off following sysprep, you will want to push the associated disk image to the Image Service for the cluster.
Using an SSH client, execute the following:
nutanix@cvm$ acli image.create <IMAGE-NAME> clone_from_vmdisk=vm:<VM-NAME>:scsi.0 image_type=kDiskImage annotation="<IMAGE-ANNOTATION>"
Verify in Prism that the image is available and in an Active state.
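You can also list the images from the CLI to confirm the new image exists:
nutanix@cvm$ acli image.list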
After trying to upgrade the BIOS and BMC through LCM, a node is not booting up and keeps booting into Phoenix.
It is advised to open a support ticket to investigate why it happened, but you can run the following command to boot back into the host.
Run the following command (from within Phoenix):
python /phoenix/reboot_to_host.py
You may then need to run the following on the CVM to exit maintenance mode.
First, find the host ID:
ncli host ls
You will see a host with the following:
Under Maintenance Mode : true (Hypervisor upgrade)
Run the following against the above host ID
ncli host edit id=<host_id> enable-maintenance-mode=false
nutanix@cvm$ cluster --cluster_name= --cluster_external_ip= --dns_servers= --ntp_servers= --redundancy_factor=2 -s cvm1_IP,cvm2_IP,cvm3_IP create
nutanix@cvm$ ncli cluster set-timezone timezone=choose_timezone
nutanix@cvm$ ncc health_checks run_all
nutanix@cvm$ svmips
nutanix@cvm$ hostips
nutanix@cvm$ ipmiips
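Once the cluster has been created, you can confirm that all services have come up with:
nutanix@cvm$ cluster status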
View the bond mode and active interface with the following AHV command from any CVM
nutanix@CVM$ hostssh ovs-appctl bond/show
In the default configuration of active-backup, output will be similar to the following, where eth2 is the active and eth3 is the backup interface:
Note: your bond name may be bond0, br0_up or br0-up.
---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
slave eth2: enabled
active slave
may_enable: true
slave eth3: enabled
may_enable: true
To change the bond mode, use the relevant commands below (active-backup, balance-slb, or LACP/balance-tcp):
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb
nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000
nutanix@CVM$ hostssh ovs-vsctl set port br0_up lacp=active
nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-tcp
nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
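After changing the bond mode, you can re-check it with the same bond/show command used earlier; for LACP, ovs-appctl can also report the negotiation state (these are expected checks, not output captured from my cluster):
nutanix@CVM$ hostssh ovs-appctl bond/show
nutanix@CVM$ hostssh ovs-appctl lacp/show br0_up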
Import a GPO from a file using Advanced Group Policy Management (AGPM)
Download AGPM: https://www.microsoft.com/en-us/download/details.aspx?id=54967