Proxmox Windows VM Install

Prepare

To obtain a good level of performance, we will install the Windows VirtIO Drivers during the Windows installation.

  • Create a new VM, select “Microsoft Windows 10/2016/2019” as Guest OS and enable the “Qemu Agent” in the System tab. Continue and mount your Windows 10 ISO in the CDROM drive
  • For your virtual hard disk select “SCSI” as bus with “VirtIO SCSI” as controller. Set “Write back” as cache option for best performance (the “No cache” default is safer, but slower) and tick “Discard” to optimally use disk space (TRIM).
  • Configure your memory settings as needed, continue and set “VirtIO (paravirtualized)” as network device, finish your VM creation.
  • For the VirtIO drivers, upload the driver ISO (use the stable virtio-win ISO from the Fedora project) to your storage, create a new CDROM drive (use “Add -> CD/DVD drive” in the hardware tab) with Bus “IDE” and number 3. Load the VirtIO drivers ISO in the new virtual CDROM drive.
  • Now you’re ready to start the VM and follow the Windows installer (or see the CLI sketch below for an equivalent setup).
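
If you prefer the Proxmox shell over the wizard, roughly the same VM can be created with qm. This is only a sketch: the VM ID, storage (“local-lvm”), bridge, disk size and ISO names are assumptions, so adjust them to your setup.

root@proxmox:~# qm create 100 --name win10 --ostype win10 --memory 8192 \
  --agent 1 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:64,cache=writeback,discard=on \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/Win10.iso,media=cdrom \
  --ide3 local:iso/virtio-win.iso,media=cdrom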

Launch Windows install

  • After starting your VM, launch the noVNC console
  • Follow the installer steps until you reach the installation type selection where you need to select “Custom (advanced)”
  • Now click “Load driver” to install the VirtIO drivers for hard disk and the network.
    • Hard disk: Browse to the CD drive where you mounted the VirtIO driver and select folder “\amd64\xxx” and confirm. Select the “Red Hat VirtIO SCSI pass-through controller” and click next to install it. Now you should see your drive.
    • Network: Repeat the steps from above (click again “Load driver”, etc.), select the folder “NetKVM\w10\amd64”, confirm it, select “Red Hat VirtIO Ethernet Adapter” and click next.
    • Memory Ballooning: Again, repeat the steps but this time select the “Balloon\w10\amd64” folder, then the “VirtIO Balloon Driver”, and install it by clicking next. With these three drivers you should be well covered to run a fast virtualized Windows 10 system.
  • Choose the drive and continue the Windows installer steps (see the note below on the guest agent once Windows is installed).
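
Once Windows is installed, it is also worth installing the QEMU guest agent from the same VirtIO CD so that the “Qemu Agent” option enabled earlier is actually used. On recent virtio-win ISOs the installer sits in the guest-agent folder; the drive letter below is an assumption:

msiexec /i E:\guest-agent\qemu-ga-x86_64.msi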

Proxmox PCIe passthrough

I need to pass a Google Coral mPCIe device through to my Ubuntu 20.04+ VM for TensorFlow processing; the VM runs Frigate in a Docker container.

The motherboard I am using is Intel based and does not have an M.2 port, but it does have an mPCIe slot that was occupied by the WiFi/BT card. Since WiFi and Bluetooth are not used in my virtual setup, I swapped that card out for the mPCIe Google Coral adapter.

Note: A USB version is available but has a high price tag, especially when ordering 4 units.

So, here is how I enabled IOMMU and passed the device through to my Ubuntu VM.

Enable IOMMU in Proxmox

Open your bootloader kernel command line config file

For GRUB & Intel CPUs:

nano /etc/default/grub

Replace the “GRUB_CMDLINE_LINUX_DEFAULT” line with:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Run update-grub to apply the change, then reboot.
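
Note: the steps above are for hosts that boot with GRUB. If your Proxmox host boots via systemd-boot (for example a ZFS-on-UEFI install), the kernel options live in /etc/kernel/cmdline instead; append intel_iommu=on to that line and apply it with the following before rebooting (treat this as a pointer and check the Proxmox docs for your version):

root@proxmox:~# proxmox-boot-tool refresh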

Verify IOMMU is enabled

Run

dmesg | grep -e DMAR -e IOMMU

There should be a line that looks like “DMAR: IOMMU enabled”. If there is no output, something is wrong

Check that the PCI device is detected

root@proxmox:~# lspci -nnk | grep 089a
02:00.0 System peripheral [0880]: Device [1ac1:089a]
        Subsystem: Device [1ac1:089a]

The 02:00.0 is the important part. This will change based on your system and/or the PCIe port used.

Make sure 02:00.0 is in its own IOMMU group. If you don’t get anything returned, you may need to go back and check that IOMMU is enabled.

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l

/sys/kernel/iommu_groups/34/devices/0000:7f:0a.1
/sys/kernel/iommu_groups/34/devices/0000:7f:0a.2
/sys/kernel/iommu_groups/6/devices/0000:00:04.0
/sys/kernel/iommu_groups/62/devices/0000:80:04.2
/sys/kernel/iommu_groups/90/devices/0000:ff:10.1
/sys/kernel/iommu_groups/24/devices/0000:02:00.0

This will show a long list, but you can grep for the device bus ID if you know it and then grep again for the IOMMU group – commands below šŸ˜‰

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep 02:

/sys/kernel/iommu_groups/24/devices/0000:02:00.0

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep /24/

/sys/kernel/iommu_groups/24/devices/0000:02:00.0

Blacklist the Device

Create a new file at /etc/modprobe.d/blacklist-apex.conf and add these lines:

root@proxmox:~# nano /etc/modprobe.d/blacklist-apex.conf
blacklist gasket
blacklist apex
options vfio-pci ids=1ac1:089a

Save the file and then run the following to apply the changes:

root@proxmox:~# update-initramfs -u -k all

The command above will take a while to run. Once completed, restart Proxmox.

Verify that the apex and gasket modules did not load by running this:

root@proxmox:~# lsmod | grep apex

It should not return anything
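
You can also confirm that vfio-pci has claimed the device (the bus ID is the one from my system):

root@proxmox:~# lspci -nnk -s 02:00.0

The output should include a line like “Kernel driver in use: vfio-pci”. If it does not, you may additionally need to list the vfio modules (vfio, vfio_iommu_type1, vfio_pci, vfio_virqfd) in /etc/modules and re-run update-initramfs; check the Proxmox PCIe passthrough documentation for your release.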

Add PCIe Device to VM

Once rebooted and you have confirmed everything is in order as above, stop the VM and add the PCI device to it (where 100 is the VM ID in Proxmox):

root@proxmox:~# qm set 100 -hostpci0 02:00.0
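
Once the VM boots, a quick sanity check from inside the Ubuntu guest (before installing the gasket/apex drivers there) is to look for the same vendor:device ID:

user@ubuntu-vm:~$ lspci -nn | grep 089a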

My Home Lab

I decided to move away from my trusty XenServer home lab to Proxmox and will be documenting the various steps taken, including some hints & tips I have picked up along the way. Check them out here.

But first, here’s a little about my home lab setup.

Current Build

Hosts / Nodes

Networking

  • Internet
    • Static IP broadband – it can be helpful
    • Double-hop firewalls: ISP + pfSense firewall
  • Switches
  • Mesh WiFi system

Virtualisation

  • Proxmox Virtual Environment 6.4

Other Hardware

  • 16 U Server Rack on wheels
  • 2 x UPS
  • 2 x PDU
  • 4 port KVM Switch

What’s next?

I have recently come across a project (Pimox) that is a port of Proxmox to the Raspberry Pi 4, so I plan to build a single Proxmox node first and, if that works, to try a 3-node Proxmox cluster configured with Ceph, with each node having the following spec:

  • 1 x Raspberry Pi 4 8GB
    • 32GB Micro SD
    • 256GB USB 3.0 SSD

“Why” I hear you ask
First, Because I can
Second, For the fun of it and to tinker – Its in my nature
Third, I would like to run my Home Assistant and various home automation “services” in a cluster rather than on a single Raspberry Pi4

Again I hear you say, “why not use a Docker swarm?”
Well, that’s easy to answer. It’s been done and is well documented, so it’s not as much fun to learn and do šŸ™‚

Lost / Forgotten CVM Password

Don’t worry: if you have forgotten, changed or lost the CVM password and are not able to access the CLI, you can recover/reset your CVM “nutanix” password, but you still need the Prism user & password credentials.

Simply put, by generating SSH keys on your workstation and adding your public key to the cluster via Prism, you can set up passwordless SSH and regain access to the CVM.

Once complete, you can log in to the CVM from the same workstation without being prompted for a password; it will log you in directly.
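
As a rough sketch of the workstation side (the key type and CVM IP are just examples), generate a key pair, add the contents of the .pub file to the cluster in Prism under the Cluster Lockdown settings, then SSH in:

workstation$ ssh-keygen -t rsa -b 4096
workstation$ cat ~/.ssh/id_rsa.pub
workstation$ ssh nutanix@<CVM_IP>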

Follow this article on how to accomplish this: KB-1895

Good luck, and don’t forget to change your password again to something you can remember!

Nutanix Image Service Template

Once your “template” VM has been powered off following sysprep, you will want to push the associated disk image to the Image Service for your cluster.

Using an SSH client, execute the following:

> acli
<acropolis> image.create <IMAGE-NAME> clone_from_vmdisk=vm:<VM-NAME>:scsi.0 image_type=kDiskImage annotation="<IMAGE-ANNOTATION>"
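
For example, with placeholder names purely for illustration:

<acropolis> image.create Win10-Template clone_from_vmdisk=vm:Win10-Gold:scsi.0 image_type=kDiskImage annotation="Sysprepped Windows 10 template"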

Verify in Prism that the image is available and in an Active state.

Nutanix Node stuck in Phoenix bootloop

After trying to upgrade the BIOS and BMC through LCM, a node is not booting up and keeps booting into Phoenix.

It is advised to open a support ticket to establish why it happened, but you can run the following command to boot back to the host.

Run the following command

python /phoenix/reboot_to_host.py

You may then need to run the following on the CVM to exit maintenance mode.

First find the host ID

ncli host ls

You will see a host with the following:

Under Maintenance Mode    : true (Hypervisor upgrade)

Run the following against the above host ID

ncli host edit id=<host_id> enable-maintenance-mode=false
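
You can then re-run ncli host ls; the same host should now report:

Under Maintenance Mode    : false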

Nutanix Cluster Commands

  • Manual Cluster Creation

nutanix@cvm$ cluster --cluster_name= --cluster_external_ip= --dns_servers= --ntp_servers= --redundancy_factor=2 -s cvm1_IP,cvm2_IP,cvm3_IP create
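
For example, a three-node cluster might be created like this (all names and IPs below are made up):

nutanix@cvm$ cluster --cluster_name=lab-cluster --cluster_external_ip=192.168.1.50 --dns_servers=192.168.1.1 --ntp_servers=192.168.1.1 --redundancy_factor=2 -s 192.168.1.51,192.168.1.52,192.168.1.53 create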

  • Change Cluster TimeZone

nutanix@cvm$ ncli cluster set-timezone timezone=choose_timezone

  • Run Health Checks

nutanix@cvm$ ncc health_checks run_all

  • To check Controller VM (CVM) IP addresses on a Nutanix cluster

nutanix@cvm$ svmips

  • To check hypervisor IP addresses on a Nutanix cluster

nutanix@cvm$ hostips

  • To check IPMI IP addresses on a Nutanix cluster

nutanix@cvm$ ipmiips

AHV Load Balancing

View the bond mode and active interface with the following AHV command from any CVM:

nutanix@CVM$ hostssh ovs-appctl bond/show

In the default configuration of active-backup, the output will be similar to the following, where eth2 is the active and eth3 is the backup interface:

Note: your bond name may be bond0, br0_up or br0-up.

---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth2: enabled
active slave
may_enable: true

slave eth3: enabled
may_enable: true

  • Active-Backup

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup

  • Balance-SLB

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb

nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000

  • LACP & Link Aggregation. NB: LACP must also be configured on the upstream switch, and you may want to apply this one via each host’s own SSH session.

root@AHV-HOST1# ovs-vsctl set port br0_up lacp=active

root@AHV-HOST1# ovs-vsctl set port br0_up bond_mode=balance-tcp

root@AHV-HOST1# ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
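
After changing the bond mode, you can re-run the bond/show command from the top of this section to confirm the new mode. For LACP specifically, the negotiation status can also be checked (the same caveat about the bond name applies):

nutanix@CVM$ hostssh ovs-appctl lacp/show

If LACP has negotiated correctly with the upstream switch, lacp_status in bond/show should report negotiated rather than off.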

Import a GPO from a File

Import a GPO from a file using Advanced Group Policy Management (AGPM).

Download AGPM: https://www.microsoft.com/en-us/download/details.aspx?id=54967

  1. In the Group Policy Management Console tree, click Change Control in the domain to which you want to import policy settings
  2. On the Contents tab, click the Controlled tab to display the controlled GPOs
  3. Check out the destination GPO to which you want to import policy settings
  4. Right-click the destination GPO, point to Import from, and then click File
  5. Follow the instructions in the Import Settings Wizard to select a GPO backup, import its policy settings to replace those in the destination GPO, and enter a comment for the audit trail of the destination GPO. By default, the destination GPO is checked in when the wizard is finished
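
If you only need to import a backed-up GPO and are not using the AGPM change-control workflow, the standard GroupPolicy PowerShell module offers a rough equivalent; the GPO name and backup path below are assumptions:

PS C:\> Import-GPO -BackupGpoName "Baseline-Workstation" -TargetName "Baseline-Workstation" -Path "C:\GPOBackups" -CreateIfNeeded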

Additional considerations

  • By default, you must be an Editor or an AGPM Administrator (Full Control) to perform this procedure. Specifically, you must have List Contents, Edit Settings, and Import GPO permissions for the domain, and the GPO must be checked out by you
  • Although an Editor cannot import policy settings into a new GPO during its creation, an Editor can request the creation of a new GPO and then import policy settings into it after it is created