Proxmox Windows VM Install

Prepare

To obtain a good level of performance, we will install the Windows VirtIO Drivers during the Windows installation.

  • Create a new VM, select “Microsoft Windows 10/2016/2019” as Guest OS and enable the “Qemu Agent” in the System tab. Continue and mount your Windows 10 ISO in the CDROM drive
  • For your virtual hard disk select “SCSI” as bus with “VirtIO SCSI” as controller. Set “Write back” as cache option for best performance (the “No cache” default is safer, but slower) and tick “Discard” to optimally use disk space (TRIM).
  • Configure your memory settings as needed, continue and set “VirtIO (paravirtualized)” as network device, finish your VM creation.
  • For the VirtIO drivers, download the stable VirtIO driver ISO and upload it to your storage, then create a new CDROM drive (use “Add -> CD/DVD drive” in the hardware tab) with Bus “IDE” and number 3. Load the VirtIO drivers ISO into the new virtual CDROM drive.
  • Now you’re ready to start the VM; just follow the Windows installer (the same VM can also be created from the CLI, as sketched below).
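For reference, here is a qm create sketch of the settings above. The VM ID, storage names (local-lvm, local) and ISO file names are placeholders and will differ on your system:

root@proxmox:~# qm create 100 --name win10 --ostype win10 --agent 1 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:64,cache=writeback,discard=on \
    --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0 \
    --ide2 local:iso/Win10.iso,media=cdrom \
    --ide3 local:iso/virtio-win.iso,media=cdrom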

Launch Windows install

  • After starting your VM launch the noVNC console
  • Follow the installer steps until you reach the installation type selection where you need to select “Custom (advanced)”
  • Now click “Load driver” to install the VirtIO drivers for hard disk and the network.
    • Hard disk: Browse to the CD drive where you mounted the VirtIO driver and select folder “\amd64\xxx” and confirm. Select the “Red Hat VirtIO SCSI pass-through controller” and click next to install it. Now you should see your drive.
    • Network: Repeat the steps from above (click again “Load driver”, etc.) and select the folder “NetKVM\w10\amd64”, confirm it and select “Redhat VirtIO Ethernet Adapter” and click next.
    • Memory Ballooning: Again, repeat the steps, but this time select the “Balloon\w10\amd64” folder, then the “VirtIO Balloon Driver”, and install it by clicking next. With these three drivers you should be well covered to run a fast virtualized Windows 10 system.
  • Choose the drive and continue the Windows installer steps.

Proxmox PCIe passthrough

I need to pass a Google Coral mPCIe device through to my Ubuntu 20.04+ VM, where a Docker container running Frigate uses it for TensorFlow processing.

The motherboard I am using is Intel based and does not have an M.2 port, but it does have an mPCIe slot that held the WiFi/BT card. Since WiFi and Bluetooth are not used in my virtual setup, I swapped that card out for the mPCIe Google Coral adapter.

Note: A USB version of the Coral is available, but it has a high price tag, especially when ordering 4 units.

So, here is how I enabled passthrough and passed the device through to my Ubuntu VM.

Enable IOMMU in Proxmox

Open your bootloader kernel command line config file

For GRUB & Intel CPUs:

nano /etc/default/grub

Replace the “GRUB_CMDLINE_LINUX_DEFAULT” with

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
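Save the file and regenerate the GRUB config so the new kernel command line is actually applied (standard GRUB tooling on a Debian-based Proxmox install):

update-grub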

Reboot

Verify IOMMU is enabled

Run

dmesg | grep -e DMAR -e IOMMU

There should be a line that looks like “DMAR: IOMMU enabled”. If there is no output, something is wrong

Check for PCI device being detected

root@proxmox:~# lspci -nnk | grep 089a
02:00.0 System peripheral [0880]: Device [1ac1:089a]
        Subsystem: Device [1ac1:089a]

The 02:00.0 is the important part. This will change based upon your system and/or the PCIe port used.

Make sure 02:00.0 is in its own IOMMU group. If you don’t get anything returned, you may need to go back and check that IOMMU is enabled.

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l

/sys/kernel/iommu_groups/34/devices/0000:7f:0a.1
/sys/kernel/iommu_groups/34/devices/0000:7f:0a.2
/sys/kernel/iommu_groups/6/devices/0000:00:04.0
/sys/kernel/iommu_groups/62/devices/0000:80:04.2
/sys/kernel/iommu_groups/90/devices/0000:ff:10.1
/sys/kernel/iommu_groups/24/devices/0000:02:00.0

This will show a long list, but you can grep for the device bus ID if you know it and then grep again for the IOMMU group – commands below 😉

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep 02:

/sys/kernel/iommu_groups/24/devices/0000:02:00.0

root@proxmox:~# find /sys/kernel/iommu_groups/ -type l | grep /24/

/sys/kernel/iommu_groups/24/devices/0000:02:00.0
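If you would rather see every IOMMU group with a readable device name in one pass, a small shell loop over sysfs works as well (this assumes the standard /sys/kernel/iommu_groups layout used above):

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "  $(lspci -nns ${d##*/})"
    done
done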

Blacklist the Device

Create a new file at /etc/modprobe.d/blacklist-apex.conf and add these lines:

root@proxmox:~# nano /etc/modprobe.d/blacklist-apex.conf
blacklist gasket
blacklist apex
options vfio-pci ids=1ac1:089a

Save the file and then run the following to apply the changes:

root@proxmox:~# update-initramfs -u -k all

The command above will take a while to run. Once it has completed, restart Proxmox.

Verify that the apex and gasket modules did not load by running this:

root@proxmox:~# lsmod | grep apex

It should not return anything
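You can also check which driver (if any) has claimed the device. With the blacklist in place the apex driver should no longer bind to it, and if vfio-pci is loaded with the ids option above it should appear as the “Kernel driver in use”:

root@proxmox:~# lspci -nnk -s 02:00.0

If vfio-pci does not claim the device, the vfio modules may simply not be loading at boot on your kernel; a common fix is to list vfio, vfio_iommu_type1 and vfio_pci in /etc/modules and run update-initramfs -u -k all again.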

Add PCIe Device to VM

Once the host has rebooted and everything is confirmed to be in order as above, stop the VM and add the PCI device to it (where 100 is the VM ID in Proxmox):

root@proxmox:~# qm set 100 -hostpci0 02:00.0
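To double-check, qm config should now show a hostpci0 entry for the VM (the exact value may include the 0000: domain prefix depending on your Proxmox version):

root@proxmox:~# qm config 100 | grep hostpci

If the VM uses the q35 machine type, you can also pass the card as a native PCIe device by appending pcie=1 (i.e. -hostpci0 02:00.0,pcie=1).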