{{DISPLAYTITLE:{{TitleIcon|arch=true}} KVM on Arch Linux}}<metadesc>How to automate VMs using QEMU KVM with libvirt, Packer and Vagrant on Arch Linux.</metadesc>
<div id="tocalign">__TOC__</div>
{{UnderConstruction}}
{{Back|Arch Linux}}
{{SeeAlso|ArchLinux:OVH|OVH Custom Installation}}
{{GitLab|[https://gitlab.com/kyaulabs/aarch kyaulabs/aarch]: Automated Arch Linux installer.}}
{{Warning|This page has not been updated since the creation of AArch and its included packages. Therefore it is possible that some or all of the following information is out of date.}}
= {{Icon24|sitemap}} Introduction =
This is a tutorial on how to automate the setup of VMs using KVM on Arch Linux. It utilizes [//www.qemu.org QEMU] as a back-end for KVM using [//libvirt.org libvirt]. System base images will be generated using [//www.packer.io Packer]. Finally, [//www.vagrantup.com/ Vagrant] and [//github.com/vagrant-libvirt/vagrant-libvirt vagrant-libvirt] will be utilized for KVM test environments.
{{Note|This tutorial is meant as a supplement to the OVH installation tutorials.}}
For demonstration, this tutorial follows this use case:


''The environment consists of a database server, a DNS server, a web server and one or more test servers (which may or may not be clones of the three main servers). Additional servers should be available on demand for any use case. All machine images should be built in-house so that image security can be maintained.''
'''UPDATE (2019):''' ''Tested and cleaned up this document using a Dell R620 located in-house at KYAU Labs as the test machine.''

= {{Icon24|sitemap}} Installation =
Before getting started it is a good idea to make sure VT-x or AMD-V is enabled in the BIOS.
{{Console|1=egrep --color 'vmx{{!}}svm' /proc/cpuinfo}}
{{Note|If hardware virtualization is not enabled, reboot the machine and enter the BIOS to enable it.}}
Once hardware virtualization has been verified, install all of the required packages.
{{Console|1=pikaur -S bridge-utils dmidecode dnsmasq ebtables libguestfs libvirt {{GreenBold|\}}<br/>     openbsd-netcat openssl-1.0 ovmf packer-io qemu-headless {{GreenBold|\}}<br/>     qemu-headless-arch-extra vagrant virt-install}}
== {{Icon|notebook}} vagrant-libvirt ==
{{margin}}
The libvirt plugin installation for Vagrant requires some cleanup first.
{{Console|1=sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}|2=sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}|3=sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}|4=sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,.backup} }}
Then install the plugin.
{{Console|1=vagrant plugin install vagrant-libvirt}}
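The {{mono|vmx}}/{{mono|svm}} flag check at the top of this section can also be scripted. A minimal sketch; the {{mono|virt_type}} helper name is my own, not part of any package:

```shell
# Classify hardware virtualization support from /proc/cpuinfo-style input.
virt_type() {
    flags=$(cat)
    if printf '%s\n' "$flags" | grep -qw vmx; then
        echo "Intel VT-x"
    elif printf '%s\n' "$flags" | grep -qw svm; then
        echo "AMD-V"
    else
        echo "none"
    fi
}

# On a real host: virt_type < /proc/cpuinfo
```

If it prints {{mono|none}}, reboot into the BIOS as described above before continuing.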
= {{Icon24|sitemap}} Configuration =
After all of the packages have been installed, libvirt and QEMU need to be configured.
== {{Icon|notebook}} User/Group Management ==
Create a user for KVM.
{{Console|1=sudo useradd -g kvm -s /usr/bin/nologin kvm}}
Then modify the libvirt QEMU config to reflect this.
{{Console|title=/etc/libvirt/qemu.conf|prompt=false|1=...<br/>user {{=}} "kvm"<br/>group {{=}} "kvm"<br/>...}}
Fix permissions on {{mono|/dev/kvm}}.
{{Console|1=sudo groupmod -g 78 kvm}}
{{margin}}
{{Console|1=sudo usermod -u 78 kvm}}
{{margin}}
{{Note|systemd, as of version 234, assigns dynamic IDs to groups, but KVM expects GID 78}}
=== User Access ===
If non-root user access to libvirtd is desired, add the {{mono|libvirt}} group to polkit access.
{{Console|title=/etc/polkit-1/rules.d/50-libvirt.rules|prompt=false|1={{BlackBold|/* Allow users in kvm group to manage the libvirt daemon without authentication */}}<br/>polkit.addRule(function(action, subject) {<br/>    if (action.id {{=}}{{=}} "org.libvirt.unix.manage" &&<br/>        subject.isInGroup("libvirt")) {<br/>            return polkit.Result.YES;<br/>    }<br/>});}}
{{margin}}
{{Note|If [[ArchLinux:Security|HAL]] was followed to secure the system after installation and you would like to use libvirt as a non-root user, the {{mono|hidepid}} security feature from the {{mono|/proc}} line in {{mono|/etc/fstab}} will need to be removed. This will require a reboot.}}
Add the users who need libvirt access to the {{mono|kvm}} and {{mono|libvirt}} groups.
{{Console|1=sudo gpasswd -a {{cyanBold|username}} kvm}}
{{margin}}
{{Console|1=sudo gpasswd -a {{cyanBold|username}} libvirt}}
To make life easier it is suggested to set a couple of shell variables for {{mono|virsh}}, which otherwise defaults to {{mono|qemu:///session}} when running as a non-root user.
{{Console|1=export VIRSH_DEFAULT_CONNECT_URI=qemu:///system}}
{{margin}}
{{Console|1=export LIBVIRT_DEFAULT_URI=qemu:///system}}
These can be added to {{mono|/etc/bash.bashrc}}, {{mono|/etc/fish/config.fish}} or {{mono|/etc/zsh/zshenv}} depending on which shell is being used.
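For example, for bash or zsh the variables can be persisted system-wide as follows (fish syntax differs); a sketch:

```shell
# /etc/bash.bashrc or /etc/zsh/zshenv — make virsh target the system
# instance for every user. In fish, use: set -gx LIBVIRT_DEFAULT_URI qemu:///system
export VIRSH_DEFAULT_CONNECT_URI=qemu:///system
export LIBVIRT_DEFAULT_URI=qemu:///system
```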
 
== {{Icon|notebook}} Hugepages ==
Enabling hugepages can improve the performance of virtual machines. First add an entry to the fstab, making sure to first check what the group ID of the {{mono|kvm}} group is (it should be {{mono|78}}).
{{Console|1=grep kvm /etc/group|2=sudoedit /etc/fstab}}
{{margin}}
{{Console|title=/etc/fstab|prompt=false|1=hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0}}
Instead of rebooting, simply remount.
{{Console|1=sudo umount /dev/hugepages|2=sudo mount /dev/hugepages}}
This can then be verified.
{{Console|1=sudo mount {{!}} grep huge|2=ls -FalG /dev/ {{!}} grep huge}}
Now set the number of hugepages to use. This requires a bit of math: for each gigabyte of system RAM to dedicate to VMs, divide its size in megabytes by two (the default hugepage size on x86_64 is 2MB).
{{Note|On my setup I will dedicate 40GB out of the 48GB of system RAM to VMs. This means {{mono|(40 * 1024) / 2}} or {{mono|20480}}}}
Set the number of hugepages.
{{Console|1=echo {{cyanBold|20480}} {{!}} sudo tee /proc/sys/vm/nr_hugepages}}
Also set this permanently by adding a file to {{mono|/etc/sysctl.d}}.
{{Console|title=/etc/sysctl.d/40-hugepages.conf|prompt=false|1=vm.nr_hugepages {{=}} {{cyanBold|20480}}}}
Again verify the changes.
{{Console|1=grep HugePages_Total /proc/meminfo}}
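The arithmetic above can be sketched directly in the shell; the variable names are illustrative:

```shell
# Hugepages needed = (RAM dedicated to VMs, in MB) / (2MB hugepage size).
vm_ram_gb=40          # RAM to dedicate to VMs on this host
hugepage_size_mb=2    # default hugepage size on x86_64
nr_hugepages=$(( vm_ram_gb * 1024 / hugepage_size_mb ))
echo "$nr_hugepages"
```

With 40GB dedicated this prints {{mono|20480}}, the value used above.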
Edit the libvirt QEMU config and turn hugepages on.
{{Console|title=/etc/libvirt/qemu.conf|prompt=false|1=...<br/>hugetlbfs_mount {{=}} "/dev/hugepages"<br/>...}}

== {{Icon|notebook}} Kernel Modules ==
A few additional kernel modules will help to assist KVM.

Nested virtualization can be enabled by loading the {{mono|kvm_intel}} module with the {{mono|nested{{=}}1}} option. To mount directories directly from the host inside of a VM, the {{mono|9pnet_virtio}} module will need to be loaded. Additionally, {{mono|virtio_net}} and {{mono|virtio_pci}} are loaded to add para-virtualized devices.
{{Console|1=sudo modprobe -r kvm_intel}}
{{margin}}
{{Console|1=sudo modprobe kvm_intel nested{{=}}1}}
{{margin}}
{{Console|1=sudo modprobe 9pnet_virtio virtio_net virtio_pci}}
Also load the modules on boot. Note that module options belong in {{mono|/etc/modprobe.d}}, while {{mono|/etc/modules-load.d}} only takes a list of module names.
{{Console|title=/etc/modprobe.d/kvm_intel.conf|prompt=false|1=options kvm_intel nested{{=}}1}}
{{margin}}
{{Console|title=/etc/modules-load.d/virtio.conf|prompt=false|1=9pnet_virtio<br/>virtio_net<br/>virtio_pci}}
If 9pnet is going to be used, change the global QEMU config to turn off dynamic file ownership.
{{Console|title=/etc/libvirt/qemu.conf|prompt=false|1=...<br/>dynamic_ownership {{=}} 0<br/>...}}
Nested virtualization can be verified.
{{Console|1=sudo systool -m kvm_intel -v {{!}} grep nested}}
{{margin}}
{{Note|If the machine has an AMD processor use {{mono|kvm_amd}} instead for nested virtualization.}}
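Aside from {{mono|systool}}, the parameter can be read straight from sysfs; a sketch, with an illustrative helper name:

```shell
# Report nested virtualization status for a KVM module (kvm_amd on AMD).
nested_status() {
    param="/sys/module/${1:-kvm_intel}/parameters/nested"
    if [ -r "$param" ]; then
        cat "$param"     # Y or 1 when enabled, N or 0 when disabled
    else
        echo "module not loaded"
    fi
}

nested_status kvm_intel
```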
== {{Icon|notebook}} UEFI & PCI-E Passthrough ==
The Open Virtual Machine Firmware (OVMF) is a project to enable UEFI support for virtual machines, and enabling IOMMU will enable PCI pass-through among other things. This extends the possibilities for operating system choices significantly and also provides some other options.


=== GRUB ===
Enable IOMMU on boot by adding an option to the kernel line in GRUB.
{{Console|title=/etc/default/grub|prompt=false|1=...<br/>GRUB_CMDLINE_LINUX_DEFAULT{{=}}"intel_iommu{{=}}on"<br/>...}}
Re-generate the GRUB config.
{{Console|1=sudo grub-mkconfig -o /boot/grub/grub.cfg}}
=== rEFInd ===
Enable IOMMU on boot by adding {{mono|intel_iommu{{=}}on}} to the kernel options line.
{{Console|title=/boot/EFI/BOOT/refind.conf|prompt=false|1=...<br/>options "root{{=}}/dev/mapper/skye-root rw add_efi_memmap nomodeset intel_iommu{{=}}on zswap.enabled{{=}}1 zswap.compressor{{=}}lz4 {{greenBold|\}}<br/>        zswap.max_pool_percent{{=}}20 zswap.zpool{{=}}z3fold initrd{{=}}\intel-ucode.img"<br/>...}}
Reboot the machine and then verify IOMMU is enabled.
{{Console|1=sudo dmesg {{!}} grep -e DMAR -e IOMMU}}
If it was enabled properly, there should be a line similar to {{mono|[    0.000000] DMAR: IOMMU enabled}}.


=== OVMF ===
Add the OVMF firmware to libvirt.
{{Console|title=/etc/libvirt/qemu.conf|prompt=false|1=...<br/>nvram {{=}} [<br/> "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"<br/>]<br/>...}}
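Once IOMMU is enabled, it is useful to see which devices share a group before attempting pass-through. A minimal sketch that walks {{mono|/sys/kernel/iommu_groups}}; the directory argument is overridable purely for illustration:

```shell
# Print each IOMMU group and the PCI devices inside it.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for group in "$root"/*/; do
        [ -d "$group" ] || continue
        group="${group%/}"
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            echo "  ${dev##*/}"   # PCI address; pipe to lspci -nns for names
        done
    done
}

list_iommu_groups
```

Devices in the same group must be passed through together.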


= {{Icon24|sitemap}} LVM =
During the [[ArchLinux:OVH|installation]] of the KVM host machine a {{mono|data}} volume group was created for VMs. Before carving out disk space for virtual machines, create the volume(s) that will exist outside of the virtual machines. These will be used for databases, web root directories and any other data that needs to persist between VM creation and destruction.
{{Console|1=sudo lvcreate -L 256G data --name http}}
{{Note|I am only using a single LVM volume and then creating directories inside of it for each machine}}
Create a directory for the volume.
{{Console|1=sudo mkdir /http}}
Format the new volume with {{mono|ext4}}.
{{Console|1=sudo mkfs.ext4 -O metadata_csum,64bit /dev/data/http|2=sudo mount /dev/data/http /http}}
Set proper permissions and move the {{mono|http}} user's home directory.
{{Console|1=sudo chown http:http /http|2=sudo usermod -m -d /http http}}
Add the volume to {{mono|fstab}} so that it mounts upon boot.
{{Console|title=/etc/fstab|prompt=false|1=/dev/mapper/data-http /http ext4 rw,relatime,stripe{{=}}256,data{{=}}ordered,journal_checksum 0 0}}
Volumes will now need to be created for each virtual machine; for this an LVM thin pool can be utilized.
== {{Icon|notebook}} LVM Thin Provisioning ==
Thin provisioning creates another virtual layer on top of the volume group, in which logical thin volumes can be created. Thin volumes, unlike normal thick volumes, do not reserve disk space on creation but instead do so upon write; to the operating system they are still reported as full-size volumes. This means that when utilizing LVM directly for KVM, it performs similarly to a "dynamic disk": it only uses the disk space it needs regardless of how big the virtual hard drive actually is. This can also be paired with LVM cloning (snapshots) to create some interesting setups, for example running 1TB of VMs on a 128GB disk.
{{Warning|The one disadvantage of doing this is that without proper disk monitoring and management it can lead to over-provisioning (overflow will cause the volume to drop)}}
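Given the over-provisioning warning above, a small watchdog can be cobbled together from {{mono|lvs}} output. A sketch with an illustrative helper name; feed it {{mono|lvs --noheadings -o lv_name,data_percent}}:

```shell
# Warn when any thin volume or pool crosses a usage threshold (default 80%).
# Reads "lv_name data_percent" pairs on stdin.
check_thin_usage() {
    threshold="${1:-80}"
    while read -r name pct; do
        [ -n "$name" ] || continue
        pct="${pct%%.*}"        # drop the decimal part, e.g. 91.20 -> 91
        if [ "${pct:-0}" -ge "$threshold" ]; then
            echo "WARNING: $name at ${pct}% of thin capacity"
        fi
    done
}

# On a real host:
# sudo lvs --noheadings -o lv_name,data_percent data | check_thin_usage 80
```

Run it from cron or a systemd timer to catch pool exhaustion before volumes drop.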
Use the rest of the {{mono|data}} volume group for the thin pool.
{{Console|1=sudo lvcreate -l +100%FREE data --thinpool qemu}}
Pulling up {{mono|lvdisplay}} can verify that it created a thin pool.
{{Console|1=sudo lvdisplay data/qemu}}<br/>
{{Console|prompt=false|1={{Black|__}}LV Size                <1.50 TiB<br/>  Allocated pool data    0.00%}}
Finally {{mono|lvs}} should show the volume with the {{mono|t}} and {{mono|tz}} attributes as well as a data percentage.
{{Console|1=sudo lvs}}<br/>
{{Console|prompt=false|1={{Black|__}}LV  VG      Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert<br/>  http data    -wi-ao---- 256.00g<br/>  qemu data    twi-a-tz--  <1.50t            0.00  0.43<br/>  root neutron -wi-ao----  63.93g}}
Adding volumes to the thin pool is very similar to adding normal volumes; add one for the first VM.
{{Console|1=sudo lvcreate -V 20G --thin -n dns data/qemu}}
Verify the new volume was added correctly to the thin pool.
{{Console|1=sudo lvs}}
The volume should be marked in pool {{mono|qemu}}, have a data percentage of {{mono|0.00%}} and attributes {{mono|V}} and {{mono|tz}}.
{{Console|prompt=false|1={{Black|__}}LV  VG      Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert<br/>  dns  data    Vwi-a-tz--  20.00g qemu        0.00}}
These volumes can be extended at any point.
{{Console|1=sudo lvextend -L +15G data/dns}}
Or even removed entirely.
{{Console|1=sudo lvremove data/dns}}


= {{Icon24|sitemap}} SPICE TLS =
Enabling SPICE over TLS allows SPICE to be exposed externally. Edit the libvirt QEMU config to enable it.
{{Console|title=/etc/libvirt/qemu.conf|prompt=false|1=...<br/>spice_listen {{=}} "0.0.0.0"<br/>spice_tls {{=}} 1<br/>spice_tls_x509_cert_dir {{=}} "/etc/pki/libvirt-spice"<br/>...}}
Then use the following script to generate the required certificates.
{{Console|title=spice-tls.sh|prompt=false|1=<nowiki>#!/bin/bash
SERVER_KEY=server-key.pem
# creating a key for our ca
if [ ! -e ca-key.pem ]; then
    openssl genrsa -des3 -out ca-key.pem 1024
fi
# creating a ca
if [ ! -e ca-cert.pem ]; then
    openssl req -new -x509 -days 1095 -key ca-key.pem -out ca-cert.pem -utf8 -subj "/C=</nowiki>{{cyanBold|WA}}/L{{=}}{{cyanBold|Seattle}}/O{{=}}{{cyanBold|KYAU Labs}}<nowiki>/CN=KVM"
fi
# create server key
if [ ! -e $SERVER_KEY ]; then
    openssl genrsa -out $SERVER_KEY 1024
fi
# create a certificate signing request (csr)
if [ ! -e server-key.csr ]; then
    openssl req -new -key $SERVER_KEY -out server-key.csr -utf8 -subj "/C=</nowiki>{{cyanBold|WA}}/L{{=}}{{cyanBold|Seattle}}/O{{=}}{{cyanBold|KYAU Labs}}/CN{{=}}{{cyanBold|myhostname.example.com}}<nowiki>"
fi
# signing our server certificate with this ca
if [ ! -e server-cert.pem ]; then
    openssl x509 -req -days 1095 -in server-key.csr -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
fi
# now create a key that doesn't require a passphrase
openssl rsa -in $SERVER_KEY -out $SERVER_KEY.insecure
mv $SERVER_KEY $SERVER_KEY.secure
mv $SERVER_KEY.insecure $SERVER_KEY
# show the results (no other effect)
openssl rsa -noout -text -in $SERVER_KEY
openssl rsa -noout -text -in ca-key.pem
openssl req -noout -text -in server-key.csr
openssl x509 -noout -text -in server-cert.pem
openssl x509 -noout -text -in ca-cert.pem</nowiki>}}
{{margin}}
{{Note|If setting up multiple KVM host machines, use the same CA files when generating the other machine certificates.}}
The generated server certificate can be checked against the CA with {{mono|openssl verify -CAfile ca-cert.pem server-cert.pem}}.
Create the directory for the certificates.
{{Console|1=sudo mkdir -p /etc/pki/libvirt-spice}}
Change permissions on the directory.
{{Console|1=sudo chmod -R a+rx /etc/pki}}
Move the generated files to the new directory.
{{Console|1=sudo mv ca-* server-* /etc/pki/libvirt-spice}}
Correct permissions on the files.
{{Console|1=sudo chmod 660 /etc/pki/libvirt-spice/*}}
{{margin}}
{{Console|1=sudo chown kvm:kvm /etc/pki/libvirt-spice/*}}

= {{Icon24|sitemap}} Packer =
Packer is a tool for automating the creation of virtual machines; in this instance it will be used to automate the creation of Vagrant boxes. I have already taken the time to create a Packer template for Arch Linux based off of my installation tutorials, but I encourage you to use this only as a basis and delve deeper to create your own templates. I could easily have downloaded someone else's templates, but then I would lack understanding.
{{GitHub|[//github.com/kyau/packer-kvm-templates kyau/packer-kvm-templates]}}
The Packer templates are in JSON format and contain all of the information needed to create the virtual machine image. Descriptions of all the template sections and values, including default values, can be found in the [//www.packer.io/docs/templates/index.html Packer docs]. For Arch Linux, the template file {{mono|archlinux-x86_64-base-vagrant.json}} will be used to generate an Arch Linux {{mono|qcow2}} virtual machine image.
{{Console|1=<nowiki>git clone https://github.com/kyau/packer-kvm-templates</nowiki>|2=cd packer-kvm-templates}}
To explain the template a bit: inside of the {{mono|builders}} section the template specifies that it is a qcow2 image running on QEMU KVM. A number of settings are imported from the user variables set at the top of the file: the ISO URL and checksum, the country setting, disk space for the VM's primary hard drive, the amount of RAM and the number of vCores to dedicate to the VM, whether or not the VM is headless, and the login and password for the primary SSH user. Keeping these in one section at the top makes quick edits easy. The template also specifies that the VM should use {{mono|virtio}} for the disk and network interfaces. Last are Packer's built-in web server and the boot commands: {{mono|http_directory}} specifies which directory will be the root of the built-in web server (this lets the VM fetch files during installation), while {{mono|boot_command}} is an array of commands executed upon boot in order to kick-start the installer. Finally, {{mono|qemuargs}} should be rather apparent, as these are the arguments passed to QEMU.
The {{mono|provisioners}} section executes three separate scripts after the machine has booted; they are passed the required user variables, set at the top of the file, as shell variables. The {{mono|install.sh}} script installs Arch Linux, {{mono|hardening.sh}} applies [[ArchLinux:Security|hardening]] to the Arch Linux installation and finally {{mono|cleanup.sh}} is there for general cleanup after the installation is complete.

While the {{mono|README.md}} does have all of this information for the Packer templates, it will also be detailed here.

For added security generate a new {{mono|moduli}} for your VMs (or copy from {{mono|/etc/ssh/moduli}}).
{{Console|1=ssh-keygen -G moduli.all -b 4096|2=ssh-keygen -T moduli.safe -f moduli.all|3=mv moduli.safe moduli && rm moduli.all}}
Enter the directory for the Arch Linux template and sym-link the moduli.
{{Console|1=cd archlinux-x86_64-base/default|2=ln -s ../../moduli . && cd ..}}
Build the base virtual machine image.
{{Console|1=./build archlinux-x86_64-base-vagrant.json}}
{{Note|This runs {{mono|PACKER_LOG{{=}}1 PACKER_LOG_PATH{{=}}"packer.log" packer-io build archlinux-x86_64-base-vagrant.json}}, which logs to the current directory}}
Once finished, there should be a qcow2 vagrant-libvirt image for Arch Linux in the {{mono|box}} directory.
Add this image to Vagrant.
{{Console|1=vagrant box add box/archlinux-x86_64-base-vagrant-libvirt.box --name archlinux-x86_64-base}}

= {{Icon24|sitemap}} Vagrant-libvirt =
Vagrant can be used to build and manage test machines. The [//github.com/vagrant-libvirt/vagrant-libvirt vagrant-libvirt] plugin adds a Libvirt provider to Vagrant, allowing Vagrant to control and provision machines via the Libvirt toolkit.


== {{Icon|notebook}} Services ==
Before bringing up any machines, enable and start the {{mono|libvirtd}} service.
{{Console|1=sudo systemctl enable libvirtd}}
{{margin}}
{{Console|1=sudo systemctl start libvirtd}}
Verify that libvirt is running.
{{Console|1=virsh --connect qemu:///system}}
{{margin}}
{{Console|prompt=false|1=Welcome to virsh, the virtualization interactive terminal.<br/><br/>Type:  'help' for help with commands<br/>      'quit' to quit<br/><br/>virsh #}}
If you end up at the {{mono|virsh}} prompt, simply type {{mono|quit}} to exit back to the shell.
To bring up the first machine, create a new directory for it and initialize Vagrant inside.
{{Console|1=cd|2=mkdir testmachine|3=cd testmachine}}
Initialize the machine with Vagrant.
{{Console|1=vagrant init archlinux-x86_64-base}}
Then bring up the machine.
{{Console|1=vagrant up}}
Then SSH into the machine directly.
{{Console|1=vagrant ssh}}
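The {{mono|Vagrantfile}} generated by {{mono|vagrant init}} can be tuned per machine. A minimal sketch for the libvirt provider; the memory and CPU values are illustrative:

```ruby
# Vagrantfile — minimal libvirt provider configuration.
Vagrant.configure("2") do |config|
  config.vm.box = "archlinux-x86_64-base"

  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 2048   # MB of RAM for the guest
    libvirt.cpus   = 2      # vCPUs for the guest
  end
end
```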
 
= {{Icon24|sitemap}} Networking =
The server being used for testing has quad gigabit network cards in it. For this type of setup one NIC will be used for management of the host OS, while the other three will be bonded together using 802.3ad (combines all NICs for optimal throughput).
 
== {{Icon|notebook}} NIC Bonding ==
Pull up a list of all network cards in the machine.
{{Console|1=ip -c=auto l}}
{{margin}}
{{Console|prompt=false|1=1: {{cyan|lo:}} &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000<br/>    link/loopback {{yellow|00:00:00:00:00:00}} brd {{yellow|00:00:00:00:00:00}}<br/>2: {{cyan|eth0:}} &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state {{green|UP}} mode DEFAULT group default qlen 1000<br/>    link/ether {{yellow|d4:be:d9:b2:95:43}} brd {{yellow|ff:ff:ff:ff:ff:ff}}<br/>3: {{cyan|eth1:}} &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state {{red|DOWN}} mode DEFAULT group default qlen 1000<br/>    link/ether {{yellow|d4:be:d9:b2:95:45}} brd {{yellow|ff:ff:ff:ff:ff:ff}}<br/>4: {{cyan|eth2:}} &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state {{red|DOWN}} mode DEFAULT group default qlen 1000<br/>    link/ether {{yellow|d4:be:d9:b2:95:47}} brd {{yellow|ff:ff:ff:ff:ff:ff}}<br/>5: {{cyan|eth3:}} &lt;BROADCAST,MULTICAST&gt; mtu 1500 qdisc noop state {{red|DOWN}} mode DEFAULT group default qlen 1000<br/>    link/ether {{yellow|d4:be:d9:b2:95:49}} brd {{yellow|ff:ff:ff:ff:ff:ff}}}}
Create a management {{mono|.network}} file, replace {{mono|M.M.M.M}} with the management IP address and {{mono|G.G.G.G}} with the gateway IP.
{{Console|title=/etc/systemd/network/management.network|prompt=false|1=[Match]<br/>Name{{=}}{{cyanBold|eth0}}<br/><br/>[Network]<br/>DHCP{{=}}no<br/>NTP{{=}}pool.ntp.org<br/>DNS{{=}}1.1.1.1<br/>LinkLocalAddressing{{=}}no<br/><br/>[Address]<br/>Address{{=}}{{cyanBold|M.M.M.M}}/24<br/>Label{{=}}management<br/><br/>[Route]<br/>Gateway{{=}}{{cyanBold|G.G.G.G}}}}
{{margin}}
{{Warning|If IPv6 is being used, remove the {{mono|LinkLocalAddressing{{=}}no}} line from the file as this defaults to {{mono|ipv6}}.}}
Create the bond interface with systemd-networkd.
{{Console|title=/etc/systemd/network/bond0.netdev|prompt=false|1=[NetDev]<br/>Name{{=}}bond0<br/>Description{{=}}KVM vSwitch<br/>Kind{{=}}bond<br/><br/>[Bond]<br/>Mode{{=}}802.3ad<br/>TransmitHashPolicy{{=}}layer3+4<br/>MIIMonitorSec{{=}}1s<br/>LACPTransmitRate{{=}}fast}}
Use the last three network cards to create a {{mono|bond0.network}} file.
{{Console|title=/etc/systemd/network/bond0.network|prompt=false|1=[Match]<br/>Name{{=}}{{cyanBold|eth1}}<br/>Name{{=}}{{cyanBold|eth2}}<br/>Name{{=}}{{cyanBold|eth3}}<br/><br/>[Network]<br/>Bond{{=}}bond0}}
Finally create the {{mono|.network}} file attaching it to the bridge that is created in the next section.
{{Console|title=/etc/systemd/network/kvm0.network|prompt=false|1=[Match]<br/>Name{{=}}bond0<br/><br/>[Network]<br/>Bridge{{=}}kvm0}}
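After networkd is restarted (further below), bond membership can be checked by filtering {{mono|ip -o link}} output; a sketch with an illustrative helper name:

```shell
# List interfaces enslaved to a given master (bond or bridge).
slaves_of() {
    master="$1"
    grep " master $master " | awk -F': ' '{print $2}' | cut -d'@' -f1
}

# On a real host: ip -o link | slaves_of bond0
```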


= {{Icon24|sitemap}} Libvirt =
While Vagrant is a great tool for test and development environments, for the more permanent VMs on the system utilizing libvirt directly interfaces at a higher level with KVM. This will allow the VMs to run directly off of LVM volumes.
For this a separate Packer template was created, one with all of the Vagrant parts removed. To build one of these simply use the other {{mono|JSON}} file in the Arch Linux template directory.
{{Console|1=./build archlinux-x86_64-base.json}}
This can then be output directly to the LVM thin volume.
{{Console|1=sudo qemu-img convert -f qcow2 -O 'raw' 'qcow2/archlinux-x86_64-base.qcow2' '/dev/data/dns'}}
Because a thick image was copied onto a thin volume, the volume will report all of its disk space as used.
{{Console|1=sudo lvs}}<br/>
{{Console|prompt=false|1={{Black|__}}LV  VG      Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert<br/>  dns  data    Vwi-a-tz--  20.00g qemu        100.00}}
The disk merely needs to be sparsified.
{{Console|1=sudo virt-sparsify --in-place /dev/data/dns}}
The disk should now be reading properly.
{{Console|1=sudo lvs}}<br/>
{{Console|prompt=false|1={{Black|__}}LV  VG      Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert<br/>  dns  data    Vwi-a-tz--  20.00g qemu        7.17}}
== {{Icon|notebook}} Virt-manager ==
Virt-manager can now be installed on the local machine (the one viewing this tutorial, not the KVM host machine); it will be used to connect to libvirt remotely via SSH.
{{Console|1=pikaur -S virt-manager}}
Make sure {{mono|libvirtd}} is enabled and started on the KVM host machine.
{{Console|1=sudo systemctl enable libvirtd|2=sudo systemctl start libvirtd}}
With the polkit rule from the User Access section in place, you should be able to connect remotely to QEMU/KVM with virt-manager over SSH; this will be useful later on.
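For reference, the pieces configured above (hugepages, OVMF firmware, the LVM thin volume and the {{mono|kvm0}} bridge) come together in the domain XML. A trimmed, illustrative fragment, not a complete domain definition:

```xml
<domain type='kvm'>
  <name>dns</name>
  <memory unit='GiB'>2</memory>
  <memoryBacking>
    <hugepages/>                      <!-- back guest RAM with hugepages -->
  </memoryBacking>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
  </os>
  <devices>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/data/dns'/>   <!-- LVM thin volume from above -->
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='kvm0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```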
== {{Icon|notebook}} Network Bridge ==
Setting up a network bridge for KVM is simple with systemd. Create the bridge interface with systemd-networkd.
{{Console|title=/etc/systemd/network/kvm0.netdev|prompt=false|1=[NetDev]<br/>Name{{=}}kvm0<br/>Kind{{=}}bridge}}
{{margin}}
Create the {{mono|.network}} file for the bridge, replace {{mono|X.X.X.X}} with the IP address desired for the KVM vSwitch, {{mono|G.G.G.G}} with the gateway IP and modify the DNS if Cloudflare is not desired.
{{Console|title=/etc/systemd/network/vswitch.network|prompt=false|1=[Match]<br/>Name{{=}}kvm0<br/><br/>[Network]<br/>DHCP{{=}}no<br/>NTP{{=}}pool.ntp.org<br/>DNS{{=}}1.1.1.1<br/>IPForward{{=}}yes<br/>LinkLocalAddressing{{=}}no<br/><br/>[Address]<br/>Address{{=}}{{cyanBold|X.X.X.X}}/24<br/>Label{{=}}vswitch<br/><br/>[Route]<br/>Gateway{{=}}{{cyanBold|G.G.G.G}}}}
And finally restart networkd.
{{Console|1=sudo systemctl restart systemd-networkd}}
The bridge should now be up and running; this should be verified.
{{Console|1=ip -c{{=}}auto a}}
Once the bridge is up and running QEMU can be directed to use it. Create a directory in {{mono|/etc/}} for QEMU and then make a {{mono|bridge.conf}}.
Before adding the bridge to libvirt, check the current networking settings.
{{Console|1=sudo mkdir /etc/qemu|2=sudoedit /etc/qemu/bridge.conf}}
{{Console|1=virsh net-list --all}}
{{margin}}
{{Console|prompt=false|1=&nbsp;Name      State      Autostart  Persistent<br/>----------------------------------------------<br/> default  inactive  no          yes}}
Create a libvirt configuration for the bridge.
{{Console|title=/etc/libvirt/bridge.xml|prompt=false|1=<nowiki><network></nowiki><br/><nowiki>        <name>kvm0</name></nowiki><br/><nowiki>        <forward mode="bridge"/></nowiki><br/><nowiki>        <bridge name="kvm0"/></nowiki><br/><nowiki></network></nowiki>}}
Enable the bridge in libvirt.
{{Console|1=virsh net-define --file /etc/libvirt/bridge.xml}}
Set the bridge to auto-start.
{{Console|1=virsh net-autostart kvm0}}
Start the bridge.
{{Console|1=virsh net-start kvm0}}
With the bridge now online, the default NAT network can be removed if it will not be used.
{{Console|1=virsh net-destroy default}}
{{margin}}
{{Console|1=virsh net-undefine default}}
This can then be verified.
{{Console|1=virsh net-list --all}}
{{margin}}
{{margin}}
{{Console|title=/etc/qemu/bridge.conf|prompt=false|1=allow kvm0}}
{{Console|prompt=false|1=&nbsp;Name      State      Autostart  Persistent<br/>----------------------------------------------<br/> kvm0      active    yes        yes}}


== {{Icon|notebook}} Firewall ==
Since {{mono|libvirt}} cannot interface directly with {{mono|nftables}} (it can only interface with iptables), {{mono|firewalld}} can be used as a gateway between the two. Before it can be started, {{mono|nftables}} will have to be disabled if it is currently in use.
{{Console|1=sudo systemctl disable nftables}}
{{margin}}
{{Console|1=sudo systemctl stop nftables}}
Install {{mono|firewalld}} and {{mono|dnsmasq}}.
{{Console|1=pikaur -S dnsmasq firewalld}}
Start and enable the service.
{{Console|1=sudo systemctl enable firewalld}}
{{margin}}
{{Console|1=sudo systemctl start firewalld}}
Verify that the firewall started properly; it should return {{mono|running}}.
{{Console|1=sudo firewall-cmd --state}}
Add both interfaces to the {{mono|public}} zone.
{{Console|1=firewall-cmd --permanent --zone{{=}}public --add-interface{{=}}eth0}}
{{margin}}
{{Console|1=firewall-cmd --permanent --zone{{=}}public --add-interface{{=}}kvm0}}
Reboot the machine to verify that the changes persist.
{{Console|1=sudo systemctl reboot}}
{{Note|The SSH service is added by default to the firewall, allowing one to log back in after reboot.}}
Look up the default zone config to verify the interfaces were added.
{{Console|1=sudo firewall-cmd --list-all}}
{{margin}}
{{Console|prompt=false|1=public<br/>&nbsp;&nbsp;target: default<br/>&nbsp;&nbsp;icmp-block-inversion: no<br/>&nbsp;&nbsp;interfaces: {{cyanBold|eth0 kvm0}}<br/>&nbsp;&nbsp;sources:<br/>&nbsp;&nbsp;services: dhcpv6-client ssh<br/>&nbsp;&nbsp;ports:<br/>&nbsp;&nbsp;protocols:<br/>&nbsp;&nbsp;masquerade: no<br/>&nbsp;&nbsp;forward-ports:<br/>&nbsp;&nbsp;source-ports:<br/>&nbsp;&nbsp;icmp-blocks:<br/>&nbsp;&nbsp;rich rules:}}
If any other services need to be added, do so now (a couple examples have been listed below).
{{Console|1=sudo firewall-cmd --zone{{=}}public --permanent --add-service{{=}}https}}
{{margin}}
{{Console|1=sudo firewall-cmd --zone{{=}}public --permanent --add-port{{=}}5900-5950/udp}}
With the firewall now set up, libvirtd should start fully, without any warnings about the firewall.
{{Console|1=sudo systemctl status libvirtd}}
{{margin}}
{{Console|prompt=false|1={{green|●}} libvirtd.service - Virtualization daemon<br/>&nbsp;&nbsp;&nbsp;Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)<br/>&nbsp;&nbsp;&nbsp;Active: {{green|active (running)}} since Tue 2019-02-26 14:27:12 PST; 16min ago<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Docs: man:libvirtd(8)<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<nowiki>https://libvirt.org</nowiki><br/>&nbsp;Main PID: 693 (libvirtd)<br/>&nbsp;&nbsp;&nbsp;&nbsp;Tasks: 17 (limit: 32768)<br/>&nbsp;&nbsp;&nbsp;Memory: 61.3M<br/>&nbsp;&nbsp;&nbsp;CGroup: /system.slice/libvirtd.service<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;└─693 /usr/bin/libvirtd<br/><br/>Feb 26 14:27:12 skye.wa.kyaulabs.com systemd[1]: Started Virtualization daemon.}}


= {{Icon24|sitemap}} Storage Pools =
A storage pool in libvirt is merely a storage location designated to store virtual machine images or virtual disks. The most common storage types include netfs, disk, dir, fs, iscsi, logical, and gluster.


List all of the currently available storage pools.
{{Console|1=virsh pool-list --all}}
{{margin}}
{{Console|prompt=false|1= Name  State  Autostart<br/>---------------------------<br/>}}
== {{Icon|notebook}} ISO Images ==
Begin by adding a pool for ISO images. To use something other than an existing mount point, change the {{mono|type}}; options include the ones listed above (more can be found in the man page). A couple of examples follow:
{{Console|title=iso-images.vol|prompt=false|1=<nowiki><pool type='dir'></nowiki><br/><nowiki> <name>iso-images</name></nowiki><br/><nowiki> <target></nowiki><br/><nowiki> <path>/pool/iso</path></nowiki><br/><nowiki> <permissions></nowiki><br/><nowiki> <mode>0770</mode></nowiki><br/><nowiki> <owner>78</owner></nowiki><br/><nowiki> <group>78</group></nowiki><br/><nowiki> </permissions></nowiki><br/><nowiki> </target></nowiki><br/><nowiki></pool></nowiki>}}
{{margin}}
{{Console|title=iso-images.vol|prompt=false|1=<nowiki><pool type='fs'></nowiki><br/><nowiki> <name>iso-images</name></nowiki><br/> <nowiki><source></nowiki><br/> <nowiki><device path="</nowiki>/dev/{{cyanBold|vgroup}}/{{cyanBold|lvol}}"<nowiki> /></nowiki><br/> <nowiki></source></nowiki><br/><nowiki> <target></nowiki><br/><nowiki> <path>/pool/iso</path></nowiki><br/><nowiki> <permissions></nowiki><br/><nowiki> <mode>0770</mode></nowiki><br/><nowiki> <owner>78</owner></nowiki><br/><nowiki> <group>78</group></nowiki><br/><nowiki> </permissions></nowiki><br/><nowiki> </target></nowiki><br/><nowiki></pool></nowiki>}}
{{margin}}
{{Console|title=iso-images.vol|prompt=false|1=<nowiki><pool type='netfs'></nowiki><br/><nowiki> <name>iso-images</name></nowiki><br/> <nowiki><source></nowiki><br/> <nowiki><host name="</nowiki>{{cyanBold|nfs.example.com}}" <nowiki>/></nowiki><br/> <nowiki><dir path="</nowiki>{{cyanBold|/nfs-path-to/images}}" <nowiki>/></nowiki><br/> <nowiki><format type='nfs'/></nowiki><br/> <nowiki></source></nowiki><br/><nowiki> <target></nowiki><br/><nowiki> <path>/pool/iso</path></nowiki><br/><nowiki> <permissions></nowiki><br/><nowiki> <mode>0770</mode></nowiki><br/><nowiki> <owner>78</owner></nowiki><br/><nowiki> <group>78</group></nowiki><br/><nowiki> </permissions></nowiki><br/><nowiki> </target></nowiki><br/><nowiki></pool></nowiki>}}
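The three definitions above share the same skeleton, so for scripted setups the dir-type variant can be generated with a small helper. This is only a sketch; the {{mono|make_dir_pool}} function name is made up here, and uid/gid 78 matches the {{mono|kvm}} user created earlier.

```shell
# make_dir_pool: emit a dir-type libvirt pool definition.
# Arguments: pool name, target path. uid/gid 78 is the kvm user/group.
make_dir_pool() {
    name=$1
    path=$2
    cat <<EOF
<pool type='dir'>
  <name>${name}</name>
  <target>
    <path>${path}</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>
EOF
}

# Generate the same iso-images pool file used in this section.
make_dir_pool iso-images /pool/iso > iso-images.vol
```

The generated {{mono|iso-images.vol}} can then be fed to {{mono|virsh pool-define}} as described below.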
After creating the pool XML file, define the pool in libvirt.
{{Console|1=virsh pool-define iso-images.vol}}
Before the pool can be used it must also be built; it is also a good idea to set it to auto-start.
{{Console|1=virsh pool-build {{cyanBold|iso-images}}}}
{{margin}}
{{Console|1=virsh pool-start {{cyanBold|iso-images}}}}
{{margin}}
{{Console|1=virsh pool-autostart {{cyanBold|iso-images}}}}
The {{mono|iso-images}} pool should now be properly set up; feel free to import some images into the directory.
{{Console|1=sudo cp archlinux-2019.02.01-x86_64.iso /pool/iso}}
Permissions and ownership will need to be set correctly.
{{Console|1=sudo chown kvm:kvm /pool/iso/archlinux-2019.02.01-x86_64.iso}}
{{margin}}
{{Console|1=sudo chmod 660 /pool/iso/archlinux-2019.02.01-x86_64.iso}}
After copying over images and correcting permissions, refresh the pool.
{{Console|1=virsh pool-refresh iso-images}}
The image should now show up in the list of volumes for that pool.
{{Console|1=virsh vol-list iso-images --details}}
{{margin}}
{{Console|prompt=false|1=&nbsp;Name                              Path                                        Type  Capacity    Allocation<br/>---------------------------------------------------------------------------------------------------------------<br/>&nbsp;archlinux-2019.02.01-x86_64.iso  /pool/iso/archlinux-2019.02.01-x86_64.iso  file  600.00 MiB  600.00 MiB}}
Check on the status of all pools that have been added.
{{Console|1=virsh pool-list --all --details}}
{{margin}}
{{Console|prompt=false|1=&nbsp;Name        State    Autostart  Persistent  Capacity    Allocation  Available<br/>---------------------------------------------------------------------------------------<br/> iso-images  running  yes        yes          108.75 GiB  4.77 GiB    103.97 GiB}}
 
== {{Icon|notebook}} LVM ==
If LVM is going to be used for the VM storage pool, that can be setup now.
{{Note|If the volume group has been created manually, the {{mono|&lt;source&gt;}} section can be omitted from the XML and the {{mono|build}} step skipped, as that step is what creates the LVM volume group.}}
Begin by creating a storage pool file.
{{Console|title=vdisk.vol|prompt=false|1=<nowiki><pool type='logical'></nowiki><br/><nowiki> <name>vdisk</name></nowiki><br/> <nowiki><source></nowiki><br/> <nowiki><device path="/dev/</nowiki>{{cyanBold|sdX3}}" <nowiki>/></nowiki><br/> <nowiki><device path="/dev/</nowiki>{{cyanBold|sdX4}}" <nowiki>/></nowiki><br/> <nowiki></source></nowiki><br/><nowiki> <target></nowiki><br/><nowiki> <path>/dev/vdisk</path></nowiki><br/><nowiki> <permissions></nowiki><br/><nowiki> <mode>0770</mode></nowiki><br/><nowiki> <owner>78</owner></nowiki><br/><nowiki> <group>78</group></nowiki><br/><nowiki> </permissions></nowiki><br/><nowiki> </target></nowiki><br/><nowiki></pool></nowiki>}}
After creating the pool XML file, define the pool in libvirt, build it and set it to auto-start.
{{Console|1=virsh pool-define vdisk.vol}}
{{margin}}
{{Console|1=virsh pool-build {{cyanBold|vdisk}}}}
{{margin}}
{{Console|1=virsh pool-start {{cyanBold|vdisk}}}}
{{margin}}
{{Console|1=virsh pool-autostart {{cyanBold|vdisk}}}}
Grant ownership of the LVM volumes to the {{mono|kvm}} group in order to properly mount them using Libvirt.
{{Console|title=/etc/udev/rules.d/90-kvm.rules|prompt=false|1=ENV{DM_VG_NAME}{{=}}{{=}}"{{cyanBold|vdisk}}" ENV{DM_LV_NAME}{{=}}{{=}}"*" OWNER{{=}}"kvm"}}
Continuing as-is will allow libvirtd to manage the LVM volumes on its own.
=== LVM Thin Volumes ===
Before going down this road, there are a couple of things to consider.
{{Warning|If thin provisioning is enabled, LVM automation via libvirtd will be broken.}}
In a standard LVM logical volume, all of the blocks are allocated when the volume is created; blocks in a thin-provisioned LV are allocated as they are written. Because of this, a thin-provisioned logical volume is given a virtual size and can be much larger than the physically available storage.
{{Warning|[//searchservervirtualization.techtarget.com/feature/Overprovisioning-VMs-may-be-safe-but-it-isnt-sound Over-provisioning] is NEVER recommended, whether it is CPU, RAM or HDD space.}}
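To make the over-provisioning risk concrete, here is the arithmetic for an illustrative case (the numbers below are made up for the example, not taken from this setup):

```shell
# Ten thin volumes with a 32 GiB virtual size each "commit" 320 GiB,
# even if the backing volume group only holds 108 GiB physically.
vg_size_gib=108
lv_virtual_gib=32
lv_count=10
committed_gib=$(( lv_virtual_gib * lv_count ))
echo "committed ${committed_gib} GiB against ${vg_size_gib} GiB physical"
```

Once guests actually write more than the physical capacity, the thin pool fills and every LV in it suffers, which is why the warning above applies.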
With the warnings out of the way, if thin provisioning is desired, begin by creating a thin pool.
{{Console|1=sudo lvcreate -l +100%FREE -T {{cyanBold|vdisk/thin}}}}
A volume group named {{mono|vdisk}} was prepared in the previous steps via {{mono|virsh pool-build}}; if that was skipped, either go back and redo it or prepare the volume group manually. Note that, as warned above, thin provisioning still breaks most of the LVM automation in libvirt.
 
= {{Icon24|sitemap}} VM Creation =
With the libvirt setup complete, it is time to create the first VM. Before installation can begin, a logical volume needs to be created for the VM.
 
If you chose the default setup using regular LVM, feel free to use virsh.
{{Console|1=virsh vol-create-as vdisk {{cyanBold|vmname}} {{cyanBold|32}}GiB}}
If you went the route of thin volumes, create the logical volume manually.
{{Console|1=sudo lvcreate -V {{cyanBold|32}}G -T vdisk/thin -n {{cyanBold|vmname}}}}
Take the time now to install {{mono|virt-manager}} on a client machine running X11. Connecting to the KVM machine over SSH with virt-manager should now be possible.
{{Note|Adding a hosts entry is only required for console access over SSH when both machines are on the same local network.}}
Add a hosts entry for the KVM machine on the client machine if required.
{{Console|title=/etc/hosts|prompt=false|1={{cyanBold|X.X.X.X}} {{cyanBold|vmhost}}}}
Start the VM installation.
{{Console|1=virt-install {{greenBold|\}}<br/><nowiki>        </nowiki>--virt-type{{=}}kvm --hvm {{greenBold|\}}<br/><nowiki>        </nowiki>--name {{cyanBold|vmname}} {{greenBold|\}}<br/><nowiki>        </nowiki>--cpu host-model-only --vcpus{{=}}{{cyanBold|2}} --memory {{cyanBold|2048}},hugepages {{greenBold|\}}<br/><nowiki>        </nowiki>--network{{=}}bridge{{=}}kvm0,model{{=}}virtio {{greenBold|\}}<br/><nowiki>        </nowiki>--graphics spice,port{{=}}{{cyanBold|4901}},tlsport{{=}}{{cyanBold|5901}},listen{{=}}{{cyanBold|0.0.0.0}},password{{=}}{{cyanBold|moo}} {{greenBold|\}}<br/><nowiki>        </nowiki>--cdrom{{=}}/pool/iso/archlinux-2019.02.01-x86_64.iso {{greenBold|\}}<br/><nowiki>        </nowiki>--disk path{{=}}/dev/vdisk/{{cyanBold|vmname}},bus{{=}}virtio {{greenBold|\}}<br/><nowiki>        </nowiki>--console pty,target_type{{=}}serial --wait -1 --boot uefi}}
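A long {{mono|virt-install}} invocation like the one above is easier to keep per-VM when assembled in a small script. The following is only a sketch using this page's placeholder values ({{mono|vmname}}, sizes), and it merely echoes the command so it can be reviewed before running:

```shell
# Assemble the virt-install command from per-VM variables and
# echo it for review; run it manually once the output looks right.
NAME=vmname
VCPUS=2
RAM=2048
CMD="virt-install --virt-type=kvm --hvm --name $NAME \
--cpu host-model-only --vcpus=$VCPUS --memory $RAM,hugepages \
--network=bridge=kvm0,model=virtio \
--cdrom=/pool/iso/archlinux-2019.02.01-x86_64.iso \
--disk path=/dev/vdisk/$NAME,bus=virtio \
--console pty,target_type=serial --wait -1 --boot uefi"
echo "$CMD"
```

After reviewing the echoed command, execute it with {{mono|eval "$CMD"}} (or paste it into the shell directly).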
If this fails with a {{mono|Permission denied}} error having to do with an nvram file, change the permissions accordingly and then re-run {{mono|virt-install}}.
{{Console|1=sudo chown kvm:kvm /var/lib/libvirt/qemu/nvram/{{cyanBold|vmname}}_VARS.fd}}
{{margin}}
{{Console|1=sudo chmod 660 /var/lib/libvirt/qemu/nvram/{{cyanBold|vmname}}_VARS.fd}}
{{Note|Normally one would use {{mono|--location}} instead of {{mono|--cdrom}} so that {{mono|--extra-args}} could be used to enable the console. However, Arch Linux being a hybrid iso cannot do this.}}
If all went well it should print out the following:
{{Console|prompt=false|1=Starting install...<br/>Domain installation still in progress. Waiting for installation to complete.}}
At this point return to virt-manager on the client machine and connect to the remote libvirt instance. Then select the new virtual machine and choose Open. The remote virtual machine installation should now be on screen.


If you were lucky enough to catch the installation at the boot menu, press the {{mono|&lt;TAB&gt;}} key to add the console to the kernel line before booting.
{{Console|prompt=false|1=...archiso.img console{{=}}ttyS0}}


Follow through with the installation via the console.
{{SeeAlso|ArchLinux:Installation|Arch Linux Installation}}
After installation make sure to re-add the kernel parameter {{mono|console{{=}}ttyS0}}, then reboot the VM.
{{Note|If the console resolution is too large, it can be shrunk with the kernel parameter {{mono|nomodeset vga{{=}}276}} to set it to 800x600. More information [//en.wikipedia.org/wiki/VESA_BIOS_Extensions#Linux_video_mode_numbers here]}}
Once the console kernel parameter(s) have been added, verify this is working.
{{Console|1=virsh console {{cyanBold|vmname}}}}
Once connected press {{mono|&lt;ENTER&gt;}} to get to the login prompt.


= {{Icon24|sitemap}} Additional Notes =
These notes are here from my own install.


Import a VM disk from a qcow2 backup.
{{Console|1=sudo qemu-img convert -f qcow2 -O raw backup.qcow2 /dev/vdisk/{{cyanBold|vmname}}}}
Re-sparsify a thin volume after restoring it from a qcow2 backup.
{{Console|1=sudo virt-sparsify --in-place /dev/vdisk/{{cyanBold|vmname}}}}
Import a machine that was previously exported.
{{Console|1=virsh define --file newxml-bind.xml}}
Make a VM autostart on boot.
{{Console|1=virsh autostart {{cyanBold|vmname}}}}


= {{Icon24|book-brown}} References =
<references/>


[[Category:Arch Linux]]

Latest revision as of 00:56, 10 May 2021


UPDATE (2019): Tested and cleaned up this document using a Dell R620 located in-house at KYAU Labs as the test machine.

Installation

Before getting started it is a good idea to make sure VT-x or AMD-V is enabled in BIOS.

# egrep --color 'vmx|svm' /proc/cpuinfo
 
NOTE: If hardware virtualization is not enabled, reboot the machine and enter the BIOS to enable it.

Once hardware virtualization has been verified, install the required packages.

# pikaur -S bridge-utils dmidecode libguestfs libvirt \
openbsd-netcat openssl-1.0 ovmf qemu-headless \
qemu-headless-arch-extra virt-install

Configuration

After all of the packages have been installed libvirt/QEMU need to be configured.

User/Group Management

Create a user for KVM.

# sudo useradd -g kvm -s /usr/bin/nologin kvm

Then modify the libvirt QEMU config to reflect this.

filename: /etc/libvirt/qemu.conf
...
user = "kvm"
group = "kvm"
...

Fix permissions on /dev/kvm.

# sudo groupmod -g 78 kvm
 
# sudo usermod -u 78 kvm
 
NOTE: systemd as of version 234 assigns dynamic IDs to groups, but KVM expects 78.

User Access

If non-root user access to libvirtd is desired, add the libvirt group to polkit access.

filename: /etc/polkit-1/rules.d/50-libvirt.rules
/* Allow users in the libvirt group to manage the libvirt daemon without authentication */
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
});
 
NOTE: If HAL was followed to secure the system after installation and libvirt is to be used as a non-root user, the hidepid security feature on the /proc line in /etc/fstab will need to be removed. This requires a reboot.

Add the users who need libvirt access to the kvm and libvirt groups.

# sudo gpasswd -a username kvm
 
# sudo gpasswd -a username libvirt

To make life easier it is suggested to set a couple of shell variables for virsh; otherwise it defaults to qemu:///session when running as a non-root user.

# export VIRSH_DEFAULT_CONNECT_URI=qemu:///system
 
# export LIBVIRT_DEFAULT_URI=qemu:///system

These can be added to /etc/bash.bashrc, /etc/fish/config.fish or /etc/zsh/zshenv depending on which shell is being used.
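For the Bourne-style shells, the lines would look like the sketch below (fish instead uses its own syntax, set -x VARIABLE value):

```shell
# Lines for /etc/bash.bashrc or /etc/zsh/zshenv:
# default virsh/libvirt connections to the system instance
export VIRSH_DEFAULT_CONNECT_URI=qemu:///system
export LIBVIRT_DEFAULT_URI=qemu:///system
```

With these set, a plain virsh invocation by a non-root user talks to qemu:///system rather than the per-user session.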

Hugepages

Enabling hugepages can improve the performance of virtual machines. First add an entry to the fstab; make sure to first check the group ID of the kvm group (it should be 78).

# grep kvm /etc/group
 
filename: /etc/fstab
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0

Instead of rebooting, simply remount.

# sudo umount /dev/hugepages
# sudo mount /dev/hugepages

This can then be verified.

# sudo mount | grep huge
# ls -FalG /dev/ | grep huge

Now to set the number of hugepages to use. This requires a bit of math: for each gigabyte of system RAM to be dedicated to VMs, divide its size in megabytes by two (hugepages are 2 MiB each).

NOTE: On my setup I will dedicate 40 GB out of the 48 GB of system RAM to VMs. This means (40 * 1024) / 2, or 20480.

Set the number of hugepages.

# echo 20480 | sudo tee /proc/sys/vm/nr_hugepages
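The arithmetic above can be scripted; a minimal sketch using this page's 40 GB figure (2 MiB hugepages assumed):

```shell
# pages = (GiB of VM RAM * 1024 MiB/GiB) / 2 MiB per hugepage
vm_ram_gib=40
nr_hugepages=$(( vm_ram_gib * 1024 / 2 ))
echo "$nr_hugepages"
```

The printed value is what gets written to /proc/sys/vm/nr_hugepages.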

Also set this permanently by adding a file to /etc/sysctl.d.

filename: /etc/sysctl.d/40-hugepages.conf
vm.nr_hugepages = 20480

Again verify the changes.

# grep HugePages_Total /proc/meminfo

Edit the libvirt QEMU config and turn hugepages on.

filename: /etc/libvirt/qemu.conf
...
hugetlbfs_mount = "/dev/hugepages"
...

Kernel Modules

A few additional kernel modules will help to assist KVM.

Nested virtualization can be enabled by loading the kvm_intel module with the nested=1 option. To mount directories directly from the host inside of a VM, the 9pnet_virtio module will need to be loaded. Additionally, virtio_net and virtio_pci are loaded to add para-virtualized devices.

# sudo modprobe -r kvm_intel
 
# sudo modprobe kvm_intel nested=1
 
# sudo modprobe 9pnet_virtio virtio_net virtio_pci

Also load the modules on boot.

filename: /etc/modprobe.d/kvm_intel.conf
options kvm_intel nested=1

filename: /etc/modules-load.d/virtio.conf
9pnet_virtio
virtio_net
virtio_pci

If 9pnet is going to be used, change the global QEMU config to turn off dynamic file ownership.

filename: /etc/libvirt/qemu.conf
...
dynamic_ownership = 0
...

Nested virtualization can be verified.

# sudo systool -m kvm_intel -v | grep nested
 
NOTE: If the machine has an AMD processor, use kvm_amd instead for nested virtualization.

UEFI & PCI-E Passthrough

The Open Virtual Machine Firmware (OVMF) is a project to enable UEFI support for virtual machines, and enabling IOMMU allows PCI pass-through among other things. This significantly extends the range of possible guest operating systems and provides some additional options.

GRUB

Enable IOMMU on boot by adding an option to the kernel line in GRUB.

filename: /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
...

Re-generate the GRUB config.

# sudo grub-mkconfig -o /boot/grub/grub.cfg

rEFInd

Enable IOMMU on boot by adding an option to the kernel options line in refind.conf.

filename: /boot/EFI/BOOT/refind.conf
...
options "root=/dev/mapper/skye-root rw add_efi_memmap nomodeset intel_iommu=on zswap.enabled=1 zswap.compressor=lz4 \
zswap.max_pool_percent=20 zswap.zpool=z3fold initrd=\intel-ucode.img"
...

Reboot the machine and then verify IOMMU is enabled.

# sudo dmesg | grep -e DMAR -e IOMMU

If it was enabled properly, there should be a line similar to [ 0.000000] DMAR: IOMMU enabled.

OVMF

Adding the OVMF firmware to libvirt.

filename: /etc/libvirt/qemu.conf
...
nvram = [
"/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
...

SPICE TLS

Enabling SPICE over TLS will allow SPICE to be exposed externally.

Edit the libvirt QEMU config to enable SPICE over TLS.

filename: /etc/libvirt/qemu.conf
...
spice_listen = "0.0.0.0"
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
...

Then use the following script to generate the required certificates.

filename: spice-tls.sh
#!/bin/bash
SERVER_KEY=server-key.pem

# creating a key for our ca
if [ ! -e ca-key.pem ]; then
    openssl genrsa -des3 -out ca-key.pem 1024
fi
# creating a ca
if [ ! -e ca-cert.pem ]; then
    openssl req -new -x509 -days 1095 -key ca-key.pem -out ca-cert.pem \
        -utf8 -subj "/C=WA/L=Seattle/O=KYAU Labs/CN=KVM"
fi
# create server key
if [ ! -e $SERVER_KEY ]; then
    openssl genrsa -out $SERVER_KEY 1024
fi
# create a certificate signing request (csr)
if [ ! -e server-key.csr ]; then
    openssl req -new -key $SERVER_KEY -out server-key.csr \
        -utf8 -subj "/C=WA/L=Seattle/O=KYAU Labs/CN=myhostname.example.com"
fi
# signing our server certificate with this ca
if [ ! -e server-cert.pem ]; then
    openssl x509 -req -days 1095 -in server-key.csr -CA ca-cert.pem \
        -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
fi
# now create a key that doesn't require a passphrase
openssl rsa -in $SERVER_KEY -out $SERVER_KEY.insecure
mv $SERVER_KEY $SERVER_KEY.secure
mv $SERVER_KEY.insecure $SERVER_KEY
# show the results (no other effect)
openssl rsa -noout -text -in $SERVER_KEY
openssl rsa -noout -text -in ca-key.pem
openssl req -noout -text -in server-key.csr
openssl x509 -noout -text -in server-cert.pem
openssl x509 -noout -text -in ca-cert.pem
 
NOTE: If setting up multiple KVM host machines, use the same CA files when generating the other machines' certificates.

Create the directory for the certificates.

# sudo mkdir -p /etc/pki/libvirt-spice

Change permissions on the directory.

# sudo chmod -R a+rx /etc/pki

Move the generated files to the new directory.

# sudo mv ca-* server-* /etc/pki/libvirt-spice

Correct permissions on the files.

# sudo chmod 660 /etc/pki/libvirt-spice/*
 
# sudo chown kvm:kvm /etc/pki/libvirt-spice/*

Services

Once the bridge is up and running, libvirtd can be started; enable and start the libvirtd service.

# sudo systemctl enable libvirtd
 
# sudo systemctl start libvirtd

Verify that libvirt is running.

# virsh --connect qemu:///system
 
Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
'quit' to quit

virsh #

If you end up at the virsh prompt, simply type quit to exit back to the shell.

Networking

The server being used for testing has quad gigabit network cards in it. For this type of setup one NIC will be used for management of the host OS, while the other three will be bonded together using 802.3ad (combines all NICs for optimal throughput).

NIC Bonding

Pull up a list of all network cards in the machine.

# ip -c=auto l
 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether d4:be:d9:b2:95:43 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d4:be:d9:b2:95:45 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d4:be:d9:b2:95:47 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether d4:be:d9:b2:95:49 brd ff:ff:ff:ff:ff:ff

Create a management .network file; replace M.M.M.M with the management IP address and G.G.G.G with the gateway IP.

filename: /etc/systemd/network/management.network
[Match]
Name=eth0

[Network]
DHCP=no
NTP=pool.ntp.org
DNS=1.1.1.1
LinkLocalAddressing=no

[Address]
Address=M.M.M.M/24
Label=management

[Route]
Gateway=G.G.G.G
 
WARNING: If IPv6 is being used, remove the LinkLocalAddressing=no line from the file, as this setting defaults to IPv6.

Create the bond interface with systemd-networkd.

filename: /etc/systemd/network/bond0.netdev
[NetDev]
Name=bond0
Description=KVM vSwitch
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4
MIIMonitorSec=1s
LACPTransmitRate=fast

Use the last three network cards to create a bond0.network file.

filename: /etc/systemd/network/bond0.network
[Match]
Name=eth1
Name=eth2
Name=eth3

[Network]
Bond=bond0

Finally create the .network file attaching it to the bridge that is created in the next section.

filename: /etc/systemd/network/kvm0.network
[Match]
Name=bond0

[Network]
Bridge=kvm0

Network Bridge

Setting up a network bridge for KVM is simple with systemd. Create the bridge interface with systemd-networkd.

filename: /etc/systemd/network/kvm0.netdev
[NetDev]
Name=kvm0
Kind=bridge

Create the .network file for the bridge; replace X.X.X.X with the IP address desired for the KVM vSwitch, replace G.G.G.G with the gateway IP, and modify the DNS if Cloudflare is not desired.

filename: /etc/systemd/network/vswitch.network
[Match]
Name=kvm0

[Network]
DHCP=no
NTP=pool.ntp.org
DNS=1.1.1.1
IPForward=yes
LinkLocalAddressing=no

[Address]
Address=X.X.X.X/24
Label=vswitch

[Route]
Gateway=G.G.G.G

And finally restart networkd.

# sudo systemctl restart systemd-networkd

The bridge should now be up and running, this should be verified.

# ip -c=auto a

Before adding the bridge to libvirt, check the current networking settings.

# virsh net-list --all
 
 Name State Autostart Persistent
----------------------------------------------
default inactive no yes

Create a libvirt configuration for the bridge.

filename: /etc/libvirt/bridge.xml
<network>
<name>kvm0</name>
<forward mode="bridge"/>
<bridge name="kvm0"/>
</network>

Enable the bridge in libvirt.

# virsh net-define --file /etc/libvirt/bridge.xml

Set the bridge to auto-start.

# virsh net-autostart kvm0

Start the bridge.

# virsh net-start kvm0

With the bridge now online, the default NAT network can be removed if it will not be used.

# virsh net-destroy default
 
# virsh net-undefine default

This can then be verified.

# virsh net-list --all
 
 Name State Autostart Persistent
----------------------------------------------
kvm0 active yes yes

Icon Firewall

Since libvirt cannot directly interface with nftables, it can only interface with iptables, firewalld can be used as a gateway in-between the two. Before it can be started nftables will have to be disabled if it is currently being used.

# sudo systemctl disable nftables
 
# sudo systemctl stop nftables

Install firewalld and dnsmasq.

# pikaur -S dnsmasq firewalld

Start and enable the service.

# sudo systemctl enable firewalld
 
# sudo systemctl start firewalld

Verify the firewall started properly, it should return running.

# sudo firewall-cmd --state

Add both interfaces to the public zone.

# firewall-cmd --permanent --zone=public --add-interface=eth0
 
# firewall-cmd --permanent --zone=public --add-interface=kvm0

Reboot the machine to verify the changes stick upon reboot.

# sudo systemctl reboot
IconThe SSH service is added by default to the firewall, allowing one to log back in after reboot.

Look up the default zone config to verify the interfaces were added.

# sudo firewall-cmd --list-all
 
public
  target: default
  icmp-block-inversion: no
  interfaces: eth0 kvm0
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

If any other services or ports need to be added, do so now; a couple of examples are listed below.

# sudo firewall-cmd --zone=public --permanent --add-service=https
 
# sudo firewall-cmd --zone=public --permanent --add-port=5900-5950/tcp
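When several extra services are needed, the same command can be driven from a list. A minimal sketch, using hypothetical service names, that echoes each command for review rather than executing it (firewall-cmd needs a running firewalld; drop the echo to actually apply the rules):

```shell
# Hypothetical list of extra services to allow; adjust to your setup.
services="https dns"
for svc in $services; do
  # echoed for review; remove 'echo' to actually apply the rule
  echo sudo firewall-cmd --zone=public --permanent --add-service="$svc"
done
# permanent rules only take effect after a reload:
echo sudo firewall-cmd --reload
```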

With the firewall now set up, libvirtd should start fully without any warnings about the firewall.

# sudo systemctl status libvirtd
 
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:27:12 PST; 16min ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 693 (libvirtd)
    Tasks: 17 (limit: 32768)
   Memory: 61.3M
   CGroup: /system.slice/libvirtd.service
           └─693 /usr/bin/libvirtd

Feb 26 14:27:12 skye.wa.kyaulabs.com systemd[1]: Started Virtualization daemon.

Storage Pools

A storage pool in libvirt is simply a storage location designated for virtual machine images or virtual disks. The most common storage pool types include netfs, disk, dir, fs, iscsi, logical, and gluster.

List all of the currently available storage pools.

# virsh pool-list --all
 
 Name   State   Autostart
---------------------------

ISO Images

Begin by adding a pool for iso-images. To use something other than an existing mount-point, change the pool type; options include the ones listed above (more can be found in the man page). A few examples follow:

filename: iso-images.vol
<pool type='dir'>
  <name>iso-images</name>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>
 
filename: iso-images.vol
<pool type='fs'>
  <name>iso-images</name>
  <source>
    <device path="/dev/vgroup/lvol" />
  </source>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>
 
filename: iso-images.vol
<pool type='netfs'>
  <name>iso-images</name>
  <source>
    <host name="nfs.example.com" />
    <dir path="/nfs-path-to/images" />
    <format type='nfs'/>
  </source>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>
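The three variants above differ only in their <source> section; when defining several pools, the common boilerplate can be generated from variables. A minimal sketch for the dir case (the pool name, target path, and /tmp output location are illustrative placeholders):

```shell
# Generate a dir-type pool definition from variables.
# POOL_NAME and POOL_PATH are example values; adjust to your layout.
POOL_NAME=iso-images
POOL_PATH=/pool/iso
cat > "/tmp/${POOL_NAME}.vol" <<EOF
<pool type='dir'>
  <name>${POOL_NAME}</name>
  <target>
    <path>${POOL_PATH}</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>
EOF
echo "wrote /tmp/${POOL_NAME}.vol"
```

The generated file can then be passed to virsh pool-define in the next step.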

After creating the pool XML file, define the pool in libvirt.

# virsh pool-define iso-images.vol

Before the pool can be used it must also be built; it is also a good idea to set it to auto-start.

# virsh pool-build iso-images
 
# virsh pool-start iso-images
 
# virsh pool-autostart iso-images

The iso-images pool should now be properly set up; feel free to import some images into the directory.

# sudo cp archlinux-2019.02.01-x86_64.iso /pool/iso

Permissions and ownership will need to be set correctly.

# sudo chown kvm:kvm /pool/iso/archlinux-2019.02.01-x86_64.iso
 
# sudo chmod 660 /pool/iso/archlinux-2019.02.01-x86_64.iso

After copying over images and correcting permissions, refresh the pool.

# virsh pool-refresh iso-images

The image should now show up in the list of volumes for that pool.

# virsh vol-list iso-images --details
 
 Name                              Path                                        Type   Capacity     Allocation
---------------------------------------------------------------------------------------------------------------
 archlinux-2019.02.01-x86_64.iso   /pool/iso/archlinux-2019.02.01-x86_64.iso   file   600.00 MiB   600.00 MiB

Check on the status of all pools that have been added.

# virsh pool-list --all --details
 
 Name         State     Autostart   Persistent   Capacity     Allocation   Available
---------------------------------------------------------------------------------------
 iso-images   running   yes         yes          108.75 GiB   4.77 GiB     103.97 GiB

LVM

If LVM is going to be used for the VM storage pool, that can be set up now.

Note: If the volume group has already been created manually, the <source> section can be omitted from the XML and the build step skipped, as those are what create the LVM volume group.

Begin by creating a storage pool file.

filename: vdisk.vol
<pool type='logical'>
  <name>vdisk</name>
  <source>
    <device path="/dev/sdX3" />
    <device path="/dev/sdX4" />
  </source>
  <target>
    <path>/dev/vdisk</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool>

After creating the pool XML file, define the pool in libvirt, build it and set it to auto-start.

# virsh pool-define vdisk.vol
 
# virsh pool-build vdisk
 
# virsh pool-start vdisk
 
# virsh pool-autostart vdisk

Grant ownership of the LVM volumes to the kvm user so that libvirt can properly access them.

filename: /etc/udev/rules.d/90-kvm.rules
ENV{DM_VG_NAME}=="vdisk", ENV{DM_LV_NAME}=="*", OWNER="kvm"

Continuing as is will allow libvirtd to automatically manage the LVM volume on its own.
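udev only picks up the new rule after a reload, so remember to run sudo udevadm control --reload followed by sudo udevadm trigger (or reboot). As a sketch, the rule can first be staged at a temporary path for review; /tmp/90-kvm.rules is an illustrative location, the real file belongs in /etc/udev/rules.d:

```shell
# Stage the rule in /tmp for review before installing it as root.
cat > /tmp/90-kvm.rules <<'EOF'
ENV{DM_VG_NAME}=="vdisk", ENV{DM_LV_NAME}=="*", OWNER="kvm"
EOF
# after copying it to /etc/udev/rules.d/90-kvm.rules (requires root):
#   sudo udevadm control --reload
#   sudo udevadm trigger
echo "rule staged at /tmp/90-kvm.rules"
```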

LVM Thin Volumes

Before going down this road, there are a couple of things to consider.

WARNING: If thin provisioning is enabled, LVM automation via libvirtd will be broken.

In a standard LVM logical volume, all of the blocks are allocated when the volume is created; blocks in a thin-provisioned LV are allocated as they are written. Because of this, a thin-provisioned logical volume is given a virtual size and can be much larger than the physically available storage.

WARNING: Over-provisioning is NEVER recommended, whether it is CPU, RAM or HDD space.

With the warnings out of the way, if thin provisioning is desired begin by creating a thin pool.

# sudo lvcreate -l +100%FREE -T vdisk/thin

A volume group named vdisk was prepared in the previous steps via virsh pool-build; if that was skipped, either go back and redo it or prepare the volume group manually. Be aware that going the thin-provisioning route breaks most of libvirt's LVM automation.
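To recap the thin workflow and illustrate what over-provisioning means in practice: the lvcreate lines below require root and the existing vdisk volume group, so they are kept inert inside a heredoc, and the GiB figures in the check are made-up examples only.

```shell
# Thin workflow recap (requires root and the 'vdisk' VG; kept inert here):
: <<'COMMANDS'
lvcreate -l +100%FREE -T vdisk/thin       # thin pool spanning the VG
lvcreate -V 32G -T vdisk/thin -n vmname   # thin LV with a 32G *virtual* size
COMMANDS
# Quick over-provisioning check: compare the sum of the virtual sizes of
# all thin LVs against the pool size (GiB figures are hypothetical).
pool_gib=100
virtual_gib="32 32 64"
total=0
for s in $virtual_gib; do total=$((total + s)); done
if [ "$total" -gt "$pool_gib" ]; then
  echo "over-provisioned by $((total - pool_gib)) GiB"
fi
# prints "over-provisioned by 28 GiB"
```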

VM Creation

With the libvirt setup complete, it is time to create the first VM. Before installation can begin, a logical volume needs to be created for it.

If you chose the default setup using regular LVM, feel free to use virsh.

# virsh vol-create-as vdisk vmname 32GiB

If you went the route of thin volumes, create the logical volume manually.

# sudo lvcreate -V 32G -T vdisk/thin -n vmname

Take the time now to install virt-manager on a client machine running X11. It should then be possible to connect to the KVM host over SSH using virt-manager.

Note: Adding a hosts entry is only required for console access over SSH when both machines are on the same local network.

Add a hosts entry for the KVM machine on the client machine if required.

filename: /etc/hosts
X.X.X.X vmhost

Start the VM installation.

# virt-install \
--virt-type=kvm --hvm \
--name vmname \
--cpu host-model-only --vcpus=2 --memory 2048,hugepages \
--network=bridge=kvm0,model=virtio \
--graphics spice,port=4901,tlsport=5901,listen=0.0.0.0,password=moo \
--cdrom=/pool/iso/archlinux-2019.02.01-x86_64.iso \
--disk path=/dev/vdisk/vmname,bus=virtio \
--console pty,target_type=serial --wait -1 --boot uefi

If this fails with a Permission denied error having to do with an nvram file, change the permissions accordingly and then re-run virt-install.

# sudo chown kvm:kvm /var/lib/libvirt/qemu/nvram/vmname_VARS.fd
 
# sudo chmod 660 /var/lib/libvirt/qemu/nvram/vmname_VARS.fd
Note: Normally one would use --location instead of --cdrom so that --extra-args could be used to enable the serial console; however, since the Arch Linux ISO is a hybrid image this is not possible.

If all went well it should print out the following:

Starting install...
Domain installation still in progress. Waiting for installation to complete.

At this point return to virt-manager on the client machine and connect to the remote libvirt instance. Then select the new virtual machine and choose Open. The remote virtual machine installation should now be on screen.

If you were lucky enough to catch the installation at the boot menu, press the <TAB> key to add the console to the kernel line before booting.

...archiso.img console=ttyS0

Follow through with the installation via the console.

After installation make sure to re-add the kernel parameter console=ttyS0, then reboot the VM.

Note: If the console resolution is too large, it can be reduced to 800x600 with the kernel parameters nomodeset vga=276.

Once the console kernel parameter(s) have been added, verify that the console is working.

# virsh console vmname

Once connected, press <ENTER> to get to the login prompt; to detach from the console, press <CTRL+]>.

Additional Notes

These notes are from my own install.

Import a qcow2 backup onto a raw LVM volume.

# sudo qemu-img convert -f qcow2 -O raw backup.qcow2 /dev/vdisk/vmname

Re-sparsify a thin volume after restoring from a qcow2 backup.

# sudo virt-sparsify --in-place /dev/vdisk/vmname

Import a machine that was previously exported.

# virsh define --file newxml-bind.xml

Make a VM autostart on boot.

# virsh autostart vmname