ArchLinux:KVM



UNDER CONSTRUCTION: The document is currently being modified!
GitLab: kyaulabs/autoarch: Arch Linux installation automation.

Introduction

This is a tutorial for setting up and using KVM on Arch Linux utilizing QEMU as the back-end and libvirt as the front-end. Additional notes have been added for creating system images.

UPDATE (2019): Tested/Cleaned Up this document using a Dell R620 located in-house at KYAU Labs as the test machine.

Installation

Before getting started it is a good idea to make sure VT-x or AMD-V is enabled in BIOS.

# egrep --color 'vmx|svm' /proc/cpuinfo
 
If hardware virtualization is not enabled, reboot the machine and enter the BIOS to enable it.

Once hardware virtualization has been verified, install all of the required packages.

# pikaur -S bridge-utils dmidecode libguestfs libvirt \
openbsd-netcat openssl-1.0 ovmf qemu-headless \
qemu-headless-arch-extra virt-install

If packer/vagrant are of interest, install them as well.

# pikaur -S packer-io vagrant

Configuration

After all of the packages have been installed libvirt/QEMU need to be configured.

KVM Group

Create a user for KVM.

# sudo useradd -g kvm -s /usr/bin/nologin kvm

Then modify the libvirt QEMU config to reflect this.

filename: /etc/libvirt/qemu.conf
user = "kvm"
group = "kvm"

Fix permissions on /dev/kvm.

# sudo groupmod -g 78 kvm
systemd as of version 234 assigns dynamic IDs to groups, but KVM expects 78

Add the current user to the kvm and libvirt groups.

# sudo gpasswd -a username kvm
 
# sudo gpasswd -a username libvirt

Hugepages

Enabling hugepages can improve the performance of virtual machines. First add an entry to fstab; before doing so, check what the group ID of the kvm group is.

# grep kvm /etc/group
# sudoedit /etc/fstab
 
filename: /etc/fstab
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0

Instead of rebooting, simply remount.

# sudo umount /dev/hugepages
# sudo mount /dev/hugepages

This can then be verified.

# sudo mount | grep huge
# ls -FalG /dev/ | grep huge

Now set the number of hugepages to use. This requires a bit of math: for each gigabyte of system RAM that will be dedicated to VMs, divide its size in megabytes by two (the default hugepage size is 2 MiB).

On my setup I will dedicate 40GB out of the 48GB of system RAM to VMs. This means (40 * 1024) / 2, or 20480.
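The same arithmetic can be done directly in the shell; the 40 in the example below comes from the note above, so substitute the number of gigabytes being dedicated to VMs.

# echo $(( 40 * 1024 / 2 ))
20480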

Set the number of hugepages.

# echo 20480 | sudo tee /proc/sys/vm/nr_hugepages

Also set this permanently by adding a file to /etc/sysctl.d.

filename: /etc/sysctl.d/40-hugepages.conf
vm.nr_hugepages = 20480

Again verify the changes.

# grep HugePages_Total /proc/meminfo

Edit the libvirt QEMU config and turn hugepages on.

filename: /etc/libvirt/qemu.conf

hugetlbfs_mount = "/dev/hugepages"

Kernel Modules

In order to mount directories from the host inside of a virtual machine, the 9pnet_virtio kernel module will need to be loaded.

# sudo modprobe 9pnet_virtio

Also load the module on boot.

filename: /etc/modules-load.d/virtio-9pnet.conf
9pnet_virtio

In addition change the global QEMU config to turn off dynamic file ownership.

filename: /etc/libvirt/qemu.conf

dynamic_ownership = 0

OVMF & IOMMU

The Open Virtual Machine Firmware (OVMF) is a project to enable UEFI support for virtual machines, and enabling IOMMU allows PCI pass-through among other things. This significantly extends the choice of guest operating systems and provides some additional options.

GRUB

Enable IOMMU on boot by adding an option to the kernel line in GRUB.

filename: /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Re-generate the GRUB config.

# sudo grub-mkconfig -o /boot/grub/grub.cfg

rEFInd

Enable IOMMU on boot by adding the option to the kernel options line in refind.conf.

filename: /boot/EFI/BOOT/refind.conf
options "root=/dev/mapper/skye-root rw add_efi_memmap nomodeset intel_iommu=on zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20 zswap.zpool=z3fold initrd=\intel-ucode.img"

Reboot the machine and then verify IOMMU is enabled.

# sudo dmesg | grep -e DMAR -e IOMMU

If it was enabled properly, there should be a line similar to [ 0.000000] DMAR: IOMMU enabled.

Adding the OVMF firmware to libvirt.

filename: /etc/libvirt/qemu.conf
nvram = [
"/usr/share/ovmf/ovmf_code_x64.bin:/usr/share/ovmf/ovmf_vars_x64.bin"
]

LVM

During the installation of the KVM host machine a data volume group was created for VMs. Before carving out disk space for virtual machines, create the volume(s) that will exist outside of the virtual machines. These will be used for databases, web root directories and any other data that needs to persist between VM creation and destruction.

If LVM is being used, be sure to grant ownership of the LVM volumes to the kvm group in order to properly mount them using libvirt.

filename: /etc/udev/rules.d/90-kvm.rules
ENV{DM_VG_NAME}=="data" ENV{DM_LV_NAME}=="*" OWNER="kvm"

Create a volume for the persistent data.

# sudo lvcreate -L 256G data --name http

NOTE: I am only using a single LVM volume and then creating directories inside of it for each machine.

Create a directory for the volume.

# sudo mkdir /http

Format the new volume with ext4 and mount it.

# sudo mkfs.ext4 -O metadata_csum,64bit /dev/data/http
# sudo mount /dev/data/http /http

Set proper permissions and modify the http user's home directory.

# sudo chown http:http /http
# sudo usermod -m -d /http http

Add the volume to fstab so that it mounts upon boot.

filename: /etc/fstab
/dev/mapper/data-http /http ext4 rw,relatime,stripe=256,data=ordered,journal_checksum 0 0

Volumes will now need to be created for each virtual machine; for this, an LVM thin pool can be utilized.

LVM Thin Provisioning

Thin provisioning creates another virtual layer on top of the volume group, in which logical thin volumes can be created. Thin volumes, unlike normal thick volumes, do not reserve their disk space on creation but instead allocate it upon write; to the operating system they are still reported as full-size volumes. This means that when utilizing LVM directly for KVM it behaves like a "dynamic disk", using only the space it actually needs regardless of how big the virtual hard drive is. This can also be paired with LVM cloning (snapshots) to create some interesting setups, like running 1TB of VMs on a 128GB disk.

WARNING: The one disadvantage to doing this is that without proper disk monitoring and management it can lead to over-provisioning (overflow will cause the volume to drop).

Use the rest of the data volume group for the thin pool.

# sudo lvcreate -l +100%FREE data --thinpool qemu

Pulling up lvdisplay can verify that it created a thin pool.

# sudo lvdisplay data/qemu

  LV Size                <1.50 TiB
  Allocated pool data    0.00%

Finally lvs should show the volume with the t and tz attributes as well as a data percentage.

# sudo lvs

  LV   VG      Attr       LSize   Pool Origin Data%  Meta%
  http data    -wi-ao---- 256.00g
  qemu data    twi-a-tz--  <1.50t             0.00   0.43
  root neutron -wi-ao----  63.93g

Adding volumes to the thin pool is very similar to adding normal volumes; add one for the first VM.

# sudo lvcreate -V 20G --thin -n dns data/qemu

These volumes can be shrunk or extended at any point.

# sudo lvextend -L +15G data/dns

Or even removed entirely.

# sudo lvremove data/dns

Verify the new volume was added correctly to the thin pool.

# sudo lvs

The volume should be marked in pool qemu, have a data percentage of 0.00% and the attributes V and tz.

  LV   VG   Attr       LSize  Pool Origin Data%
  dns  data Vwi-a-tz-- 20.00g qemu        0.00

QEMU

While Vagrant is a great tool for test and development environments, for the more permanent VMs on the system, utilizing QEMU directly will allow the VMs to run directly off of LVM thin volumes. Currently vagrant-libvirt cannot do this, because its own snapshotting interferes with it; thankfully LVM has snapshotting of its own.

For this a separate Packer template was created, one with all of the Vagrant stuff removed. To build one of these simply use the other JSON file in the Arch Linux template directory.

# ./build archlinux-x86_64-base.json

This can then be output directly to the LVM thin volume.

# sudo qemu-img convert -f qcow2 -O raw qcow2/archlinux-x86_64-base.qcow2 /dev/data/dns

Because a thick image was copied onto the thin volume, it will now report all of its disk space as used.

# sudo lvs


  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dns  data Vwi-a-tz-- 20.00g qemu        100.00

The disk merely needs to be sparsified.

# sudo virt-sparsify --in-place /dev/data/dns

The disk should now be reading properly.

# sudo lvs


  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dns  data Vwi-a-tz-- 20.00g qemu        7.17

Network Bridge

Setting up a network bridge for KVM is simple with systemd. Replace X.X.X.X with the host machine's IP address and update the Gateway and DNS if not using OVH.

filename: /etc/systemd/network/kvm0.netdev
[NetDev]
Name=kvm0
Kind=bridge
 
filename: /etc/systemd/network/kvm0.network
[Match]
Name=kvm0

[Network]
DNS=213.186.33.99
Address=X.X.X.X/24
Gateway=Y.Y.Y.254
IPForward=yes
 
filename: /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
Bridge=kvm0

And finally restart networkd.

# sudo systemctl restart systemd-networkd

The bridge should now be up and running, this should be verified.

# ip a

Once the bridge is up and running QEMU can be directed to use it. Create a directory in /etc/ for QEMU and then make a bridge.conf.

# sudo mkdir /etc/qemu
# sudoedit /etc/qemu/bridge.conf
 
filename: /etc/qemu/bridge.conf
allow kvm0

Then set cap_net_admin on the binary helper.

# sudo setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper
WARNING: I had major issues using the bridge as a regular user; I actually had to remove the setuid bit to get it working: sudo chmod u-s /usr/lib/qemu/qemu-bridge-helper

NAT

To get NAT working inside of each VM, IP forwarding will need to be enabled.

filename: /etc/sysctl.d/99-kvm.conf
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv6.conf.default.forwarding = 1

Rules will also need to be appended to nftables.

filename: /etc/nftables.conf
table inet filter {
    chain forward {
        type filter hook forward priority 0;
        oifname kvm0 accept
        iifname kvm0 ct state related,established accept
        iifname kvm0 drop
    }
}

Rebooting at this point to make sure all these networking settings were set correctly would be a wise idea.

# sudo systemctl reboot

Network Test

The network on the VM should now be fully tested; for this, a connection can be made using the SPICE protocol. On a local client machine install vinagre.

# pacaur -S vinagre

Using the OVH/SyS Manager setup two failover IP addresses to the same virtual MAC. The following arguments will launch the virtual machine. Be sure to input the proper virtual MAC so that it matches the one that OVH assigned.

# /usr/bin/qemu-system-x86_64 --enable-kvm -machine q35,accel=kvm -device intel-iommu \
-m 512 -smp 1 -cpu host -drive file=/dev/data/dns,cache=none,if=virtio,format=raw \
-net bridge,br=kvm0 -net nic,model=virtio,macaddr=00:00:00:00:00:00 -vga qxl \
-spice port=5900,addr=127.0.0.1,disable-ticketing \
-monitor unix:/tmp/monitor-dns.sock,server,nowait

Once launched, you should be able to connect to the VM using a SPICE client such as Vinagre. Click Connect in Vinagre, set Host to localhost, then make sure "Use host … as a SSH tunnel" is checked with your KVM host server name filled in. Connect and enter your SSH key password.

The KVM virtual machine should now be visible through Vinagre.

Login as root; if this was built using packer-kvm-templates, the default password is password.

Edit the network interface configuration for systemd. This first VM is going to be acting as my DNS server, therefore it will be assigned two IP addresses.

filename: /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
Address=FAILOVER_IP_1/32
Address=FAILOVER_IP_2/32
DNS=213.186.33.99
Peer=HOST_GATEWAY/32

[Gateway]
Gateway=HOST_GATEWAY
Destination=0.0.0.0/0

This is exactly how OVH says it should be set up; however, this was not enough, as the VM still did not have a default route.

TODO: Fix this section, this is an ugly hack

To fix the routing, create a service that sets it up on boot.

filename: /usr/lib/systemd/system/kvmnet.service
[Unit]
Description=Start KVM Network
After=network.target
Before=multi-user.target shutdown.target
Conflicts=shutdown.target
Wants=network.target

[Service]
ExecStart=/usr/local/bin/kvmnet

[Install]
WantedBy=multi-user.target

And the script that does the routing.

filename: /usr/local/bin/kvmnet
#!/bin/bash
ip route add Y.Y.Y.254 dev eth0
ip route add default via Y.Y.Y.254 dev eth0

Don't forget to make it executable.

# sudo chmod +rx /usr/local/bin/kvmnet

Enable the service.

# sudo systemctl enable kvmnet

Reboot the VM and then verify it has internet access.

# sudo reboot
# ping archlinux.org

Finally, verify that it can be SSHed into from the outside via BOTH IP addresses.

Libvirt

To launch the virtual machines on boot there are two options. The first option involves importing the virtual machines into libvirt with virsh. The second option is to set up a systemd service. Given that management will be loads easier with virt-manager, the first option will be used.
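For reference, the second option would amount to a small unit along the following lines. This is only a sketch and is not used in the rest of this guide; the unit filename is made up, and DNS is the domain name defined later in the Virsh section.

filename: /etc/systemd/system/vm-dns.service
# Sketch only: start a single libvirt/QEMU domain at boot.
[Unit]
Description=QEMU virtual machine: DNS
After=network.target libvirtd.service
Requires=libvirtd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/virsh -c qemu:///system start DNS
ExecStop=/usr/bin/virsh -c qemu:///system shutdown DNS

[Install]
WantedBy=multi-user.target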

On the KVM host machine enable and start libvirtd.

# sudo systemctl enable libvirtd
# sudo systemctl start libvirtd

Then enable access to libvirtd to everyone in the kvm group.

filename: /etc/polkit-1/rules.d/50-libvirt.rules
/* Allow users in kvm group to manage the libvirt daemon without authentication */
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
subject.isInGroup("kvm")) {
return polkit.Result.YES;
}
});

Virsh

Virsh is the command line interface for libvirt. It can be used to import the QEMU arguments into an XML format that libvirt will understand.

To make life easier it is suggested to make a shell alias for virsh.

# alias virsh="virsh -c qemu:///system"

Save the QEMU arguments used before to a temporary file.

# echo "/usr/bin/qemu-system-x86_64 --enable-kvm -machine q35,accel=kvm -device intel-iommu \
-m 512 -smp 1 -cpu Broadwell -drive file=/dev/data/dns,cache=none,if=virtio,format=raw \
-net bridge,br=kvm0 -net nic,model=virtio,macaddr=00:00:00:00:00:00 -vga qxl \
-spice port=5900,addr=127.0.0.1,disable-ticketing \
-monitor unix:/tmp/monitor-dns.sock,server,nowait" > kvm.args
Temporarily changing the CPU because virsh cannot recognize host

Convert this to XML format.

# virsh domxml-from-native qemu-argv kvm.args > dns.xml

Then open up the XML file in an editor and change the name, cpu and graphics block.

filename: dns.xml

<name>DNS (Arch64)</name>

<cpu mode='host-passthrough' />

<graphics type='spice' port='5900' autoport='no' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1' />
</graphics>

The last two qemu:commandline arguments can also be removed, as they were setting up the SPICE server, which is now handled through the graphics block.
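For reference, the leftover block produced by domxml-from-native looks roughly like the following; the exact values depend on the arguments that were converted, so treat this only as an illustration of what to delete.

<!-- illustration only: the SPICE arguments carried over from kvm.args -->
<qemu:commandline>
  <qemu:arg value='-spice'/>
  <qemu:arg value='port=5900,addr=127.0.0.1,disable-ticketing'/>
</qemu:commandline>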

The XML should now be in a similar state as to when it was executed with the QEMU binary.

Import the XML into libvirt.

# virsh define dns.xml

The VM can now be launched.

# virsh start DNS

SSH and SPICE over SSH should both now work and the machine should be running. Use the following to start the machine on boot.

# virsh autostart DNS
If you have issues auto-starting the machine, check the logfiles in /var/log/libvirt/qemu/.

A reboot of the host machine at this point should yield the virtual machine DNS starting up automatically.

Virt-Manager

Virt-manager can be used to manage the virtual machines remotely.

It can now be installed on the local machine (the one viewing this tutorial, not the KVM host machine); this can then be used to connect to libvirt remotely via SSH.

# pacaur -S virt-manager

Connect remotely to QEMU/KVM with virt-manager over SSH and the virtual machine should be shown as running.

Packer

Packer is a tool for automating the creation of virtual machines; in this instance it will be used to automate the creation of Vagrant boxes. I have already taken the time to create a Packer template for Arch Linux based off of my installation tutorials, but I encourage you to use this only as a basis and delve deeper to create your own templates. I could very easily have just downloaded someone else's templates, but then I would lack understanding.

GitHub: kyau/packer-kvm-templates

Vagrant-Libvirt

The libvirt plugin installation for vagrant requires some cleanup first.

# sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}
# sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,backup}

Then build the plugin.

# vagrant plugin install vagrant-libvirt

Templates

The Packer templates are in JSON format and contain all of the information needed to create the virtual machine image. Descriptions of all the template sections and values, including default values, can be found in the Packer docs. For Arch Linux, the template file archlinux-x86_64-base-vagrant.json will be used to generate an Arch Linux qcow2 virtual machine image.

# git clone https://github.com/kyau/packer-kvm-templates

To explain the template a bit: inside of the builders section the template specifies that it is a qcow2 image running on QEMU/KVM. A few settings are imported from user variables defined in a section at the top of the file so that quick edits are easy; these include the ISO URL and checksum, the country setting, the disk space for the VM's primary hard drive, the amount of RAM and the number of vCores to dedicate to the VM, whether or not it is a headless VM, and the login and password for the primary SSH user. The template also specifies that the VM should use virtio for the disk and network interfaces. Lastly come Packer's builtin web server and the boot commands: http_directory specifies which directory will be the root of the builtin web server (this lets files be hosted for the VM to access during installation), boot_command is an array of commands executed upon boot in order to kick-start the installer, and qemuargs should be rather apparent as they are the arguments passed to QEMU.
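As an illustration only, the skeleton below shows how those pieces fit together. The field names follow the Packer QEMU builder documentation, but the values are placeholders rather than the ones used in the repository; consult the actual archlinux-x86_64-base-vagrant.json for the real settings.

{
  "variables": {
    "iso_url": "https://example.org/archlinux.iso",
    "iso_checksum": "PLACEHOLDER",
    "ssh_username": "root",
    "ssh_password": "PLACEHOLDER",
    "headless": "true"
  },
  "builders": [{
    "type": "qemu",
    "format": "qcow2",
    "accelerator": "kvm",
    "disk_interface": "virtio",
    "net_device": "virtio-net",
    "iso_url": "{{user `iso_url`}}",
    "iso_checksum": "{{user `iso_checksum`}}",
    "headless": "{{user `headless`}}",
    "ssh_username": "{{user `ssh_username`}}",
    "ssh_password": "{{user `ssh_password`}}",
    "http_directory": "default",
    "boot_command": [
      "<enter><wait10>",
      "curl -O http://{{ .HTTPIP }}:{{ .HTTPPort }}/install.sh<enter>"
    ]
  }]
}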

# cd packer-kvm-templates

The provisioners section executes three separate scripts after the machine has booted. These scripts are also passed the required user variables from the top of the file as shell variables. The install.sh script installs Arch Linux, hardening.sh applies hardening to the Arch Linux installation and cleanup.sh handles general cleanup after the installation is complete. While the README.md does have all of this information for the packer templates, it will also be detailed here.


For added security generate a new moduli file for your VMs (or copy it from /etc/ssh/moduli).

# ssh-keygen -G moduli.all -b 4096
# ssh-keygen -T moduli.safe -f moduli.all
# mv moduli.safe moduli && rm moduli.all

Enter the directory for the Arch Linux template and sym-link the moduli.

# cd archlinux-x86_64-base/default
# ln -s ../../moduli . && cd ..

Build the base virtual machine image.

# ./build archlinux-x86_64-base-vagrant.json
This runs PACKER_LOG=1 PACKER_LOG_PATH="packer.log" packer-io build archlinux-x86_64-base-vagrant.json; it logs to the current directory.

Once finished, there should be a qcow2 vagrant-libvirt image for Arch Linux in the box directory.

Add this image to Vagrant.

# vagrant box add box/archlinux-x86_64-base-vagrant-libvirt.box --name archlinux-x86_64-base

Vagrant-Libvirt

Vagrant can be used to build and manage test machines. The vagrant-libvirt plugin adds a Libvirt provider to Vagrant, allowing Vagrant to control and provision machines via the Libvirt toolkit.

To bring up the first machine, initialize Vagrant in a new directory. First create a directory for the machine.

# cd
# mkdir testmachine
# cd testmachine

Initialize the machine with Vagrant.

# vagrant init archlinux-x86_64-base

Then bring up the machine.

# vagrant up

Then SSH into the machine directly.

# vagrant ssh

Additional Notes

These notes are here from my own install.

# cd ~/packer-kvm-templates/archlinux-x86_64-base
# ./build archlinux-x86_64-base.json
 
# sudo lvcreate -V 20G --thin -n bind data/qemu
# sudo lvcreate -V 20G --thin -n sql data/qemu
# sudo lvcreate -V 20G --thin -n nginx data/qemu
 
# sudo qemu-img convert -f qcow2 -O raw qcow2/archlinux-x86_64-base.qcow2 /dev/data/bind
# sudo virt-sparsify --in-place /dev/data/bind
 
# vim virshxml
# ./virshxml
# virsh define ~/newxml-bind.xml

Then repeat this for sql and nginx.

Don't forget about the notes virshxml gives for replacing the networkd service.
# virsh start bind
# virsh start sql
# virsh start nginx
 
# virsh autostart bind
# virsh autostart sql
# virsh autostart nginx

DNS

Login to the dns virtual machine and install BIND.

# pacaur -S bind

Set up the zones for all domains and reverse IPs.
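As a minimal illustration only, a forward zone consists of a zone block in named.conf plus a zone file; the records below are placeholders (192.0.2.1 is a documentation address) and kyau.net is simply the domain used later in this guide.

filename: /etc/named.conf
zone "kyau.net" IN {
    type master;
    file "kyau.net.zone";
    allow-update { none; };
    notify no;
};

filename: /var/named/kyau.net.zone
$TTL 7200
@    IN SOA  ns1.kyau.net. hostmaster.kyau.net. (
         2019022501 ; serial
         7200       ; refresh
         3600       ; retry
         1209600    ; expire
         7200 )     ; minimum
     IN NS   ns1.kyau.net.
     IN A    192.0.2.1  ; placeholder address
ns1  IN A    192.0.2.1  ; placeholder address
www  IN A    192.0.2.1  ; placeholder address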

DNSSEC

Adding DNSSEC to BIND is always a good idea.[1] First add the following lines to the options block of the BIND config.

filename: /etc/named.conf
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

Install haveged for key generation inside of VMs.

# pacaur -S haveged
# haveged -w 1024

Gain root privileges.

# sudo -i
# cd /var/named

Create zone signing keys for all domains.

# dnssec-keygen -a ECDSAP384SHA384 -n ZONE kyau.net
# dnssec-keygen -a ECDSAP384SHA384 -n ZONE kyau.org

Create key signing keys for all domains.

# dnssec-keygen -f KSK -a ECDSAP384SHA384 -n ZONE kyau.net
# dnssec-keygen -f KSK -a ECDSAP384SHA384 -n ZONE kyau.org

Run the following for each domain to include the keys in the zone files.

# for key in `ls Kkyau.net*.key`; do echo "\$INCLUDE $key" >> kyau.net.zone; done
# for key in `ls Kkyau.org*.key`; do echo "\$INCLUDE $key" >> kyau.org.zone; done

Run a check on each zone.

# named-checkzone kyau.net /var/named/kyau.net.zone
# named-checkzone kyau.org /var/named/kyau.org.zone

Sign each zone with dnssec-signzone.

# dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.net -t kyau.net.zone
# dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.org -t kyau.org.zone

To update a zone at any point, just edit the zone, check it and then re-sign it as root (sudo -i).

# cd /var/named
# dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.net -t kyau.net.zone
# systemctl restart named
WARNING: DO NOT increment the serial in the zone file, this will be done automatically!

Modify the bind config to read from the signed zone files.

filename: /etc/named.conf
zone "kyau.net" IN {
type master;
file "kyau.net.zone.signed";
allow-update { none; };
notify no;
};

zone "kyau.org" IN {
type master;
file "kyau.org.zone.signed";
allow-update { none; };
notify no;
};

Make sure all is in order.

# named-checkconf /etc/named.conf

Next, visit the domain registrar for the domain and publish the DS records for each zone.
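dnssec-signzone should also have written a dsset-<domain>. file next to each zone; the DS records it contains (or, alternatively, the output of dnssec-dsfromkey run against the KSK) are what the registrar needs. A quick way to view them, assuming the default output location:

# cat /var/named/dsset-kyau.net.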

SQL

Create directories on the host machine for the nginx and sql servers.

# sudo mkdir /www/sql /www/nginx

Make sure it has the right permissions.

# sudo chown -R kvm:kvm /www

Edit the sql virtual machine to mount the folder inside of the VM.

# sudo virsh edit sql
 

<filesystem type='mount' accessmode='passthrough'>
<source dir='/www/sql'/>
<target dir='neutron-sql'/>
</filesystem>

Shutdown the virtual machine, then restart it back up.

Login to the sql virtual machine and create a directory for SQL.

# mkdir /sql

Mount the partition from the HOST system.

# mount neutron-sql /sql -t 9p -o trans=virtio

Also set this to mount on boot.

filename: /etc/fstab
neutron-sql /sql 9p trans=virtio 0 0

After the directory is mounted make sure it has the right permissions.

# sudo chown mysql:mysql /sql
# sudo chmod 770 /sql

Install mariadb.

# pacaur -S mariadb

Initialize the SQL database directory.

# sudo mysql_install_db --user=mysql --basedir=/sql/base --datadir=/sql/db

Modify the MySQL global config to point at the new data directory, bind to IPv6 in addition to IPv4 and disable loading local files.

filename: /etc/mysql/my.cnf

[mysqld]
bind-address = ::
port = 3306
socket = /run/mysqld/mysqld.sock
datadir = /sql/db
local-infile = 0

tmpdir = /tmp/

Enable and start the systemd service.

# sudo systemctl enable mariadb
# sudo systemctl start mariadb

Run the MySQL post-install security check; change the root password and remove all demo/test related material.

# sudo mysql_secure_installation

User Setup

Open mysql, change the root username and allow access from the nginx virtual machine.

# mysql -u root -p
 
MariaDB> RENAME USER 'root'@'localhost' to 'kyau'@'localhost';
MariaDB> RENAME USER 'root'@'127.0.0.1' to 'kyau'@'127.0.0.1';
MariaDB> RENAME USER 'root'@'::1' to 'kyau'@'::1';
MariaDB> GRANT ALL PRIVILEGES ON *.* TO 'kyau'@'142.44.172.255' IDENTIFIED BY 'my-password' WITH GRANT OPTION;
MariaDB> FLUSH PRIVILEGES;

Confirm the changes by listing all users.

MariaDB> SELECT User,Host,Password FROM mysql.user;

UTF8MB4

Optionally, enable UTF8MB4 support, which is recommended over UTF8 as it will provide full unicode support.

filename: /etc/mysql/my.cnf
[client]
default-character-set = utf8mb4

[mysqld]
collation_server = utf8mb4_unicode_ci
character_set_client = utf8mb4
character_set_server = utf8mb4
skip-character-set-client-handshake

[mysql]
default-character-set = utf8mb4

Importing Databases

Head over to the current SQL server and export the needed database.

# mysqldump -u kyau -p --databases <db1> <db2>… > backup.sql

Import them to the new database server.

# mysql -u kyau -p
MariaDB> source backup.sql

Database Maintenance

MariaDB includes mysqlcheck to check, analyze, repair and optimize database tables.

To check all tables in all databases:

# mysqlcheck --all-databases -u root -p -m

To analyze all tables in all databases:

# mysqlcheck --all-databases -u root -p -a

To repair all tables in all databases:

# mysqlcheck --all-databases -u root -p -r

To optimize all tables in all databases:

# mysqlcheck --all-databases -u root -p -o

To check if any tables require upgrades:

# mysqlcheck --all-databases -u root -p -g

If any tables require upgrades, it is recommended to run a full upgrade (this should also be done in-between major MariaDB version releases).

# mysql_upgrade -u root -p

Firewall

Add rules to the firewall to allow access from the nginx virtual machine to MySQL.

filename: /etc/nftables.conf
ip saddr 142.44.172.255 tcp dport 3306 ct state new,established counter accept

Nginx

Login to the nginx virtual machine and shut it down. Then edit the virtual machine on the host.

# virsh edit nginx

Add a mountpoint for the nginx directory on the host.

<filesystem type='mount' accessmode='mapped'>
<source dir='/www/nginx'/>
<target dir='neutron-nginx'/>
</filesystem>

Restart the nginx machine and login via ssh.

# virsh start nginx

Create a directory for nginx files.

# sudo mkdir /nginx

Set the directory to mount on boot.

filename: /etc/fstab
neutron-nginx /nginx 9p trans=virtio 0 0

Reboot the machine to make sure the mounting works.

Install nginx-mainline.

# pacaur -S nginx-mainline

Enable http and https in nftables.

filename: /etc/nftables.conf
tcp dport {http,https} accept

Restart nftables to apply the new rules.

# sudo systemctl restart nftables

Start and enable the nginx service.

# sudo systemctl enable nginx
# sudo systemctl start nginx

You should be able to visit the IP address of the machine and see the nginx default page.

For configuration, first create the blank configs and directories needed.

# sudo touch /nginx/nginx.conf /nginx/http.conf
# sudo mkdir /nginx/logs /nginx/conf.d /nginx/https

Set permissions accordingly.

# sudo chown -R http:http /nginx/*

Edit the main nginx config and replace all of it with a single include.

filename: /etc/nginx/nginx.conf
include /nginx/nginx.conf;

Create the actual main configuration; use sudo to edit the configs.

# sudoedit /nginx/nginx.conf
 
filename: /nginx/nginx.conf
user http;
worker_processes auto;
worker_cpu_affinity auto;
pcre_jit on;

events {
worker_connections 4096;
}

error_log /nginx/logs/nginx-error.log;

include /nginx/http.conf;

Create the http block configuration.

filename: /nginx/http.conf
http {
include mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /nginx/logs/nginx-access.log main;

sendfile on;
tcp_nopush on;
aio threads;
charset utf-8;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
include /nginx/conf.d/*.conf;
}

Then create configs for each website in /nginx/conf.d/ with the naming scheme *.conf. There is a great post on Stack Overflow[2] about achieving an A+ rating with 100 points in every category on SSL Labs.
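As a starting point only, a per-site config might look like the following. The domain, paths and PHP handler here are assumptions that need to be adapted, and the certificate paths assume the Let's Encrypt setup described further below.

filename: /nginx/conf.d/kyau.net.conf
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name kyau.net www.kyau.net;
    root /nginx/https/kyau.net;
    index index.php index.html;
    access_log /nginx/logs/kyau.net-access.log main;

    ssl_certificate /etc/letsencrypt/live/kyau.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kyau.net/privkey.pem;
    ssl_dhparam /etc/letsencrypt/live/kyau.net/dhparam4096.pem;

    location ~ \.php$ {
        # adjust to match the listen directive in /etc/php/php-fpm.d/www.conf
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}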

PHP

Install the packages required for PHP.

# pacaur -S php php-fpm php-gd php-intl php-mcrypt php-sqlite imagemagick

Open up the main php-fpm config.

filename: /etc/php/php-fpm.conf
error_log = /nginx/logs/php-fpm.log

Then edit the config for your server instance.

filename: /etc/php/php-fpm.d/www.conf
listen.allowed_clients = 127.0.0.1
...
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /nginx/logs/php.log
php_admin_value[memory_limit] = 256M
php_admin_value[post_max_size] = 2048M
php_admin_value[upload_max_filesize] = 2048M
php_admin_value[date.timezone] = America/Toronto

Open up the PHP config /etc/php/php.ini and enable the modules: bz2, exif, gd, gettext, iconv, intl, mcrypt, mysqli, pdo_mysql, sockets, sqlite3 by uncommenting their extension lines.
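The relevant lines should end up looking roughly like this; treat it as a sketch, since the exact set of lines depends on the PHP version (older versions use an .so suffix) and some modules, such as mcrypt, come from separate packages.

filename: /etc/php/php.ini
; uncomment (or add) only the modules that are needed
extension=bz2
extension=exif
extension=gd
extension=gettext
extension=iconv
extension=intl
extension=mcrypt
extension=mysqli
extension=pdo_mysql
extension=sockets
extension=sqlite3

Start and enable the php-fpm service.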

# sudo systemctl enable php-fpm
# sudo systemctl start php-fpm

Import all of the web files and update the configs in conf.d for all websites.

Let's Encrypt

Using SSL encryption is a must. First install the required packages.

# pacaur -S certbot

Bring down nginx temporarily.

# sudo systemctl stop nginx

Use certbot to get a certificate for all domains needed.

# sudo certbot certonly --agree-tos --standalone --email your@address.com --rsa-key-size 4096 -d domain.com,www.domain.com,subdomain.domain.com

Generate a dhparam.

# sudo openssl dhparam -out /etc/letsencrypt/live/kyau.net/dhparam4096.pem 4096

Start back up nginx.

# sudo systemctl start nginx

A timer can then be set up to run certbot twice daily.

filename: /etc/systemd/system/certbot.timer
[Unit]
Description=Twice daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=0/12:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Also create the service for certbot.

filename: /etc/systemd/system/certbot.service
[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --pre-hook "/usr/bin/systemctl stop nginx.service" --post-hook "/usr/bin/systemctl start nginx.service" --quiet --agree-tos

Enable and start the timer.

# sudo systemctl enable certbot.timer
# sudo systemctl start certbot.timer

References

  1. ^ DigitalOcean. How To Setup DNSSEC on an Authoritative BIND DNS Server
  2. ^ Stack Overflow. How do you score A+ with 100 on all categories on SSL Labs test with Let's Encrypt and Nginx?