ArchLinux:KVM

Revision as of 14:08, 9 August 2017

Introduction

This is a tutorial on how to automate the setup of VMs using KVM on Arch Linux. It uses QEMU as the back-end for KVM, together with libvirt, Packer and Vagrant.

Note: This tutorial is meant as a supplement to my OVH: Custom Installation tutorial.

Installation

Before getting started, there are a few packages that will be needed to set all of this up.

# pacaur -S libvirt openssl-1.0 packer-io qemu-headless qemu-headless-arch-extra vagrant

vagrant-libvirt

The libvirt plugin installation for vagrant requires some cleanup first.

# sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}
# sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,.backup}

Then build the plugin.

# vagrant plugin install vagrant-libvirt

Hugepages

Enabling hugepages can improve the performance of virtual machines. First add an entry to the fstab, making sure to check beforehand what the group ID of the kvm group is.

# grep kvm /etc/group
# sudoedit /etc/fstab


filename: /etc/fstab
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=999 0 0

Instead of rebooting, simply remount.

# sudo umount /dev/hugepages
# sudo mount /dev/hugepages

This can then be verified.

# sudo mount | grep huge
# ls -FalG /dev/ | grep huge

Now set the number of hugepages to use. This requires a bit of math: for each gigabyte of system RAM that you want to dedicate to VMs, divide its size in megabytes by two (hugepages are 2 MB each on x86_64 by default).

Note: On my setup I will dedicate 12GB of the 16GB of system RAM to VMs. This means (12 * 1024) / 2, or 6144.
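The arithmetic above can be sketched as a quick shell computation (assuming the default 2 MB hugepage size on x86_64; the variable names are illustrative):

```shell
# Compute nr_hugepages for a desired amount of VM RAM.
# Assumes the default 2 MB hugepage size on x86_64.
vm_ram_gb=12                              # RAM to dedicate to VMs, in GB
nr_hugepages=$(( vm_ram_gb * 1024 / 2 ))  # (12 * 1024) / 2
echo "$nr_hugepages"
```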

Set the number of hugepages.

# echo 6144 | sudo tee /proc/sys/vm/nr_hugepages

Also set this permanently by adding a file to /etc/sysctl.d.

filename: /etc/sysctl.d/40-hugepages.conf
vm.nr_hugepages = 6144

Again verify the changes.

# grep HugePages_Total /proc/meminfo

KVM User

Create a user for KVM.

# sudo useradd -g kvm -s /usr/bin/nologin kvm

Then modify the libvirt QEMU config to reflect this.

filename: /etc/libvirt/qemu.conf
user = "kvm"
group = "kvm"

Fix permissions on /dev/kvm.

# sudo groupmod -g 78 kvm
Note: systemd, as of version 234, assigns dynamic IDs to groups, but KVM expects 78.

Add the current user to the kvm group.

# sudo gpasswd -a kyau kvm

LVM Thin Provisioning

During the installation of the KVM host machine a data volume group was created for VMs. Volumes will need to be created for each virtual machine, for this an LVM thin pool can be utilized.

Thin provisioning creates another virtual layer on top of the volume group, in which logical thin volumes can be created. Unlike normal thick volumes, thin volumes do not reserve their disk space on creation but instead allocate it upon write; to the operating system they are still reported as full-size volumes. This means that when utilizing LVM directly for KVM, a thin volume performs similarly to a "dynamic disk": it only uses the disk space it needs, regardless of how big the virtual hard drive actually is. This can also be paired with LVM cloning (snapshots) to create some interesting setups, like running 1TB of VMs on a 128GB disk.

Warning: The one disadvantage to doing this is that, without proper disk monitoring and management, it can lead to over-provisioning (pool overflow will cause volumes to drop).
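As a sketch of how such a pool might be created on the data volume group (the pool and volume names, and all sizes, are examples only, not values from this setup):

```shell
# Create a 100GB thin pool named "vmpool" inside the "data" volume group.
sudo lvcreate --type thin-pool -L 100G -n vmpool data
# Create a 32GB thin volume for a VM; space is only allocated on write.
sudo lvcreate -V 32G --thin -n vm-test data/vmpool
# Check actual usage: the Data% column shows how much of the pool is consumed.
sudo lvs data
```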

Packer

Packer is a tool for automating the creation of virtual machines; in this instance it will be used to automate the creation of Vagrant boxes. I have already taken the time to create a Packer template for Arch Linux based off of my installation tutorials, but I encourage you to use this only as a basis and delve deeper to create your own templates. I could very easily have downloaded someone else's templates, but then I would lack understanding.

Note: GitHub: kyau/packer-kvm-templates

The Packer templates are in JSON format and contain all of the information needed to create the virtual machine image. Descriptions of all the template sections and values, including default values, can be found in the Packer docs. For Arch Linux, the template file archlinux-x86_64-base-vagrant.json will be used to generate an Arch Linux qcow2 virtual machine image.

# git clone https://github.com/kyau/packer-kvm-templates
# cd packer-kvm-templates/archlinux-x86_64-base

To explain the template a bit: inside the builders section, the template specifies a qcow2 image running on QEMU KVM. Several settings are imported from user variables placed in a section at the top of the file, which makes quick edits easy: the ISO URL and checksum, the country setting, the disk space for the VM's primary hard drive, the amount of RAM and number of vCores to dedicate to the VM, whether or not it is a headless VM, and the login and password for the primary SSH user. The template also specifies that the VM should use virtio for the disk and network interfaces. Next come Packer's built-in web server and the boot commands: http_directory specifies which directory will be the root of the built-in web server (this enables one to host files for the VM to access during installation), while boot_command is an array of commands executed upon boot in order to kick-start the installer. Finally, qemuargs should be rather apparent, as these are the arguments passed to QEMU.
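A stripped-down sketch of what such a builders section looks like (the values and file names here are illustrative, not the actual contents of the template):

```json
{
  "builders": [{
    "type": "qemu",
    "format": "qcow2",
    "iso_url": "{{user `iso_url`}}",
    "iso_checksum": "{{user `iso_checksum`}}",
    "disk_size": "{{user `disk_size`}}",
    "disk_interface": "virtio",
    "net_device": "virtio-net",
    "headless": "{{user `headless`}}",
    "ssh_username": "{{user `ssh_username`}}",
    "ssh_password": "{{user `ssh_password`}}",
    "http_directory": "default",
    "boot_command": [
      "<enter><wait10>",
      "curl -O http://{{ .HTTPIP }}:{{ .HTTPPort }}/install.sh<enter>"
    ],
    "qemuargs": [
      ["-m", "{{user `memory`}}"],
      ["-smp", "{{user `cpus`}}"]
    ]
  }]
}
```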


Next, look at the provisioners section, which executes three separate scripts after the machine has booted. These scripts are passed the required user variables, set at the top of the file, as shell variables. The install.sh script installs Arch Linux, hardening.sh applies hardening to the Arch Linux installation, and cleanup.sh handles general cleanup after the installation is complete.
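A minimal sketch of what that provisioners section might look like (script paths and variable names are illustrative):

```json
{
  "provisioners": [{
    "type": "shell",
    "environment_vars": [
      "COUNTRY={{user `country`}}",
      "PASSWORD={{user `ssh_password`}}"
    ],
    "scripts": [
      "scripts/install.sh",
      "scripts/hardening.sh",
      "scripts/cleanup.sh"
    ]
  }]
}
```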

While the README.md does have all of this information for the packer templates, it will also be detailed here.

For added security, generate a new moduli for your VMs (or copy it from /etc/ssh/moduli).

# ssh-keygen -G moduli.all -b 4096
# ssh-keygen -T moduli.safe -f moduli.all
# mv moduli.safe moduli && rm moduli.all

Enter the directory for the Arch Linux template and sym-link the moduli.

# cd archlinux-x86_64-base/default
# ln -s ../../moduli . && cd ..

Build the base virtual machine image.

# ./build
Note: This runs PACKER_LOG=1 PACKER_LOG_PATH="packer.log" packer-io build archlinux-x86_64-base-vagrant.json; it logs to the current directory.

Once finished, there should be a qcow2 vagrant-libvirt image for Arch Linux in the box directory.

References