Introduction
This is a tutorial on how to automate the setup of VMs using KVM on Arch Linux. It uses QEMU as the back-end for KVM, together with libvirt, Packer and Vagrant.
This tutorial is meant as a supplement to my OVH: Custom Installation tutorial.
Installation
Before getting started, there are a few packages that will be needed to set all of this up.
# pacaur -S libvirt openssl-1.0 packer-io qemu-headless qemu-headless-arch-extra vagrant
vagrant-libvirt
The libvirt plugin installation for Vagrant requires some library cleanup first.
# sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}
# sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,.backup}
Then build the plugin.
# vagrant plugin install vagrant-libvirt
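To confirm that the plugin built and installed correctly, list the installed plugins:
# vagrant plugin list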
Hugepages
Enabling hugepages can improve the performance of virtual machines. First add an entry to the fstab; before doing so, check the group id of the kvm group, as it is needed for the gid= mount option below.
# grep kvm /etc/group
# sudoedit /etc/fstab
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=999 0 0
Instead of rebooting, the filesystem can simply be remounted.
# sudo umount /dev/hugepages
# sudo mount /dev/hugepages
This can then be verified.
# sudo mount | grep huge
# ls -FalG /dev/ | grep huge
Now set the number of hugepages to use. This requires a bit of math: for each gigabyte of system RAM that you want to dedicate to VMs, divide its size in megabytes by two (hugepages are 2 MiB each on x86_64 by default).
On my setup I will dedicate 12GB out of the 16GB of system RAM to VMs. This means (12 * 1024) / 2, or 6144 hugepages.
Set the number of hugepages.
# echo 6144 | sudo tee /proc/sys/vm/nr_hugepages
Also set this permanently by adding a file to /etc/sysctl.d.
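For example (the filename is arbitrary; any file ending in .conf under /etc/sysctl.d/ is applied at boot):
# sudoedit /etc/sysctl.d/40-hugepages.conf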
vm.nr_hugepages = 6144
Again verify the changes.
# grep HugePages_Total /proc/meminfo
KVM User
Finally, create a user for KVM.
# sudo useradd -g kvm -s /usr/bin/nologin kvm
Then modify the libvirt QEMU config to reflect this.
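The file in question is the QEMU driver configuration, which on a default libvirt installation is located at /etc/libvirt/qemu.conf:
# sudoedit /etc/libvirt/qemu.conf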
user = "kvm" group = "kvm" |
Packer
Packer is a tool for automating the creation of virtual machines; in this instance it will be used to automate the creation of Vagrant boxes. I have already taken the time to create a Packer template for Arch Linux based on my installation tutorials, but I encourage you to use it only as a basis and delve deeper to create your own templates. I could easily have just downloaded someone else's templates, but then I would lack understanding.
GitHub: kyau/packer-kvm-templates
The Packer templates are in JSON format and contain all of the information needed to create the virtual machine image. Descriptions of all the template sections and values, including default values, can be found in the Packer docs. For Arch Linux, the template file archlinux-x86_64-base-vagrant.json will be used to generate an Arch Linux qcow2 virtual machine image.
# git clone https://github.com/kyau/packer-kvm-templates
# cd packer-kvm-templates/archlinux-x86_64-base
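Before building, the template can be checked for syntax errors with Packer's validate subcommand (the AUR packer-io package installs the binary as packer-io; an upstream install names it simply packer):
# packer-io validate archlinux-x86_64-base-vagrant.json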
To explain the template a bit: inside the builders section, the template specifies that it is a qcow2 image running on QEMU/KVM. A few settings are imported from user variables defined in the variables section at the top of the file, including the ISO URL and checksum, the country setting, the disk space for the VM's primary hard drive, the amount of RAM and the number of vCores to dedicate to the VM, whether or not it runs headless, and the login and password for the primary SSH user. These are all defined as user variables in one place at the top so that quick edits are easy. The template also specifies that the VM should use virtio for the disk and network interfaces. Lastly there are the built-in web server in Packer and the boot commands: http_directory specifies which directory will be the root of the built-in web server (this allows files to be hosted for the VM to access during installation), while boot_command is an array of keystrokes sent to the VM at boot in order to kick-start the installer. Finally, qemuargs should be rather self-explanatory, as they are the arguments passed to QEMU.
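As a rough sketch only (the key names follow the Packer QEMU builder documentation, but the exact values, variable names and boot commands live in the template itself and are placeholders here), the builders section looks roughly like this:
"builders": [{
    "type": "qemu",
    "format": "qcow2",
    "accelerator": "kvm",
    "iso_url": "{{user `iso_url`}}",
    "iso_checksum": "{{user `iso_checksum`}}",
    "disk_size": "{{user `disk_size`}}",
    "disk_interface": "virtio",
    "net_device": "virtio-net",
    "headless": "{{user `headless`}}",
    "http_directory": "srv",
    "ssh_username": "{{user `ssh_username`}}",
    "ssh_password": "{{user `ssh_password`}}",
    "boot_command": [ "<enter><wait10>", "curl -O http://{{ .HTTPIP }}:{{ .HTTPPort }}/install.sh<enter>" ],
    "qemuargs": [ [ "-m", "{{user `memory`}}" ] ]
}]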
Next is the provisioners section, which executes three separate scripts after the machine has booted. These scripts are also passed the required user variables from the top of the file as shell variables. The install.sh script installs Arch Linux, provision.sh sets up everything dealing with the Vagrant user, and cleanup.sh handles general cleanup after the installation is complete.
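Sketched in the same hedged way (the variable names and script paths shown are illustrative; the real ones are in the repository), the provisioners block follows the same pattern:
"provisioners": [{
    "type": "shell",
    "environment_vars": [ "COUNTRY={{user `country`}}", "PASSWORD={{user `password`}}" ],
    "scripts": [ "scripts/install.sh", "scripts/provision.sh", "scripts/cleanup.sh" ]
}]
Once the variables have been adjusted to taste, the image is built with Packer:
# packer-io build archlinux-x86_64-base-vagrant.json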