KVM on Arch Linux
UNDER CONSTRUCTION: The document is currently being modified! |
GitLab: kyaulabs/autoarch: Arch Linux installation automation. |
Introduction
This is a tutorial for setting up and using KVM on Arch Linux utilizing QEMU as the back-end and libvirt as the front-end. Additional notes have been added for creating system images.
UPDATE (2019): Tested/Cleaned Up this document using a Dell R620 located in-house at KYAU Labs as the test machine.
Installation
Before getting started it is a good idea to make sure VT-x or AMD-V is enabled in BIOS.
# egrep --color 'vmx|svm' /proc/cpuinfo |
If hardware virtualization is not enabled, reboot the machine and enter the BIOS to enable it. |
Once hardware virtualization has been verified install all the packages required.
# pikaur -S bridge-utils dmidecode libguestfs libvirt \
    openbsd-netcat openssl-1.0 ovmf qemu-headless \
    qemu-headless-arch-extra virt-install |
Configuration
After all of the packages have been installed libvirt/QEMU need to be configured.
User/Group Management
Create a user for KVM.
# sudo useradd -g kvm -s /usr/bin/nologin kvm |
Then modify the libvirt QEMU config to reflect this.
user = "kvm"
group = "kvm" |
Fix permission on /dev/kvm
# sudo groupmod -g 78 kvm |
# sudo usermod -u 78 kvm |
systemd as of 234 assigns dynamic IDs to groups, but KVM expects 78 |
User Access
If non-root user access to libvirtd is desired, add the libvirt group to polkit access.
/* Allow users in the libvirt group to manage the libvirt daemon without authentication */
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
}); |
If HAL was followed to secure the system after installation and you would like to use libvirt as a non-root user, the hidepid security feature from the /proc line in /etc/fstab will need to be removed. This will require a reboot. |
Add the users who need libvirt access to the kvm and libvirt groups.
# sudo gpasswd -a username kvm |
# sudo gpasswd -a username libvirt |
To make life easier it is suggested to set a couple of shell variables for virsh; otherwise it defaults to qemu:///session when run as a non-root user.
# export VIRSH_DEFAULT_CONNECT_URI=qemu:///system |
# export LIBVIRT_DEFAULT_URI=qemu:///system |
These can be added to /etc/bash.bashrc, /etc/fish/config.fish or /etc/zsh/zshenv depending on which shell is being used.
Hugepages
Enabling hugepages can improve the performance of virtual machines. First add an entry to the fstab; make sure to first check the group ID of the kvm group (it should be 78).
# grep kvm /etc/group |
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0 |
Instead of rebooting, simply remount.
# sudo umount /dev/hugepages # sudo mount /dev/hugepages |
This can then be verified.
# sudo mount | grep huge # ls -FalG /dev/ | grep huge |
Now set the number of hugepages to use. This requires a bit of math: with the default hugepage size of 2 MiB, divide the amount of RAM (in MiB) that you want to dedicate to VMs by two.
On my setup I will dedicate 40GB out of the 48GB of system RAM to VMs. This means (40 * 1024) / 2 or 20480 |
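The same calculation can be done quickly in the shell (a minimal sketch; set VM_GB to the amount of RAM being dedicated to VMs):
# VM_GB=40; echo $(( VM_GB * 1024 / 2 ))
20480 |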
Set the number of hugepages.
# echo 20480 | sudo tee /proc/sys/vm/nr_hugepages |
Also set this permanently by adding a file to /etc/sysctl.d.
vm.nr_hugepages = 20480 |
Again verify the changes.
# grep HugePages_Total /proc/meminfo |
Edit the libvirt QEMU config and turn hugepages on.
… hugetlbfs_mount = "/dev/hugepages" … |
Kernel Modules
In order to mount directories from the host inside of a virtual machine, the 9pnet_virtio kernel module will need to be loaded.
# sudo modprobe 9pnet_virtio |
Also load the module on boot by adding it to a file in /etc/modules-load.d (e.g. virtio-9p.conf).
9pnet_virtio |
In addition change the global QEMU config to turn off dynamic file ownership.
… dynamic_ownership = 0 … |
UEFI & PCI-E Passthrough
The Open Virtual Machine Firmware (OVMF) project enables UEFI support for virtual machines, and enabling the IOMMU allows PCI passthrough among other things. This significantly extends the choice of guest operating systems and provides some other options.
GRUB
Enable IOMMU on boot by adding an option to the kernel line in GRUB.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on" |
Re-generate the GRUB config.
# sudo grub-mkconfig -o /boot/grub/grub.cfg |
rEFInd
Enable IOMMU on boot by adding it to the kernel options line in the rEFInd configuration.
options "root=/dev/mapper/skye-root rw add_efi_memmap nomodeset intel_iommu=on zswap.enabled=1 zswap.compressor=lz4 \ zswap.max_pool_percent=20 zswap.zpool=z3fold initrd=\intel-ucode.img" |
Reboot the machine and then verify IOMMU is enabled.
# sudo dmesg | grep -e DMAR -e IOMMU |
If it was enabled properly, there should be a line similar to [ 0.000000] DMAR: IOMMU enabled.
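To see how devices are grouped for passthrough, the IOMMU groups can be listed with a small loop over sysfs (a sketch; it assumes lspci from pciutils is installed):
# for g in /sys/kernel/iommu_groups/*; do \
    echo "IOMMU group ${g##*/}:"; \
    for d in "$g"/devices/*; do lspci -nns "${d##*/}"; done; \
  done |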
OVMF
Add the OVMF firmware to the libvirt QEMU config.
nvram = [ "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd" ] |
Services
Once the bridge is up and running, libvirtd can be started. Enable and start the libvirtd service.
# sudo systemctl enable libvirtd |
# sudo systemctl start libvirtd |
Verify that libvirt is running.
# virsh --connect qemu:///system |
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # |
If you end up at the virsh prompt, simply type quit to exit back to the shell.
Networking
The server being used for testing has quad gigabit network cards. For this type of setup, one NIC will be used for management of the host OS, while the other three will be bonded together using 802.3ad link aggregation (combining the NICs for increased throughput).
NIC Bonding
Pull up a list of all network cards in the machine.
# ip -c=auto l |
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether d4:be:d9:b2:95:43 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d4:be:d9:b2:95:45 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d4:be:d9:b2:95:47 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d4:be:d9:b2:95:49 brd ff:ff:ff:ff:ff:ff |
Create a management .network file, replace M.M.M.M with the management IP address and G.G.G.G with the gateway IP.
[Match]
Name=eth0

[Network]
DHCP=no
NTP=pool.ntp.org
DNS=1.1.1.1
LinkLocalAddressing=no

[Address]
Address=M.M.M.M/24
Label=management

[Route]
Gateway=G.G.G.G |
WARNING: If IPv6 is being used, remove the LinkLocalAddressing=no line from the file, as it defaults to IPv6. |
Create the bond interface with systemd-networkd.
[NetDev]
Name=bond0
Description=KVM vSwitch
Kind=bond

[Bond]
Mode=802.3ad
TransmitHashPolicy=layer3+4
MIIMonitorSec=1s
LACPTransmitRate=fast |
Use the last three network cards to create a bond0.network file.
[Match]
Name=eth1
Name=eth2
Name=eth3

[Network]
Bond=bond0 |
Finally create the .network file attaching it to the bridge that is created in the next section.
[Match]
Name=bond0

[Network]
Bridge=kvm0 |
Network Bridge
Setting up a network bridge for KVM is simple with systemd. Create the bridge interface with systemd-networkd.
[NetDev]
Name=kvm0
Kind=bridge |
Create the .network file for the bridge, replace X.X.X.X with the IP address desired for the KVM vSwitch, G.G.G.G with the gateway IP and modify the DNS if Cloudflare is not desired.
[Match]
Name=kvm0

[Network]
DHCP=no
NTP=pool.ntp.org
DNS=1.1.1.1
IPForward=yes
LinkLocalAddressing=no

[Address]
Address=X.X.X.X/24
Label=vswitch

[Route]
Gateway=G.G.G.G |
And finally restart networkd.
# sudo systemctl restart systemd-networkd |
The bridge should now be up and running; verify this.
# ip -c=auto a |
Before adding the bridge to libvirt, check the current networking settings.
# virsh net-list --all |
 Name      State      Autostart   Persistent
----------------------------------------------
 default   inactive   no          yes |
Create a libvirt configuration for the bridge.
<network>
  <name>kvm0</name>
  <forward mode="bridge"/>
  <bridge name="kvm0"/>
</network> |
Enable the bridge in libvirt.
# virsh net-define --file /etc/libvirt/bridge.xml |
Set the bridge to auto-start.
# virsh net-autostart kvm0 |
Start the bridge.
# virsh net-start kvm0 |
With the bridge now online, the default NAT network can be removed if it will not be used.
# virsh net-destroy default |
# virsh net-undefine default |
This can then be verified.
# virsh net-list --all |
 Name   State    Autostart   Persistent
----------------------------------------------
 kvm0   active   yes         yes |
firewalld
Since libvirt cannot interface directly with nftables, only with iptables, firewalld can be used as a gateway between the two. Before it can be started, nftables will have to be disabled if it is currently in use.
# sudo systemctl disable nftables |
# sudo systemctl stop nftables |
Install firewalld and dnsmasq.
# pikaur -S dnsmasq firewalld |
Start and enable the service.
# sudo systemctl enable firewalld |
# sudo systemctl start firewalld |
Verify the firewall started properly; it should return running.
# sudo firewall-cmd --state |
Add both interfaces to the public zone.
# sudo firewall-cmd --permanent --zone=public --add-interface=eth0 |
# sudo firewall-cmd --permanent --zone=public --add-interface=kvm0 |
Reboot the machine to verify the changes persist.
# sudo systemctl reboot |
The SSH service is added by default to the firewall, allowing one to log back in after reboot. |
Look up the default zone config to verify the interfaces were added.
# sudo firewall-cmd --list-all |
public
  target: default
  icmp-block-inversion: no
  interfaces: eth0 kvm0
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules: |
If any other services need to be added, do so now (a couple examples have been listed below).
# sudo firewall-cmd --zone=public --permanent --add-service=https |
# sudo firewall-cmd --zone=public --permanent --add-port=5900-5950/udp |
With the firewall now set up, libvirtd should start fully without any warnings about the firewall.
# sudo systemctl status libvirtd |
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:27:12 PST; 16min ago
     Docs: man:libvirtd(8)
           https://libvirt.org
 Main PID: 693 (libvirtd)
    Tasks: 17 (limit: 32768)
   Memory: 61.3M
   CGroup: /system.slice/libvirtd.service
           └─693 /usr/bin/libvirtd

Feb 26 14:27:12 skye.wa.kyaulabs.com systemd[1]: Started Virtualization daemon. |
Storage Pools
A storage pool in libvirt is merely a storage location designated to store virtual machine images or virtual disks. The most common storage types include netfs, disk, dir, fs, iscsi, logical, and gluster.
List all of the currently available storage pools.
# virsh pool-list --all |
 Name   State   Autostart
--------------------------- |
ISO Images
Begin by adding a pool for iso-images. To use something other than an existing mount-point, change the type; options include the ones listed above (more can be found in the man page). A couple of examples follow:
<pool type='dir'>
  <name>iso-images</name>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool> |
<pool type='fs'>
  <name>iso-images</name>
  <source>
    <device path="/dev/vgroup/lvol" />
  </source>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool> |
<pool type='netfs'>
  <name>iso-images</name>
  <source>
    <host name="nfs.example.com" />
    <dir path="/nfs-path-to/images" />
    <format type='nfs'/>
  </source>
  <target>
    <path>/pool/iso</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool> |
After creating the pool XML file, define the pool in libvirt.
# virsh pool-define iso-images.vol |
Before using the pool it must also be built; it is also a good idea to set it to auto-start.
# virsh pool-build iso-images |
# virsh pool-start iso-images |
# virsh pool-autostart iso-images |
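As a side note, a simple directory pool like this can also be defined without writing any XML, using virsh pool-define-as (a quick sketch mirroring the dir example above):
# virsh pool-define-as iso-images dir --target /pool/iso |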
The iso-images pool should now be properly set up; feel free to import some images into the directory.
# sudo cp archlinux-2019.02.01-x86_64.iso /pool/iso |
Permissions and ownership will need to be set correctly.
# sudo chown kvm:kvm /pool/iso/archlinux-2019.02.01-x86_64.iso |
# sudo chmod 660 /pool/iso/archlinux-2019.02.01-x86_64.iso |
After copying over images and correcting permissions refresh the pool.
# virsh pool-refresh iso-images |
The image should now show up in the list of volumes for that pool.
# virsh vol-list iso-images |
 Name                              Path
------------------------------------------------------------------------------
 archlinux-2019.02.01-x86_64.iso   /pool/iso/archlinux-2019.02.01-x86_64.iso |
Check on the status of all pools that have been added.
# virsh pool-list --all --details |
 Name         State     Autostart   Persistent   Capacity     Allocation   Available
---------------------------------------------------------------------------------------
 iso-images   running   yes         yes          108.75 GiB   4.77 GiB     103.97 GiB |
LVM
If LVM is going to be used for the VM storage pool, that can be setup now.
If the volume group has been created manually, the <source> section can be omitted from the XML and the build step skipped, as that is what creates the LVM volume group. |
Begin by creating a storage pool file.
<pool type='logical'>
  <name>vdisk</name>
  <source>
    <device path="/dev/sdX3" />
    <device path="/dev/sdX4" />
  </source>
  <target>
    <path>/dev/vdisk</path>
    <permissions>
      <mode>0770</mode>
      <owner>78</owner>
      <group>78</group>
    </permissions>
  </target>
</pool> |
After creating the pool XML file, define the pool in libvirt, build it and set it to auto-start.
# virsh pool-define vdisk.vol |
# virsh pool-build vdisk |
# virsh pool-start vdisk |
# virsh pool-autostart vdisk |
Grant ownership of the LVM volumes to the kvm user with a udev rule so that libvirt can properly mount them.
ENV{DM_VG_NAME}=="vdisk", ENV{DM_LV_NAME}=="*", OWNER="kvm" |
Continuing as is will allow libvirtd to automatically manage the LVM volume on its own.
LVM Thin Volumes
Before going down this road, there are a couple of things to consider.
WARNING: If thin provisioning is enabled, LVM automation via libvirtd will be broken. |
In a standard LVM logical volume, all of the blocks are allocated when the volume is created, but blocks in a thinly provisioned LV are allocated as they are written. Because of this, a thinly provisioned logical volume is given a virtual size and can be much larger than the physically available storage.
WARNING: Over-provisioning is NEVER recommended, whether it is CPU, RAM or HDD space. |
With the warnings out of the way, if thin provisioning is desired begin by creating a thin pool.
# sudo lvcreate -l +100%FREE -T vdisk/thin |
A volume group named vdisk was prepared in the previous steps via virsh pool-build; if this was skipped, either go back and redo it or prepare the volume group yourself. Keep in mind that going the thin-provisioning route breaks most of the LVM functionality in libvirt.
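Thin volumes can then be carved out of the pool by giving each a virtual size with -V (a hypothetical example; the volume name and size are placeholders):
# sudo lvcreate -V 20G --thin -n vm-disk vdisk/thin |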
QCOW2 images can be converted directly to the LVM volumes.
# sudo qemu-img convert -f qcow2 -O raw system-image.qcow2 /dev/data/lvmvol |
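The reverse direction also works if an image ever needs to be pulled back out of LVM, for example as a backup (a sketch; the paths are placeholders):
# sudo qemu-img convert -f raw -O qcow2 /dev/data/lvmvol system-image-backup.qcow2 |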
Virsh
Virsh is the command line interface for libvirt. It can be used to import the QEMU arguments into an XML format that libvirt will understand.
To make life easier it is suggested to make a shell alias for virsh.
# alias virsh="virsh -c qemu:///system" |
Save the QEMU arguments used before to a temporary file.
# echo "/usr/bin/qemu-system-x86_64 --enable-kvm -machine q35,accel=kvm -device intel-iommu \
    -m 512 -smp 1 -cpu Broadwell -drive file=/dev/data/dns,cache=none,if=virtio,format=raw \
    -net bridge,br=kvm0 -net nic,model=virtio,macaddr=00:00:00:00:00:00 -vga qxl \
    -spice port=5900,addr=127.0.0.1,disable-ticketing \
    -monitor unix:/tmp/monitor-dns.sock,server,nowait" > kvm.args |
The CPU is temporarily set to Broadwell because virsh domxml-from-native cannot parse -cpu host; it is switched to host-passthrough in the XML below. |
Convert this to XML format.
# virsh domxml-from-native qemu-argv kvm.args > dns.xml |
Then open up the XML file in an editor and change the name, cpu and graphics block.
…
<name>DNS (Arch64)</name>
…
<cpu mode='host-passthrough' />
…
<graphics type='spice' port='5900' autoport='no' listen='127.0.0.1'>
  <listen type='address' address='127.0.0.1' />
</graphics>
… |
The last two qemu:commandline arguments can also be removed as they were setting up the SPICE server which is done through the graphics block.
The XML should now be in a similar state as to when it was executed with the QEMU binary.
Import the XML into libvirt.
# virsh define dns.xml |
The VM can now be launched.
# virsh start DNS |
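To confirm the domain is actually running, list the defined domains and inspect the new one (standard virsh commands; the domain name follows the example above):
# virsh list --all
# virsh dominfo DNS |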
SSH and SPICE over SSH should both now work and the machine should be running. Use the following to start the machine on boot.
# virsh autostart DNS |
If you have issues auto-starting the machine, check the log files in /var/log/libvirt/qemu/. |
A reboot of the host machine at this point should yield the virtual machine DNS starting up automatically.
Virt-Manager
Virt-manager can be used to manage the virtual machines remotely.
Install virt-manager on the local machine (the one viewing this tutorial, not the KVM host machine); it can then be used to connect to libvirt remotely via SSH.
# pacaur -S virt-manager |
Connect remotely to QEMU/KVM with virt-manager over SSH and the virtual machine should be shown as running.
Packer
Packer is a tool for automating the creation of virtual machines; in this instance it will be used to automate the creation of Vagrant boxes. I have already taken the time to create a Packer template for Arch Linux based on my installation tutorials, but I encourage you to use this only as a basis and delve deeper to create your own templates. I could very easily have just downloaded someone else's templates, but then I would lack understanding.
GitHub: kyau/packer-kvm-templates |
Vagrant-Libvirt
The libvirt plugin installation for vagrant requires some cleanup first.
# sudo mv /opt/vagrant/embedded/lib/libcurl.so{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4{,.backup}
# sudo mv /opt/vagrant/embedded/lib/libcurl.so.4.4.0{,.backup}
# sudo mv /opt/vagrant/embedded/lib/pkgconfig/libcurl.pc{,.backup} |
Then build the plugin.
# vagrant plugin install vagrant-libvirt |
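If the build succeeds, the plugin should now show up in Vagrant's plugin list:
# vagrant plugin list |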
Templates
The Packer templates are in JSON format and contain all of the information needed to create the virtual machine image. Descriptions of all the template sections and values, including default values, can be found in the Packer docs. For Arch Linux, the template file archlinux-x86_64-base-vagrant.json will be used to generate an Arch Linux qcow2 virtual machine image.
# git clone https://github.com/kyau/packer-kvm-templates |
To explain the template a bit: inside of the builders section, the template specifies that it is a qcow2 image running on QEMU/KVM. A few settings are imported from user variables set in a section at the top of the file so that quick edits can be made; these include the ISO URL and checksum, the country setting, the disk space for the VM's primary hard drive, the amount of RAM and the number of vCores to dedicate to the VM, whether or not it is a headless VM, and the login and password for the primary SSH user. The template also specifies that the VM should use virtio for the disk and network interfaces. Lastly there are the built-in web server and the boot commands: http_directory specifies which directory will be the root of Packer's built-in web server (this allows files to be served to the VM during installation), and boot_command is an array of commands executed on boot to kick-start the installer. Finally, qemuargs should be rather apparent, as they are the arguments passed to QEMU.
# cd packer-kvm-templates |
Looking then at the provisioners section, it executes three separate scripts after the machine has booted. These scripts are also passed the required user variables, set at the top of the file, as shell variables. The install.sh script installs Arch Linux, hardening.sh applies hardening to the Arch Linux installation, and cleanup.sh handles general cleanup after the installation is complete. While the README.md has all of this information for the Packer templates, it is also detailed here.
For added security, generate new moduli for your VMs (or copy from /etc/ssh/moduli).
# ssh-keygen -G moduli.all -b 4096 # ssh-keygen -T moduli.safe -f moduli.all # mv moduli.safe moduli && rm moduli.all |
Enter the directory for the Arch Linux template and sym-link the moduli.
# cd archlinux-x86_64-base/default # ln -s ../../moduli . && cd .. |
Build the base virtual machine image.
# ./build archlinux-x86_64-base-vagrant.json |
This runs: PACKER_LOG=1 PACKER_LOG_PATH="packer.log" packer-io build archlinux-x86_64-base-vagrant.json, logging to the current directory. |
Once finished, there should be a qcow2 vagrant-libvirt image for Arch Linux in the box directory.
Add this image to Vagrant.
# vagrant box add box/archlinux-x86_64-base-vagrant-libvirt.box --name archlinux-x86_64-base |
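The box should now be registered with Vagrant; confirm it with:
# vagrant box list |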
Vagrant-Libvirt
Vagrant can be used to build and manage test machines. The vagrant-libvirt plugin adds a Libvirt provider to Vagrant, allowing Vagrant to control and provision machines via the Libvirt toolkit.
To bring up the first machine, initialize Vagrant in a new directory. First create a directory for the machine.
# cd # mkdir testmachine # cd testmachine |
Initialize the machine with Vagrant.
# vagrant init archlinux-x86_64-base |
Then bring up the machine.
# vagrant up |
Then SSH into the machine directly.
# vagrant ssh |
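When finished with the test machine, it can be shut down or removed entirely using the standard Vagrant commands:
# vagrant halt
# vagrant destroy |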
Additional Notes
These notes are here from my own install.
# cd ~/packer-kvm-templates/archlinux-x86_64-base # ./build archlinux-x86_64-base.json |
# sudo lvcreate -V 20G --thin -n bind data/qemu # sudo lvcreate -V 20G --thin -n sql data/qemu # sudo lvcreate -V 20G --thin -n nginx data/qemu |
# sudo qemu-img convert -f qcow2 -O raw qcow2/archlinux-x86_64-base.qcow2 /dev/data/bind # sudo virt-sparsify --in-place /dev/data/bind |
# vim virshxml # ./virshxml # virsh define ~/newxml-bind.xml |
Then repeat this for sql and nginx.
Don't forget about the notes virshxml gives for replacing the networkd service |
# virsh start bind # virsh start sql # virsh start nginx |
# virsh autostart bind # virsh autostart sql # virsh autostart nginx |
DNS
Login to the dns virtual machine and install BIND.
# pacaur -S bind |
Setup the zones for all domains and reverse IPs.
DNSSEC
Adding DNSSEC to BIND is always a good idea[1]. First add the following lines to the options block of the BIND config.
dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; |
Install haveged for key generation inside of VMs.
# pacaur -S haveged # haveged -w 1024 |
Gain root privileges.
# sudo -i # cd /var/named |
Create zone signing keys for all domains.
# dnssec-keygen -a ECDSAP384SHA384 -n ZONE kyau.net # dnssec-keygen -a ECDSAP384SHA384 -n ZONE kyau.org |
Create key signing keys for all domains.
# dnssec-keygen -f KSK -a ECDSAP384SHA384 -n ZONE kyau.net # dnssec-keygen -f KSK -a ECDSAP384SHA384 -n ZONE kyau.org |
Run the following for each domain to include the keys in the zone files.
# for key in `ls Kkyau.net*.key`; do echo "\$INCLUDE $key" >> kyau.net.zone; done # for key in `ls Kkyau.org*.key`; do echo "\$INCLUDE $key" >> kyau.org.zone; done |
Run a check on each zone.
# named-checkzone kyau.net /var/named/kyau.net.zone # named-checkzone kyau.org /var/named/kyau.org.zone |
Sign each zone with the dnssec-signzone.
# dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.net -t kyau.net.zone # dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.org -t kyau.org.zone |
To update a zone at any point, simply edit the zone, check it, and then re-sign it as root (sudo -i).
# cd /var/named # dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o kyau.net -t kyau.net.zone # systemctl restart named |
WARNING: DO NOT increment the serial in the zone file; this will be done automatically! |
Modify the bind config to read from the signed zone files.
zone "kyau.net" IN {
    type master;
    file "kyau.net.zone.signed";
    allow-update { none; };
    notify no;
};

zone "kyau.org" IN {
    type master;
    file "kyau.org.zone.signed";
    allow-update { none; };
    notify no;
}; |
Make sure all is in order.
# named-checkconf /etc/named.conf |
Next visit the domain registrar for each domain and publish the DS records so the chain of trust is complete.
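The DS records handed to the registrar can be generated from each key signing key with dnssec-dsfromkey (a sketch; the key tag XXXXX is a placeholder for the KSK file generated earlier):
# dnssec-dsfromkey Kkyau.net.+014+XXXXX.key |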
SQL
Create a directory on the host machine for the nginx and sql server.
# sudo mkdir -p /www/sql /www/nginx |
Make sure it has the right permissions.
# sudo chown -R kvm:kvm /www |
Edit the sql virtual machine to mount the folder inside of the VM.
# sudo virsh edit sql |
…
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/www/sql'/>
  <target dir='neutron-sql'/>
</filesystem>
… |
Shut down the virtual machine, then start it back up.
Login to the sql virtual machine and create a directory for SQL.
# mkdir /sql |
Mount the shared directory from the host system.
# mount neutron-sql /sql -t 9p -o trans=virtio |
Also set this to mount on boot.
neutron-sql /sql 9p trans=virtio 0 0 |
After the directory is mounted make sure it has the right permissions.
# sudo chown mysql:mysql /sql # sudo chmod 770 /sql |
Install mariadb.
# pacaur -S mariadb |
Initialize the SQL database directory.
# sudo mysql_install_db --user=mysql --basedir=/sql/base --datadir=/sql/db |
Modify the MySQL global config to support a different basedir, bind to IPv6 in addition to IPv4 and disable filesystem access.
…
[mysqld]
bind-address = ::
port = 3306
socket = /run/mysqld/mysqld.sock
datadir = /sql/db
local-infile = 0
…
tmpdir = /tmp/
… |
Enable and start the systemd service.
# sudo systemctl enable mariadb # sudo systemctl start mariadb |
Run the MySQL post-install security check; change the root password and remove all demo/test related material.
# sudo mysql_secure_installation |
User Setup
Open mysql and change the root username and allow access from the nginx virtual machine.
# mysql -u root -p |
MariaDB> RENAME USER 'root'@'localhost' to 'kyau'@'localhost';
MariaDB> RENAME USER 'root'@'127.0.0.1' to 'kyau'@'127.0.0.1';
MariaDB> RENAME USER 'root'@'::1' to 'kyau'@'::1';
MariaDB> GRANT ALL PRIVILEGES ON *.* TO 'kyau'@'142.44.172.255' IDENTIFIED BY 'my-password' WITH GRANT OPTION;
MariaDB> FLUSH PRIVILEGES; |
Confirm the changes by listing all users.
MariaDB> SELECT User,Host,Password FROM mysql.user; |
UTF8MB4
Optionally, enable UTF8MB4 support, which is recommended over UTF8 as it will provide full unicode support.
[client]
default-character-set = utf8mb4
…
[mysqld]
collation_server = utf8mb4_unicode_ci
character_set_client = utf8mb4
character_set_server = utf8mb4
skip-character-set-client-handshake
…
[mysql]
default-character-set = utf8mb4
… |
Importing Databases
Head over to the current SQL server and export the needed database.
# mysqldump -u kyau -p --databases <db1> <db2>… > backup.sql |
Import them to the new database server.
# mysql -u kyau -p
MariaDB> source backup.sql |
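Alternatively, the dump can be piped straight in from the shell:
# mysql -u kyau -p < backup.sql |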
Database Maintenance
MariaDB includes mysqlcheck to check, analyze, repair and optimize database tables.
To check all tables in all databases:
# mysqlcheck --all-databases -u root -p -m |
To analyze all tables in all databases:
# mysqlcheck --all-databases -u root -p -a |
To repair all tables in all databases:
# mysqlcheck --all-databases -u root -p -r |
To optimize all tables in all databases:
# mysqlcheck --all-databases -u root -p -o |
To check if any tables require upgrades:
# mysqlcheck --all-databases -u root -p -g |
If any tables require upgrades, it is recommended to run a full upgrade (this should also be done in-between major MariaDB version releases).
# mysql_upgrade -u root -p |
Firewall
Add rules to the firewall to allow access from the nginx virtual machine to MySQL.
ip saddr 142.44.172.255 tcp dport 3306 ct state new,established counter accept |
Nginx
Login to the nginx virtual machine and shut it down. Then edit the virtual machine on the host.
# virsh edit nginx |
Add a mountpoint for the nginx directory on the host.
<filesystem type='mount' accessmode='mapped'>
  <source dir='/www/nginx'/>
  <target dir='neutron-nginx'/>
</filesystem> |
Restart the nginx machine and login via ssh.
# virsh start nginx |
Create a directory for nginx files.
# sudo mkdir /nginx |
Set the directory to mount on boot.
neutron-nginx /nginx 9p trans=virtio 0 0 |
Reboot the machine to make sure the mounting works.
Install nginx-mainline.
# pacaur -S nginx-mainline |
Enable http and https in nftables.
tcp dport {http,https} accept |
Restart nftables to apply the new rules.
# sudo systemctl restart nftables |
Start and enable the nginx service.
# sudo systemctl enable nginx # sudo systemctl start nginx |
You should be able to visit the IP address of the machine and see the nginx default page.
For configuration, first create the blank configs and directories needed.
# sudo touch /nginx/nginx.conf /nginx/http.conf # sudo mkdir /nginx/logs /nginx/conf.d /nginx/https |
Set permissions accordingly.
# sudo chown -R http:http /nginx/* |
Edit the main nginx config, replace all of it with a single include.
include /nginx/nginx.conf; |
Create the actual main configuration; use sudo to edit the configs.
# sudoedit /nginx/nginx.conf |
user http;
worker_processes auto;
worker_cpu_affinity auto;
pcre_jit on;

events {
    worker_connections 4096;
}

error_log /nginx/logs/nginx-error.log;

include /nginx/http.conf; |
Create the http block configuration.
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /nginx/logs/nginx-access.log main;
    sendfile on;
    tcp_nopush on;
    aio threads;
    charset utf-8;
    keepalive_timeout 65;
    gzip on;
    gzip_disable "msie6";
    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
    include /nginx/conf.d/*.conf;
} |
Then create configs for each website in /nginx/conf.d/ with the naming scheme *.conf. There is a great post on Stack Overflow[2] about achieving an A+ rating with 100 points in every category on SSL Labs.
PHP
Install the packages required for PHP.
# pacaur -S php php-fpm php-gd php-intl php-mcrypt php-sqlite imagemagick |
Open up the main php-fpm config.
error_log = /nginx/logs/php-fpm.log |
Then edit the config for your server instance.
listen.allowed_clients = 127.0.0.1
...
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /nginx/logs/php.log
php_admin_value[memory_limit] = 256M
php_admin_value[post_max_size] = 2048M
php_admin_value[upload_max_filesize] = 2048M
php_admin_value[date.timezone] = America/Toronto |
Open up the PHP config /etc/php/php.ini and enable the modules bz2, exif, gd, gettext, iconv, intl, mcrypt, mysqli, pdo_mysql, sockets and sqlite3 by uncommenting their extension lines. Then start and enable the php-fpm service.
# sudo systemctl enable php-fpm # sudo systemctl start php-fpm |
Import all of the web files and update the configs in conf.d for all websites.
Let's Encrypt
Using SSL encryption is a must. First install the required packages.
# pacaur -S certbot |
Bring down nginx temporarily.
# sudo systemctl stop nginx |
Use certbot to get a certificate for all domains needed.
# sudo certbot certonly --agree-tos --standalone --email your@address.com --rsa-key-size 4096 -d domain.com,www.domain.com,subdomain.domain.com |
Generate a dhparam.
# sudo openssl dhparam -out /etc/letsencrypt/live/kyau.net/dhparam4096.pem 4096 |
Start back up nginx.
# sudo systemctl start nginx |
A timer can then be set up to run certbot twice daily.
[Unit]
Description=Twice daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=0/12:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target |
Also create the service for certbot.
[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --pre-hook "/usr/bin/systemctl stop nginx.service" --post-hook "/usr/bin/systemctl start nginx.service" --quiet --agree-tos |
Enable and start the timer.
# sudo systemctl enable certbot.timer # sudo systemctl start certbot.timer |
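The timer should now appear alongside the rest of the active timers:
# systemctl list-timers |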
References
- ^ DigitalOcean. How To Setup DNSSEC on an Authoritative BIND DNS Server
- ^ Stack Overflow. How do you score A+ with 100 on all categories on SSL Labs test with Let's Encrypt and Nginx?