Provisioning Multiple Linux Distributions using Terraform Provider for Libvirt
Cross-platform testing is essential when you intend to publish your software on multiple Linux distributions (distros). A common practice for setting up test environments across multiple distros is to use virtual machines (VMs): run a VM for each Linux distro you plan to support, then test your software on every VM.
This post will walk you through provisioning multiple Linux distros on KVM virtual machines using Terraform Provider for libvirt.
There are many different Linux distros available to cater to various needs. In this post, I will demonstrate the VM installation of Ubuntu, CentOS, and openSUSE on a host running Ubuntu Server 20.04.
Project Directory Structure
The following diagram depicts the project directories and the files contained in each directory. It does not show the files and directories automatically generated by Terraform.
$HOME/terraform
├── main.tf
├── sources
│   ├── centos.qcow2
│   ├── opensuse.qcow2
│   └── ubuntu.qcow2
├── ssh
│   └── id_rsa.pub
├── templates
│   ├── network_config.tpl
│   └── user_data.tpl
├── variables.tf
└── volumes
A well-defined and organized project directory structure will help you easily manage source code, configuration files, and executables. You may change the directory structure or the folder names according to your requirements.
Terraform Provider for Libvirt
I discussed how to build and install the Terraform Provider for libvirt on an Ubuntu server in a previous post. As a refresher, you can read the full post here. It is a fundamental step for creating KVM virtual machines with Terraform.
You need to make sure that the Terraform libvirt provider is appropriately installed and configured on your Ubuntu host.
Cloud Images
Create a directory for storing downloaded cloud images:
$ mkdir -p $HOME/terraform/sources
Change the current directory to the project sources directory:
$ cd ~/terraform/sources
Download the Ubuntu Server 20.04 cloud image with a new filename ubuntu.qcow2 into the sources directory, and then resize its virtual size to 12GB:
$ wget -O ubuntu.qcow2 https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
$ qemu-img resize ubuntu.qcow2 12G
Download the CentOS 8.2 cloud image with a new filename centos.qcow2 into the sources directory, and then resize its virtual size to 12GB:
$ wget -O centos.qcow2 https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
$ qemu-img resize ~/terraform/sources/centos.qcow2 12G
Download the openSUSE Tumbleweed OpenStack cloud image with a new filename opensuse.qcow2 into the sources directory, and then resize its virtual size to 12GB:
$ wget -O opensuse.qcow2 http://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2
$ qemu-img resize ~/terraform/sources/opensuse.qcow2 12G
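The three download-and-resize steps above follow the same pattern, so you can script them. Below is a minimal sketch (the fetch_cmds helper is my own naming, not part of the original steps) that prints the commands for review; pipe its output to sh to actually execute them:

```shell
# Print the download-and-resize commands for each image. The URLs are the
# same ones used in the individual steps above. Review the output, then
# pipe it to `sh` to run: sh fetch_images.sh | sh
fetch_cmds() {
  name=$1
  url=$2
  echo "wget -O ${name}.qcow2 ${url}"
  echo "qemu-img resize ${name}.qcow2 12G"
}

fetch_cmds ubuntu   https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
fetch_cmds centos   https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2
fetch_cmds opensuse http://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-JeOS.x86_64-OpenStack-Cloud.qcow2
```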
SSH Public Key
You can enable SSH public key authentication to connect to a virtual machine from your host machine.
Generate a new key pair using the RSA algorithm if no key pair exists:
$ ssh-keygen -t rsa -b 4096 -f $HOME/.ssh/id_rsa -N ""
Create a folder with the name ssh under the project root directory, and create a symbolic link to the SSH public key id_rsa.pub in the folder:
$ mkdir $HOME/terraform/ssh
$ ln -s $HOME/.ssh/id_rsa.pub $HOME/terraform/ssh/
Libvirt Storage Pool
You need to create a directory-based storage pool for libvirt to store the virtual disks of VMs and the cloud-init ISO files for user data and network-config data.
Create a folder with the name volumes under the project root directory to host the new pool:
$ mkdir $HOME/terraform/volumes
Define a new storage pool with the name distro-pool:
$ virsh pool-define-as --name distro-pool --type dir --target $HOME/terraform/volumes
Set the storage pool to start automatically when the libvirt daemon starts:
$ virsh pool-autostart distro-pool
Start the storage pool:
$ virsh pool-start distro-pool
Check the storage pool state:
$ virsh pool-list
 Name          State    Autostart
-------------------------------------------
 distro-pool   active   yes
Cloud-init
You will use cloud-init to configure the system during the initialization of the VM instances.
Create a folder with the name templates under the project root directory to store the user-data and network-config template files:
$ mkdir $HOME/terraform/templates
Create the file user_data.tpl in the templates directory with the following contents:
#cloud-config
# vim: syntax=yaml
hostname: ${host_name}
manage_etc_hosts: true
users:
  - name: vmadmin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ${auth_key}
ssh_pwauth: true
disable_root: false
chpasswd:
  list: |
    vmadmin:linux
  expire: false
growpart:
  mode: auto
  devices: ['/']
Create the file network_config.tpl in the templates directory with the following contents:
ethernets:
  ${interface}:
    addresses:
      - ${ip_addr}/24
    dhcp4: false
    gateway4: 192.168.122.1
    match:
      macaddress: ${mac_addr}
    nameservers:
      addresses:
        - 1.1.1.1
        - 8.8.8.8
    set-name: ${interface}
version: 2
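For reference, once Terraform substitutes the placeholders for the first VM (interface ens01, IP 192.168.122.11, and its MAC address, all taken from the defaults in variables.tf), the rendered network config would look roughly like this:

```yaml
ethernets:
  ens01:
    addresses:
      - 192.168.122.11/24
    dhcp4: false
    gateway4: 192.168.122.1
    match:
      macaddress: "52:54:00:50:99:c5"
    nameservers:
      addresses:
        - 1.1.1.1
        - 8.8.8.8
    set-name: ens01
version: 2
```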
Terraform Configuration Files
Terraform uses its own configuration language, designed to allow concise descriptions of infrastructure. The Terraform language is declarative, describing an intended goal rather than the steps to reach that goal.
You need to create the following two Terraform configuration files under the project root directory.
variables.tf
variable "hosts" {
  type    = number
  default = 3
}

variable "interface" {
  type    = string
  default = "ens01"
}

variable "memory" {
  type    = string
  default = "2048"
}

variable "vcpu" {
  type    = number
  default = 2
}

variable "distros" {
  type    = list(string)
  default = ["ubuntu", "centos", "opensuse"]
}

variable "ips" {
  type    = list(string)
  default = ["192.168.122.11", "192.168.122.22", "192.168.122.33"]
}

variable "macs" {
  type    = list(string)
  default = ["52:54:00:50:99:c5", "52:54:00:0e:87:be", "52:54:00:9d:90:38"]
}
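If you want different sizing or addresses without editing variables.tf, Terraform automatically loads a terraform.tfvars file from the project root. For example (the values shown here are illustrative, not from the original configuration):

```hcl
# terraform.tfvars -- optional overrides of the defaults in variables.tf
memory = "4096"
vcpu   = 4
ips    = ["192.168.122.101", "192.168.122.102", "192.168.122.103"]
```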
main.tf
terraform {
  required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.3"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_volume" "distro-qcow2" {
  count  = var.hosts
  name   = "${var.distros[count.index]}.qcow2"
  pool   = "distro-pool"
  source = "${path.module}/sources/${var.distros[count.index]}.qcow2"
  format = "qcow2"
}

resource "libvirt_cloudinit_disk" "commoninit" {
  count = var.hosts
  name  = "commoninit-${var.distros[count.index]}.iso"
  pool  = "distro-pool"

  user_data = templatefile("${path.module}/templates/user_data.tpl", {
    host_name = var.distros[count.index]
    auth_key  = file("${path.module}/ssh/id_rsa.pub")
  })

  network_config = templatefile("${path.module}/templates/network_config.tpl", {
    interface = var.interface
    ip_addr   = var.ips[count.index]
    mac_addr  = var.macs[count.index]
  })
}

resource "libvirt_domain" "domain-distro" {
  count  = var.hosts
  name   = var.distros[count.index]
  memory = var.memory
  vcpu   = var.vcpu

  cloudinit = element(libvirt_cloudinit_disk.commoninit.*.id, count.index)

  network_interface {
    network_name = "default"
    addresses    = [var.ips[count.index]]
    mac          = var.macs[count.index]
  }

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_port = "1"
    target_type = "virtio"
  }

  disk {
    volume_id = element(libvirt_volume.distro-qcow2.*.id, count.index)
  }
}
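Optionally, you can append an output block to main.tf so that terraform apply prints the distro-to-address mapping when it finishes. This block is my own addition, not part of the original configuration:

```hcl
# Optional: print a map of VM name -> static IP after apply
output "vm_addresses" {
  value = zipmap(var.distros, var.ips)
}
```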
Terraform Commands
Terraform is controlled via an easy-to-use command-line interface (CLI). Terraform ships as a single command-line application, terraform, which takes a subcommand such as "apply" or "plan".
Initialize the project directory containing Terraform configuration files:
$ terraform init
Create an execution plan:
$ terraform plan
Apply the changes required to reach the desired state described in the configuration:
$ terraform apply -auto-approve
Now all three virtual machines are up and running. You can access them over SSH:
$ ssh vmadmin@192.168.122.11
$ ssh vmadmin@192.168.122.22
$ ssh vmadmin@192.168.122.33
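You can also smoke-test all three VMs in one loop. Here is a small sketch (the build_ssh_cmd helper is hypothetical, added for illustration) that prints one SSH command per VM; pipe the output to sh to run them against the live machines:

```shell
# Print an SSH smoke-test command for each VM's static IP. BatchMode makes
# ssh fail fast instead of prompting if key authentication is not set up;
# the remote command reports which distro actually booted.
build_ssh_cmd() {
  echo "ssh -o BatchMode=yes -o ConnectTimeout=5 vmadmin@$1 'grep ^PRETTY_NAME= /etc/os-release'"
}

for ip in 192.168.122.11 192.168.122.22 192.168.122.33; do
  build_ssh_cmd "$ip"
done
```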
Thank you for reading!