dmacvicar/terraform-provider-libvirt v0.9.0

⚠️ ⚠️ ⚠️ ⚠️ This version of the provider breaks compatibility ⚠️ ⚠️ ⚠️ ⚠️

Background

When this provider was first developed, the idea was to mimic a cloud experience on top of libvirt. Because of this, the schema was kept as flat as possible, features were abstracted, and conveniences like disks from remote sources were added.

The initial users of the provider were usually makers of infrastructure software who needed complex network setups. A lot of code was contributed, which added complexity outside of the original design.

So for a long time I have wanted to restart the provider under a new set of design principles:

  • HCL maps almost 1:1 to libvirt XML, and therefore almost any libvirt feature can be supported
  • Most of the validation work is left to libvirt, which already does it
  • No abstractions or extra features; where they do exist, they should be designed to be largely independent of each other
  • More consistency: most libvirt APIs separate into lifecycle operations (create, destroy), which map well to Terraform resources, and query APIs, which map well to data sources. This was not the case with how we previously implemented, for example, querying IP addresses
  • No unnecessary defensive code: checking that a volume exists when it is referenced is a problem Terraform solves if the ID is interpolated, and one libvirt solves with its own checks if the volume is referenced by a hardcoded string

I knew 1.0 would never come in the current form.

The new provider

The new provider is based on the new Terraform Plugin Framework. This gives us room for better diagnostics and better plans.

It makes definitions more verbose, but it also means we can implement any libvirt feature. Defaults work as long as they are defaults in libvirt.
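For instance, a domain can be defined with only a handful of attributes; anything left out is simply not rendered into the domain XML, so libvirt applies its own defaults. A minimal sketch, derived from the full example further below (the memory unit is assumed to be KiB, as in libvirt's XML default):

resource "libvirt_domain" "minimal" {
  name   = "minimal-vm"
  memory = 524288 # assumed KiB (512 MiB); libvirt defaults everything else
  vcpu   = 1

  os = {
    type = "hvm"
  }
}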

Migration plan

You can find the legacy provider in the v0.8 branch. New 0.8.x releases can still be made to add bugfixes, so people who rely on it have a path forward. I will likely not maintain 0.8.x much myself, but I expect many people will help here, as they do today with their PRs.
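If you want to stay on the legacy provider for now, pinning the version in required_providers should be enough (a minimal sketch; 0.8.x is the legacy series):

terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "~> 0.8" # stay on the legacy 0.8.x line
    }
  }
}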

There is no automated way of migrating HCL from the previous provider, but the new schema is documented, which was not the case for the previous one, so it should be much easier to drive LLMs to perform the conversion.
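To give an idea of the kind of mechanical rewrite involved, here is a sketch of migrating a remote-image volume: the legacy flat source attribute becomes the nested create.content block of the new schema (the legacy shape below is written from memory of the 0.8.x schema, so treat it as an assumption):

# Legacy 0.8.x schema (assumed shape)
resource "libvirt_volume" "base" {
  name   = "alpine-base.qcow2"
  pool   = "default"
  source = "https://example.org/images/alpine.qcow2"
  format = "qcow2"
}

# Equivalent in the new 0.9.x schema
resource "libvirt_volume" "base" {
  name   = "alpine-base.qcow2"
  pool   = "default"
  format = "qcow2"

  create = {
    content = {
      url = "https://example.org/images/alpine.qcow2"
    }
  }
}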

You should check the documentation and README for the main differences and equivalences, but here is an example of the new schema to give you an idea:

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Base Alpine Linux cloud image stored in the default pool.
resource "libvirt_volume" "alpine_base" {
  name   = "alpine-3.22-base.qcow2"
  pool   = "default"
  format = "qcow2"

  create = {
    content = {
      url = "https://dl-cdn.alpinelinux.org/alpine/v3.22/releases/cloud/generic_alpine-3.22.2-x86_64-bios-cloudinit-r0.qcow2"
    }
  }
}

# Writable copy-on-write layer for the VM.
resource "libvirt_volume" "alpine_disk" {
  name     = "alpine-vm.qcow2"
  pool     = "default"
  format   = "qcow2"
  capacity = 2147483648 # bytes (2 GiB)

  backing_store = {
    path   = libvirt_volume.alpine_base.path
    format = "qcow2"
  }
}

# Cloud-init seed ISO.
resource "libvirt_cloudinit_disk" "alpine_seed" {
  name = "alpine-cloudinit"

  user_data = <<-EOF
    #cloud-config
    chpasswd:
      list: |
        root:password
      expire: false

    ssh_pwauth: true

    packages:
      - openssh-server
    timezone: UTC
  EOF

  meta_data = <<-EOF
    instance-id: alpine-001
    local-hostname: alpine-vm
  EOF

  network_config = <<-EOF
    version: 2
    ethernets:
      eth0:
        dhcp4: true
  EOF
}

# Upload the cloud-init ISO into the pool.
resource "libvirt_volume" "alpine_seed_volume" {
  name = "alpine-cloudinit.iso"
  pool = "default"

  create = {
    content = {
      url = libvirt_cloudinit_disk.alpine_seed.path
    }
  }
}

# Virtual machine definition.
resource "libvirt_domain" "alpine" {
  name   = "alpine-vm"
  memory = 1048576 # KiB (1 GiB)
  vcpu   = 1

  os = {
    type    = "hvm"
    arch    = "x86_64"
    machine = "q35"
  }

  features = {
    acpi = true
  }

  devices = {
    disks = [
      {
        source = {
          pool   = libvirt_volume.alpine_disk.pool
          volume = libvirt_volume.alpine_disk.name
        }
        target = {
          dev = "vda"
          bus = "virtio"
        }
      },
      {
        device = "cdrom"
        source = {
          pool   = libvirt_volume.alpine_seed_volume.pool
          volume = libvirt_volume.alpine_seed_volume.name
        }
        target = {
          dev = "sdb"
          bus = "sata"
        }
      }
    ]

    interfaces = [
      {
        type  = "network"
        model = "virtio"  # e1000 is more compatible than virtio for Alpine
        source = {
          network = "default"
        }
        # TODO: wait_for_ip is not implemented yet (Phase 2).
        # Once available, it will block during creation until the interface gets an IP:
        # wait_for_ip = {
        #   timeout = 300    # seconds, default 300
        #   source  = "any"  # "lease" (DHCP), "agent" (qemu-guest-agent), or "any" (try both)
        # }
      }
    ]

    graphics = {
      vnc = {
        autoport = "yes"
        listen   = "127.0.0.1"
      }
    }
  }

  running = true
}

# Query the domain's interface addresses
# This data source can be used at any time to retrieve current IP addresses
# without blocking operations like Delete
data "libvirt_domain_interface_addresses" "alpine" {
  domain = libvirt_domain.alpine.name
  source = "lease" # optional: "lease" (DHCP), "agent" (qemu-guest-agent), or "any"
}

# Output all interface information
output "vm_interfaces" {
  description = "All network interfaces with their IP addresses"
  value       = data.libvirt_domain_interface_addresses.alpine.interfaces
}

# Output the first IP address found
output "vm_ip" {
  description = "First IP address of the VM"
  value = (
    length(data.libvirt_domain_interface_addresses.alpine.interfaces) > 0 &&
    length(data.libvirt_domain_interface_addresses.alpine.interfaces[0].addrs) > 0
    ? data.libvirt_domain_interface_addresses.alpine.interfaces[0].addrs[0].addr
    : "No IP address found"
  )
}

# Output all IP addresses across all interfaces
output "vm_all_ips" {
  description = "All IP addresses across all interfaces"
  value = flatten([
    for iface in data.libvirt_domain_interface_addresses.alpine.interfaces : [
      for addr in iface.addrs : addr.addr
    ]
  ])
}
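Once applied, the outputs can be read back with the standard Terraform CLI, for example:

terraform init
terraform apply
terraform output vm_ip          # first IP address found
terraform output -json vm_all_ips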

Feedback is appreciated. It will be a long journey for people to port their configurations and iron out all the issues, but it is clear this is the path forward.

Docs: https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
