
Wednesday, March 6, 2024

How to access libvirt domains in KubeVirt

KubeVirt makes it possible to run virtual machines on Kubernetes alongside container workloads. Virtual machines are configured using VirtualMachineInstance YAML. But under the hood of KubeVirt lies the same libvirt tooling that is commonly used to run KVM virtual machines on Linux. Accessing libvirt can be convenient for development and troubleshooting.

Note that bypassing KubeVirt must be done carefully. Doing this in production may interfere with running VMs. If a feature is missing from KubeVirt, then please request it.

[Diagram: how the user's VirtualMachineInstance is turned into a libvirt domain]

Accessing virsh

Libvirt's virsh command-line tool is available inside the virt-launcher Pod that runs a virtual machine. First determine vm1's virt-launcher Pod name by filtering on its label (thanks to Alice Frosi for this trick!):

$ kubectl get pod -l vm.kubevirt.io/name=vm1
NAME                      READY   STATUS    RESTARTS   AGE
virt-launcher-vm1-5gxvg   2/2     Running   0          8m13s

Find the name of the libvirt domain (this is guessable but it doesn't hurt to check):

$ kubectl exec virt-launcher-vm1-5gxvg -- virsh list
 Id   Name          State
-----------------------------
 1    default_vm1   running

Arbitrary virsh commands can be invoked. Here is an example of dumping the libvirt domain XML:

$ kubectl exec virt-launcher-vm1-5gxvg -- virsh dumpxml default_vm1
<domain type='kvm' id='1'>
  <name>default_vm1</name>
...

Viewing libvirt logs and the full QEMU command-line

The libvirt logs are captured by Kubernetes, so you can view them with kubectl logs <virt-launcher-pod-name>. If you don't know the virt-launcher pod name, check with kubectl get pod and look for your virtual machine's name.
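
For example, using the virt-launcher Pod from earlier (the main container in a virt-launcher Pod is typically named compute; adjust if your Pod's container names differ):

$ kubectl logs virt-launcher-vm1-5gxvg -c compute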

The full QEMU command-line is part of the libvirt logs, but unescaping the JSON string is inconvenient. Here is another way to get the full QEMU command-line:

$ kubectl exec <virt-launcher-pod-name> -- ps aux | grep qemu

Customizing KubeVirt's libvirt domain XML

KubeVirt has a feature for customizing libvirt domain XML called hook sidecars. After the libvirt XML is generated, it is sent to a user-defined container that processes the XML and returns it back. The libvirt domain is defined using this processed XML. To learn more about how it works, check out the documentation.
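
In other words, the hook is just an executable inside the sidecar container that receives the generated domain XML and prints the (possibly modified) XML on stdout. Here is a minimal pass-through sketch, assuming the sidecar-shim calling convention used in the full example later in this post (the XML is passed as the fourth argument):

#!/bin/sh
# onDefineDomain hook: emit the domain XML (fourth argument) unchanged
printf '%s\n' "$4"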

Hook sidecars are available when the Sidecar feature gate is enabled in the kubevirt/kubevirt custom resource. Normally only the cluster administrator can modify the kubevirt CR, so check that you have permission to do so before trying this feature:

$ kubectl auth can-i update kubevirt/kubevirt -n kubevirt
yes

Although you can provide a complete container image for the hook sidecar, there is a shortcut if you just want to run a script. A generic hook sidecar image is available that launches a script which can be provided as a ConfigMap. Here is example YAML including a ConfigMap that I've used to test the libvirt IOThread Virtqueue Mapping feature:

---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration: 
      featureGates:
        - Sidecar
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "fedora"
spec:
  storage:
    accessModes:
        - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
  source:
    http:
      url: "https://download.fedoraproject.org/pub/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.raw.xz"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: sidecar-script
data:
  my_script.sh: |
    #!/usr/bin/env python3
    import xml.etree.ElementTree as ET
    import os.path
    import sys
    
    NUM_IOTHREADS = 4
    VOLUME_NAME = 'data' # VirtualMachine volume name
    
    def main(xml):
        domain = ET.fromstring(xml)

        # Set the total number of IOThreads in the domain
        domain.find('iothreads').text = str(NUM_IOTHREADS)

        # Replace the disk's single iothread attribute with a mapping to
        # IOThreads 1..NUM_IOTHREADS
        disk = domain.find(f"./devices/disk/alias[@name='ua-{VOLUME_NAME}']..")
        driver = disk.find('driver')
        del driver.attrib['iothread']
        iothreads = ET.SubElement(driver, 'iothreads')
        for i in range(NUM_IOTHREADS):
            iothread = ET.SubElement(iothreads, 'iothread')
            iothread.set('id', str(i + 1))

        # Print the modified domain XML on stdout for the hook to return
        ET.dump(domain)
    
    if __name__ == "__main__":
        # Workaround for https://github.com/kubevirt/kubevirt/issues/11276
        if os.path.exists('/tmp/ran-once'):
            main(sys.argv[4])
        else:
            open('/tmp/ran-once', 'wb')
            print(sys.argv[4])
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  creationTimestamp: 2018-07-04T15:03:08Z
  generation: 1
  labels:
    kubevirt.io/os: linux
  name: vm1
  annotations:
    hooks.kubevirt.io/hookSidecars: '[{"args": ["--version", "v1alpha3"],
      "image": "kubevirt/sidecar-shim:20240108_99b6c4bdb",
      "configMap": {"name": "sidecar-script",
                    "key": "my_script.sh",
                    "hookPath": "/usr/bin/onDefineDomain"}}]'
spec:
  domain:
    ioThreadsPolicy: auto
    cpu:
      cores: 8
    devices:
      blockMultiQueue: true
      disks:
      - disk:
          bus: virtio
        name: disk0
      - disk:
          bus: virtio
        name: data
    machine:
      type: q35
    resources:
      requests:
        memory: 1024M
  volumes:
  - name: disk0
    persistentVolumeClaim:
      claimName: fedora
  - name: data
    emptyDisk:
      capacity: 8Gi

If you need to go down one level further and customize the QEMU command-line, see my post on passing QEMU command-line options in libvirt domain XML.

More KubeVirt debugging tricks

The official KubeVirt documentation has a Virtualization Debugging section with more tricks for customizing libvirt logging, launching QEMU with strace or gdb, etc. Thanks to Alice Frosi for sharing the link!

Conclusion

It is possible to get libvirt access in KubeVirt for development and testing. This can make troubleshooting easier and it gives you the full range of libvirt domain XML if you want to experiment with features that are not yet exposed by KubeVirt.

Saturday, April 9, 2011

How to pass QEMU command-line options through libvirt

An entire virtual machine configuration can be passed on QEMU's extensive
command-line, including everything from PCI slots to CPU features to serial
port settings. While defining a virtual machine from a monster
command-line may seem insane, there are times when QEMU's rich command-line
options come in handy.
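
To give a rough idea of what that looks like, a small KVM guest might be launched with something like this (the disk path and sizes are made-up values):

qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
    -drive file=/path/to/disk.img,if=virtio \
    -serial stdio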

And at those times one wishes to side-step libvirt's domain XML and specify
QEMU command-line options directly. Luckily libvirt makes this possible and I
learnt about it from Daniel Berrange and Anthony Liguori on IRC. This libvirt
feature will probably come in handy to others and so I want to share it.

The <qemu:commandline> domain XML tag

QEMU-specific tags in libvirt domain XML live in a special namespace that must
be declared before the tags can be used. To enable it, declare the namespace on
the <domain> element like this:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Now you can add command-line arguments to the QEMU invocation. For example, to load an option ROM with -option-rom:
<qemu:commandline>
   <qemu:arg value='-option-rom'/>
   <qemu:arg value='path/to/my.rom'/>
</qemu:commandline>

It is also possible to add environment variables to the QEMU invocation:
<qemu:commandline>
   <qemu:env name='MY_VAR' value='my_value'/>
</qemu:commandline>

Setting qdev properties through libvirt

Taking this a step further we can set qdev properties through libvirt. There is no domain XML for setting the virtio-blk-pci ioeventfd qdev property. Here is how to set it using <qemu:arg> and the -set QEMU option:
<qemu:commandline>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.virtio-disk0.ioeventfd=off'/>
</qemu:commandline>

The result is that libvirt generates a QEMU command-line that ends with -set device.virtio-disk0.ioeventfd=off. This causes QEMU to go back and set the ioeventfd property of device virtio-disk0 to off.
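
If you want to confirm that the option actually made it onto the generated command-line, one way is to inspect the running QEMU process:

ps aux | grep qemu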

More information

The libvirt wiki has a page documenting mappings from QEMU command-line options to libvirt domain XML. This is extremely useful if you know which QEMU option to use but are unsure how to express that in domain XML.

That page also reveals the <qemu:commandline> tag and shows how it can be used to invoke QEMU with the GDB stub (-s).

Tuesday, March 22, 2011

How to access the QEMU monitor through libvirt

It is sometimes useful to issue QEMU monitor commands to VMs managed by libvirt. Since libvirt takes control of the monitor socket it is not possible to interact with the QEMU monitor in the same way as when running QEMU or KVM manually.

Daniel Berrange shared the following techniques on IRC a while back. It is actually pretty easy to get at the QEMU monitor even while libvirt is managing the VM:

Method 1: virsh qemu-monitor-command


There is a virsh command available in libvirt ≥0.8.8 that allows you to access the QEMU monitor through virsh:

virsh qemu-monitor-command --hmp <domain> '<command> [...]'
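
For example, to run the info qtree monitor command against a domain named vm1 (the domain name here is just an example):

virsh qemu-monitor-command --hmp vm1 'info qtree'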

Method 2: Connecting directly to the monitor socket


On older libvirt versions the only option is shutting down libvirt, using the monitor socket directly, and then restarting libvirt:

sudo service libvirt-bin stop  # or "libvirtd" on Red Hat-based distros
sudo nc -U /var/lib/libvirt/qemu/<domain>.monitor
...
sudo service libvirt-bin start

Either way works fine. I hope this is useful for folks troubleshooting QEMU or KVM. In the future I will post more libvirt tips :).

Update: Daniel Berrange adds that using the QEMU monitor essentially voids your libvirt warranty :). Try to only use query commands like info qtree rather than commands that change the state of QEMU like adding/removing devices.