The previous wording made it sound like all "visible" tasks were
aborted, which is not the case: A user with Sys.Audit but without
Sys.Modify may see a task that was started by a different user, but
overrule-shutdown would not abort the task.
Change wording to better reflect that not all visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Friedrich Weber [Fri, 12 Apr 2024 14:15:51 +0000 (16:15 +0200)]
fix #4474: qemu api: add overrule-shutdown parameter to stop endpoint
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks for the same VM (which are
visible to the user/token) are aborted before attempting to stop the
VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
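For illustration, usage would look roughly like this (invocation assumed
from the schema described above, not quoted from the patch):

    # abort qmshutdown tasks for VM 100 that are visible to the caller,
    # then stop the VM
    qm stop 100 --overrule-shutdown 1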
Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
Filip Schauer [Mon, 19 Feb 2024 11:11:39 +0000 (12:11 +0100)]
fix #1905: Allow moving unused disks
In the past, moving unused disks to another storage was prohibited due
to oversights in the handling of unused disks. This commit rectifies
this limitation by allowing the movement of unused disks.
Historical context:
* 16 Sep 2010 r5164 qemu-server/pve2: The disknames sub was removed.
* 17 Sep 2010 r5170 qemu-server/pve2: Unused disks were introduced.
* 28 Jan 2011 r5461 qemu-server/pve2: The same disknames sub that was
removed in r5164 was brought back. Since unused disks were not around
yet in r5164, the disknames sub did not consider unused disks.
* 6-8 Aug 2012 c1175c92..f91b2e45 qemu-server.git: Disk resize was
introduced. In commit c1175c92, sub qemu_block_resize did not take
unused disks into account, and in commit 2f48a4f5 (8 Aug 2012) the
resize API call was changed to only allow disks matching the ones in
the disknames sub. Since sub disknames did not contain any unused
disks, those were not allowed at all in the resize API call.
* 27 May 2013 586bfa78 qemu-server.git: Disk move was introduced. The
API call implementation borrowed heavily from disk resize, including
the behaviour of not taking unused disks into account. Thus, unused
disks could not be moved, a limitation that persists to this day.
In summary, this behaviour was introduced because the handling of unused
disks was overlooked and it was never changed.
There is no inherent reason why unused disks should be restricted from
being moved to another storage. These disks cannot use
qemu_drive_mirror, but they can still be moved with qemu_img_convert,
the same way as any other disk of a stopped VM.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Markus Frank [Thu, 11 Apr 2024 10:48:20 +0000 (12:48 +0200)]
fix #3784: config: Parameter for guest vIOMMU + test-cases
vIOMMU makes it possible to pass through PCI devices to L2 VMs inside
L1 VMs via nested virtualisation, and adds an extra layer of isolation.
This uses the new property-string from the "config: define machine
schema as property-string" commit to add the viommu option to the
machine parameter.
Currently there are two vIOMMU implementations in QEMU to choose from:
intel or virtio.
Virtio-iommu is more recent but less used in production than intel-iommu.
The assert_valid_machine_property function prevents using intel-iommu with
i440fx.
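For illustration, the resulting machine setting would look roughly like
this in the VM configuration (exact syntax assumed from the description
above; intel-iommu additionally requires the q35 machine type):

    machine: q35,viommu=virtio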
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[ TL: tiny coding style fix to extract variable inside if expr ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Hannes Duerr [Wed, 10 Apr 2024 11:17:29 +0000 (13:17 +0200)]
fix #5363: cloudinit: make creation of scsi cloudinit disks possible again
Upon obtaining the device type, a check is performed to determine if it
is a CD drive. It is important to note that Cloudinit drives are always
assigned as CD drives. If the drive has not yet been allocated, the test
will fail due to the unset cd attribute.
To avoid this, an explicit check is now performed to determine if it is
a Cloudinit drive that has not yet been assigned.
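A conceptual sketch of the check (illustrative only, not the literal
patch; assumes the existing drive_is_cloudinit helper from
PVE::QemuServer::Drive):

    # a cloud-init drive is always a CD drive, even while no volume has
    # been allocated for it yet
    my $is_cdrom = drive_is_cloudinit($drive)
        || ($drive->{media} && $drive->{media} eq 'cdrom');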
Fixes: d1feab4 ("fix #4957: add vendor and product information passthrough for SCSI-Disks") Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Filip Schauer [Mon, 11 Mar 2024 10:13:08 +0000 (11:13 +0100)]
cpu config: die on hotplug of non x86_64 CPUs
When attempting a CPU hotplug on an architecture other than x86_64, die
with a clean error instead of attempting a hotplug with a known
non-working device command line. Also move the corresponding FIXME up to
the error.
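A minimal sketch of the added guard (variable names illustrative):

    # FIXME: hot-plugging vCPUs on other architectures would need a
    # different device command line
    die "CPU hotplug is only implemented for x86_64\n"
        if $arch ne 'x86_64';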
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Thomas Lamprecht [Thu, 14 Mar 2024 08:06:22 +0000 (09:06 +0100)]
qm: add VM import command
Add a command that can be used together with volumes from the new
'import' content type of storage plugins.
For now only the new ESXi exposes that content type, but in the long
run it's planned to migrate over the existing OVF/OVA infra and extend
it so that it will replace the 'ovfimport' command.
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
[ TL: split out to separate commit and add message, fix completing
VMID to propose unused ones, note explicitly when in dry-run mode ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
instead of a "pbs-backing" parameter we now have a
"live-restore-backing" parameter containing the `-blockdev` arg and
its name, which also means we print the blockdev earlier
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Thomas Lamprecht [Sun, 10 Mar 2024 17:12:21 +0000 (18:12 +0100)]
config: pending network: avoid undef-warning on old/new comparison
A network device of a VM does not necessarily have to be connected to
an actual bridge, so when a new pending value is set we need to use
the undef-safe compare helpers when checking if there was a change
between old and new value, as otherwise one gets ugly "use of
uninitialized value in string ne" warnings.
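A minimal sketch of the undef-safe comparison (assuming the
safe_string_ne helper from PVE::GuestHelpers; surrounding names are
illustrative):

    use PVE::GuestHelpers qw(safe_string_ne);

    # no "use of uninitialized value" warning if either side is undef,
    # e.g. a network device that is not connected to any bridge
    if (safe_string_ne($old_net->{bridge}, $new_net->{bridge})) {
        # register the pending change
    }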
cpu config: implement is_native_arch locally for now
This could be a better fit in PVE::Tools, as proposed by Filip, but
OTOH Tools is already crowded as is, so wait until we need it in more
places outside of qemu-server.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Fri, 9 Feb 2024 12:14:25 +0000 (13:14 +0100)]
QMP client: remove unnecessary question mark from comment
There might've been a question back when it got first added in commit 9d689077 ("use long timeouts for snapshot monitor command"). But
nowadays, the value is well-established. Changing it would affect
quite a few operations, so that should not be done without good
reason and is likely better done for the specific operation.
Filip Schauer [Wed, 21 Feb 2024 14:33:16 +0000 (15:33 +0100)]
cpu config: Unify the default value for 'kvm'
Make the default value for 'kvm' consistent, taking into account
whether the VM will run on the same CPU architecture as the host.
This would be a breaking change to CPU hotplug for VMs with a
different CPU architecture running on an x86_64 host, as in this case
the default CPU type for CPU hotplug changes from 'kvm64' to 'qemu64'.
However, CPU hotplug of non x86_64 architectures is not supported
anyway, so this is not a breaking change after all.
It should be noted that this change does alter the CPU hotplug
behaviour when emulating an x86_64 CPU on a non-x86_64 host. This is
however not officially supported in Proxmox VE.
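A minimal sketch of the unified default (using the is_native_arch
helper from the entry above; surrounding code illustrative):

    # default: enable KVM only if the guest architecture matches the
    # host architecture
    my $kvm = $conf->{kvm} // is_native_arch($arch);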
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Filip Schauer [Wed, 21 Feb 2024 14:33:14 +0000 (15:33 +0100)]
prevent starting a 32-bit VM using a 64-bit OVMF BIOS
Instead of starting a VM with a 32-bit CPU type and a 64-bit OVMF image,
throw an error before starting the VM telling the user that OVMF is not
supported on 32-bit CPU types.
To obtain a list of 32-bit CPU types, refer to the builtin_x86_defs in
target/i386/cpu.c of QEMU. Exclude any entries that have the long mode
feature (CPUID_EXT2_LM).
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Dominik Csapak [Thu, 7 Mar 2024 09:33:37 +0000 (10:33 +0100)]
mediated device pass-through: fix race condition on VM reboot
When rebooting a VM from PVE (via CLI/API), the reboot code is called
under a guest lock, which creates a reboot request, shuts down the VM
and then calls the regular cleanup code, which includes the mdev
cleanup.
In parallel, the qmeventd observes that the VM process has gone, and
starts 'qm cleanup', which (among other tasks) also starts the VM
again if a reboot from the PVE side is pending.
The qmeventd synchronizes this through a lock on the guest, with a
default timeout of 10 seconds.
Since we currently also always wait 10 seconds for the NVIDIA driver
to clean up the mdev, this creates a race condition for the cleanup
lock. IOW, when the call to `qm cleanup` starts before we start to
sleep for 10 seconds, it will not be able to acquire its lock and will
not start the VM again.
To avoid the race condition in practice, do two things:
* increase the timeout in `qm cleanup` to 60 seconds.
Technically this still might run into a timeout, as we can configure
up to 16 mediated devices with each delaying 10 seconds in the worst
case, but realistically most users won't configure more than two or
three of them, if even that.
* change the hard-coded `sleep 10` to a loop that sleeps for 1 second
at a time before checking the state again (see the sketch below). This
shortens the wait when the NVIDIA driver did not require the full 10s
to finish the clean-up.
Further, add a bit of logging, so one can properly see in the task log
what is happening at which point in time.
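A rough sketch of the resulting wait loop (paths and names illustrative,
not the literal patch):

    my $max_wait = 10;
    for (my $i = 0; $i < $max_wait; $i++) {
        last if !-e $mdev_sysfs_path;  # driver finished the clean-up
        print "waiting for mediated device clean-up (${i}s)\n";
        sleep 1;
    }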
Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: change warn to print, reword commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
api: clone vm: comment and style clean-up deactivation error-handling
Make the post-if check for the target not already running more
prominent by using a full if block.
Also comment on why we ignore the error here. While the commit
changing that explained it well, this is one of the things that might
be better off with an in-code comment: the deactivation is described
as important here, so one might wonder why the code continues if it
fails.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Hannes Duerr [Wed, 6 Mar 2024 14:08:34 +0000 (15:08 +0100)]
fix #1734: clone VM: if deactivation fails demote error to warning
When a template with disks on LVM is cloned to another node, the
volumes are first activated, then cloned and deactivated again after
cloning.
However, if clones of this template are now created in parallel to
other nodes, it can happen that one of the tasks can no longer
deactivate the logical volume because it is still in use. The reason
for this is that we use a shared lock.
Since the failed deactivation does not necessarily have consequences,
we downgrade the error to a warning, which means that the clone tasks
will continue to be completed successfully.
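A minimal sketch of the demotion (using PVE::Storage::deactivate_volumes;
the surrounding error handling is illustrative):

    eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
    # the volume may legitimately still be in use by a parallel clone task
    log_warn("volume deactivation failed: $@") if $@;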
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
Tested-by: Friedrich Weber <f.weber@proxmox.com>
Fiona Ebner [Wed, 31 Jan 2024 10:53:59 +0000 (11:53 +0100)]
api: fix using import-from with SCSI disks
by fixing the SCSI feature compatibility check helper. The helper is
also called for disks using import-from, so it has to use the extended
schema when parsing the drive.
Fixes: d1feab4a ("fix #4957: add vendor and product information passthrough for SCSI-Disks") Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
PVE::Storage::path() neither activates the storage of the passed-in volume, nor
does it ensure that the returned value is actually a file or block device, so
this actually fixes two issues. PVE::Storage::abs_filesystem_path()
takes care of both, while still calling path() under the hood (since $volid
here is always a proper volid, unless we change the cicustom schema at some
point in the future).
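A minimal sketch of the change described above (illustrative, not the
literal diff):

    # previously PVE::Storage::path($storecfg, $volid) was used, which
    # neither activates the volume nor validates the resulting path
    my $path = PVE::Storage::abs_filesystem_path($storecfg, $volid);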
Hannes Duerr [Tue, 19 Dec 2023 14:03:05 +0000 (15:03 +0100)]
migration: secure and use source volume names for deactivation
During migration, the volume names may change if the name is already in
use at the target location. We therefore want to save the original names
so that we can deactivate the original volumes afterwards.
Hannes Duerr [Wed, 10 Jan 2024 12:45:49 +0000 (13:45 +0100)]
fix #4957: add vendor and product information passthrough for SCSI-Disks
adds vendor and product information for SCSI devices to the json schema
and checks in the VM create/update API call if it is possible to add
these to QEMU as a device option
Signed-off-by: Hannes Duerr <h.duerr@proxmox.com>
[FE: add missing space to exception message
use config option for exception e.g. scsi0 rather than 'product'
style fixes]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Fiona Ebner [Wed, 3 Jan 2024 13:41:49 +0000 (14:41 +0100)]
fix #2258: select correct device when removing drive snapshot via QEMU
The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.
Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.
Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. So passing a truthy
value would prevent removing a snapshot from an unused qcow2 disk that
was still used at the time the snapshot was taken. Thus, and because
some exotic third party plugin might be using it for whatever reason,
it's necessary to keep passing the same value as before.
Fiona Ebner [Tue, 19 Dec 2023 13:44:59 +0000 (14:44 +0100)]
fix #4501: TCP migration: start vm: move port reservation and usage closer together
Currently, volume activation, PCI reservation and resetting systemd
scope happen in between, so the 5 second expiretime used for port
reservation is not always enough.
It's possible to defer telling QEMU where it should listen for
migration and do so after it has been started via QMP. Therefore, the
port reservation can be moved very close to the actual usage.
Mentioned here for completeness and can still be done as an additional
change later if desired: next_migrate_port could be modified to
optionally return the open socket and it should be possible to pass
the file descriptor directly to QEMU, but that would require accepting
the connection on the Perl side first (otherwise this leads to ENOTCONN
107). While it would avoid any races, it's not the most elegant
solution, and the change at hand should be enough in all practical
situations.
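Conceptually, the new flow looks roughly like this (sketch only; helper
names and the exact QMP arguments are assumptions):

    # QEMU is started with '-incoming defer'; the port is reserved only
    # right before QEMU is told about it via QMP
    my $migrate_port = PVE::Tools::next_migrate_port($pfamily);
    mon_cmd($vmid, 'migrate-incoming', uri => "tcp:localhost:$migrate_port");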
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Tested-by: Hannes Duerr <h.duerr@proxmox.com>
Markus Frank [Fri, 24 Nov 2023 12:32:38 +0000 (13:32 +0100)]
migration: do not allow live-migration with clipboard=vnc
It is not yet supported for QEMU's vdagent device which is used for
the VNC clipboard.
The migration precondition API call will now treat the VNC clipboard
as a local resource. Thus the GUI blocks migration and shows:
"Can't migrate VM with local resources: clipboard=vnc"
QemuMigrate's prepare function will also abort live migration early
when using the VNC clipboard.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
[FE: adapt commit message a bit]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Markus Frank [Tue, 14 Nov 2023 09:22:51 +0000 (10:22 +0100)]
config: enable VNC clipboard parameter in vga_fmt
add an option to use the QEMU vdagent implementation to enable the VNC
clipboard. When enabled together with SPICE, the spice-vdagent gets replaced
with the QEMU implementation.
This patch does not solve #1406, but does allow copy and paste with a
running X-session, when spice-vdagent is installed on the guest.
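For illustration, the setting would look roughly like this in the VM
configuration (exact syntax assumed from the description above):

    vga: std,clipboard=vnc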
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Stefan Lendl [Fri, 17 Nov 2023 14:20:26 +0000 (15:20 +0100)]
gitignore: add build output and .vscode to ignored files
Signed-off-by: Stefan Lendl <s.lendl@proxmox.com>
[ TL: extend subject and use more specific build-dir glob ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Fri, 10 Nov 2023 13:24:51 +0000 (14:24 +0100)]
vm start: add warning about deprecated machine version
While there already is a warning from QEMU proper, that one is not
visible as a task warning and it's not straightforward to make it be
one, because QEMU is started inside a run_fork(). It's also more
future-proof to have the detection explicit on our side and the
documentation can be referenced.
Fiona Ebner [Fri, 10 Nov 2023 13:24:47 +0000 (14:24 +0100)]
machine: get current: make it clear that pve-version only exists for the current machine
by adding a comment and grouping the code better. See the PVE QEMU
patch "PVE: Allow version code in machine type" for reference. The way
the code was written previously made it look like a bug where
$pve_version might be overwritten multiple times.
The ha-manager accessed rather internal details before that version,
and the memory property changing to a format string with sub-properties
in 7f8c808 ("add memory parser") breaks that access. So ensure the
installed ha-manager is using the newer get_derived_property method to
access that information cleanly.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The vCPUs are passed as devices with a specific id only when CPU
hot-plug is enabled at cold start.
So we can't enable/disable allow-hotplug online, as the vCPU hotplug
API would then throw errors about not finding the core id.
Not enforcing this could also lead to migration failure, as the QEMU
command line for the target VM could be made different than the one it
was actually running with, causing a crash of the target as Fiona
observed [0].
Filip Schauer [Fri, 13 Oct 2023 13:50:06 +0000 (15:50 +0200)]
backup, migrate: fix races with suspended VMs that can wake up
Fix races with ACPI-suspended VMs which could wake up during migration
or during a suspend-mode backup.
Revert the prevention of ACPI-suspended VMs automatically resuming
after migration, introduced by 7ba974a6828d. The commit introduced a
potential problem that causes a suspended VM that wakes up during
migration to remain paused after the migration finishes.
This can be fixed once QEMU preserves the 'suspended' runstate during
migration (current patch on the qemu-devel list [0]) by checking for
the 'suspended' runstate on the target after migration.
Furthermore, the commit increased the race window during the
preparation of a suspend-mode backup, when a suspended VM wakes up
between the vm_is_paused check in PVE::VZDump::QemuServer::prepare and
PVE::VZDump::QemuServer::qga_fs_freeze. This causes the code to skip
fs-freeze even if the VM has woken up, potentially leaving the file
system in an inconsistent state.
To prevent this, do not treat the suspended runstate as paused when
migrating or archiving a VM.
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
[ TL: massage in Fiona's extra info into commit message ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fiona Ebner [Mon, 16 Oct 2023 13:12:26 +0000 (15:12 +0200)]
fix #4522: api: vncproxy: also set environment variable for ticket without websocket
Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' will just fail when the
LC_PVE_TICKET environment variable is not set. Since it is not only
required in combination with websocket, drop that conditional.
For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.
Filip Schauer [Mon, 9 Oct 2023 13:25:19 +0000 (15:25 +0200)]
Fix ACPI-suspended VMs resuming after migration
Add checks for "suspended" and "prelaunch" runstates when checking
whether a VM is paused.
This fixes the following issues:
* ACPI-suspended VMs automatically resuming after migration
* Shutdown and reboot commands timing out instead of failing
immediately on suspended VMs
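A minimal sketch of the extended check (illustrative; based on QEMU's
query-status QMP command):

    my $qmpstatus = mon_cmd($vmid, 'query-status');
    my $is_paused =
        $qmpstatus->{status} =~ m/^(paused|suspended|prelaunch)$/;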
Signed-off-by: Filip Schauer <f.schauer@proxmox.com>
Friedrich Weber [Fri, 6 Oct 2023 12:15:33 +0000 (14:15 +0200)]
vm start: set higher timeout if using PCI passthrough
The default VM startup timeout is `max(30, VM memory in GiB)` seconds.
Multiple reports in the forum [0] [1] and the bug tracker [2] suggest
this is too short when using PCI passthrough with a large amount of VM
memory, since QEMU needs to map the whole memory during startup (see
comment #2 in [2]). As a result, VM startup fails with "got timeout".
To work around this, set a larger default timeout if at least one PCI
device is passed through. The question remains how to choose an
appropriate timeout. Users reported the following startup times:
The data does not really indicate any simple (e.g. linear)
relationship between RAM and startup time (even data from the same
source). However, to keep the heuristic simple, assume linear growth
and multiply the default timeout by 4 if at least one `hostpci[n]`
option is present, obtaining `4 * max(30, VM memory in GiB)`. This
covers all cases above, and should still leave some headroom.
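A minimal sketch of the resulting heuristic (variable names
illustrative):

    use List::Util qw(max);

    # default startup timeout: max(30, VM memory in GiB) seconds
    my $timeout = max(30, $memory_gib);
    # with PCI passthrough QEMU maps all guest memory at startup, so
    # allow four times as long
    $timeout *= 4 if $has_hostpci;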
In preparation to add more properties to the memory configuration like
maximum hotpluggable memory and whether virtio-mem devices should be
used.
This also allows getting rid of the cyclic include of PVE::QemuServer
in the memory module.
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
[FE: also convert new usage in get_derived_property
remove cyclic include of PVE::QemuServer
add commit message]
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
PVE::QemuServer::check_running() does both
PVE::QemuConfig::assert_config_exists_on_node()
PVE::QemuServer::Helpers::vm_running_locally()
The former one isn't needed here when doing hotplug, because the API
already asserts that the VM config exists. It also would introduce a
new cyclic dependency between PVE::QemuServer::Memory <->
PVE::QemuConfig with the proposed virtio-mem patch set.
In preparation to remove the cyclic include of PVE::QemuServer in the
memory module.
fix #2816: restore: remove timeout when allocating disks
10 minutes is not long enough when disks are large and/or network
storages are used and preallocation is not disabled. The default is
metadata preallocation for qcow2, so there are still reports of the
issue [0][1]. If allocation really does not finish, as the comment
describing the timeout feared, just let the user cancel it.
Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.
The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.
There is no need for the $tmp variable before setting back the old
timeout, because that is at least one second, so we'll always be able
to set the $oldtimeout variable to undef in time in practice.
Currently, there shouldn't even be an outer timeout in the first
place, because the only call path leading to here is via the create
API (also used by qmrestore), both of which don't set a timeout.