Fabian Ebner [Thu, 13 Jan 2022 11:04:01 +0000 (12:04 +0100)]
config: activate affected storages for snapshot operations
For snapshot creation, the storage for the vmstate file is activated
via vdisk_alloc when the state file is created.
Do not activate the volumes themselves, as that has unnecessary side
effects (e.g. waiting for the zvol device link with ZFS, mapping the
volume with RBD). If a storage can only do snapshot operations on an
activated volume, the plugin needs to activate the volume itself.
The actual implementation will live in the plugins, making it possible
to skip CD-ROM drives, bind-mounts, etc.
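A minimal sketch of the approach, with assumed variable names
($affected_volids, $storecfg); parse_volume_id and activate_storage are
the existing PVE::Storage helpers:

    my $storages = {};
    for my $volid (@$affected_volids) {
        my ($storeid) = PVE::Storage::parse_volume_id($volid);
        $storages->{$storeid} = 1;
    }
    # activate the storages only; volume activation (zvol link, RBD map)
    # is left to plugins that actually need it for snapshot operations
    PVE::Storage::activate_storage($storecfg, $_) for sort keys %$storages;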
Thomas Lamprecht [Tue, 30 Nov 2021 07:13:13 +0000 (08:13 +0100)]
snapshot prepare: log on parent-cycle deletion
For new clones this should not happen anyway, as the API calls now
clean up such parent references, but for old ones it can still happen,
so log to inform.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fabian Ebner [Fri, 26 Nov 2021 10:52:30 +0000 (11:52 +0100)]
replication: update last_sync before removing old replication snapshots
If pvesr was terminated after finishing with the new sync and after
removing old replication snapshots, but before it could write the new
state, the next replication would fail. It would wrongly interpret the
actual last replication snapshot as stale, remove it, and (if no other
snapshots are present) attempt a full sync, which would fail.
Reported in the community forum [0], this was brought to light by the
new pvescheduler before it learned graceful reload.
It's not possible to simply preserve a last remaining snapshot in
prepare(), because prepare() is also used for valid removals. Instead,
update last_sync early enough. Stale snapshots will still be removed
on the next run if there are any.
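Sketched with placeholder helper names, the essence of the fix is just
the reordering:

    $state->{last_sync} = $sync_start_time;
    write_replication_state($jobid, $state);      # persist before pruning
    remove_old_replication_snapshots($jobcfg);    # if interrupted, retried next run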
Fabian Ebner [Tue, 19 Oct 2021 07:54:55 +0000 (09:54 +0200)]
replication: find common snapshot: use additional information
which is now available from the storage back-end.
The benefits are:
1. Ability to detect different snapshots even if they have the same
name. Rather hard to reach, but for example with:
Snapshots A -> B -> C -> __replicationXYZ
Remove B, rollback to C (causes __replicationXYZ to be removed),
create a new snapshot B. Previously, B was selected as replication
base, but it didn't match on source and target. Now, C is correctly
selected.
2. Smaller delta in some cases by not preferring replication snapshots
over config snapshots, but using the most recent possible one from
both types of snapshots.
3. Less code complexity for snapshot selection.
If the remote side is old, it's not possible to detect mismatch of
distinct snapshots with the same name, but the timestamps from the
local side can still be used.
Still limit to our snapshots (config and replication), because we
don't have guarantees for other snapshots (could be deleted in the
meantime, name might not fit import/export regex, etc.).
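A hedged sketch of the matching rule (field names are assumptions): a
snapshot pair only counts as common if the timestamps agree, and a
missing timestamp (old remote side) falls back to matching by name
alone:

    sub snapshots_match {
        my ($local_snap, $remote_snap) = @_;
        return 0 if !defined($remote_snap);
        # old code path: no timestamp available, match by name only
        return 1 if !defined($local_snap->{timestamp})
            || !defined($remote_snap->{timestamp});
        return $local_snap->{timestamp} == $remote_snap->{timestamp};
    }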
Fabian Ebner [Thu, 12 Aug 2021 11:01:11 +0000 (13:01 +0200)]
fix #3111: config: snapshot delete: check if replication still needs it
and abort if it does and --force is not specified.
After rollback, the rollback snapshot might still be needed as the
base for incremental replication, because rollback removes (blocking)
replication snapshots.
It's not enough to limit the check to the most recent snapshot,
because new snapshots might've been created between rollback and
remove.
It's not enough to limit the check to snapshots without a parent (i.e.
in case of ZFS, the oldest), because some volumes might've been added
only after that, meaning the oldest snapshot is not an incremental
replication base for them.
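As a hedged sketch (is_replication_base() is a hypothetical helper),
the check therefore has to cover every replicated volume:

    for my $volid (@replicated_volids) {
        if (!$force && is_replication_base($volid, $snapname)) {
            die "snapshot '$snapname' is still needed as replication base"
                . " for '$volid' - use --force to delete it anyway\n";
        }
    }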
Fabian Ebner [Thu, 12 Aug 2021 11:01:10 +0000 (13:01 +0200)]
partially fix #3111: replication: be less picky when selecting incremental base
After rollback, it might be necessary to start the replication from an
earlier, possibly non-replication, snapshot, because the replication
snapshot might have been removed from the source node. Previously,
replication could only recover in case the current parent snapshot was
already replicated.
To get into the bad situation (with no replication happening between
the steps):
1. have existing replication
2. take new snapshot
3. rollback to that snapshot
In case the partial fix to only remove blocking replication snapshots
for rollback was already applied, an additional step is necessary to
get into the bad situation:
4. take a second new snapshot
Since non-replication snapshots, for which no timestamp is readily
available, are now also included, it is necessary to filter them out
when probing for replication snapshots.
If no common replication snapshot is present, iterate backwards
through the config snapshots.
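A sketch of that selection order; the __replicate_ prefix is the usual
naming of PVE replication snapshots, the helpers are assumptions:

    my @repl_snaps = grep { m/^__replicate_/ } @common_candidates;
    my $base = newest_snapshot(\@repl_snaps);
    if (!defined($base)) {
        # no common replication snapshot - walk config snapshots, newest first
        for my $snap (reverse @config_snapshots) {
            if (exists_on_target($snap)) {
                $base = $snap;
                last;
            }
        }
    }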
The changes are backwards compatible:
If one side is running the old code, and the other the new code,
the fact that one of the two prepare() calls does not return the
new additional snapshot candidates, means that no new match is
possible. Since the new prepare() returns a superset, no previously
possible match is now impossible.
The branch with @desc_sorted_snap is now taken more often, but
it can still only be taken when the volume exists on the remote side
(and has snapshots). In such cases, it is safe to die if no
incremental base snapshot can be found, because a full sync would not
be possible.
Fabian Ebner [Thu, 12 Aug 2021 11:01:07 +0000 (13:01 +0200)]
partially fix #3111: further improve removing replication snapshots
by using the new $blockers parameter. No longer remove all replication
snapshots from affected volumes unconditionally, but check first if
all blocking snapshots are replication snapshots. If they are, remove
them and proceed with rollback. If they are not, die without removing
any.
For backwards compatibility, it's still necessary to remove all
replication snapshots if $blockers is not available.
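A hedged sketch of that logic (helper names are assumptions), again
using the usual __replicate_ snapshot prefix:

    if (defined($blockers)) {
        my @non_repl = grep { $_ !~ m/^__replicate_/ } @$blockers;
        die "snapshot(s) @non_repl block the rollback\n" if @non_repl;
        remove_snapshot($volid, $_) for @$blockers;
    } else {
        # remote side too old to report blockers: old, unconditional behavior
        remove_all_replication_snapshots($volid);
    }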
Get the replicatable volumes from the snapshot config rather than the
current config, and filter those volumes further to those that will
actually be rolled back.
Previously, a volume that only had replication snapshots (e.g. because
it was added after the snapshot was taken, or the vmstate volume)
would lose them. Then, on the next replication run, such a volume
would lead to an error, because replication tried to do a full sync,
but the target volume still exists.
This is not a complete fix. It is still possible to run into problems:
- by removing the last (non-replication) snapshots after a rollback
before replication can run once.
- by creating a snapshot and making a rollback before replication can
run once.
The list of volumes is not required to be sorted for prepare(), but it
is now sorted in the order foreach_volume() iterates, so no longer
random.
Fabian Ebner [Mon, 15 Feb 2021 12:24:59 +0000 (13:24 +0100)]
vzdump: mailto: use email-or-username-list format
because it is a more complete pattern. Also, 'mailto' was a '-list' format in
PVE 6.2 and earlier, so this also fixes whitespace-related backwards
compatibility. In particular, this fixes creating a backup job in the GUI
without setting an address, which passes along ''.
For example,
> vzdump 153 --mailto " ,,,admin@proxmox.com;;; developer@proxmox.com , ; "
was valid and worked in PVE 6.2.
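The '-list' formats build on PVE::Tools::split_list, which treats
commas, semicolons and whitespace as separators, so the messy example
above still reduces to two clean addresses:

    use PVE::Tools;

    my @addrs = PVE::Tools::split_list(
        " ,,,admin\@proxmox.com;;; developer\@proxmox.com , ; ");
    # @addrs is now ('admin@proxmox.com', 'developer@proxmox.com')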
Fabian Ebner [Tue, 24 Nov 2020 11:27:28 +0000 (12:27 +0100)]
vzdump: correctly handle prune-backups option in commandline and cron config
Previously, only the hash reference was printed instead of the property string.
It's also necessary to parse the property string when reading the cron config.
which led to current and pending/delete values being returned
separately, and being misinterpreted by the web interface (and probably
other clients as well).
One case where a config value should be skipped, instead of parsed and
added, is when the value is not scalar. This is the case for the raw lxc
keys (e.g. lxc.init.cmd, lxc.apparmor.profile), which get added as an
array to the 'lxc' key.
This patch reintroduces the skipping of non-scalar values when parsing
the config, but not for the pending values.
From a short look through the commit history, the sanity checks had been
in place since 2014 (introduced in qemu-server for handling pending
configuration changes), and their removal did not cause any other
regressions.
To my knowledge only the raw lxc config keys are parsed into a non-scalar
value.
Tested by adding a 'lxc.init.cmd' key to a container config.
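The reintroduced check, sketched with assumed variable names:

    for my $opt (keys %$conf) {
        next if ref($conf->{$opt});    # non-scalar, e.g. the raw 'lxc' key array
        push @$res, { key => $opt, value => $conf->{$opt} };
    }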
Thomas Lamprecht [Wed, 20 May 2020 15:00:55 +0000 (17:00 +0200)]
fix config_with_pending_array for falsy current values
one could have a config with:
> acpi: 0
and a pending deletion for that to restore the default 1 value.
The config_with_pending_array method then pushed the key twice: once,
correctly, in the loop iterating the config itself, and once when going
over the pending delete hash, which is normally only for those options
not yet referenced in the config at all. There, the check was on
"truthiness", not definedness; fix that.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
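The essence of the fix, with assumed variable names:

    for my $opt (keys %$pending_delete_hash) {
        next if defined($conf->{$opt});    # was: next if $conf->{$opt};
        push @$res, { key => $opt, delete => 1 };
    }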
Fabian Ebner [Tue, 5 May 2020 08:27:17 +0000 (10:27 +0200)]
fix #2682: make sure configuration file is up-to-date for lock_config-variants
See [0] for the details. The call tree for the variants is
lock_config -> lock_config_full -> lock_config_mode
so it is sufficient to adapt lock_config_mode.
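A hedged sketch of the idea, assuming PVE::Cluster::cfs_update() is the
refresh mechanism: wrap the callback so the clustered state is re-synced
once the lock is held:

    my $realcode = sub {
        PVE::Cluster::cfs_update();    # pick up the current configuration file
        return $code->(@_);
    };
    # ... pass $realcode instead of $code to the underlying lock helper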
replication: don't declare variable in post-if scope
Declaring and assigning a variable in a statement with a conditional
modifier is undefined behavior in Perl.
For more information about this behaviour see
https://perldoc.perl.org/perlsyn.html#Statement-Modifiers
> NOTE: The behaviour of a my, state, or our modified with a
> statement modifier conditional or loop construct (for example,
> `my $x if ...`) is undefined.
> The value of the my variable may be undef, any previously assigned
> value, or possibly anything else.
> Don't rely on it. Future versions of perl might do something
> different from the version of perl you try it out on. Here be
> dragons.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
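A minimal example of the problematic pattern and its fix ($snap and
$snapshots are made-up names):

    # broken: 'my' with a statement modifier is undefined behavior, so
    # $snap may keep a previously assigned value instead of being undef
    my $snap = $snapshots->{$name} if defined($name);

    # fixed: declare unconditionally, assign conditionally
    my $snap;
    $snap = $snapshots->{$name} if defined($name);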
Fabian Ebner [Thu, 26 Mar 2020 08:09:35 +0000 (09:09 +0100)]
foreach_volume(_full): generalize and implement function
Introduce a parameter $opts to allow for better control of which
keys/volumes to use for the iteration, and the ability to reverse the
order. Also, allow extra parameters for the function.
Removes the '__snapshot'-prefix for future use from outside the module.
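A hedged usage sketch (the option name is an assumption based on the
description above):

    PVE::LXC::Config->foreach_volume_full($conf, { reverse => 1 }, sub {
        my ($key, $volume) = @_;
        print "$key\n";
    });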
Thomas Lamprecht [Tue, 10 Mar 2020 11:17:00 +0000 (12:17 +0100)]
get_backup_volumes: clarification and indentation improvements to comment
Aaron clarified that one documented element actually slipped through
and won't be returned.
Also fix the mixed tab/space indentation; for such comments we can
be pretty liberal, and I just went with two spaces per level, as
some parts already had that.