Initial support for Proxmox backup/restore, on ZFS
Proxmox is a Debian-based virtualiser that I like; its style is similar to what I would have built myself.
It has a noteworthy backup mechanism (via an external product, Proxmox Backup Server) that is very interesting (it looks like a free Nakivo, for those familiar with virtualisation).
The 'internal' backups are done by vzdump, which is also good, BUT it operates with 'normal' compression (zstd, for example), without deduplication AND WITHOUT ENCRYPTION (this is bad, very bad)
Proxmox also supports ZFS storage, but in a way I do not like much, namely zvols.
For those not used to ZFS: these are 'volumes' written in blocks, so they are not accessible as files (yep, sometimes not 'everything' is a file).
The aim is better performance, by removing the 'intermediate' layer of virtual-disks-on-files.
However, this makes backups a real nightmare, as there is no 'easy' way to make them (there are no .vmdk or .raw files to copy back and forth).
It is possible to force this behaviour (i.e. store VM disks as files instead of zvols); I will not go into the details.
Under this assumption (i.e. that the virtual machine disks are 'real' files, normally stored in the /var/lib/vz folder, itself on ZFS storage), I have implemented two new functions in zpaqfranz to make ZFS-snapshot-based Proxmox backups.
To reiterate: Proxmox supports a thousand different types of storage; I chose the one I like, and maybe in the future I will make zpaqfranz 'smarter'. For now I basically use it for my own Proxmox server backups, together with Proxmox Backup Server (two technologies are better than one).
Proxmox, although open source, is commercially supported by a German company and therefore, understandably, they are not very keen on tools that are alternatives to their own.
(they deleted a thread on their forum without any explanation :)
I tried to contact the Proxmox Backup Server developers by e-mail, with no response.
So I share my little experience with anyone who is interested.
zfsproxbackup
Archives Proxmox backups (from local ZFS storage), taking VM disks from /var/lib/vz
- -force Destroy the temporary snapshot (before the backup)
- -kill Remove the snapshot (after the backup)
- -all Get all VMs
- -not Do not back up (exclude) the listed VMs
- -snapshot kj Set the snapshot name to kj (default: francoproxmox)
Backup/encrypt w/key 'pippo' VM 200 zfsproxbackup /bak/200.zpaq 200 -force -kill -key pippo
Backup 2 VMs: 200 and 300 zfsproxbackup /bak/200_300.zpaq 200 300
Backup ALL VMs zfsproxbackup /bak/all.zpaq -all -force -kill
Backup all EXCEPT 200 and 300 zfsproxbackup /bak/part.zpaq -all -not 200 -not 300 -force -kill
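What zfsproxbackup automates is roughly the following ZFS sequence (a dry-run sketch: the pool name zp0/zd0, the VM ID 200, the snapshot name and the key all come from the examples on this page; the 200.conf filename is my assumption about the qemu-server config layout):

```shell
#!/bin/sh
# Dry-run sketch of the snapshot/archive/cleanup cycle behind zfsproxbackup.
# 'run' only prints the commands; swap 'echo' for real execution.
POOL="zp0/zd0"          # ZFS dataset mounted on /var/lib/vz (from the log below)
SNAP="francoproxmox"    # default snapshot name
VMID="200"
run() { echo "$@"; }

run zfs destroy "$POOL@$SNAP"     # -force: destroy a stale snapshot, if any
run zfs snapshot "$POOL@$SNAP"    # freeze the VM disks at a consistent instant
run zpaqfranz a "/bak/$VMID.zpaq" \
    "/var/lib/vz/.zfs/snapshot/$SNAP/images/$VMID" \
    "/etc/pve/qemu-server/$VMID.conf" -key pippo   # dedup + encrypt the frozen copy
run zfs destroy "$POOL@$SNAP"     # -kill: release the snapshot afterwards
```

Note that `zfs destroy` of a non-existent snapshot simply fails, which is why the first destroy is a "(if any)" step.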
This is a "real world" example of backing up a ZFS-based FreeBSD mailserver.
Please note the space taken.
/usr/local/bin/zpaqfranz zfsproxbackup /backup/200_posta_G6.zpaq 200 -force -kill -key pippo
zpaqfranz v57.3f-JIT-L, (14 Feb 2023)
franz:-key (hidden)
franz:-force -kill
zfsproxmox-backup VERY EXPERIMENTAL!
Works only on /var/lib/vz VM disk(s)
and on /etc/pve/qemu-server/ config(s)
37720: running Searching vz from zfs list...
38072: Founded pool <<zp0/zd0>>
53549: Archive /backup/200_posta_G6.zpaq
53550: Pool zp0/zd0
53550: Purged Pool zp0_zd0
53552: Mark francoproxmox
38135: VM Path 000 /var/lib/vz/.zfs/snapshot/francoproxmox/images/200
37720: running Destroy snapshot (if any)
38162: x_one zfs destroy zp0/zd0@francoproxmox
37720: running Taking snapshot
38162: x_one zfs snapshot zp0/zd0@francoproxmox
/backup/200_posta_G6.zpaq:
15 versions, 18 files, 693.419 frags, 3.215 blks, 7.764.776.789 bytes (7.23 GB)
Updating /backup/200_posta_G6.zpaq at offset 7.764.776.789 + 0
Adding 42.954.981.615 (40.00 GB) in 2 files (1 dirs), 8 threads @ 2023-02-14 15:45:30
(001%) 1.00% 00:08:55 ( 409.63 MB)->( 0.00 B) of ( 40.00 GB) 81.93 MB/se
(002%) 2.00% 00:07:07 ( 819.29 MB)->( 0.00 B) of ( 40.00 GB) 102.41 MB/se
(...)
(099%) 99.00% 00:00:03 ( 39.60 GB)->( 52.12 MB) of ( 40.00 GB) 107.57 MB/se
(100%) 100.00% 00:00:00 ( 40.00 GB)->( 52.12 MB) of ( 40.00 GB) 107.52 MB/se
1 +added, 0 -removed.
7.764.776.789 + (42.954.981.615 -> 692.206.304 -> 57.182.470) = 7.821.959.259 @ 107.42 MB/s
37720: running Destroy snapshot (if any)
38162: x_one zfs destroy zp0/zd0@francoproxmox
381.635 seconds (000:06:21) (all OK)
zpaqfranz v57.3f-JIT-L, (14 Feb 2023)
franz:-key (hidden)
200_posta_G6.zpaq:
17 versions, 20 files, 702.583 frags, 3.263 blks, 7.826.948.327 bytes (7.29 GB)
-------------------------------------------------------------------------
< Ver > < date > < time > < added > <removed> < bytes added >
-------------------------------------------------------------------------
00000001 2023-02-09 17:55:38 +00000003 -00000000 -> 6.654.630.534
00000002 2023-02-09 18:47:44 +00000001 -00000000 -> 17.394.282
00000003 2023-02-11 13:40:51 +00000001 -00000000 -> 252.707.222
00000004 2023-02-11 17:09:04 +00000001 -00000000 -> 74.337.419
00000005 2023-02-11 17:43:25 +00000001 -00000000 -> 15.669.831
00000006 2023-02-11 19:04:01 +00000001 -00000000 -> 17.670.788
00000007 2023-02-12 00:00:01 +00000001 -00000000 -> 72.951.575
00000008 2023-02-12 08:00:01 +00000001 -00000000 -> 94.408.432
00000009 2023-02-12 16:00:01 +00000001 -00000000 -> 89.868.811
00000010 2023-02-13 00:00:01 +00000001 -00000000 -> 89.430.987
00000011 2023-02-13 08:00:01 +00000001 -00000000 -> 84.165.485
00000012 2023-02-13 16:00:01 +00000001 -00000000 -> 91.936.236
00000013 2023-02-13 17:30:36 +00000002 -00000000 -> 16.994.079
00000014 2023-02-14 00:00:01 +00000001 -00000000 -> 92.889.622
00000015 2023-02-14 08:00:01 +00000001 -00000000 -> 99.721.454
00000016 2023-02-14 15:45:30 +00000001 -00000000 -> 57.182.470
00000017 2023-02-14 16:00:01 +00000001 -00000000 -> 4.989.068
Today's update: each run takes only a few MB of extra space (e-mails are generally small, deduplicable and compressible)
(100%) 100.00% 00:00:00 ( 40.00 GB)->( 19.43 MB) of ( 40.00 GB) 93.10 MB/se
1 +added, 0 -removed.
7.951.716.338 + (42.954.981.615 -> 254.326.790 -> 22.662.906) = 7.974.379.244 @ 92.96 MB/s
37720: running Destroy snapshot (if any)
38162: x_one zfs destroy zp0/zd0@francoproxmox
441.390 seconds (000:07:21) (all OK)
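The 8-hourly versions in the listing above (00:00:01, 08:00:01, 16:00:01) come from a scheduled run; a minimal crontab sketch, with the command line copied from the example above (adjust paths and key to taste):

```shell
# /etc/crontab sketch: append a new version to the archive every 8 hours,
# matching the 00:00 / 08:00 / 16:00 timestamps in the version list above.
0 0,8,16 * * * root /usr/local/bin/zpaqfranz zfsproxbackup /backup/200_posta_G6.zpaq 200 -force -kill -key pippo
```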
zfsproxrestore
The corresponding restore command, of course under the same assumptions.
Restores Proxmox backups (onto local storage) into /var/lib/vz and /etc/pve/qemu-server
Without a file selection it restores everything; files can be a sequence of VM IDs (e.g. 200 300)
- -kill Remove the snapshot (after the restore)
- -not Do not restore (exclude) the listed VMs
Restore all VMs zfsproxrestore /backup/allvm.zpaq
Restore 2 VMs: 200 and 300 zfsproxrestore /backup/allvm.zpaq 200 300
Restore VM 200, release snapshot zfsproxrestore /backup/allvm.zpaq 200 -kill
Restore all VMs, except 200 zfsproxrestore /backup/allvm.zpaq -not 200 -kill
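Before restoring, it can be worth inspecting and verifying the archive first; a dry-run sketch using standard zpaqfranz commands (l = list, t = test; the archive name comes from the examples above, and I am assuming an encrypted archive needs the same -key used at backup time):

```shell
#!/bin/sh
# Dry-run sketch: check the archive, then restore one VM.
# 'run' only prints the commands; swap 'echo' for real execution.
ARCHIVE="/backup/allvm.zpaq"
run() { echo "$@"; }

run zpaqfranz l "$ARCHIVE" -key pippo      # list versions and files
run zpaqfranz t "$ARCHIVE" -key pippo      # verify integrity before touching VMs
run zpaqfranz zfsproxrestore "$ARCHIVE" 200 -kill -key pippo   # restore VM 200, drop snapshot
```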
pre-compiled binaries
In this release I have included binaries for various platforms; I am checking in particular Intel-based Synology NAS.
zpaqfranz_linux "should" run just about everywhere (on 64-bit Intel systems); zpaqfranz_qnap_intel on (just about) every 32-bit Intel-based Linux-like system.