github fcorbelli/zpaqfranz 57.4
Windows 32/64 binary, HW accelerated; Linux, FreeBSD


New command: 1on1

Deduplicate a folder against another one, by filename and checksum, or by checksum only

Julius Erving and Larry Bird Go One on One

This file-level deduplication function identifies files inside folders that have been 'manually' duplicated, e.g. by copy-paste

I did not find portable and, above all, fast programs for this: they often use naive approaches (N×M pairwise comparisons) with significant slowdowns.
With the -ssd switch you can enable multithreaded hashing, which in the real world delivers throughput above 1 GB/s
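The fast approach can be sketched with standard Unix tools: hash every file once, then match on digests, which costs O(N+M) hashing instead of N×M pairwise comparisons. This is only an illustration of the idea (using md5sum and a temp file), not zpaqfranz's actual implementation:

```shell
#!/bin/sh
# Sketch of hash-first duplicate detection between two folders.
# $1 = folder to keep, $2 = folder whose duplicates would be deleted.
# Illustration only: zpaqfranz uses its own (much faster) hashers.
find_dupes() {
    keep=$1; cand=$2
    hashes=$(mktemp)
    # hash every kept file once ("digest  path"), keep the digests only
    find "$keep" -type f -exec md5sum {} + | awk '{print $1}' | sort -u > "$hashes"
    # any candidate file whose digest appears in the kept set is a
    # duplicate, regardless of its name
    find "$cand" -type f -exec md5sum {} + |
    while read -r h p; do
        grep -q "^$h\$" "$hashes" && echo "duplicate: $p"
    done
    rm -f "$hashes"
}

# tiny self-contained demo
d1=$(mktemp -d); d2=$(mktemp -d)
echo "same content" > "$d1/a.txt"
echo "same content" > "$d2/renamed.txt"   # duplicate, different name
echo "unique"       > "$d2/b.txt"         # not a duplicate
find_dupes "$d1" "$d2"
rm -rf "$d1" "$d2"
```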

To make things clear: the files inside the "-deleteinto" folder are the ones that will (possibly) be deleted.
Dry run (no -kill), matching by hash and filename, multithreaded

zpaqfranz 1on1 c:\dropbox -deleteinto z:\pippero2 -ssd

Real run (because of -kill), including zero-length files (-zero)

zpaqfranz 1on1 c:\dropbox -deleteinto z:\pippero2 -zero -kill

Real run, with XXH3 hashing, taking everything (even .zfs files, via -forcezfs). This will delete files with a DIFFERENT name BUT the same content

zpaqfranz 1on1 c:\dropbox -deleteinto z:\pippero2 -xxh3 -kill -forcezfs

Updated zfs-something commands

zfsadd

Now supports almost every zpaqfranz switch, getting the timestamp from the snapshot itself rather than from the snapshot name

Suppose you have something like this

tank/pippo@franco00000001
tank/pippo@franco00000002
tank/pippo@franco00000003
(...)
tank/pippo@franco00001025

You want to purge those snapshots while retaining the data, getting everything inside consolidated.zpaq

zpaqfranz zfsadd /tmp/consolidated.zpaq "tank/pippo" "franco" -force

You can also work on just a single folder; read the help!

Then you can purge with

zpaqfranz zfspurge "tank/pippo" "franco" -script launchme.sh
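As an assumption (I have not inspected the script that zfspurge actually writes), the generated launchme.sh would plausibly boil down to one `zfs destroy` per matching snapshot. The sketch below prints the commands instead of running them, so the list can be reviewed first; in real use the snapshot names would come from `zfs list -H -t snapshot -o name -r tank/pippo`:

```shell
#!/bin/sh
# Assumed sketch only -- always review the real launchme.sh before running it.
# Turn a list of snapshot names into "zfs destroy" commands for one series,
# leaving snapshots without the marker untouched.
emit_purge() {
    grep "@$1" | while read -r snap; do
        printf 'zfs destroy "%s"\n' "$snap"
    done
}

# demo with the snapshot names from the example above
emit_purge franco <<'EOF'
tank/pippo@franco00000001
tank/pippo@franco00000002
tank/pippo@franco00001025
tank/pippo@manual-keep
EOF
```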

This method is certainly slow, because it requires an enormous amount of processing. The result, however, is a single archive that keeps the data in a highly compressed format and can be extracted down to the level of a single version/snapshot

In short: long-term archiving for an anti-ransomware policy

Improved zfsreceive

This VERY long-term archiving of zfs snapshots is now tested with 1000+ snapshots on 300 GB+ datasets; it should be fine

Example: "unpack" all zfs snapshots (made by the zpaqfranz zfsbackup command) from ordinato.zpaq into
the new dataset rpool/restored

zpaqfranz zfsreceive /tmp/ordinato.zpaq rpool/restored -script myscript.sh

Then run myscript.sh
