New "hasher"
-nilsimsa
This is a similarity algo, experimental for different sorting method
https://en.wikipedia.org/wiki/Nilsimsa_Hash
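A hedged sketch of why a similarity digest is useful for sorting (illustrative Python, not zpaqfranz code; it assumes the standard 32-byte Nilsimsa digest and its usual scoring rule of "equal bits minus 128"):

def nilsimsa_score(d1: bytes, d2: bytes) -> int:
    # Score two 32-byte Nilsimsa digests: 128 = identical, -128 = completely different
    assert len(d1) == len(d2) == 32
    differing_bits = sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))
    return 128 - differing_bits

print(nilsimsa_score(bytes(32), bytes(32)))          # 128 (identical digests)
print(nilsimsa_score(bytes(32), bytes([255]) * 32))  # -128 (all bits differ)

Files that score high against each other can then be placed next to each other before compression, which tends to help the compressor find redundancy.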
Command a (add)
Custom sort of the files to be added (for study/benchmark purposes; see the sketch after this list of switches)
- -orderby X
Accepted keys: ext;size;name;hash;date;data;nilsimsa;
zpaqfranz a z:\1.zpaq c:\nz -orderby ext;size;hash;
- -desc
Descending sort
- -nodedup
Do not deduplicate before compressing the archive. Not faster (kept for backward compatibility), but useful to compare zpaq's compression against other software
- -tar
Disable deduplication and compression, something like 'tar'. Can be combined with the optional sorting switch -orderby
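A minimal sketch of the idea behind -orderby and -desc (the field names mirror the switch keywords, but the key-building code below is an assumption of this example, not zpaqfranz's internals):

import glob, hashlib, os

def sort_key(path, fields=("ext", "size", "hash")):
    # Build a composite key from the requested fields, in the requested order
    st = os.stat(path)
    with open(path, "rb") as f:
        head = f.read(65536)  # hash only a prefix here, for speed (assumption)
    parts = {
        "ext":  os.path.splitext(path)[1].lower(),
        "size": st.st_size,
        "name": os.path.basename(path).lower(),
        "date": st.st_mtime,
        "hash": hashlib.sha1(head).hexdigest(),
    }
    return tuple(parts[f] for f in fields)

# Same spirit as: zpaqfranz a z:\1.zpaq c:\nz -orderby ext;size;hash;
files = [p for p in glob.glob(r"c:\nz\**\*", recursive=True) if os.path.isfile(p)]
files.sort(key=sort_key)  # add reverse=True for the -desc behaviour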
Command r (robocopy)
- -append
When "robocopying" a .zpaq, do a heuristic equality check (first 64K, mid 64K, last 64K), then append the new data instead of overwriting from scratch
Scenario: after making an internal backup with zpaqfranz
(i.e. writing a zpaq archive, containing the data to be saved, to a different fast internal disk)
you want to make a second copy to a slow device (e.g. external USB HDD, NAS, etc.)
to keep it in a different place.
If the destination files already exist and are smaller than the source ones,
the -append switch will copy only the final, additional portion of the files.
Typically this reduces the time needed to update a copy by a factor of 100 or more,
compared to transferring (e.g. with robocopy) the entire (giant) files each time.
A heuristic equality check is done (just to be sure): first 64K, mid 64K and last 64K of the .zpaq
Not perfect, but much better than rsync --append.
Note: enabled ONLY on .zpaq files
zpaqfranz a d:\myzpaqfolder\myverybackup.zpaq c:\datafoldertobesaved
zpaqfranz r d:\myzpaqfolder z:\mynasusbdrive -kill -append
instead of something like
zpaqfranz a d:\myzpaqfolder\myverybackup.zpaq c:\datafoldertobesaved
robocopy d:\myzpaqfolder z:\mynasusbdrive /mir
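A minimal sketch of the append logic, under the assumptions above (illustrative Python, not zpaqfranz's actual code): if the destination is a shorter, older copy, spot-check three 64K blocks and, on a match, copy only the missing tail; otherwise fall back to a full copy.

import os

CHUNK = 64 * 1024

def same_block(src, dst, offset, size):
    with open(src, "rb") as s, open(dst, "rb") as d:
        s.seek(offset); d.seek(offset)
        return s.read(size) == d.read(size)

def append_copy(src, dst):
    src_size, dst_size = os.path.getsize(src), os.path.getsize(dst)
    if dst_size == 0 or dst_size > src_size:
        return False                                  # caller falls back to a full copy
    # Heuristic equality check: first, middle and last 64K of the existing copy
    block = min(CHUNK, dst_size)
    for off in (0, max(0, dst_size // 2 - block // 2), dst_size - block):
        if not same_block(src, dst, off, block):
            return False                              # caller falls back to a full copy
    with open(src, "rb") as s, open(dst, "ab") as d:  # append only the new tail
        s.seek(dst_size)
        while True:
            buf = s.read(1 << 20)
            if not buf:
                break
            d.write(buf)
    return True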
Command s (size)
- -minsize X
Show a warning if free space is less than X bytes. Typically used in backup-script logs, for checking NAS space, etc.
C:\zpaqfranz>zpaqfranz s z:\ -minsize 30000000000
zpaqfranz v54.9-experimental (HW BLAKE3), SFX64 v52.15, compiled Nov 4 2021
franz:minsize 30.000.000.000
Get directory size ignoring .zfs and :$DATA
04/11/2021 17:00:21 Scan dir <<z:/>>
z:/System Volume Information/*: error access denied
=====================================================================================================
Free 0 17.661.624.320 ( 16.45 GB) <<z:/>>
***************************************
|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|
WARN: free space < of minsize 17.661.624.320 30.000.000.000
|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|=|
***************************************
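The underlying check is simple; a hedged Python sketch of the same idea, using the values from the example above:

import shutil

MINSIZE = 30_000_000_000              # 30 GB threshold, as in -minsize 30000000000
free = shutil.disk_usage("z:/").free  # drive being checked in the example
if free < MINSIZE:
    print(f"WARN: free space {free:,} < minsize {MINSIZE:,}")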
Everywhere
In archive names and file names the placeholders $hour $min $sec $weekday $year $month $day $week $date $time $datetime are substituted
Normally used on filesystems with deduplication (like zfs) to keep "versioned" copies, made with the r command (robocopy), in many different folders
zpaqfranz a z:\mycopy_$weekday.zpaq c:\datatobesaved
zpaqfranz r c:\datatobesaved \\nas\mybackup\$month -kill
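An illustrative sketch of this kind of placeholder expansion (the exact widths and numbering zpaqfranz produces are not shown above, so they are assumptions of this example):

import datetime

def expand(name, now=None):
    now = now or datetime.datetime.now()
    subs = {
        "$year":    f"{now.year:04d}",
        "$month":   f"{now.month:02d}",
        "$day":     f"{now.day:02d}",
        "$hour":    f"{now.hour:02d}",
        "$min":     f"{now.minute:02d}",
        "$sec":     f"{now.second:02d}",
        "$weekday": str(now.isoweekday()),          # assumed 1..7, Monday = 1
        "$week":    f"{now.isocalendar()[1]:02d}",  # assumed ISO week number
        "$date":    now.strftime("%Y%m%d"),
        "$time":    now.strftime("%H%M%S"),
    }
    subs["$datetime"] = subs["$date"] + subs["$time"]
    # Replace longer placeholders first so $weekday is not eaten by $week, etc.
    for key in sorted(subs, key=len, reverse=True):
        name = name.replace(key, subs[key])
    return name

print(expand(r"z:\mycopy_$weekday.zpaq"))  # e.g. z:\mycopy_3.zpaq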
root@prod113:~ # zpool status -D dedup
pool: dedup
state: ONLINE
scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        dedup       ONLINE       0     0     0
          da2       ONLINE       0     0     0
errors: No known data errors
dedup: DDT entries 1628935, size 420 on disk, 315 in core
bucket             allocated                        referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    2.57K    316M    254M    254M    2.57K    316M    254M    254M
     2    3.86K    472M    409M    409M    7.78K    950M    821M    821M
     4      107   12.5M   10.2M   10.2M      449   52.3M   42.2M   42.2M
     8       33   3.90M   2.11M   2.11M      466   55.4M   30.6M   30.6M
    16    1.48M    185G    170G    170G    32.1M   3.92T   3.61T   3.61T
    32    57.0K   6.63G   5.95G   5.95G    2.45M    291G    261G    261G
    64    11.4K   1.36G   1.18G   1.18G     848K    101G   86.6G   86.6G
   128    2.80K    308M    193M    193M     479K   51.2G   32.6G   32.6G
   256    1.14K    129M   66.6M   66.6M     442K   48.5G   23.1G   23.1G
   512      853   86.1M   48.5M   48.5M     599K   61.2G   34.4G   34.4G
    1K      353   38.0M   31.7M   31.7M     431K   46.6G   39.4G   39.4G
    2K       12    643K    459K    459K    35.1K   1.70G   1.21G   1.21G
    4K        2    128K   1.50K   1.50K    10.8K    602M   7.72M   7.72M
    8K        1     512     512     512    8.92K   4.46M   4.46M   4.46M
 Total    1.55M    194G    178G    178G    37.3M   4.51T   4.08T   4.08T
In short: about 200 GB of "used data" actually on disk, against ~4 TB of "robocopied" data.
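A quick back-of-the-envelope check of the totals above (binary units assumed, as zpool prints them):

referenced_gib = 4.08 * 1024  # total referenced PSIZE, 4.08T expressed in GiB
allocated_gib  = 178          # total allocated PSIZE actually on disk
print(f"effective dedup ratio ~ {referenced_gib / allocated_gib:.1f}x")  # ~23.5x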