This new release introduces... new features
As with all new code, it should be thoroughly tested before production use
The more feedback and issues opened on GitHub, the more I can improve the program
Compiles on OpenBSD 6.6+ and OmniOS (an OpenSolaris-derived Unix) r151042
Not very common platforms, but I like Unix more than Linux
(Windows) sfx module updated
Now, if no -to is specified, it asks on the console where to extract the data
General behaviour
Shows less information during execution. This is a work in progress
Switches that REDUCE output: -noeta, -pakka and -summary
Switches that INCREASE output: -verbose and -debug
Progress information now advances in 1-second steps
The "dir" command now shows dates in European format by default
25/12/2022 instead of 2022-12-25
It does NOT use the "locale" translation, because supporting it across many platforms is way too hard
-utc turns off local time (just like zpaq 7.15)
-flagflat uses mime64-encoded filenames
In some cases there were problems with reserved names on NTFS filesystems
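The idea can be sketched like this (an illustration only, not the zpaqfranz implementation; the URL-safe alphabet is my assumption): a base64/mime64-encoded filename can never collide with a reserved name such as lpt1, and decoding restores the original name.

```python
import base64

# Hypothetical helpers, not zpaqfranz source: the encoded form of a
# filename never matches an NTFS reserved name like "lpt1".
def flat_name(name: str) -> str:
    # The urlsafe variant avoids '/' and '+', which are awkward in filenames
    return base64.urlsafe_b64encode(name.encode("utf-8")).decode("ascii")

def unflat_name(flat: str) -> str:
    # Reverse the encoding to recover the original filename
    return base64.urlsafe_b64decode(flat.encode("ascii")).decode("utf-8")
```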
On Windows, more extensive support for -longpath (filenames longer than 255 chars)
On Windows, -fixcase and -fixreserved
handle case collisions (e.g. pippo.txt and PIPPO.txt) and reserved filenames (e.g. lpt1)
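For illustration, this is roughly what detecting such problems involves (a hypothetical sketch, not zpaqfranz source): group names case-insensitively to find collisions, and compare base names against the Windows reserved device names.

```python
from collections import defaultdict

# Windows reserves these device names regardless of extension (e.g. lpt1.txt)
RESERVED = ({"con", "prn", "aux", "nul"}
            | {f"com{i}" for i in range(1, 10)}
            | {f"lpt{i}" for i in range(1, 10)})

def find_problems(filenames):
    """Return (case_collisions, reserved_names) found in a list of filenames."""
    by_lower = defaultdict(list)
    for name in filenames:
        by_lower[name.lower()].append(name)
    # Two or more names that differ only by case collide on Windows/NTFS
    collisions = [group for group in by_lower.values() if len(group) > 1]
    # A name whose base (before the first dot) is a device name is reserved
    reserved = [n for n in filenames if n.split(".")[0].lower() in RESERVED]
    return collisions, reserved

collisions, reserved = find_problems(["pippo.txt", "PIPPO.txt", "lpt1", "ok.doc"])
```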
Some computations are now disabled by default. Use -stat to re-activate them
Some of these checks (reserved filenames, collisions etc.) can be slow on huge archives
The -all switch, for turning on multithreaded computation, is now -ssd
In zpaqfranz up to 55, -all could mean either "all versions" or "all cores". From 55 on, the zpaqfranz extensions use -ssd for multithreaded read/write
command d
It is now possible to use different hashes
zpaqfranz d c:\dropbox\ -ssd -blake3
command dir
It is now possible to use different hashes to find duplicates
zpaqfranz dir c:\dropbox\ /s -checksum -blake3
Main changes to command a (add)
-debug -zero Adds the files, but zero-filled (debugging)
Useful if you want to send me some kind of strange archive for debugging, without privacy issues
-debug -zero -kill Adds 0-byte-long files (debugging)
Stores empty files in a very small archive, to test and debug filesystem issues
-verify Verify hashes against filesystem
-verify -ssd Verify hashes against the filesystem, MULTITHREADED (do NOT use on spinning drives)
Command i (information)
It is now about 30% faster
-stat calculates statistics on files, like case collisions (slow)
On Windows: new command rd
Deletes hard-to-erase folders on Windows
-force Remove the folder even if non-empty files are present
-kill Wet-run (actually delete; the opposite of a dry run)
-space Do not check writeability (for example, to delete a folder from a drive with 0 bytes free)
And now... the main thing!
command w Chunked-extraction
Extract/test in chunks, on disk or 'ramdisk' (RAM)
The output -to folder MUST (should) BE EMPTY
The w command essentially works like the x (extract) command, but in chunks
It can extract the data into RAM and simply check (hash) it, or even write it to disk in order
PRELIMINARY NOTE AGAIN: the -to folder MUST BE EMPTY (there are no advanced checks-and-balances as in zpaq)
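The chunking idea itself is simple and can be sketched as follows (illustration only; the function name is mine): processing a stream in fixed-size chunks, as with -maxsize, keeps memory use bounded no matter how big the file is.

```python
import hashlib
import io

def hash_in_chunks(stream, chunk_size=1 << 20):
    """Hash a stream chunk by chunk; peak memory is one chunk, not the whole file."""
    h = hashlib.sha256()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:          # end of stream
            break
        h.update(chunk)        # feed one bounded-size chunk at a time
    return h.hexdigest()

# A stream whose size is deliberately not a multiple of the chunk size
digest = hash_in_chunks(io.BytesIO(b"x" * (3 * 1024 + 7)), chunk_size=1024)
```

The same digest comes out as hashing the whole buffer in one go, but the working set never exceeds one chunk.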
There are various scenarios
-
Extracting on spinning drives (HDD)
zpaq's extraction method is not a good fit for spinning drives, because it can perform a lot of seeks while writing data to disk, slowing the process down. On media with lower latency (SSD, NVMe, ramdisk) the slowdown is much less noticeable. Basically, zpaq "loves" SSDs and isn't very HDD-friendly.
If the largest file in the archive is smaller than the free RAM on your computer, you can use the -ramdisk switch
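That precondition can be sketched as a one-line check, using the margins stated in the switch recap further down (+10% on the biggest file, -25% off free RAM); the function name is hypothetical, not part of zpaqfranz.

```python
# Hypothetical sketch of the -ramdisk feasibility rule: the biggest
# uncompressed file (+10%) must fit in the current free RAM (-25%).
def ramdisk_fits(biggest_file_bytes: int, free_ram_bytes: int) -> bool:
    return biggest_file_bytes * 1.10 < free_ram_bytes * 0.75

# A 4 GB file easily fits with 16 GB of free RAM...
assert ramdisk_fits(4 * 2**30, 16 * 2**30)
# ...but a 100 GB one does not
assert not ramdisk_fits(100 * 2**30, 16 * 2**30)
```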
Extract to a spinning drive (Windows)
zpaqfranz w z:\1.zpaq -to p:\muz7\ -ramdisk -longpath
In this example the archive 1.zpaq will be extracted into RAM, then written to the p:\muz7 folder (assumed to be on an HDD) at the maximum possible speed (no seeks at all) -
Checking the hashes without writing to disk, MULTITHREADED, with MULTIPART support, and more
The p (paranoid) command can verify the hash checksums of the files in the archive WITHOUT writing to disk, but it is limited to SINGLE-CORE computation and has no multipart-archive support
The w command (if the largest file in the archive is smaller than free RAM) does not have such limitations
zpaqfranz w z:\1.zpaq -ramdisk -test -checksum -ssd -frugal -verbose
will test (-test, no writes to the drive) the archive 1.zpaq in RAM (-ramdisk), checking hashes (-checksum), multithreaded (-ssd), using as little memory as possible (-frugal) and in -verbose mode -
Paranoid no-limit check, for huge archives (where the largest uncompressed file is bigger than free RAM)
zpaqfranz w z:\1.zpaq -to z:\muz7\ -paranoid -verify -verbose -longpath
will extract everything from 1.zpaq into z:\muz7 with longpath support (a Windows example), then do a -paranoid -verify
At the end, any BAD files will be left in z:\muz7\zfranz. If everything is OK, this folder should be empty -
Paranoid test-everything, when the biggest uncompressed file is smaller than RAM, using an SSD/NVMe/ramdisk as output (z:)
zpaqfranz w z:\1.zpaq -to z:\kajo -ramdisk -paranoid -verify -checksum -longpath -ssd
Recap of switches
- : -maxsize X Limit chunks to X bytes
- : -ramdisk Use a 'RAMDISK', only if the uncompressed size of the biggest file (+10%) is smaller than the current free RAM (-25%)
- : -frugal Consume as little RAM as possible (default: take 75% of free RAM)
- : -ssd Multithreaded writing from ramdisk to disk / multithreaded hash computation (ex -all)
- : -test Do not write to media
- : -verbose Show useful info (default: rather brief)
- : -checksum Do CRC-32 / hash tests in the w command (by default: NO)
- : -verify Do a 'check against the filesystem'
- : -paranoid Extract for "real" to the filesystem, then delete if the hashes are OK (needs -verify)
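The -paranoid -verify combination can be sketched like this (a simplified illustration, not the real implementation): write the file for real, re-read it from the filesystem, and delete it only when the hash matches, leaving mismatches behind for inspection.

```python
import hashlib
import os
import tempfile

def paranoid_write(path: str, data: bytes) -> bool:
    """Write data, verify it back from disk, delete on success."""
    with open(path, "wb") as f:
        f.write(data)
    # Re-read from the filesystem and compare hashes, not just lengths
    with open(path, "rb") as f:
        ok = hashlib.sha256(f.read()).digest() == hashlib.sha256(data).digest()
    if ok:
        os.remove(path)   # verified: no need to keep the extracted copy
    return ok             # False means the file is left behind for inspection

tmp = os.path.join(tempfile.gettempdir(), "zfranz_demo.bin")
result = paranoid_write(tmp, b"hello archive")
```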
Yes, I understand, the w command can seem incomprehensible (but this is a zpaq fork, after all :)
In fact it was developed to work around zpaq's limitations in managing very large archives containing huge files (for example virtual machine disks) kept on spinning HDDs, or archives containing a very large number (millions) of relatively small files (such as the versioned backup of a file server full of DOC, EML, JPG etc.), to be checked on a high-powered machine (many cores, lots of RAM, SSD storage) without wearing out the media. As is well known, writing large amounts of data reduces the lifespan of SSDs and NVMe drives (HDDs too, but to a lesser extent). And remember: the p command is single-threaded AND cannot handle archives bigger than RAM.
To make a "quick check" of this long explanation (the "spiegone"), try it yourself: compare the execution time on a "small" archive (uncompressed size smaller than your RAM, say 5-10 GB for example)
zpaqfranz p p:\1.zpaq
against
zpaqfranz w p:\1.zpaq -test -checksum -ramdisk -ssd -verbose