Overview
The most significant feature addition to this release is a new "extensible enumerator" mechanism, which makes it possible to scan content from arbitrary sources with Nosey Parker without having to write it to the filesystem.
This release also includes several changes that speed up and slim down the scanning process. A 10-30% reduction in wall clock time and a 10-50% reduction in memory use are typical, but in some unusual cases, wall clock time and memory use are reduced 10-20x.
Happy secret hunting!
Docker Images
A prebuilt multiplatform Docker image for this release is available for x86_64 and ARM64 architectures:
docker pull ghcr.io/praetorian-inc/noseyparker:v0.20.0
A prebuilt Alpine-based image is also available for x86_64 and ARM64 architectures:
docker pull ghcr.io/praetorian-inc/noseyparker-alpine:v0.20.0
Additions
- An experimental "extensible enumerator" mechanism has been added to the `scan` command (#220). This allows Nosey Parker to scan inputs produced by any program that can emit JSON objects to stdout, without having to first write the inputs to the filesystem. It is invoked with the new `--enumerator=FILE` option, where `FILE` is a JSON Lines file. Each line of the enumerator file should be a JSON object with one of the following forms:

  ```
  { "content_base64": "base64-encoded bytestring to scan", "provenance": <arbitrary object> }
  { "content": "utf8 string to scan", "provenance": <arbitrary object> }
  ```

  Shell process substitution can make streaming invocation ergonomic, e.g., `scan --enumerator=<(my-enumerator-program)`.
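As a sketch, an enumerator can be any program that writes one such JSON object per line to stdout. The inputs and `provenance` payloads below are made-up illustrations, not anything Nosey Parker prescribes; `provenance` is passed through as an arbitrary object.

```python
import base64
import json

# JSON Lines records in the two accepted shapes. The contents and
# "provenance" fields are hypothetical examples.
records = [
    # UTF-8 text can be passed directly under "content".
    {
        "content": "password = hunter2",
        "provenance": {"source": "ticket-attachment", "id": 12345},
    },
    # Arbitrary bytes must be base64-encoded under "content_base64".
    {
        "content_base64": base64.b64encode(b"\x00binary\xffpayload").decode("ascii"),
        "provenance": {"source": "object-store", "key": "dumps/blob.bin"},
    },
]

# Emit one JSON object per line (JSON Lines).
for record in records:
    print(json.dumps(record))
```

A program like this could then be wired in with process substitution, e.g. `scan --enumerator=<(python my-enumerator.py)`.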
Changes
- Inputs are now enumerated incrementally as scanning proceeds rather than in an initial batch step (#216). This reduces peak memory use and wall clock time by 10-20%, particularly in environments with slow I/O. A consequence of this change is that the total amount of data to scan is not known until it has actually been scanned, so the scanning progress bar no longer shows a completion percentage.
- When cloning Git repositories while scanning, the progress bar now includes the current repository URL (#212).
- When scanning, automatically cloned Git repositories are now recorded with the path given on the command line instead of the canonicalized path (#212). This makes datastores slightly more portable across different environments, such as within a Docker container and on the host machine, since relative paths can now be recorded.
- The deprecated `--rules=PATH` alias for `--rules-path=PATH` has been removed from the `scan` and `rules` commands.

- The built-in support for enumerating and interacting with GitHub is now a compile-time-selectable feature that is enabled by default (#213). This makes it possible to build a slimmer release for environments where GitHub functionality is unused.
- A new rule has been added:

- The default number of parallel scanner jobs is now higher on many systems (#222). This value is determined in part by the amount of system RAM; due to several memory use improvements, the required minimum RAM per job has been reduced, allowing for more parallelism.
Fixes
- The `Google OAuth Credentials` rule has been revised to avoid runtime errors about an empty capture group.

- The `AWS Secret Access Key` rule has been revised to avoid runtime `Regex failed to match` errors.

- The code that determines first-commit provenance information for blobs from Git repositories has been reworked to improve memory use (#222). In typical cases of scanning Git repositories, this reduces both peak memory use and wall clock time by 20-50%. In certain pathological cases, such as Homebrew or nixpkgs, the new implementation uses up to 20x less peak memory and up to 5x less wall clock time.
- When determining blob provenance information from Git repositories, blobs that first appear multiple times within a single commit are now reported with all of the names they appear with (#222). Previously, one of the pathnames would be arbitrarily selected.