Backwards Compatibility Notes
- ZstdDecompressor.stream_reader().read() now consistently requires an argument in both the C and CFFI backends. Before, the CFFI implementation would assume a default value of -1, which was later rejected.
- The compress_literals argument and attribute have been removed from zstd.ZstdCompressionParameters because it was removed by the zstd 1.3.5 API.
- ZSTD_CCtx_setParametersUsingCCtxParams() is no longer called on every operation performed against ZstdCompressor instances. The reason for this change is that the zstd 1.3.5 API no longer allows this without calling ZSTD_CCtx_resetParameters() first. But if we called ZSTD_CCtx_resetParameters() on every operation, we'd have to redo potentially expensive setup when using dictionaries. We now call ZSTD_CCtx_reset() on every operation and don't attempt to change compression parameters.
- Objects returned by ZstdCompressor.stream_reader() no longer need to be used as a context manager. The context manager interface still exists and its behavior is unchanged.
- Objects returned by ZstdDecompressor.stream_reader() no longer need to be used as a context manager. The context manager interface still exists and its behavior is unchanged. (See the sketch after this list.)
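To make the new calling convention concrete, here is a minimal sketch that uses the decompressor's stream_reader() without a context manager and passes an explicit size to read(). The in-memory BytesIO source and the 8192-byte read size are illustrative choices, not part of the API.

```python
import io
import zstandard

# Build a small compressed payload in memory purely for illustration.
compressed = zstandard.ZstdCompressor().compress(b"hello world" * 1000)

dctx = zstandard.ZstdDecompressor()

# No context manager is required anymore; the `with` form still works.
reader = dctx.stream_reader(io.BytesIO(compressed))

decompressed = bytearray()
while True:
    # read() now requires an explicit size argument in both backends.
    chunk = reader.read(8192)
    if not chunk:
        break
    decompressed.extend(chunk)
```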
Bug Fixes
- ZstdDecompressor.decompressobj().decompress() should now return all data from internal buffers in more scenarios. Before, it was possible for data to remain in internal buffers. This data would be emitted on a subsequent call to decompress(). The overall output stream would still be valid. But if callers were expecting input data to exactly map to output data (say the producer had used flush(COMPRESSOBJ_FLUSH_BLOCK) and was attempting to map input chunks to output chunks), then the previous behavior would be wrong. The new behavior is such that output from flush(COMPRESSOBJ_FLUSH_BLOCK) fed into decompressobj().decompress() should produce all of the decompressed data for that input. (A sketch of this mapping appears after this list.)
- ZstdDecompressor.stream_reader().read() should no longer segfault after a previous context manager resulted in error (#56).
- ZstdCompressor.compressobj().flush(COMPRESSOBJ_FLUSH_BLOCK) now returns all data necessary to flush a block. Before, it was possible for flush() to not emit all data necessary to fully represent a block. This would mean decompressors wouldn't be able to decompress all data that had been fed into the compressor and flush()ed (#55).
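The chunk-to-chunk mapping described above can be sketched roughly as follows. The input chunks are made up for illustration, and the final flush is only there to close out the frame.

```python
import zstandard

cobj = zstandard.ZstdCompressor().compressobj()
dobj = zstandard.ZstdDecompressor().decompressobj()

# Made-up input chunks for illustration.
for in_chunk in (b"foo" * 1000, b"bar" * 1000, b"baz" * 1000):
    # Flushing a block makes the data compressed so far fully decodable...
    compressed = cobj.compress(in_chunk) + cobj.flush(
        zstandard.COMPRESSOBJ_FLUSH_BLOCK
    )
    # ...so decompress() should now emit all output for this input chunk
    # instead of leaving some of it in internal buffers.
    assert dobj.decompress(compressed) == in_chunk

# Close out the zstd frame.
cobj.flush(zstandard.COMPRESSOBJ_FLUSH_FINISH)
```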
New Features
- New module constants BLOCKSIZELOG_MAX, BLOCKSIZE_MAX, and TARGETLENGTH_MAX that expose constants from libzstd.
- New ZstdCompressor.chunker() API for manually feeding data into a compressor and emitting chunks of a fixed size. Like compressobj(), the API doesn't impose restrictions on the input or output types for the data streams. Unlike compressobj(), it ensures output chunks are of a fixed size. This makes the API useful when the compressed output is being fed into an I/O layer, where uniform write sizes are useful. (See the sketch after this list.)
- ZstdCompressor.stream_reader() no longer needs to be used as a context manager (#34).
- ZstdDecompressor.stream_reader() no longer needs to be used as a context manager (#34).
- Bundled zstandard library upgraded from 1.3.4 to 1.3.6.
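A rough sketch of how the new chunker() API can be driven, assuming the chunk_size keyword argument and using in-memory BytesIO streams purely for illustration:

```python
import io
import zstandard

cctx = zstandard.ZstdCompressor()

# Ask for compressed output in fixed-size 32 KB chunks.
chunker = cctx.chunker(chunk_size=32768)

source = io.BytesIO(b"data to compress" * 50000)  # made-up input
destination = io.BytesIO()

while True:
    in_data = source.read(16384)
    if not in_data:
        break
    # compress() yields zero or more chunks of exactly chunk_size bytes.
    for out_chunk in chunker.compress(in_data):
        destination.write(out_chunk)

# finish() ends the zstd frame; the last chunk may be smaller than chunk_size.
for out_chunk in chunker.finish():
    destination.write(out_chunk)
```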
Changes
- zstandard.__version__ is now defined (#50). (A short example follows this list.)
- Upgrade pip, setuptools, wheel, and cibuildwheel packages to latest versions.
- Upgrade various packages used in CI to latest versions, notably tox (in order to support Python 3.7).
- Use relative paths in setup.py to appease Python 3.7 (#51).
- Added CI for Python 3.7.
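For completeness, a trivial illustration of the newly defined version attribute:

```python
import zstandard

# Reports the package version as a string.
print(zstandard.__version__)
```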