Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Enabled conditional execution: support for `if`/`else` statements with runtime predicates inside the pipeline (#4561, #4618, #4602, #4589, #4617).
- Added the GPU `experimental.inputs.video` operator, which supports decoding large videos from a memory buffer across multiple iterations (#4613, #4584, #4603, #4564).
- Added support for lossless JPEG decoding on CPU and GPU with `fn.experimental.decoders.image` (#4625, #4600, #4587, #4572, #4592, #4548).
- Added the `fn.experimental.tensor_resize` operator (#4492).
- Added the `fn.experimental.equalize` operator (#4575, #4565).
- Added an API for preallocating memory pools and releasing unused pool memory (#4563, #4556).
Fixed Issues
- Fixed a GPU `fn.constant` operator synchronization issue (#4643).
- Fixed out-of-bounds access with a trailing wildcard in `fn.reshape` (#4631).
- Fixed insufficient alignment in GPU video decoding (#4622).
Improvements
- Dependencies update (#4649)
- Reduce L0 test time (#4645)
- Extend input API utilities to support input operators (#4642)
- Add slice_flip_normalize_* to the minimum build (used by imgcodec)
- `VideoInput<MixedBackend>` (#4613)
- Move slice_flip_kernel* to separate compilation units (#4637)
- Bump nvCOMP to 2.6.1 (#4638)
- Add fn.experimental.crop_mirror_normalize (#4562)
- Simplify setup stage of Cast operator (#4633)
- Move to CUDA 12.0U1 (#4632)
- Fix the warning in the build with sanitizer (#4626)
- Optimize CPU time of JPEG lossless decoder (#4625)
- Support inferring batch size from tensor argument inputs (#4617)
- `reshape`: restore the support for trailing wildcard in `rel_shape` (#4623)
- Add DALI Conditionals documentation (#4589)
- Enable nose2 test timer (#4610)
- New SliceFlipNormalizeGPU kernel (#4356)
- `DataId` mechanism for `fn.inputs.video` operator (#4584)
- Add experimental.tensor_resize operator (#4492)
- `MixedBackend` support for `InputOperator` (#4603)
- Fix HasHwDecoder (#4601)
- Track DataNodes produced by .gpu() in conditionals (#4602)
- Update the math expression docs (#4568)
- Clear operator traces before launching the operator (#4605)
- Skip JPEG lossless tests for compute capability < SM60 (#4600)
- Add experimental python 3.11 support (#4586)
- Improve error message when trying to decode JPEG lossless images on the CPU backend (#4587)
- Improve pipeline graph traversal (#4583)
- Make .so files patched in one go when the wheel is produced (#4582)
- Operator trace mechanism (#4564)
- Add equalize operator (#4575)
- Add equalize kernel (#4565)
- Support for JPEG lossless images in GPU fn.experimental.decoders.image (#4572)
- Add experimental support for if statements in DALI (#4561)
- Add CodeQL workflow for GitHub code scanning (#4438)
- Update nvCOMP to 2.6 (#4579)
- Give the ability to link each part of CUDA toolkit statically (#4570)
- Fix TL0_python-self-test-base-cuda for CUDA 12 (#4577)
- Add functions to preallocate pools and release unused pool memory (#4563)
- Disable strict_overflow warning. (#4567)
- Remove unused `define_graph` argument from `build` pipeline method (#4555)
- Add `release_unused` function to memory pools (#4556)
- Change CUDA C++ standard to C++17 (#4506)
- Create axes_utils.h (#4548)
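The pool preallocation and release functions (#4563, #4556) follow a common memory-pool pattern: grow the pool up front so the first iterations do not stall on allocations, and return cached unused blocks to the system on demand. A toy Python sketch of that pattern (class and method names here are illustrative; see the DALI Python API reference for the actual functions):

```python
# Toy fixed-size-block memory pool illustrating the preallocate /
# release_unused pattern added in this release (illustrative only).

class BlockPool:
    def __init__(self, block_size):
        self.block_size = block_size
        self.free = []      # blocks held by the pool, ready for reuse
        self.in_use = set()

    def preallocate(self, nbytes):
        """Grow the pool up front so later allocations hit the free list."""
        for _ in range(-(-nbytes // self.block_size)):  # ceiling division
            self.free.append(bytearray(self.block_size))

    def allocate(self):
        block = self.free.pop() if self.free else bytearray(self.block_size)
        self.in_use.add(id(block))
        return block

    def deallocate(self, block):
        self.in_use.remove(id(block))
        self.free.append(block)  # keep for reuse instead of freeing

    def release_unused(self):
        """Drop cached free blocks, returning their memory to the system."""
        released = len(self.free) * self.block_size
        self.free.clear()
        return released

pool = BlockPool(block_size=1024)
pool.preallocate(4096)          # 4 blocks cached up front
b = pool.allocate()             # served from the cache, no new allocation
pool.deallocate(b)
print(pool.release_unused())    # 4096 bytes returned
```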
Bug Fixes
- Fix API utils (#4651)
- `constant` operator: set proper stream in constant storage (#4643)
- Coverity 2023.01-02 fixes (#4641)
- Allow 1-off discrepancies in the equalize op between GPU and CPU baseline (#4639)
- Fix pipeline leak in InputOperatorMixedTest (#4630)
- `reshape`: prevent out-of-bounds access with trailing wildcard in `rel_shape` (#4631)
- Fix @autoserialize problem with unknown module (#4628)
- Fix classification of argument input-only operators in AutoGraph (#4618)
- Fix stack op error message so that it reports dim of offending operand (#4616)
- Make sure that ulMaxWidth is aligned to 32 bytes in the video decoder (#4622)
- Fix sanitizer error: memory & pipeline leaks (#4619)
- Fix `rel_shape` length validation in `reshape` (#4595)
- Fix non-VMM pool `release_unused`; don't rely on cudaGetMemInfo in preallocation tests (#4596)
- Fix errors reported by LASAN (#4594)
- Add nvjpeg calls used for lossless jpeg decoding to the stub generator (#4592)
- Fix passing WITH_DYNAMIC_* flags to conda build (#4597)
- Fix pool preallocation tests (#4585)
- Fix imgcodec fallback and error handling (#4573)
- Fix CUDA_TARGET_ARCHS handling in CMake 3.18+ (#4559)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The video loader operator requires that key frames occur, at a minimum, every 10 to 15 frames of the video stream. If key frames occur less frequently, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler that was used to build TensorFlow is present on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations and DALI, DALI shows the best performance when running in Docker with escalated privileges, for example: `privileged=yes` in Extra Settings for AWS data points, or `--privileged` or `--security-opt seccomp=unconfined` for bare Docker.
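For the debug/eager-mode external source issue above, the workaround is to synchronize the device inside the callback before handing the data back. A sketch of that wrapper pattern, with a stand-in `synchronize` hook (in a real pipeline this would be your framework's device synchronization call, e.g. `torch.cuda.synchronize()`):

```python
# Workaround sketch for the debug/eager-mode external_source issue:
# synchronize the device before the callback returns its data.
# `synchronize` is a stand-in hook here; in practice, pass your
# framework's device synchronization function.

def synchronized_source(callback, synchronize):
    """Wrap an external_source callback so it syncs before returning."""
    def wrapper(*args, **kwargs):
        data = callback(*args, **kwargs)
        synchronize()  # ensure GPU work producing `data` has finished
        return data
    return wrapper

calls = []
source = synchronized_source(
    callback=lambda: [1, 2, 3],
    synchronize=lambda: calls.append("sync"),
)
batch = source()
print(batch, calls)  # [1, 2, 3] ['sync']
```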
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit, respectively, but can run on the latest
stable CUDA 11.0/CUDA 12.0-capable drivers (450.80 or later and 525.60 or later, respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in the enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.23.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.23.0
or for CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.23.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.23.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.23.0-7355174-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.23.0-7355174-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.23.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.23.0-7355173-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.23.0-7355173-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.23.0.tar.gz
FFmpeg source code:
Libsndfile source code: