Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for running custom CPU and GPU Python operators (the `fn.python_function` family) inside DALI asynchronous pipelines (#4965, #5038); see the sketch after this list.
- Improved support for the GPU Numba operator (`plugin.numba.fn.experimental.numba_function`) (#4000).
- Improved `fn.crop_mirror_normalize` performance (#4993, #4992).
- Added support for strides in the subscript operator (#5007).
- Added support for video in predefined automatic augmentations (#5012).
- Added a case-insensitive mode in `fn.readers.webdataset` (#5016).
- Moved to CUDA 12.2U2 (#5027).
- Added Flax training examples (#5004, #4978).
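As an illustration of the first and fourth items, here is a minimal sketch of a custom Python operator running inside an ordinary (asynchronous) pipeline, together with a strided subscript. The `file_root` path and the flip function are hypothetical placeholders:

```python
import numpy as np
from nvidia.dali import pipeline_def, fn

def flip_horizontal(image):
    # Per-sample CPU function: receives and returns a NumPy array (HWC layout).
    return np.ascontiguousarray(image[:, ::-1])

@pipeline_def(batch_size=8, num_threads=4, device_id=0)
def custom_op_pipeline():
    jpegs, _ = fn.readers.file(file_root="images")  # hypothetical dataset path
    images = fn.decoders.image(jpegs, device="cpu")
    # This release lets python_function run in asynchronous pipelines
    # (previously it required exec_async=False / exec_pipelined=False).
    flipped = fn.python_function(images, function=flip_horizontal)
    # Strided subscript (#5007): keep every second row.
    return flipped[::2]

pipe = custom_op_pipeline()
pipe.build()
(out,) = pipe.run()
```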
Fixed Issues

- Fixed GPU `fn.readers.numpy` global shuffling (#5034).
- Fixed finalization of custom operator plugins during pipeline shutdown (#5036).
- Fixed a synchronization issue in the `fn.resize` operator family that could result in distorted outputs in initial iterations (#4990).
Improvements
- Replace GPU dltensor per-sample copying kernel with a batched one (#5038)
- September dependency update (#5043)
- Make download_pip_packages.sh resilient to errors (#5044)
- Move to CUDA 12.2U2 (#5027)
- Clean up and refactor code around Multiple Input Sets (#5035)
- Move to the upstream CV-CUDA 0.4 (#5032)
- Revert "Make nesting conditionals supported only for Python 3.7+" (#5031)
- Move all remaining video files to LFS (#5025)
- Refactor custom op wrappers into separate files of ops module (#5028)
- Add pipeline checkpointing to the Executor (#5008)
- Refactor ops into a submodule (#5018)
- Add checkpointing support to ImageRandomCrop (#4999)
- Replace deprecated fluid APIs to recommended APIs of Paddle (#5020)
- fix: CMakeLists.txt typo (#5006)
- Support video in predefined automatic augmentations (#5012)
- Extend GPU numba support (#4000)
- Add opt-in support for case insensitive webdataset (#5016)
- Add optimized variant of CMN for HWC to HWC pad FP16 case (#4993)
- Added Stride to Subscript and Slice Kernel (#5007)
- Add optimized variant of CMN for HWC to HWC case (#4992)
- Add multiple GPU code to Flax example (#5004)
- Pin inputs to decoder operators as well (#5003)
- Add checkpointing support to stateless operators used in EfficientNet (#4977)
- Use a different way to ensure that the right version of libabseil is used in conda (#4991)
- Make samples' descriptors copy in resize op fully asynchronous (#4989)
- Remove mentions of experimental from conditional tutorial. (#4988)
- Enable python operators in async pipelines (#4965)
- Make sure that the right version of libabseil is used in conda (#4987)
- Coverity fixes - 08.2023 (#4970)
- CPU fn.random operators checkpointing (#4961)
- Add Flax training example (#4978)
- Make error reporting more verbose for rand augment tests (#4958)
Bug Fixes
- Propagate to conda build packages required for DALI installation (#5041)
- Fix wheel predownload (#5023)
- Fix GPU numpy reader global shuffling (#5034)
- Change the way the input operators are traversed during the pipeline shutdown (#5036)
- Fix issues detected by Coverity as of 2023.09.04. (#5030)
- Fix CUDA block sizes in Numba GPU tests. (#5026)
- Change Loader to make checkpoints at the end of an epoch (#5019)
- Disable Flax tutorial test (#5015)
- Fix resize processing cost calculation (#5009)
- Fix abs diff computation in check_batch test utility (#4957)
- Fix sync in Resize operator family (#4990)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues

- The video loader operator requires that key frames occur, at a minimum, every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback; see the sketch after this list.
- Due to some known issues with meltdown/spectre mitigations and DALI, DALI shows the best performance when running in Docker with escalated privileges, for example: `privileged=yes` in Extra Settings for AWS data points, `--privileged` or `--security-opt seccomp=unconfined` for bare Docker.
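A minimal sketch of the external-source workaround, assuming CuPy as the GPU-array producer and the experimental debug-mode `pipeline_def` entry point (any other GPU-array library would work the same way):

```python
import cupy as cp
from nvidia.dali import fn
from nvidia.dali.pipeline.experimental import pipeline_def  # enables debug=True

def get_batch():
    batch = [cp.random.random((224, 224, 3), dtype=cp.float32) for _ in range(8)]
    # Workaround: make sure the GPU data is ready before handing it to DALI,
    # since debug/eager modes do not synchronize with DALI internal streams.
    cp.cuda.Device().synchronize()
    return batch

@pipeline_def(batch_size=8, num_threads=2, device_id=0, debug=True)
def debug_pipeline():
    return fn.external_source(source=get_batch, device="gpu")

pipe = debug_pipeline()
pipe.build()
(out,) = pipe.run()
```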
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x and 12.x toolkits, respectively, but can run on the latest
stable CUDA 11.0- and CUDA 12.0-capable drivers (450.80 or later and 525.60 or later, respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in the enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.30.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.30.0
or for CUDA 11.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.30.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.30.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.30.0-9783408-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.30.0-9783408-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.30.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.30.0-9783405-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.30.0-9783405-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.30.0.tar.gz
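Either installation can be sanity-checked with a trivial pipeline; this sketch assumes a CUDA-capable GPU with a compatible driver:

```python
import nvidia.dali as dali
from nvidia.dali import pipeline_def, fn

print(dali.__version__)  # expected: 1.30.0

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def smoke_test():
    # A trivial CPU pipeline: a batch of 2x2 random tensors.
    return fn.random.uniform(range=[0.0, 1.0], shape=[2, 2])

pipe = smoke_test()
pipe.build()
(out,) = pipe.run()
print(out.as_array().shape)  # expected: (4, 2, 2)
```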
FFmpeg source code:
Libsndfile source code: