Release notes
This release is an important milestone towards the general availability of the NVIDIA DRA Driver for GPUs. It focuses on improving support for NVIDIA's Multi-Node NVLink (MNNVL) in Kubernetes by delivering a number of ComputeDomain improvements and bug fixes.
All commits since the last release can be seen here: v25.3.0-rc.2...v25.3.0-rc.3. The changes are summarized below.
For background on how ComputeDomains enable support for MNNVL workloads on Kubernetes (and on NVIDIA GB200 systems in particular), see this doc and this slide deck.
Improvements
- More predictable `ComputeDomain` cleanup semantics: deletion of a `ComputeDomain` is now immediately followed by resource teardown (instead of waiting for the workload to complete).
- Troubleshooting improvement: a new init container helps users set a correct value for the `nvidiaDriverRoot` Helm chart variable and overcome common GPU driver setup issues (see the values sketch after this list).
- All driver components are now based on the same container image (configurable via Helm chart variable). This removes a dependency on Docker Hub and generally helps with compliance and reliability.
- IMEX daemons orchestrated by a `ComputeDomain` now communicate via Pod IP (using a virtual overlay network instead of `hostNetwork: true`) to improve robustness and security.
- The dependency on a pre-provisioned NVIDIA Container Toolkit was removed.
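For illustration, below is a minimal Helm values sketch covering the first and third items above. Only `nvidiaDriverRoot` is named in this release; the image-related keys and all example values are assumptions and may be named differently in the actual chart.

```yaml
# values.yaml sketch (not authoritative; key names other than
# nvidiaDriverRoot are assumptions).

# Root of the NVIDIA GPU driver installation on the host, e.g.
# "/" for a host-installed driver or "/run/nvidia/driver" when the
# driver is provided by a driver container.
nvidiaDriverRoot: "/run/nvidia/driver"

# Hypothetical keys for the single container image now used by all
# driver components.
image:
  repository: nvcr.io/nvidia/k8s-dra-driver-gpu
  tag: v25.3.0-rc.3
```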
Fixes
- `ComputeDomain` teardown now works even after a corresponding `ResourceClaim` was removed from the API server (#342).
- Fixed an issue where the IMEX daemon startup probe failed with “family not supported by protocol” (#328).
- Pod labels were adjusted so that e.g. `kubectl logs ds/nvidia-dra-driver-gpu-kubelet-plugin` actually yields plugin logs (#355).
- The IMEX daemon startup probe is now less timing-sensitive (d1f7c).
- Other minor fixes: #321, #334.
Notable changes
- Introduced an IMEX daemon wrapper allowing for more robust and flexible daemon reconfiguration and monitoring.
- Added support for the NVIDIA GPU Driver 580.x releases.
- Added support for the Blackwell GPU architecture in the GPU plugin of the DRA driver.
- The DRA library was updated to `v0.33.0` (cf. changes) for various robustness improvements (such as more reliable rolling upgrades).
Breaking changes
- The `nvidiaCtkPath` Helm chart variable no longer needs to be provided (see above); doing so now results in an error (see the values sketch below).
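If an existing values file still sets this variable, the key should simply be removed. A minimal sketch, with illustrative keys and paths:

```yaml
# Excerpt of a pre-existing values.yaml (paths and surrounding keys are
# illustrative, not authoritative).
nvidiaDriverRoot: "/"
# nvidiaCtkPath: /usr/bin/nvidia-ctk   # <- delete: providing this now results in an error
```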
The path forward
ComputeDomains
Future versions of the NVIDIA GPU driver (580+) will include IMEX daemons with support for communicating using DNS names in addition to raw IP addresses. This feature allows us to overcome a number of limitations inherent to the existing ComputeDomain implementation – with no breaking changes to the user-facing API.
Highlights include:
- Removal of the `numNodes` field from the `ComputeDomain` abstraction (a sketch of the current spec follows this list). Users will no longer need to pre-calculate how many nodes their (static) multi-node workload will ultimately span.
- Support for elastic workloads. The number of pods associated with a multi-node workload will no longer need to be fixed and forced to match the value of the `numNodes` field in the `ComputeDomain` the workload is running in.
- Support for running more than one pod per `ComputeDomain` on a given node. As of now, all pods of a multi-node workload are artificially forced to run on different nodes, even if a single node has enough GPUs to service more than one of them. This new feature will remove that restriction.
- Support for running pods from different `ComputeDomain`s on the same node. As of now, only one pod from any multi-node workload is allowed to run on a given node associated with a `ComputeDomain` (even if there are enough GPUs available to service more than one of them). This new feature will remove that restriction.
- Improved tolerance to node failures within an IMEX domain. As of now, if one node of an IMEX domain goes down, the entire workload needs to be shut down and rescheduled. This new feature will allow the failed node to be swapped in place, without shutting down the entire IMEX domain (of course, many types of failures may still require the workload to restart anyway to explicitly recover from a loss of state).
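For context on the `numNodes` field slated for removal, the sketch below shows roughly what a `ComputeDomain` looks like today. The API group/version and the `channel` field layout are recalled from the driver documentation and should be treated as assumptions rather than an authoritative schema.

```yaml
# Sketch of a present-day ComputeDomain (field layout is an assumption;
# consult the driver documentation for the authoritative schema).
apiVersion: resource.nvidia.com/v1beta1
kind: ComputeDomain
metadata:
  name: imex-training-job
spec:
  # The workload's node count currently has to be known up front; this is
  # the field planned for removal once DNS-based IMEX communication lands.
  numNodes: 4
  channel:
    resourceClaimTemplate:
      # ResourceClaimTemplate that workload pods reference to join the domain.
      name: imex-training-job-channel
```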
We also plan on adding improvements to the general debuggability and observability of ComputeDomains, including:
- Proper definition of a set of high-level states that a `ComputeDomain` can be in, to allow for robust automation.
- Export of metrics to allow for monitoring health and performance.
- Actionable error messages and description strings, as well as improved component logging, to facilitate troubleshooting.
GPUs
The upcoming 25.3.0 release will not include official support for allocating GPUs (only ComputeDomains); official GPU allocation support will be added in the following release (25.8.0). That release will be integrated with the NVIDIA GPU Operator and will no longer need to be installed as a standalone Helm chart.
Note: The DRA feature in upstream Kubernetes is slated to go GA in August. The 25.8.0 release of the NVIDIA DRA driver for GPUs is planned to coincide with that.
Features we plan to include in the 25.8.0 release:
- GPU selection via complex constraints
- Support for having multiple GPU types per node
- Controlled GPU sharing via ResourceClaims (see the sketch after this list)
- User-mediated Time-slicing across a subset of GPUs on a node
- User-mediated MPS sharing across a subset of GPUs on a node
- Allocation of statically partitioned MIG devices
- Custom policies to align multiple resource types (e.g. GPUs, CPUs, and NICs)
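As a rough illustration of what ResourceClaim-based GPU allocation looks like under DRA, the sketch below uses the upstream `resource.k8s.io/v1beta1` API and a `gpu.nvidia.com` device class; the exact API version, class name, and field layout for the 25.8.0 release are assumptions, not a confirmed interface.

```yaml
# Sketch of DRA-based GPU allocation (API version, device class name,
# and image are illustrative assumptions).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer
spec:
  containers:
  - name: app
    image: ubuntu:24.04            # placeholder workload image
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: gpu                  # reference the claim declared below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```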
Features for future releases in the near term:
- Dynamic allocation of MIG devices
- System-mediated sharing of GPUs via Time-slicing and MPS
- “Management” pods with access to all GPUs / MIG devices without allocating them
- Dynamic swapping of NVIDIA driver with vfio driver depending on intended use of GPU
- Ability to use DRA to allocate GPUs with the “traditional” API (e.g. `nvidia.com/gpu: 2`, shown below for contrast)
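For reference, the “traditional” device-plugin style of requesting GPUs referred to above looks like the following; this is long-standing Kubernetes behavior and is shown only for contrast with the ResourceClaim sketch earlier.

```yaml
# Traditional device-plugin style GPU request, shown for contrast with
# the DRA/ResourceClaim approach sketched above.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-gpu-request
spec:
  containers:
  - name: app
    image: ubuntu:24.04            # placeholder workload image
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: 2          # request two GPUs via extended resources
```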