This is a promotion of the `v1.13.0-rc.3` release to GA.
This release of the NVIDIA Container Toolkit adds the following features:
- Improved support for the Container Device Interface (CDI) specifications for GPU devices when using the NVIDIA Container Toolkit in the context of the GPU Operator.
- Added the generation of CDI specifications on WSL2-based systems using the `nvidia-ctk cdi generate` command. This is now the recommended mechanism for using GPUs on WSL2, and `podman` is the recommended container engine.
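As a minimal sketch of this workflow (the output path and container image are illustrative, and a podman version with CDI support, 4.1 or later, is assumed):

```bash
# Generate the CDI specification for the available GPUs.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Request all GPUs through podman's CDI support; the image is illustrative.
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
```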
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- `libnvidia-container` 1.13.0
- `nvidia-container-toolkit` 1.13.0
- `nvidia-container-runtime` 3.13.0
- `nvidia-docker2` 2.13.0

The packages for this release are published to the `libnvidia-container` package repositories.
Full Changelog: v1.12.0...v1.13.0
v1.13.0-rc.3
- Only initialize NVML for modes that require it when running `nvidia-ctk cdi generate`.
- Prefer `/run` over `/var/run` when locating the `nvidia-persistenced` and `nvidia-fabricmanager` sockets.
- Fix the generation of CDI specifications for management containers when the driver libraries are not in the LDCache.
- Add transformers to deduplicate and simplify CDI specifications.
- Generate a simplified CDI specification by default. This means that entities included in a spec's common edits are not repeated in the device definitions.
- Return an error from the `nvcdi.New` constructor instead of panicking.
- Detect XOrg libraries for injection and CDI spec generation.
- Add an `nvidia-ctk system create-device-nodes` command to create control devices.
- Add an `nvidia-ctk cdi transform` command to apply transforms to CDI specifications.
- Add `--vendor` and `--class` options to `nvidia-ctk cdi generate` (see the sketch below).
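As a rough example of the new generate options (the vendor, class, and output path values are illustrative; the default device kind remains `nvidia.com/gpu`):

```bash
# Generate a CDI specification using a custom vendor/class for the device kind.
sudo nvidia-ctk cdi generate \
    --vendor=example.com \
    --class=gpu \
    --output=/etc/cdi/example-gpu.yaml
```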
Changes from libnvidia-container v1.13.0-rc.3
- Fix segmentation fault when RPC initialization fails.
- Build centos variants of the NVIDIA Container Library with static libtirpc v1.3.2.
- Remove make targets for fedora35 as the centos8 packages are compatible.
Changes in the toolkit-container
- Add an `nvidia-container-runtime.modes.cdi.annotation-prefixes` config option that allows the CDI annotation prefixes that are read to be overridden.
- Create device nodes when generating CDI specifications for management containers.
- Add an `nvidia-container-runtime.runtimes` config option to set the low-level runtime for the NVIDIA Container Runtime (see the configuration sketch below).
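A minimal sketch of how these options map to TOML tables, assuming the usual config location at `/etc/nvidia-container-runtime/config.toml` (the values shown are illustrative; merge the keys into existing sections rather than appending duplicates):

```bash
sudo tee -a /etc/nvidia-container-runtime/config.toml <<'EOF'
[nvidia-container-runtime]
# Candidate low-level runtimes, tried in order.
runtimes = ["docker-runc", "runc"]

[nvidia-container-runtime.modes.cdi]
# Annotation prefixes inspected for CDI device requests.
annotation-prefixes = ["cdi.k8s.io/"]
EOF
```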
v1.13.0-rc.2
- Don't fail the chmod hook if paths are not injected.
- Only create `by-path` symlinks if CDI devices are actually requested.
- Fix a possible blank `nvidia-ctk` path in generated CDI specifications.
- Fix an error in the postun scriptlet on RPM-based systems.
- Only check the `NVIDIA_VISIBLE_DEVICES` environment variable for devices if no annotations are specified.
- Add a `cdi.default-kind` config option for constructing fully-qualified CDI device names in CDI mode.
- Add support for the `accept-nvidia-visible-devices-envvar-unprivileged` config setting in CDI mode.
- Add an `nvidia-container-runtime-hook.skip-mode-detection` config option to bypass mode detection. This allows `legacy` and `cdi` mode, for example, to be used at the same time (see the configuration sketch after this list).
- Add support for generating CDI specifications for GDS and MOFED devices.
- Ensure the CDI specification is validated on save when generating a spec.
- Rename the `--discovery-mode` argument to `--mode` for `nvidia-ctk cdi generate`.
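The table layout below is inferred from the dotted option names above and should be checked against the installed `/etc/nvidia-container-runtime/config.toml`; it is a sketch, not a definitive reference:

```bash
sudo tee -a /etc/nvidia-container-runtime/config.toml <<'EOF'
[nvidia-container-runtime.modes.cdi]
# Kind used to build fully-qualified device names such as nvidia.com/gpu=0.
default-kind = "nvidia.com/gpu"

[nvidia-container-runtime-hook]
# Skip automatic mode detection so legacy and CDI mode can coexist.
skip-mode-detection = true
EOF
```

With the rename in the last item above, spec generation now uses, for example, `nvidia-ctk cdi generate --mode=auto` rather than `--discovery-mode=auto`.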
Changes from libnvidia-container v1.13.0-rc.2
- Fix segfault on WSL2 systems. This was triggered in the `v1.12.1` and `v1.13.0-rc.1` releases.
Changes in the toolkit-container
- Add `--cdi-enabled` flag to the toolkit config.
- Install `nvidia-ctk` from the toolkit container.
- Use the installed `nvidia-ctk` path in the NVIDIA Container Toolkit config.
- Bump CUDA base images to 12.1.0.
- Set the `nvidia-ctk` path in the config.
- Add `cdi.k8s.io/*` to the set of allowed annotations in the containerd config.
- Generate CDI specifications for use in management containers.
- Install the experimental runtime as `nvidia-container-runtime.experimental` instead of `nvidia-container-runtime-experimental`.
- Install and configure mode-specific runtimes for `cdi` and `legacy` modes.
v1.13.0-rc.1
- Include MIG-enabled devices as GPUs when generating CDI specifications.
- Fix missing NVML symbols when running `nvidia-ctk` on some platforms [#49].
- Add CDI spec generation for WSL2-based systems to the `nvidia-ctk cdi generate` command.
- Add an `auto` mode to the `nvidia-ctk cdi generate` command to automatically detect a WSL2-based system over a standard NVML-based system.
- Add mode-specific (`.cdi` and `.legacy`) NVIDIA Container Runtime binaries for use in the GPU Operator.
- Discover all `gsp*.bin` GSP firmware files when generating CDI specifications.
- Align `.deb` and `.rpm` release candidate package versions.
- Remove `fedora35` packaging targets.
Changes in toolkit-container
- Install the `nvidia-container-toolkit-operator-extensions` package for mode-specific executables.
- Allow `nvidia-container-runtime.mode` to be set when configuring the NVIDIA Container Toolkit (see the sketch below).
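As a brief illustration (the mode value is an example; supported values include `auto`, `legacy`, and `cdi`, and the standard host config location is assumed here, while the GPU Operator's toolkit container may manage a different path), the resulting setting can be written directly to the toolkit config:

```bash
sudo tee -a /etc/nvidia-container-runtime/config.toml <<'EOF'
[nvidia-container-runtime]
# Force CDI mode instead of automatic mode detection.
mode = "cdi"
EOF
```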
Changes from libnvidia-container v1.13.0-rc.1
- Include all `gsp*.bin` firmware files if present.
- Align `.deb` and `.rpm` release candidate package versions.
- Remove `fedora35` packaging targets.