github triton-inference-server/server v2.67.0
Release 2.67.0 corresponding to NGC container 26.03

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

Important

Triton 26.02 was the last Triton release to publish Jetson artifacts on GitHub. Triton 26.03 does not publish new Jetson release artifacts; for Jetson, use the packages from Triton 26.02 / v2.66.0 where applicable.
The Jetson AGX platform is now supported via the SBSA (arm64) container image.

New Features and Improvements

  • Fixed path traversal vulnerabilities in the SageMaker server and in the MLflow–Triton deployment API.

  • Added validation for OpenAI frontend LoRA paths.

  • Applied HTTP restrictions to SageMaker and Vertex AI endpoints and improved Vertex AI redirect handling.

  • Refactored the vLLM build to use the upstream container image; updated the TensorRT-LLM build and switched to a stable API.

  • Fixed a race condition in AddNextResponse for concurrent streaming responses when cancelling requests.

  • Fixed ensemble requests that could remain stuck indefinitely when a step's max_queue_size was exceeded.

  • Added model name validation for model management requests.

  • Introduced safe GetElementCount and GetByteSize APIs with proper validation and overflow protection (including fixes in the common library).

  • Restored a model-instance code path that had been accidentally removed in a recent commit.

  • PyTorch backend — AOT Inductor (PT2): Full support for PyTorch PT2 format model archives using AOT Inductor, complementing the existing pytorch_libtorch platform and model.pt workflow:

    • New platform torch_aoti with default model file model.pt2.
    • New provider classes InductorModel and InductorModelInstance.
    • pytorch_libtorch and torch_aoti separated into distinct namespaces.
    • Helper utilities, macros, TritonException, and optional debug trace logging (ENABLE_DEBUG_TRACE_*).
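    As a sketch, a minimal model repository layout for the new torch_aoti platform might look like this (the model and directory names are hypothetical; the platform name and default file name are from this release):

    ```
    models/
    └── my_aoti_model/            # hypothetical model name
        ├── config.pbtxt          # contains: platform: "torch_aoti"
        └── 1/
            └── model.pt2         # default model file for torch_aoti
    ```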

  • Python backend: Support for a user-defined is_ready() in readiness checks for custom health logic in Python models.
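    A minimal sketch of a Python model implementing a custom readiness check. The health flag and its handling are hypothetical, and a real model's execute would use triton_python_backend_utils (omitted here to keep the sketch self-contained):

    ```python
    # Sketch of a Python backend model exposing a user-defined is_ready(),
    # which this release's Python backend can consult during readiness checks.

    class TritonPythonModel:
        def initialize(self, args):
            # Hypothetical flag: report the model as ready only once
            # setup (weight loading, warmup, etc.) has completed.
            self._healthy = False
            # ... load weights, warm up, connect to dependencies ...
            self._healthy = True

        def is_ready(self):
            # Return False to report this model instance as not ready.
            return self._healthy

        def execute(self, requests):
            # Normal inference logic goes here (uses
            # triton_python_backend_utils in a real model).
            return []
    ```
    
    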

  • ONNX Runtime backend: Enabled bfloat16 I/O tensor dtype support; updated the ONNX Runtime generation script (removed obsolete Windows and iGPU references, sourced TensorRT version from the container image, updated OpenVINO and Python versions, installed ccache, improved RHEL image installation steps and build reliability).

  • Client: Relaxed the upper bound on the gRPC dependency for flexibility; added requirements bounds checks; updated jackson-databind for security.

  • Model Analyzer: Updated default branch tracking and packaging for 1.52.0 / 26.03.

  • Triton CLI: Updated to support TensorRT-LLM 1.1.0.

  • TensorRT-LLM backend: Documentation for multi-instance configuration with llmapi; updated TensorRT-LLM build versions, dependencies, and submodule (e.g. release/1.2); updated base images; fixed broken package installations during build; added torchgen for torch package compatibility; adjusted PyTorch dependency wrapping, setuptools, and related submodule versions.

  • vLLM backend: Addressed an API compatibility issue.

  • Testing: Added L0_backend_onnxruntime coverage for bfloat16 dtype; warmed up the CUDA cache before tests in L0_batcher for GB300 runners; added tests for the safe GetElementCount and GetByteSize APIs; fixed misuse of a log file argument in tests; refreshed development versions and documentation.

Known Issues

  • The Manylinux binaries link against OpenSSL 1.1.1, which has known issues. Avoid the affected APIs/flows: low-level OCB mode, BIO_f_linebuffer with short writes, CMS password-based decryption, and low-level GF(2^m) arithmetic with untrusted parameters.

  • Since 25.10, the vLLM backend uses the V1 engine by default. You might see invalid characters in logprobs output; the issue has been reported to the vLLM team.

  • The PyTorch backend supports PyTorch 2.0 with the limitation that models must be provided as a serialized model file (model.pt). AOT Inductor models use the torch_aoti platform and model.pt2, as documented for this release. See the Triton PyTorch Backend documentation for details.

  • vLLM's v0 API and Ray are affected by vulnerabilities. Users should consider their own architecture and mitigation steps, which may include, but should not be limited to, the following:

    • Do not expose Ray executors and vLLM hosts to a network where any untrusted connections might reach the host.
    • Ensure that only the other vLLM hosts are able to connect to the TCP port used for the XPUB socket. Note that the port used is random.
  • When using Valgrind or other leak detection tools on AGX-Thor or DGX-Spark systems, you might see memory leaks attributed to NvRmGpuLibOpen. The root cause has been identified and fixed in CUDA.

  • Valgrind or other memory leak detection tools may occasionally report leaks related to DCGM. These reports are intermittent and often disappear on retry. The root cause is under investigation.

  • CuPy has issues with the CUDA 13 Device API in multithreaded contexts. Avoid using tritonclient cuda_shared_memory APIs in multithreaded environments until fixed by CuPy.

  • TensorRT calibration cache may require size adjustment in some cases, which was observed for the IGX platform.

  • The core Python binding may incur an additional D2H and H2D copy if the backend and frontend both specify device memory to be used for response tensors.

  • A segmentation fault related to DCGM and NSCQ may be encountered during server shutdown on NVSwitch systems. A possible workaround for this issue is to disable the collection of GPU metrics: tritonserver --allow-gpu-metrics false ...

  • When using TensorRT models, if auto-complete configuration is disabled and is_non_linear_format_io:true for reformat-free tensors is not provided in the model configuration, the model may not load successfully.
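    A minimal illustration of the required field in config.pbtxt (the tensor name, dtype, and dims are hypothetical):

    ```
    input [
      {
        name: "INPUT0"
        data_type: TYPE_FP16
        dims: [ 3, 224, 224 ]
        is_non_linear_format_io: true
      }
    ]
    ```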

  • When using Python models in decoupled mode, users need to ensure that the ResponseSender goes out of scope or is properly cleaned up before unloading the model to guarantee that the unloading process executes correctly.
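    An illustrative pattern, with hypothetical stand-ins for the Triton Python backend API, that keeps the ResponseSender from outliving the request:

    ```python
    # Sketch only: ensure every ResponseSender reference is released
    # before model unload. FINAL_FLAG stands in for
    # pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL.

    FINAL_FLAG = 1

    class TritonPythonModel:
        def execute(self, requests):
            for request in requests:
                sender = request.get_response_sender()
                try:
                    # ... send one or more responses, then the final flag ...
                    sender.send(None, flags=FINAL_FLAG)
                finally:
                    # Drop the reference so no sender outlives the request;
                    # a lingering ResponseSender can stall model unload.
                    del sender
            return None
    ```
    
    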

  • Triton Inference Server with vLLM backend currently does not support running vLLM models with tensor parallelism sizes greater than 1 and the default "distributed_executor_backend" setting when using explicit model control mode. When loading a vLLM model (tp > 1) in explicit mode, users could potentially see failure at the initialize step: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads. For the default model control mode, after server shutdown, vLLM-related sub-processes are not killed. Related vLLM issue: vllm-project/vllm#6766. Please specify "distributed_executor_backend":"ray" in the model.json when deploying vLLM models with tensor parallelism > 1.
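    For example, a model.json along these lines (fields other than "distributed_executor_backend" are illustrative):

    ```
    {
      "model": "my-model",
      "tensor_parallel_size": 2,
      "distributed_executor_backend": "ray"
    }
    ```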

  • When loading models with file override, multiple model configuration files are not supported. Users must provide the model configuration by setting parameter "config" : "<JSON>" instead of a custom configuration file in the following format: "file:configs/<model-config-name>.pbtxt" : "<base64-encoded-file-content>".
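    As a sketch, a load request with an inline configuration would look roughly like this (endpoint path per the model repository extension; the model name and config contents are hypothetical):

    ```
    POST v2/repository/models/my_model/load

    {
      "parameters": {
        "config": "{\"name\": \"my_model\", \"backend\": \"onnxruntime\", \"max_batch_size\": 8}"
      }
    }
    ```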

  • TensorRT-LLM backend provides limited support of Triton extensions and features.

  • The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.

  • The Java CAPI is known to have intermittent segfaults.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. TCMalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. NVIDIA recommends experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
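    For example (library paths are as typically shipped in the Triton container; adjust for your system):

    ```shell
    # Preload TCMalloc:
    LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} \
        tritonserver --model-repository=/models

    # Or preload jemalloc:
    LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so.2:${LD_PRELOAD} \
        tritonserver --model-repository=/models
    ```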

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Triton Client pip wheels for Arm SBSA are not available from PyPI, and pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices.

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

The client libraries and examples are available in this release exclusively via the Ubuntu 24.04–based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. See Getting the Client Libraries for more information.

Triton TRT-LLM Container Support Matrix

The Triton TensorRT-LLM container image and base layers are updated for this release. Please refer to the support matrix and compatibility.md in the TensorRT-LLM backend repository for all dependency versions.

Dependency    Version
TensorRT-LLM  1.2.0
TensorRT      See compatibility.md for the TensorRT version pinned to the 26.03 TRT-LLM container

ManyLinux Assets (early access)

This release was built on AlmaLinux 8.9 (manylinux_2_28) and can be used on RHEL 8 and later versions.
See the included README.md for complete details about installation, verification, and support.
This release supports ensembles. Confirm CUDA, TensorRT, ONNX Runtime, PyTorch, and Python versions in the shipped README.md.
Some optional backend features such as the PyTorch backend's TorchTRT extension are not currently supported.
