Triton Inference Server
The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.
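As a quick orientation for the HTTP endpoint, the following minimal sketch sends an inference request with the Python tritonclient package; the model name ("my_model") and tensor names ("INPUT0", "OUTPUT0") are placeholders for whatever the deployed model actually exposes.

```python
# Minimal sketch of a remote inference request against Triton's HTTP endpoint.
# Assumes a server on localhost:8000 serving a hypothetical model "my_model"
# with a single FP32 input "INPUT0" and output "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor and attach the request data.
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Request the output tensor by name and run the inference.
output0 = httpclient.InferRequestedOutput("OUTPUT0")
result = client.infer(model_name="my_model", inputs=[input0], outputs=[output0])

print(result.as_numpy("OUTPUT0"))
```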
New Features and Improvements
Python backend now supports setting and retrieving Inference Response Parameters on InferenceResponse objects in model.py (a sketch follows this list).
Optimized the core Python binding architecture leading to improved OpenAI frontend performance.
Added dynamic sampling parameter handling, improving flexibility and consistency across vLLM interactions. Added support for the “guided_generation” request parameter for efficient constrained decoding workflows (a sketch follows this list).
Improved Multi-Lora handling in the TRTLLM GRPC client (end_to_end_grpc_client.py).
GenAI-Perf added the ability to format output using Jinja2 templates.
GenAI-Perf telemetry now supports multiple metric endpoints.
GenAI-Perf now supports increased corpus size, 90x the previously supported size.
GenAI-Perf now supports keys without values as input.
GenAI-Perf fixed an OSL (output sequence length) issue caused by Performance Analyzer not removing the first 4 bytes from the output.
GenAI-Perf added a chat template option for the TRT-LLM engine.
Performance Analyzer fixed a TRITON_ENABLE_GPU compile definition bug.
Performance Analyzer bumped the minimum required C++ version to C++20.
Performance Analyzer now disallows using concurrency and warmup together with the schedule flag.
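For the response-parameter feature noted in the list above, here is a hedged model.py sketch; the parameters= keyword on pb_utils.InferenceResponse and the parameter names used here are assumptions based on the release note, not a definitive API reference.

```python
# Hedged sketch of a Python backend model.py that attaches response parameters.
# The `parameters=` keyword on pb_utils.InferenceResponse is an assumption based
# on the release note; consult the Python backend docs for the exact signature.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            out = pb_utils.Tensor("OUTPUT0", np.zeros((1, 4), dtype=np.float32))
            # Attach custom key/value parameters to the response so that the
            # client can read them back alongside the output tensors.
            responses.append(
                pb_utils.InferenceResponse(
                    output_tensors=[out],
                    parameters={"custom_tag": "demo", "confidence": "0.9"},
                )
            )
        return responses
```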
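For the “guided_generation” request parameter, a hedged client-side sketch follows. It assumes the vLLM backend's usual text_input/text_output tensor names, that the client's infer() call accepts a parameters dict (available in recent tritonclient releases), and an illustrative guide format; check the vLLM backend documentation for the exact value schema.

```python
# Hedged sketch: passing the "guided_generation" request parameter to a model
# served by the vLLM backend. Tensor names, the model name, and the guide
# format are illustrative assumptions.
import json
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

prompt = np.array([b"List three primary colors as a JSON array."], dtype=np.object_)
text_input = grpcclient.InferInput("text_input", [1], "BYTES")
text_input.set_data_from_numpy(prompt)

# Illustrative guide value; the exact format accepted by "guided_generation"
# should be verified against the vLLM backend documentation.
guide = json.dumps({"type": "json", "schema": {"type": "array", "items": {"type": "string"}}})

result = client.infer(
    model_name="vllm_model",                   # placeholder model name
    inputs=[text_input],
    parameters={"guided_generation": guide},   # per-request parameter
)
print(result.as_numpy("text_output"))
```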
Known Issues
The core Python binding may incur an additional D2H and H2D copy if the backend and frontend both specify device memory to be used for response tensors.
A segmentation fault related to DCGM and NSCQ may be encountered during server shutdown on NVSwitch systems. A possible workaround for this issue is to disable the collection of GPU metrics, for example: tritonserver --allow-gpu-metrics false ...
The vLLM backend currently does not take advantage of the vLLM v0.6 performance improvement when metrics are enabled.
Incorrect results are known to occur when using TensorRT (TRT) Backend for inference using int8 data type for I/O on the Blackwell GPU architecture.
When using TensorRT models, if auto-complete configuration is disabled and is_non_linear_format_io:true for reformat-free tensors is not provided in the model configuration, the model may not load successfully.
When using Python models in decoupled mode, users need to ensure that the ResponseSender goes out of scope or is properly cleaned up before unloading the model to guarantee that the unloading process executes correctly (see the decoupled-mode sketch after this list).
Restart support was temporarily removed for Python models.
Triton Inference Server with vLLM backend currently does not support running vLLM models with tensor parallelism sizes greater than 1 and the default "distributed_executor_backend" setting when using explicit model control mode. In an attempt to load a vLLM model (tp > 1) in explicit mode, users could potentially see a failure at the initialize step: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads. For the default model control mode, after server shutdown, vLLM-related sub-processes are not killed. Related vLLM issue: vllm-project/vllm#6766. Please specify "distributed_executor_backend":"ray" in the model.json when deploying vLLM models with tensor parallelism > 1.
When loading models with file override, multiple model configuration files are not supported. Users must provide the model configuration by setting the parameter "config" : "<JSON>" instead of a custom configuration file in the format "file:configs/<model-config-name>.pbtxt" : "<base64-encoded-file-content>" (see the file-override sketch after this list).
The TensorRT-LLM backend provides limited support of Triton extensions and features.
The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.
The Java CAPI is known to have intermittent segfaults.
Some systems' malloc() implementations may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. NVIDIA recommends experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.
Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names matches what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: https://github.com/pytorch/pytorch/issues/38273
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices.
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.
When cloud storage (AWS, GCS, AZURE) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model’s folder in the temporary directory, which is deleted upon server’s shutdown.
Python backend support for Windows is limited and does not currently support several features.
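For the decoupled-mode ResponseSender issue above, the following hedged sketch shows the expected lifecycle: send responses, send the final flag, and let the sender go out of scope before the model is unloaded. Tensor names and the response count are placeholders.

```python
# Hedged sketch of a decoupled-mode Python model that lets its ResponseSender
# go out of scope cleanly. Tensor names and the number of responses are
# placeholders; the model must be configured with a decoupled transaction policy.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()
            for i in range(3):
                out = pb_utils.Tensor("OUTPUT0", np.array([i], dtype=np.float32))
                sender.send(pb_utils.InferenceResponse(output_tensors=[out]))
            # Mark the request complete so Triton knows no more responses are
            # coming; after this, drop the reference so the sender can be
            # cleaned up before the model is unloaded.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
        # Decoupled models return None from execute().
        return None
```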
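For the file-override limitation above, this hedged sketch supplies the model configuration through the "config" parameter of load_model instead of an override configuration file; the model name, backend, and configuration contents are illustrative, and the exact file-content encoding expected by the client is assumed to be handled by the library.

```python
# Hedged sketch: loading a model with a file override while supplying the model
# configuration via the "config" parameter instead of an override config.pbtxt.
# Model name, file path, and configuration contents are illustrative only.
import json
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Serialized model configuration passed through the "config" parameter.
config = json.dumps({
    "name": "my_model",            # placeholder model name
    "backend": "onnxruntime",      # placeholder backend
    "max_batch_size": 8,
})

with open("model.onnx", "rb") as f:   # placeholder model file
    model_bytes = f.read()

# "files" maps override paths inside the model directory to their content;
# the client library is assumed to handle the required encoding.
client.load_model("my_model", config=config, files={"file:1/model.onnx": model_bytes})
```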
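Several of the metrics-related issues above can be sanity-checked by scraping Triton's Prometheus endpoint; a minimal sketch, assuming the default metrics port 8002, is shown below.

```python
# Minimal sketch: scrape Triton's Prometheus metrics endpoint to check whether
# GPU metrics are being collected. The default metrics port 8002 is an
# assumption that holds unless --metrics-port was changed at launch.
import requests

text = requests.get("http://localhost:8002/metrics", timeout=5).text
gpu_lines = [line for line in text.splitlines() if line.startswith("nv_gpu_")]
print(f"{len(gpu_lines)} GPU metric samples exposed")
```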
Client Libraries and Examples
Ubuntu 24.04 builds of the client libraries and examples are included in this release in the attached v2.55.0_ubuntu2404.clients.tar.gz file. The SDK is also available as an Ubuntu 24.04 based NGC Container. The SDK container includes the client libraries and examples, Performance Analyzer and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.
Windows Support
The 25.02 Windows release is under development.
Jetson iGPU Support
[!NOTE] A release of Triton for IGX is provided in the attached tar file: tritonserver2.55.0-igpu.tgz.
The tar file contains the Triton server executable and shared libraries, as well as the C++ and Python client libraries and examples. This release supports TensorFlow 2.17.0, TensorRT 10.8.0.40, Onnx Runtime 1.20.1, PyTorch 2.6.0a0+ecf3bae40a, Python 3.12, as well as ensembles. For more information on how to install and use Triton on JetPack refer to jetson.md.
The wheel for the Python client library is present in the tar file and can be installed by running the following command:
python3 -m pip install --upgrade clients/python/tritonclient-2.55.0-py3-none-manylinux2014_aarch64.whl[all]
Triton TRT-LLM Container Support Matrix
The Triton TensorRT-LLM container is built from the 25.02 image nvcr.io/nvidia/tritonserver:25.02-py3-min. Please refer to the support matrix for all dependency versions related to 25.02. However, the packages listed below have different versions than those specified in the support matrix.
Dependency    | Version
TensorRT-LLM  | 0.17.0
TensorRT      | 10.8.0.40