github triton-inference-server/server v2.49.0
Release 2.49.0 corresponding to NGC container 24.08

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

New Features and Improvements

  • GenAI-Perf can now profile OpenAI-compatible embeddings endpoints and Hugging Face TEI-compatible re-ranker endpoints.

  • GenAI-Perf can now receive multiple user-specified prompts via --input-file.
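
    For example, a hypothetical invocation passing a prompt file (the model name is a placeholder and the prompt-file format is assumed to be one prompt per line; see the GenAI-Perf documentation for the exact syntax):

genai-perf profile -m my_model --service-kind triton --input-file prompts.txt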

  • Request-rate handling for async requests has been updated in the OpenAI and HTTP clients so that requests are sent at exactly the specified rate. Users submitting requests faster than their models can handle may see increased latency.

  • Because of these changes, the stabilization metric for Perf Analyzer has been updated: if latency does not stabilize for async models, a warning is printed but Perf Analyzer still completes.

  • Perf Analyzer now validates any user-supplied inputs and outputs, returning an error if the model does not contain them.

  • Python backend now supports BF16 tensors via DLPack.
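
    A minimal sketch of a Python backend model.py returning a BF16 output through DLPack (the output name "OUTPUT0" and the use of PyTorch are illustrative assumptions, not part of this release):

import torch
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # BF16 has no native NumPy dtype, so the output tensor is
            # built from a DLPack-capable object instead of an ndarray.
            bf16 = torch.ones(4, dtype=torch.bfloat16)
            out = pb_utils.Tensor.from_dlpack("OUTPUT0", bf16)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses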

  • vLLM backend now supports reporting the following metrics:

    • vllm:prompt_tokens_total
    • vllm:generation_tokens_total
    • vllm:time_to_first_token_seconds

    To enable the vLLM model's metrics reporting, add these lines to config.pbtxt:

parameters: {
  key: "REPORT_CUSTOM_METRICS"
  value: {
    string_value: "yes"
  }
}
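
    Once enabled, these metrics are published on Triton's Prometheus endpoint (port 8002 by default) and can be inspected with, for example:

curl localhost:8002/metrics | grep vllm:
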
  • TensorRT-LLM backend now supports specifying GPU device IDs per instance using the “gpu_device_ids” field.
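
    A sketch of how this could look in config.pbtxt, assuming the field is set through the parameters section with a comma-separated device list (consult the TensorRT-LLM backend documentation for the exact per-instance format):

parameters: {
  key: "gpu_device_ids"
  value: {
    string_value: "0,1"
  }
}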

  • After the model configuration is updated to load new model versions, any already-loaded versions whose model files are unmodified will not be reloaded.

Known Issues

  • When running Torch TRT models, the output may differ from running the same model on a previous release. This issue is expected to be fixed in the next release.

  • When using TensorRT models, if auto-complete configuration is disabled and is_non_linear_format_io:true is not provided for reformat-free tensors in the model configuration, the model may not load successfully.
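
    For reference, a model configuration fragment with the flag set; the tensor name, datatype, and dims below are placeholders:

input [
  {
    name: "INPUT0"
    data_type: TYPE_FP16
    dims: [ 3, 224, 224 ]
    is_non_linear_format_io: true
  }
]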

  • When using Python models in decoupled mode, users need to ensure that the ResponseSender goes out of scope or is properly cleaned up before unloading the model to guarantee that the unloading process executes correctly.
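
    A minimal sketch of the cleanup pattern in a decoupled model.py (model logic omitted; this only illustrates dropping the sender reference after the final flag is sent):

import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()
            # ... send zero or more responses with sender.send(...) here ...
            # Mark the stream complete, then drop the reference so the
            # sender does not outlive the model when it is unloaded.
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)
            del sender
        return None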

  • Restart support was temporarily removed for Python models.

  • The Triton TensorRT-LLM Backend container image uses TensorRT-LLM version 0.12.0 and is built from nvcr.io/nvidia/tritonserver:24.07-py3-min. Please refer to the Triton TRT-LLM Container Support Matrix section below for more details.

  • Triton Inference Server with the vLLM backend currently does not support running vLLM models with tensor parallelism sizes greater than 1 together with the default "distributed_executor_backend" setting when using explicit model control mode. When attempting to load a vLLM model (tp > 1) in explicit mode, users may see the initialize step fail with: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads. With the default model control mode, vLLM-related sub-processes are not killed after server shutdown. Related vLLM issue: https://github.com/vllm-project/vllm/issues/6766. Please specify "distributed_executor_backend":"ray" in model.json when deploying vLLM models with tensor parallelism > 1, as sketched below.
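
    A sketch of such a model.json; the model name and parallelism degree below are placeholders:

{
  "model": "facebook/opt-125m",
  "tensor_parallel_size": 2,
  "distributed_executor_backend": "ray"
}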

  • When loading models with file override, multiple model configuration files are not supported. Users must provide the model configuration by setting the parameter "config" : "<JSON>" instead of a custom configuration file in the format "file:configs/<config-name>.pbtxt" : "<base64-encoded content>".
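
    As a sketch, the resulting load request over the HTTP model repository API could look as follows; the model name, config contents, and base64 payload are placeholders:

curl -X POST localhost:8000/v2/repository/models/my_model/load \
  -d '{"parameters": {"config": "{\"backend\": \"onnxruntime\"}", "file:1/model.onnx": "<base64-encoded-bytes>"}}'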

  • Perf Analyzer no longer supports the --trace-file option.

  • TensorRT-LLM backend provides limited support of Triton extensions and features.

  • The TensorRT-LLM backend may core dump on server shutdown. This impacts server teardown only and will not impact inferencing.

  • The Java CAPI is known to have intermittent segfaults.

  • Some systems that implement malloc() may not release memory back to the operating system right away, causing a false memory leak. This can be mitigated by using a different malloc implementation. tcmalloc and jemalloc are installed in the Triton container and can be used by specifying the library in LD_PRELOAD. NVIDIA recommends experimenting with both to determine which works better for your use case.
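
    For example, to preload tcmalloc (the library path matches recent Ubuntu-based Triton containers, so verify it in your image; the model repository path is a placeholder):

LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} tritonserver --model-repository=/models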

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
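
    For example (the model repository path is a placeholder):

tritonserver --model-repository=/models --disable-auto-complete-config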

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration; there is no model metadata about the number of outputs and their datatypes. Related PyTorch bug: https://github.com/pytorch/pytorch/issues/38273

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI, so pip will install an incorrect Jetson version of the Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices.

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

  • When cloud storage (AWS, GCS, Azure) is used as a model repository and a model has multiple versions, Triton creates an extra local copy of the cloud model's folder in the temporary directory, which is deleted upon server shutdown.

  • Python backend support for Windows is limited and does not currently support the following features:

    • GPU tensors
    • CPU and GPU-related metrics
    • Custom execution environments
    • The model load/unload APIs

Client Libraries and Examples

Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.49.0_ubuntu22.04.clients.tar.gz file. The SDK is also available as an Ubuntu 22.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.49.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file tritonserver2.49.0-win.zip. This is a beta release, so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.18.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2024.0.0.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

Known Issues

  • In our internal testing, we observed large latency when retrieving inference results from the HTTP client. We recommend using gRPC to circumvent this issue.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are listed in Dockerfile.win10.min, which includes the following components:

  • Python 3.10.11

  • CUDA 12.5.1

  • cuDNN 9.3.0.75

  • TensorRT 10.3.0.26

Jetson iGPU Support

A release of Triton for IGX is provided in the attached tar file: tritonserver2.49.0-igpu.tgz.

  • This release supports TensorFlow 2.16.1, TensorRT 10.3.0.26, ONNX Runtime 1.19.0, PyTorch 2.5.0a0+872d972, and Python 3.10, as well as ensembles.
  • ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in Beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.49.0-py3-none-manylinux2014_aarch64.whl[all]

Triton TRT-LLM Container Support Matrix

The Triton TensorRT-LLM container is built from the 24.07 image nvcr.io/nvidia/tritonserver:24.07-py3-min. Please refer to the support matrix for all dependency versions related to 24.07. However, the packages listed below have different versions than those specified in the support matrix.

Dependency     Version
TensorRT-LLM   0.12.0
TensorRT       10.3.0.26
