ONNX Runtime v1.2.0


Key Updates

Execution Providers

  • [Preview] Availability of Windows Machine Learning (WinML) APIs in Windows builds of ONNX Runtime, with DirectML for GPU acceleration
    • Windows ML is a WinRT API designed specifically for Windows developers that already ships as an inbox component in newer Windows versions
    • Compatible with Windows 8.1 for CPU and Windows 10 1709 for GPU
    • Available as source code on GitHub and as pre-built NuGet packages (windows.ai.machinelearning.dll)
    • For additional documentation and samples on getting started, visit the Windows ML API Reference documentation
  • TensorRT Execution Provider upgraded to TRT 7
  • CUDA updated to 10.1
    • Linux build requires CUDA Runtime 10.1.243, cublas10-10.2.1.243, and CUDNN 7.6.5.32. Note: cublas 10.1.x will not work
    • Windows build requires CUDA Runtime 10.1.243, CUDNN 7.6.5.32
    • onnxruntime now depends on the curand library, which is part of the CUDA SDK. If the SDK is already fully installed, this is not an issue

Builds and Packages

  • NuGet package structure updated. There is now a separate managed assembly (Microsoft.ML.OnnxRuntime.Managed) shared between the CPU and GPU NuGet packages. The "native" NuGet depends on the "managed" NuGet to bring it into relevant projects automatically (PR 3104). Note that this should be transparent for customers installing the NuGet packages. Full ORT package details are available in the ONNX Runtime documentation.
  • Build system: support getting dependencies from vcpkg (a C++ package manager for Windows, Linux, and MacOS)
  • Capability to generate an onnxruntime Android Archive (AAR) file from source, which can be imported directly in Android Studio

API Updates

  • SessionOptions:
    • Default value of max_num_graph_transformation_steps increased to 10
    • Default graph optimization level changed to ORT_ENABLE_ALL (99)
  • OrtEnv can be created/destroyed multiple times
  • Java API
    • Gradle is now required to build onnxruntime
    • Available on Android
  • C API Additions:
    • GetDenotationFromTypeInfo
    • CastTypeInfoToMapTypeInfo
    • CastTypeInfoToSequenceTypeInfo
    • GetMapKeyType
    • GetMapValueType
    • GetSequenceElementType
    • ReleaseMapTypeInfo
    • ReleaseSequenceTypeInfo
    • SessionEndProfiling
    • SessionGetModelMetadata
    • ModelMetadataGetProducerName
    • ModelMetadataGetGraphName
    • ModelMetadataGetDomain
    • ModelMetadataGetDescription
    • ModelMetadataLookupCustomMetadataMap
    • ModelMetadataGetVersion
    • ReleaseModelMetadata

Operators

  • This release changes the forward-compatibility pattern ONNX Runtime previously followed. The change guarantees correctness of model prediction and removes behavior ambiguity caused by missing opset information: ONNX Runtime now checks the model's opset number and IR version, and will not load models whose ONNX opset is higher than the supported opset implemented for that runtime version (see the version matrix). If higher opset versions are needed, consider using custom operators via ORT's custom schema/kernel registry mechanism.
  • Int8 type support for Where Op
  • Updates to Contrib ops:
    • Changes: ReorderInput in kMSNchwcDomain, SkipLayerNormalization
    • New: QLinearAdd, QLinearMul, QLinearReduceMean, MulInteger, QLinearAveragePool
  • Added featurizer operators as an expansion of Contrib operators - these are not part of the official build and are experimental

Contributions

We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Eric Cousineau (Toyota Research Institute), Adam Pocock (Oracle), tinchi, Changyoung Koh, Andrews548, Jianhao Zhang, nicklas-mohr-jas, James Yuzawa, William Tambellini, Maher Jendoubi, Mina Asham, Saquib Nadeem Hashmi, Sanster, and Takeshi Watanabe.
