Key Updates
General
- ONNX 1.7 support
  - Opset 12
  - Function expansion support that enables several new ONNX 1.7 ops, such as NegativeLogLikelihoodLoss, GreaterOrEqual, LessOrEqual, and Celu, to run without a dedicated kernel implementation.
- [Preview] ONNX Runtime Training
  - ONNX Runtime Training is a new capability, released in preview, for accelerating the training of transformer models. See the sample here to use this feature in your training experiments.
- Improved threadpool support for better resource utilization
  - Improved threadpool abstractions that switch between OpenMP and Eigen threadpools based on build settings. All operators have been updated to use these new abstractions.
  - The improved Eigen-based threadpool now allows ops to provide a cost estimate (among other hints, such as thread affinity) for operations.
  - Simpler configuration of thread count: if built with OpenMP, use the OpenMP environment variables; otherwise, use the ORT APIs to configure the number of threads.
  - Support for sessions sharing a global threadpool. See this for more information.
- Performance improvements
  - ~10% average measured latency improvement across key representative models (including ONNX Model Zoo models, MLPerf, and production models shipped in Microsoft products)
  - Further latency improvements for Transformer models on CPU and GPU - benchmark script
  - Improved batch inferencing latency for scikit-learn models at large batch sizes
  - Significant improvements in the implementations of the following ONNX operators: TreeEnsembleRegressor, TreeEnsembleClassifier, LinearRegressor, LinearClassifier, SVMRegressor, SVMClassifier, TopK
  - C# API optimizations - PR3171
- Telemetry enabled for Windows (more details on telemetry collection)
- Improved error reporting when a kernel cannot be found due to a missing type implementation
- Minor fixes based on static code analysis
Dependency updates
Please note that this version of onnxruntime depends on the Visual C++ 2019 runtime; previous versions depended on Visual C++ 2017. Please also refer to https://github.com/microsoft/onnxruntime/tree/rel-1.3.0#system-requirements for the full set of system requirements.
APIs and Packages
- [General Availability] Windows Machine Learning APIs - package published on NuGet - Microsoft.AI.MachineLearning
  - Performance improvements
  - Opset updates
- [General Availability] ONNX Runtime with DirectML package published on NuGet - Microsoft.ML.OnnxRuntime.DirectML
- [General Availability] Java API - Maven package coming soon.
- [Preview] JavaScript (Node.js) API now available to build from the master branch.
- ARM64 Linux CPU Python package now available on PyPI. Note: this requires building ONNX for ARM64.
- Nightly dev builds from master (NuGet feed, TestPyPI - CPU, GPU)
- API updates
  - I/O binding support for the Python API - this reduces execution time significantly by allowing users to set up inputs/outputs on the GPU prior to model execution.
  - API to specify free dimensions based on both denotations and symbolic names.
Execution Providers
- OpenVINO v2.0 EP
- DirectML EP updates
  - Updated graph interface to abstract GPU-dependent graph optimization
  - ONNX opset 10 and 11 support
  - Initial support for 8-bit and quantized operators
  - Performance optimizations
- [Preview] Rockchip NPU EP
- [Preview] Xilinx FPGA Vitis-AI EP
- Capability to build execution providers as DLLs - supported for the DNNL EP, with work in progress for other EPs.
  - If enabled in the build, the provider is available as a shared library. Previously, EPs had to be statically linked into the core code.
  - There is no runtime cost to including an EP if it isn't loaded; ORT can now dynamically decide when to load an EP based on the model.
Contributions
We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: Adam Pocock, pranavm-nvidia, Andrew Kane, Takeshi Watanabe, Jianhao Zhang, Colin Jermain, Andrews548, Jan Scholz, Pranav Prakash, suryasidd, and S. Manohar Karlapalem.
The ONNX Runtime Training code was originally developed internally at Microsoft before being ported to GitHub. We'd like to recognize the original contributors: Aishwarya Bhandare, Ashwin Kumar, Cheng Tang, Du Li, Edward Chen, Ethan Tao, Fanny Nina Paravecino, Ganesan Ramalingam, Harshitha Parnandi Venkata, Jesse Benson, Jorgen Thelin, Ke Deng, Liqun Fu, Li-Wen Chang, Peng Wang, Sergii Dymchenko, Sherlock Huang, Stuart Schaefer, Tao Qin, Thiago Crepaldi, Tianju Xu, Weichun Wang, Wei Zuo, Wei-Sheng Chin, Weixing Zhang, Xiaowan Dong, Xueyun Zhu, Zeeshan Siddiqui, and Zixuan Jiang.
Known Issues
- The source doesn't compile on Ubuntu 14.04. See #4048.
- Crash when setting IntraOpNumThreads using the C/C++/C# API. A fix is available in the master branch.
  - Workaround: setting IntraOpNumThreads is inconsequential when using an ORT build with OpenMP enabled, so the call is not required and can be safely removed. Use the OpenMP environment variables to set the threading parameters for OpenMP-enabled builds (the recommended approach).
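For OpenMP-enabled builds, the workaround above amounts to setting the standard OpenMP environment variables before launching the process; the values below are illustrative, not recommendations:

```shell
# Control ORT threading via OpenMP env vars instead of IntraOpNumThreads
# (values shown are examples only).
export OMP_NUM_THREADS=4        # number of OpenMP threads per parallel region
export OMP_WAIT_POLICY=PASSIVE  # let idle threads yield rather than spin-wait
```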