onnx v1.4.0


We are excited to announce that the v1.4 release of ONNX is now available! For those who aren't yet familiar with ONNX, you can learn more about the project, who is involved, and what tools are available at the onnx.ai site.

TL;DR

  • The ONNX project now has more than 27 companies on board, and 31 runtimes, converters, frameworks, and other tools officially support ONNX.
  • This release adds several big features, including support for large models (larger than 2GB) by storing tensor data in external files (see the sketch after this list), enhanced support for control flow operators, and a C++ test driver for ONNXIFI.
  • The IR version is bumped from 3 to 4, and the opset version from 8 to 9.
  • All told, this release includes 270+ commits since the last release.
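
As a rough illustration of the new external-data support (#678), the sketch below moves large tensor values out of the model protobuf and into a side file. The helper names come from onnx.external_data_helper and may differ across versions, so treat them as assumptions rather than the release's exact API:

import onnx
from onnx.external_data_helper import convert_model_to_external_data

# "big_model.onnx" is a placeholder for any model whose tensor data
# would push the protobuf past the 2GB message size limit.
model = onnx.load("big_model.onnx")

# Move raw tensor bytes into a single external file that the model
# references by location, keeping the protobuf itself small.
convert_model_to_external_data(model, all_tensors_to_one_file=True,
                               location="big_model.data")
onnx.save(model, "big_model_external.onnx")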

How do I get the latest ONNX?

You can simply upgrade via pip using the following command, or of course build from source from the latest code on GitHub (our source of truth):

pip install onnx --upgrade

Quick update on what's happened since our last release:

December 4, 2018 - ONNX Runtime for inferencing machine learning models open sourced by Microsoft
ONNX Runtime, a high-performance inference engine for machine learning models in the ONNX format, is now open source. ONNX Runtime is the first publicly available inference engine that fully implements the ONNX specification, including the ONNX-ML profile. Python, C#, and C APIs are available for Linux, Windows, and Mac. ONNX Runtime can deliver an average performance gain of 2X for inferencing. Partners in the ONNX community including Intel and NVIDIA are actively integrating their technology with ONNX Runtime to enable more acceleration. READ MORE
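
For a sense of the API, here is a minimal sketch of scoring a model with ONNX Runtime's Python package; the model path, input name, and shape are placeholders, not details from the announcement:

import numpy as np
import onnxruntime as ort

# Load a model and run one inference. "model.onnx" is a placeholder;
# query sess.get_inputs() for the model's real input names and shapes.
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: x})
print(outputs[0].shape)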

November 29, 2018 - ONNX.js for running ONNX models on browsers and Node.js
ONNX.js, an open source JavaScript library for running ONNX models in browsers and on Node.js, is now available. It allows web developers to score pre-trained ONNX models directly in the browser, and it adopts WebAssembly and WebGL to provide an optimized ONNX model inference runtime for both CPUs and GPUs. ONNX.js is the first solution to utilize multi-threading in a JavaScript-based AI inference engine (via Web Workers), offering significant performance improvements over existing solutions on CPU. READ MORE

October 24, 2018 - CEVA Adds ONNX Support to CDNN Neural Network Compiler
CEVA, Inc., the leading licensor of signal processing platforms and artificial intelligence processors for smarter, connected devices, today announced that the latest release of its award-winning CEVA Deep Neural Network (CDNN) compiler supports the Open Neural Network Exchange (ONNX) format. READ MORE

October 16, 2018 - ONNX Runtime for inferencing machine learning models now in preview
We are excited to release the preview of ONNX Runtime, a high-performance inference engine for machine learning models in the Open Neural Network Exchange (ONNX) format. ONNX Runtime is compatible with ONNX version 1.2 and comes in Python packages that support both CPU and GPU to enable inferencing using Azure Machine Learning service and on any Linux machine running Ubuntu 16. READ MORE

September 6, 2018 - Synopsys Announces Support for the Open Neural Network Exchange Format in ARC MetaWare EV Development Toolkit
Synopsys, Inc. today announced support for the Open Neural Network Exchange (ONNX) format in the upcoming release of its DesignWare® ARC® MetaWare EV Development Toolkit, a complete set of tools, runtime software and libraries to develop vision and artificial intelligence (AI) applications for ARC EV6x Embedded Vision Processor IP. READ MORE

Commits since the v1.3 release (by area):

New Operator and Operator Updates:
Adding generator op ConstantLike (#1406)
Supporting int32 and int64 in Less and Greater (#1390)
fix AvgPool doc. add default value for count_include_pad (#1391)
Add DynamicSlice experimental op (#1377)
fix the doc for softmax (#1374)
Fix the shape inference for concat (#1361)
Add several hyperbolic function ops. (#1499)
Add OneHot op to ONNX. (#1567)
Fix MaxUnpool shape inference when output_shape is provided as input (#…
Add type shape inferencing for the If operator (#1571)
fix ConvTranspose spec (#1566)
Change upsample operator to allow dynamic 'scales' (#1467)
Fix output type bug in MaxUnpool definition. (#1553)
Add Compress Op (#1454)
Add MaxUnpool op to ONNX. (#1494)
support more types for Gemm, Flatten and PRelu (#1472)
deprecate no-spatial mode of BN (#1637)
Add Where op. (#1569)
Fix output_shape of a testcase for ConvTranspose (#1437)
Adding EyeLike generator op. (#1428)
Clarify the spec for convolution transpose input shape (#1413)
Separate types of inputs 1 and 2 in OneHot op. (#1610)
make output shape clear enough for Softmax family (#1634)
fix batchnorm doc (#1633)
Add Scatter op to ONNX (#1517)
Add Erf operator for computing error function (#1675)
Add IsNaN operator. (#1656)
Add Sign Op (#1658)
Update scan (#1653)
add isnan data (#1685)
Clarify some aspects of the Loop spec. (#1587)
repair convtranspose shape inference (#1660)
Remove ConstantLike op. Updates to ConstantOfShape op. (#1716)
add constantofshape (#1582)
Add Shrink operator (#1622)
Scan test update (#1732)
Scan output axes (#1737)
Add NonZero op. (#1714)
fix the test cases for constantofshape (#1746)
Add sample implementation support (#1712)
Update definition of Cast Op to support casting to/from string (#1704)
Update ConstantOfShape op (#1744)
Add TfIdfVectorizer operator to ONNX (#1721)
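
To make the operator changes concrete, here is a minimal sketch that builds and checks a one-node model using one of the new opset-9 ops (Erf, #1675); the graph and tensor names are illustrative only:

import onnx
from onnx import helper, TensorProto

# Build a single Erf node computing y = erf(x).
node = helper.make_node("Erf", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node], "erf_example",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [3, 4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [3, 4])],
)
# Pin the model to opset 9, where Erf was introduced.
model = helper.make_model(graph,
                          opset_imports=[helper.make_opsetid("", 9)])
onnx.checker.check_model(model)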

ONNXIFI:
ONNXIFI cpp test driver (#1290)
Remove ONNXIFI_CHECK_RESULT from onnxRelease* functions (#1397)
Change onnxifi test driver classname (#1396)
Silence unused result warning in ONNXIFI wrapper cleanup. Fix #1344 (#…
[ONNXIFI]Fix gtest assert (#1482)
[ONNXIFI]Reliable memory of shape in test driver (#1480)
onnxifi test driver bugs fixed (#1462)
[ONNXIFI]gtest:expect to assert (#1456)
[ONNXIFI]Fix the crash when weightCount = 0 (#1451)
[ONNXIFI]Make TEST_P be able to show the test case name directly (#1487)
[onnxifi] Make sure that backend handles run async. (#1599)
Fix onnxifi test (#1617)

Miscellaneous:
bump up the node test to opset 9 (#1431)
remove unindexed ConstantLike test case (#1432)
Add node name for error & Fix typo (#1426)
Fix the typo in the doc (#1427)
Adding checker/typeshape inference logic for Function (#1423)
[cmake] Allow adding extra source files to the onnx lib (#1439)
Add the ability to deprecate an OpSchema (#1317)
[Anderspapitto patch] fix the shape inference for broadcasting (#1368)
external_data: Store large tensor values in separate files (#678)
Add opaque type support (#1408)
Fix checker logic (#1459)
Add version table to Versioning.md to provide a clear mapping (#1418)
serialized model data in test driver, ir version is now corrected (#1455)
refresh onnx-ml.proto (#1448)
Fix ONNX_NAMESPACE definition (#1444)
Add BFLOAT16 data type (FLOAT32 truncated to 16 bits) (#1421)
Use strings directly for casing as np.object w/o redundant StringHold
Remove default value for 'dtype' attribute in ConstantLike op. (#1461)
Fix TensorProto int32_data comment (#1509)
fix ninja external (#1507)
Shut up warnings about markers. (#1505)
add the script (#1501)
Minor cleanup in circleci build scripts (#1498)
fix onnx checker to support proto3 models. (#1495)
Add config files for CircleCI (#1490)
Change function ownership to ONNX (#1493)
maintain the integration of gtest arguments (#1491)
Skip some warning for clang-cl (#1484)
Make ONNX compatible with gcc-8 (#1488)
Build with old version protobuf on Windows (#1486)
Clean memory when failed test (#1476)
Change Function registry flow; Get rid of whole-archive in compile (#…
fix the bug of loading model input/output proto (#1477)
Operator set versioning - tighten wording regarding breaking changes (#…
add skip in gtest & update gtest version (#1473)
Opaque type ToString() does not wrap the result into the supplied (#1468)
Fix compiler warnings on unhandled bfloat16 switch case (#1470)
Move the definition of the singleton DomainToVersionRange to .cc file (
fix some issue with namespace (#1533)
Remove Opaque type parameters as not needed. Adjust DataType handling. (
Use vector instead of set to keep the order of the opt passes (#1524)
Pin awscli to last known good version (#1518)
Update docker image version used in CircleCI (#1511)
Fix the mapping for Complex128 data type (#1422)
add default value to doc (#1410)
Fixup handling of captured values as graph outputs (#1411)
[build] Add ONNX_API for protos in all cases (#1407)
[compiler flag] Issue a warning if class has virtual method but missi…
Add a virtual destructor to GraphInferencer (#1574)
Add Scan type/shape inferencing (#1503)
Add hook to InferenceContext to allow running type/shape inferencing … (
Implemented shape inference for Gather (#1525)
add eliminate nop monotone argmax pass (#1519)
Enable -Wall -Wextra -Werror for CI (#1547)
Introduce SparseTensor ML proto (#1554)
In driver test check the return status of onnxGetBackendIDs (#1597)
Make CI log less verbose (#1595)
Loop type shape inferencing (#1591)
add uint8 (#1590)
Add domain as an optional parameter for make_node function (#1588)
Remove unreachable code in shape_inference.h (#1585)
fix a newline in Scan doc (#1541)
allow variadic parameters of different types (#1615)
Fix a bug in vector address access (#1598)
Handle new types in the switch. (#1608)
Bump docker image version to 230 used in CircleCI (#1606)
type proto does not exactly match the type str, (#1545)
Fix 'line break after binary operator' flake8 warnings. (#1550)
remove inappropriate consts (#1632)
Shape inference fix for broadcast, concat and scan (#1594)
mark PROTOBUF_INCLUDE_DIRS as BUILD_INTERFACE (#1466)
Add a capability to input/output unicode strings (#1734)
Include guidance on adding new operators (#1416)
Clarify namescopes in the presence of nested subgraphs (#1665)
use an empty initializer to create map (#1643)
Remove redundant const (#1639)
Show the op's type and name when the shape inference is failed. (#1623)
link the tutorial (#1650)
Upgrade label encoder to support more input types (#1596)
Add Doc about Adding New Operator into ONNX (#1647)
Fix unused var warning (#1669)
Changes done internally at Facebook (#1668)
Replace np.long by np.int64 (#1664)
Infer shape from data in Constant nodes (#1667)
fix the const map initialization (#1662)
Add scan test case (#1586)
Add bfloat16 support. (#1699)
ONNX does not maintain versions for experimental ops (#1696)
Correct type of value_info in Graph (#1694)
Fix typos (#1686)
Use int instead of enum to store data type (#1626)
fix broken link in VersionConverter.md (#1683)
add a shape inference test for group conv (#1719)
Set symbol visibility to hidden for non-Windows (#1707)
[Minor] Fix Windows line ending in test coverage generating script (#…
Support rtol and atol at the model granularity (#1723)
turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion
typos fixed: iutput -> input (#1726)
print some information (#1724)
Update README.md (#1722)
Handle negative axis in scan shape inference (#1748)
remove stale test cases (#1434)
Show string names of data types instead of int IDs (#1749)
Relax constraint that the initializers must be a subset of graph inputs (#1718)
Fix typo in scan shape inferencing (#1753)
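
Since a large share of the commits above extend type/shape inferencing (Scan, Loop, If, Gather, broadcasting, and more), here is a minimal sketch of invoking it from Python; the model path is a placeholder:

import onnx
from onnx import shape_inference

# Run ONNX's type/shape inference over a loaded model and inspect the
# per-node value_info it populates with inferred types and shapes.
model = onnx.load("model.onnx")
inferred = shape_inference.infer_shapes(model)
print(inferred.graph.value_info)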

Cheers!
-The ONNX Team
