ONNX v1.7.0

ONNX v1.7 is now available with exciting new features! We would like to thank everyone who contributed to this release! You may learn more about the project, who is involved and what tools are available at the onnx.ai site.

Change Log

Major changes and updates since the v1.6.0 release:

Training Support, as a tech preview

  • A set of new training features is introduced to represent neural network models in the process of being trained.
  • A new protobuf message, TrainingInfoProto, stores training information (the training algorithm and initialization). New operators Gradient and GraphCall, and new functions describing the most commonly used loss functions and optimizers, are also added, all in the domain ai.onnx.preview.training.
  • The new spec allows one to create a model training task, or a partially trained model, in one framework, export it to ONNX, and load it into a runtime or another framework where training can proceed, with the expectation of a theoretically similar outcome to training in the original framework. A minimal sketch of attaching training information to a model follows this list.
  • Note that the converters do not support training yet. The goal of this tech preview is to exercise the new spec and to enable converters to add full support in future releases.
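
The following is a minimal sketch, not taken from the release itself, of attaching a TrainingInfoProto to a model with the Python helper API. The graph contents, the names X, Y, W and W_new, and the single initialization binding are illustrative assumptions; a real training task would also populate the algorithm graph and update bindings.

import onnx
from onnx import TensorProto, helper

# Inference graph: Y = X + W, with W stored as an initializer.
W = helper.make_tensor("W", TensorProto.FLOAT, [1], [0.0])
graph = helper.make_graph(
    nodes=[helper.make_node("Add", ["X", "W"], ["Y"])],
    name="inference_graph",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1])],
    initializer=[W],
)

# Initialization graph: produces a value (W_new) used to (re)set W before training.
init_graph = helper.make_graph(
    nodes=[helper.make_node(
        "Constant", [], ["W_new"],
        value=helper.make_tensor("W_init", TensorProto.FLOAT, [1], [0.0]),
    )],
    name="init_graph",
    inputs=[],
    outputs=[helper.make_tensor_value_info("W_new", TensorProto.FLOAT, [1])],
)

# Attach the training information; initialization_binding maps the stateful
# initializer W in the main graph to the initialization graph's output W_new.
training_info = onnx.TrainingInfoProto()
training_info.initialization.CopyFrom(init_graph)
binding = training_info.initialization_binding.add()
binding.key, binding.value = "W", "W_new"

model = helper.make_model(
    graph,
    opset_imports=[
        helper.make_opsetid("", 12),                         # default ai.onnx opset
        helper.make_opsetid("ai.onnx.preview.training", 1),  # preview training opset
    ],
)
model.training_info.append(training_info)
onnx.save(model, "add_with_training_info.onnx")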

Operator changes

  • Opset has been updated to version 12.

  • Preview training opset has been added as version 1.

  • New operators:

  • Updated operators:

  • General Features

    • Operator registration APIs are updated to support dynamic function body (sub-graph) registration.

      The registration entry point is declared in onnx/onnx/defs/schema.h (line 674 at commit d343755):

      OpSchema& SetContextDependentFunctionBodyBuilder(ContextDependentFunctionBodyBuilder);
    • Functions’ body graphs are extended so that they can rely on multiple external operator sets: https://github.com/onnx/onnx/blob/master/onnx/onnx-operators.proto#L77
    • Some of the newly added operators (for example, all of the loss functions) are actually “functions”, since adding functions is strongly preferred over adding new primitive ops.
    • The model checker is enhanced (#2367); see the sketch after this list. It now:
      • Calls shape inference to perform the extra checking done by the type-and-shape-inference methods of ops
      • Checks that the typing constraints specified by the op schema are satisfied
      • Infers output types of nodes from the typing constraints specified by the op schema
    • Documentation enhancement
      • Add function description in IR.md (#2596)
      • Add external tensor data in IR.md (#2323)
      • Update documentation for Split (#2544), QLinearConv (#2464), Loop (#2337), NonZero and Slice (#2429)
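
As a rough illustration of the checker and shape-inference interplay described above (a sketch under assumed graph and tensor names, not code from the release), the example below builds a tiny Relu model, runs the checker, and lets shape inference derive the output shape from the op schema:

import onnx
from onnx import TensorProto, checker, helper, shape_inference

# A small graph whose output shape is deliberately left unspecified.
graph = helper.make_graph(
    nodes=[helper.make_node("Relu", ["X"], ["Y"])],
    name="relu_graph",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [3, 4])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 12)])

# Structural checking plus the op-schema typing constraints.
checker.check_model(model)

# Shape inference propagates types and shapes from the op schemas; the checker
# enhancements call into the same machinery for extra validation.
inferred = shape_inference.infer_shapes(model)
print(inferred.graph.output[0].type)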

Bug fixes

  • Fix the attribute types section in IR.md (#2590)
  • Fix a bug in ScatterND shape inference (#2577)
  • Copy sizes in some optimizers to retain shape information (#2574)
  • Fix the intermediate zero calculation for DynamicQuantizeLinear (#2556)
  • Fix Slice op’s shape inference logic (#2526)
  • Correct the order of arguments of InferShapes (#2500)
  • Fix the optimize pass of fuse_consecutive_transposes (#2471)
  • Fix fuse_consecutive_concat order bug in onnx optimizer (#2447)
  • Keep symbolic dims in Concat with a single input (#2418)
  • Fix broken error message string formatting in softmax shape inferencing (#2403)
  • Fix bug in function body verifier (#2390)
  • Fix shape inference for Split with split attribute (#2328)

Installation

You can upgrade with pip using the following command, or build from source by following the instructions on GitHub.

pip install onnx --upgrade
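
To confirm the upgrade took effect, a quick sanity check (assuming a standard installation) is to print the installed version and the default-domain opset:

import onnx
from onnx import defs

print(onnx.__version__)           # expected: 1.7.0
print(defs.onnx_opset_version())  # expected: 12 for this release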

Commits and Pull Requests Since v1.6.0

You can find all the commits and pull requests on GitHub: https://github.com/onnx/onnx/pulls?q=is%3Apr+milestone%3A1.7+

Additional Notes

Python 2.7 support will be deprecated in the ONNX 1.8 release. Please plan accordingly.
