Major changes and updates since v1.6.0 release:
Export model in SavedModel format
- API: `onnx_tf.backend_rep.TensorflowRep.export_graph` and CLI: `convert` now create a TensorFlow SavedModel that users can deploy in TensorFlow.
Automatic data type casting for data types not natively supported by TensorFlow
- Users can set `auto_cast=True` in API: `onnx_tf.backend.prepare` or CLI: `convert` to enable this auto-cast feature.
Convert models to run in a CPU or GPU environment based on user input
- Users can set `device='CPU'` (default) or `device='CUDA'` in API: `onnx_tf.backend.prepare` or CLI: `convert` to choose the model inference environment.
Support Opset 12 operators
- All Opset 12 operators are supported except training ops; please refer to support_status_v1_7_0.md for details.
Create graphs using tf.function (recommended in TF 2.x) instead of tf.Graph (deprecated in TF 2.x)
- Used tf.Module as the base class of the converted model
- Used tf.function to generate the graph automatically
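The pattern can be illustrated in plain TensorFlow; this is a sketch of the approach (a `tf.Module` subclass whose forward pass is traced by `tf.function`), not the converter's actual class:

```python
import tensorflow as tf

class ConvertedModel(tf.Module):
    """Sketch: a tf.Module holding the model state, with the
    forward pass wrapped in tf.function for automatic graph tracing."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([4, 2]), name="w")

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        # tf.function traces this Python code into a graph automatically,
        # replacing the explicit tf.Graph/Session workflow of TF 1.x.
        return tf.matmul(x, self.w)

m = ConvertedModel()
out = m(tf.zeros([1, 4]))
```

Because the module is a `tf.Module` with a concrete `input_signature`, it can be passed directly to `tf.saved_model.save`, which is what makes the SavedModel export above possible.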
Define a template to compare inference results with other backends
- Added a model stepping test for the MNIST model to compare inference results with ONNX Runtime.
- Migrated Travis CI from travis.org to travis.com.
- Updated CI to skip unsupported operators and allow failures against the latest ONNX master branch.