BentoML - 1.0.1


🍱 We have just released BentoML v1.0.1 with a number of features and bug fixes requested by the community.
  • Added support for custom model versions, e.g. bentoml.tensorflow.save_model("model_name:1.0.2", model) (see the model-version sketch after this list).
  • Fixed a PyTorch Runner payload serialization issue caused by tensors not being on the CPU (see the CPU-conversion sketch after this list).
    TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
  • Fixed Transformers GPU device assignment caused by incorrect kwargs handling.
  • Fixed an issue where excessive Runner threads were spawned under high load.
  • Fixed a PyTorch Runner inference error caused by saving tensors created in inference mode (see the inference-mode sketch after this list).
    RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
  • Fixed a Keras Runner error when the input contains only a single element.
  • Deprecated the validate_json option in the JSON IO descriptor; validation logic should instead be specified natively on the Pydantic model (see the Pydantic sketch after this list).
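
Model-version sketch: a minimal example of saving a model under a custom version using the bentoml.tensorflow.save_model call quoted above. The model, name, and version string are placeholders.

```python
import bentoml
import tensorflow as tf

# A trivial Keras model used purely for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Passing "name:version" pins a custom version string instead of the
# auto-generated tag; "my_model" and "1.0.2" are placeholders.
bento_model = bentoml.tensorflow.save_model("my_model:1.0.2", model)
print(bento_model.tag)  # my_model:1.0.2
```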
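CPU-conversion sketch: the fix itself is internal to the PyTorch Runner, but the TypeError quoted above comes from plain PyTorch, where a CUDA tensor must be copied to host memory before it can be converted to NumPy. A framework-only sketch of that pattern, with no BentoML-specific API:

```python
import numpy as np
import torch

def to_numpy(tensor: torch.Tensor) -> np.ndarray:
    # Tensor.numpy() only works on CPU tensors; calling it on a CUDA tensor
    # raises the TypeError quoted above, so copy to host memory first.
    return tensor.detach().cpu().numpy()

t = torch.ones(3, device="cuda" if torch.cuda.is_available() else "cpu")
print(to_numpy(t))
```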
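Inference-mode sketch: the RuntimeError quoted above is raised by PyTorch itself, and the workaround the error message suggests is to clone the inference tensor to obtain a normal tensor. Again a plain-PyTorch sketch, unrelated to any BentoML-specific API:

```python
import torch

model = torch.nn.Linear(4, 2)

# Tensors produced under inference_mode cannot be saved for backward.
with torch.inference_mode():
    out = model(torch.randn(1, 4))

# Cloning outside the inference_mode context yields a normal tensor that
# can be used in autograd.
safe_out = out.clone()
print(safe_out.is_inference())  # False
```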
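Pydantic sketch: with validate_json deprecated, the recommended approach is to attach validation directly to the Pydantic model passed to the JSON IO descriptor. A minimal sketch of a service doing this; the service name, fields, and validation rule are placeholders, and the endpoint returns a dummy result instead of calling a runner.

```python
import bentoml
from bentoml.io import JSON
from pydantic import BaseModel, validator


class IrisFeatures(BaseModel):
    sepal_len: float
    petal_len: float

    # Validation lives on the Pydantic model instead of validate_json.
    @validator("sepal_len", "petal_len")
    def must_be_positive(cls, value):
        if value <= 0:
            raise ValueError("measurements must be positive")
        return value


svc = bentoml.Service("iris_classifier")


@svc.api(input=JSON(pydantic_model=IrisFeatures), output=JSON())
def classify(features: IrisFeatures) -> dict:
    # Placeholder logic; a real service would dispatch to a runner here.
    return {"sepal_len": features.sepal_len, "petal_len": features.petal_len}
```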

🎨 We added an examples directory where you will find sample projects demonstrating various applications of BentoML. Contributions are welcome if you have a project idea you would like to share with the community.

💡 We continue to update the documentation on every release to help our users unlock the full power of BentoML.

What's Changed

New Contributors

Full Changelog: https://github.com/bentoml/BentoML/compare/v1.0.0...v1.0.1
