BentoML - 1.0.0-rc3

We have just released BentoML 1.0.0rc3 with a number of highly anticipated features and improvements. Check it out with the following command!

$ pip install -U bentoml --pre

⚠️ BentoML will release the official 1.0.0 version next week, after which the --pre flag will no longer be needed to install versions 1.0.0 and above. If you wish to stay on the 0.13.1 LTS version, please lock the dependency with bentoml==0.13.1.

  • Added runner support for additional ML frameworks.
  • Added support for Hugging Face Transformers custom pipelines (see the pipeline sketch after this list).
  • Fixed a logging issue that prevented the api_server and runners from generating error logs.
  • Optimized the TensorFlow inference procedure.
  • Improved resource request configuration for runners.
    • Resource requests can now be configured in the BentoML configuration. If unspecified, runners will be scheduled to best utilize the available system resources.

      runners:
        resources:
          cpu: 8.0
          nvidia.com/gpu: 4.0
    • Updated the API for custom runners to declare the types of supported resources (a fuller sketch follows this list).

      import bentoml

      class MyRunnable(bentoml.Runnable):
          SUPPORTS_CPU_MULTI_THREADING = True  # Deprecated SUPPORT_CPU_MULTI_THREADING
          SUPPORTED_RESOURCES = ("nvidia.com/gpu", "cpu")  # Deprecated SUPPORT_NVIDIA_GPU
          ...

      my_runner = bentoml.Runner(
          MyRunnable,
          runnable_init_params={"foo": foo, "bar": bar},
          name="custom_runner_name",
          ...
      )
    • Deprecated specifying resources through the framework to_runner() and custom Runner APIs. For better flexibility at runtime, it is recommended to specify resources through configuration instead.
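
To make the new attributes concrete, here is a minimal sketch of a complete custom runner using the updated API. The init parameters, method name, and inference logic are illustrative placeholders rather than part of the release.

    import bentoml

    class MyRunnable(bentoml.Runnable):
        # Declare which resource types this runnable can use and whether it is
        # safe to run with multiple CPU threads (new attributes in this release).
        SUPPORTED_RESOURCES = ("nvidia.com/gpu", "cpu")
        SUPPORTS_CPU_MULTI_THREADING = True

        def __init__(self, foo, bar):
            # Placeholder initialization; a real runnable would load its model here.
            self.foo = foo
            self.bar = bar

        @bentoml.Runnable.method(batchable=False)
        def predict(self, input_data):
            # Placeholder inference logic.
            return {"foo": self.foo, "bar": self.bar, "input": input_data}

    my_runner = bentoml.Runner(
        MyRunnable,
        runnable_init_params={"foo": "foo_value", "bar": "bar_value"},
        name="custom_runner_name",
    )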
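
As a companion to the Transformers item above, here is a hedged sketch of how a custom Hugging Face pipeline might be registered, saved, and turned into a runner. The pipeline class, task name, and checkpoint are illustrative, and the exact save_model arguments required for custom pipelines may differ.

    import bentoml
    from transformers import AutoModelForSequenceClassification, Pipeline, pipeline
    from transformers.pipelines import PIPELINE_REGISTRY

    class PairClassificationPipeline(Pipeline):
        """Toy custom pipeline that scores a pair of sentences."""

        def _sanitize_parameters(self, **kwargs):
            preprocess_kwargs = {}
            if "second_text" in kwargs:
                preprocess_kwargs["second_text"] = kwargs["second_text"]
            return preprocess_kwargs, {}, {}

        def preprocess(self, text, second_text=None):
            return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework)

        def _forward(self, model_inputs):
            return self.model(**model_inputs)

        def postprocess(self, model_outputs):
            return model_outputs.logits.softmax(-1).tolist()

    # Register the custom task with Transformers and build a pipeline instance.
    PIPELINE_REGISTRY.register_pipeline(
        "pair-classification",
        pipeline_class=PairClassificationPipeline,
        pt_model=AutoModelForSequenceClassification,
    )
    pipe = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc")

    # Assumption: the custom pipeline can be saved like any other pipeline;
    # additional arguments may be required for custom tasks.
    bentoml.transformers.save_model("pair_classifier", pipe)
    runner = bentoml.transformers.get("pair_classifier:latest").to_runner()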

What's Changed

New Contributors

Full Changelog: v1.0.0-rc2...v1.0.0-rc3
