pypi huggingface-hub 0.21.0
v0.21.0: dataclasses everywhere, file-system, PyTorchModelHubMixin, serialization and more.


Discuss this release in our Community Tab. Feedback welcome! 🤗

🖇️ Dataclasses everywhere!

All objects returned by the HfApi client are now dataclasses!

In the past, objects were variously dataclasses, typed dictionaries, untyped dictionaries, or plain classes. This is now all harmonized, with the goal of improving the developer experience.

Kudos to the community for implementing and testing the harmonization process. Thanks again for the contributions!

💾 FileSystem

The HfFileSystem class implements the fsspec interface, allowing files to be read and written with a filesystem-like API. The interface is heavily used by the datasets library, and this release further improves the efficiency and robustness of the integration.
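As a quick sketch of the fsspec-style interface (using the public gpt2 model repo purely for illustration; dataset repos take a "datasets/" path prefix):

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# List files in a public model repo; dataset paths would look like
# "datasets/<user>/<repo>".
files = fs.ls("gpt2", detail=False)

# Read a file directly, as with any fsspec-compatible filesystem
with fs.open("gpt2/config.json", "r") as f:
    config_text = f.read()
```

Because it is a standard fsspec implementation, the same object can be handed to any library that accepts an fsspec filesystem.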

🧩 Pytorch Hub Mixin

The PyTorchModelHubMixin class lets you upload any PyTorch model to the Hub in a few lines of code. More precisely, it can be inherited by any nn.Module subclass to add the from_pretrained, save_pretrained and push_to_hub helpers to your class. It handles serialization and deserialization of weights and configs for you and enables download counts on the Hub.

With this release, we've fixed 2 pain points holding users back from adopting this mixin:

  1. Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
  2. Weights are now saved as .safetensors files instead of PyTorch pickles for safety reasons. Loading from older PyTorch pickles is still supported, but we are moving toward deprecating them completely (in the mid to long term).

✨ InferenceClient improvements

The audio-to-audio task is now supported by the InferenceClient!

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])

Also fixed a few things:

  • Fix intolerance for new field in TGI stream response: 'index' by @danielpcox in #2006
  • Fix optional model in tabular tasks by @Wauplin in #2018
  • Added best_of to non-TGI ignored parameters by @dopc in #1949

📤 Model serialization

With the aim of harmonizing repo structures and file serialization on the Hub, we added a new serialization module with a first helper, split_state_dict_into_shards, that takes a state dict and splits it into shards. The implementation is mostly taken from transformers and aims to be reused by other libraries in the ecosystem. It seamlessly supports torch, tensorflow and numpy weights, and can easily be extended to other frameworks.

This is a first step in the harmonization process and more loading/saving helpers will be added soon.

  • Framework-agnostic split_state_dict_into_shards helper by @Wauplin in #1938
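A minimal sketch of the torch-flavored wrapper, assuming the split_torch_state_dict_into_shards helper and the result object's filename_to_tensors / tensor_to_filename fields (the tiny max_shard_size is deliberate, to force sharding on a small example):

```python
import torch
from huggingface_hub.serialization import split_torch_state_dict_into_shards

# A toy state dict; the 512x512 float32 tensor is ~1 MB, well above the
# tiny shard size below, so the result is split across several files.
state_dict = {
    "encoder.weight": torch.randn(512, 512),
    "encoder.bias": torch.randn(512),
}

split = split_torch_state_dict_into_shards(state_dict, max_shard_size=100_000)  # bytes

# Each shard filename maps to the tensor names it should contain; actually
# writing the files is left to the caller.
for filename, tensor_names in split.filename_to_tensors.items():
    print(filename, tensor_names)
```

Note that the helper only plans the split; saving each shard to disk (and uploading it) remains the caller's responsibility.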

📚 Documentation

🌐 Translations

The community is actively working to translate the huggingface_hub docs into other languages. Docs are now available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!

Docs misc

Docs fixes

🛠️ Misc improvements

Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.

Added a revision_exists helper, working similarly to repo_exists and file_exists:

>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False

InferenceClient.wait(...) now raises an error if the endpoint is in a failed state.

Improved the progress bar shown when downloading a file.

Other stuff:

💔 Breaking changes

  • The ModelFilter and DatasetFilter classes are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to list_models and list_datasets.
>>> from huggingface_hub import list_models, ModelFilter

# use
>>> list_models(language="zh")
# instead of 
>>> list_models(filter=ModelFilter(language="zh"))

Cleaner, right? ModelFilter and DatasetFilter will remain supported until the v0.24 release.

  • In the inference client, ModelStatus.compute_type is not a string anymore but a dictionary with more detailed information (instance type + number of replicas). This breaking change reflects a server-side update.

Small fixes and maintenance

⚙️ fixes

⚙️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @2404589803
    • [i18n-CN] Translated some files to Simplified Chinese #1915 (#1916)
  • @jamesbraza
    • Added hf_transfer extra into setup.py and docs/ (#1970)
    • Finished migration from setup.cfg to pyproject.toml (#1971)
    • Documenting CLI default for download --repo-type (#1986)
    • Newer pre-commit (#1987)
    • Removed now unnecessary setup.cfg path variable (#1990)
    • Added toml-sort tool (#1972)
  • @Ahmedniz1
    • Use dataclasses for all objects returned by HfApi #1911 (#1974)
    • Updating HfApi objects to use dataclass (#1988)
    • Added audio to audio in inference client (#2020)
  • @druvdub
    • Deprecate ModelFilter/DatasetFilter (#2028)
  • @JibrilEl
    • [i18n-FR] Translated files in french and reviewed them (#2024)
  • @bmuskalla
    • Use safetensors by default for PyTorchModelHubMixin (#2033)
