pypi huggingface-hub 0.26.0
v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements


🔐 Multiple access tokens support

Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.

To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:

  • Store multiple tokens on your machine by logging in with each token using the login command:
huggingface-cli login
  • Switch between tokens and choose the one that will be used for all interactions with the Hub:
huggingface-cli auth switch
  • List available access tokens on your machine:
huggingface-cli auth list
  • Delete a specific token from your machine with:
huggingface-cli logout [--token-name TOKEN_NAME]

✅ Nothing changes if you are using the HF_TOKEN environment variable as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
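The precedence rule above can be illustrated with a minimal, self-contained sketch. This is not the library's actual implementation (use huggingface_hub itself for real token resolution), and the token file path below is only the conventional default location, assumed for illustration:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_token(stored_token_path: str = "~/.cache/huggingface/token") -> Optional[str]:
    """Sketch of the documented precedence: the HF_TOKEN environment
    variable always wins over a token stored via `huggingface-cli login`."""
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token
    token_file = Path(stored_token_path).expanduser()
    if token_file.exists():
        return token_file.read_text().strip()
    return None

# With HF_TOKEN set, any token selected via `huggingface-cli auth switch` is ignored:
os.environ["HF_TOKEN"] = "hf_from_env"
print(resolve_token())  # hf_from_env
```

In other words, CI pipelines and scripts that export HF_TOKEN keep working unchanged regardless of which token is active locally.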

⚡️ InferenceClient improvements

🖼️ Conversational VLMs support

Inference with conversational vision-language models is now supported in InferenceClient's chat completion!

from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded data URL
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)

print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
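For local images, the same chat payload accepts a base64-encoded data URL in place of the remote URL. Here is a stdlib-only sketch that builds one; the data-URL format itself is standard (RFC 2397), and passing it as the `image_url` value follows the message format shown above:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Encode raw image bytes as a base64 data URL, usable wherever a
    remote image URL is accepted in the chat message payload."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# Example with a local file:
# with open("statue.jpg", "rb") as f:
#     image_url = to_data_url(f.read())

# The JPEG magic bytes famously encode to "/9j/":
print(to_data_url(b"\xff\xd8\xff"))  # data:image/jpeg;base64,/9j/
```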

🔧 More complete support for inference parameters

You can now pass additional inference parameters to more task methods in the InferenceClient, including: image_classification, text_classification, image_segmentation, object_detection, document_question_answering and more!
For more details, visit the InferenceClient reference guide.
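Under the hood, these keyword arguments follow the Inference API's usual `inputs`/`parameters` JSON convention. The sketch below shows roughly how such a payload is assembled; the exact serialization is an assumption (check the reference guide for the real parameter names per task, e.g. `top_k` for text_classification):

```python
from typing import Any, Dict

def build_payload(inputs: Any, **parameters: Any) -> Dict[str, Any]:
    """Sketch of the inputs/parameters convention: task inputs go under
    "inputs", optional task parameters under "parameters". Parameters
    left as None are omitted from the request body."""
    payload: Dict[str, Any] = {"inputs": inputs}
    params = {k: v for k, v in parameters.items() if v is not None}
    if params:
        payload["parameters"] = params
    return payload

# e.g. roughly what text_classification("I love this!", top_k=2) would send:
print(build_payload("I love this!", top_k=2))
# {'inputs': 'I love this!', 'parameters': {'top_k': 2}}
```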

✅ Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗

  • Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
  • [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
  • Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569

✨ HfApi

update_repo_settings can now be used to switch visibility status of a repo. This is a drop-in replacement for update_repo_visibility which is deprecated and will be removed in version v0.29.0.

- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
  • Feature: switch visibility with update_repo_settings by @WizKnight in #2541

📄 Daily papers API is now supported in huggingface_hub, enabling you to search for papers on the Hub and retrieve detailed paper information.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")

🌐 📚 Documentation

Thanks to efforts from the Tamil-speaking community, the guides and package reference are being translated into Tamil! Check out the result here.

  • Translated index.md and installation.md to Tamil by @Raghul-M in #2555

💔 Breaking changes

A few breaking changes have been introduced:

  • cached_download(), url_to_filename(), filename_to_url() methods are now completely removed. From now on, you will have to use hf_hub_download() to benefit from the new cache layout.
  • legacy_cache_layout argument from hf_hub_download() has been removed as well.

These breaking changes have been announced with a regular deprecation cycle.

Also, all templating-related utilities have been removed from huggingface_hub. Client-side templating is no longer necessary now that all conversational text-generation models in the Inference API are served with TGI.

  • Prepare for release 0.26 by @hanouticelina in #2579
  • Remove templating utility by @Wauplin in #2611

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 fixes

🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release: