🚀 Ready. Xet. Go!
This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.
You can start using Xet today by installing the optional dependency:
```sh
pip install -U huggingface_hub[hf_xet]
```
With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.
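Once installed, downloads go through the same APIs as before. A minimal sketch (the repo ID is illustrative; whether a given repo is Xet-enabled depends on the rollout):

```python
from huggingface_hub import hf_hub_download

# If the repo is Xet-enabled and `hf_xet` is installed, the file is fetched
# via the Xet protocol; otherwise the regular download path is used.
path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)
```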
Blog post: Xet on the Hub
Docs: Storage backends → Xet
Tip
Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so `hf_xet` uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!
This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!
- Xet download workflow by @hanouticelina in #2875
- Add ability to enable/disable xet storage on a repo by @hanouticelina in #2893
- Xet upload workflow by @hanouticelina in #2887
- Xet Docs for huggingface_hub by @rajatarya in #2899
- Adding Token Refresh Xet Tests by @rajatarya in #2932
- Using a two stage download path for xet files. by @bpronan in #2920
- add `xetEnabled` as an expand property by @hanouticelina in #2907 (see the example below)
- Xet integration by @Wauplin in #2958
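As a sketch of that expand property: `"xetEnabled"` can be requested when fetching repo metadata. The attribute name on the returned object is an assumption here:

```python
from huggingface_hub import HfApi

api = HfApi()
# "xetEnabled" is the new expand property from #2907; the Python-side
# attribute name (`xet_enabled`) is an assumption in this sketch.
info = api.model_info("username/my-cool-model", expand=["xetEnabled"])
print(info.xet_enabled)
```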
⚡ Enhanced InferenceClient
The `InferenceClient` has received significant updates and improvements in this release, making it more robust and easier to work with.
We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.
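Using them is as simple as switching the `provider` argument. A minimal sketch (the model ID is illustrative and must be one served by the chosen provider):

```python
from huggingface_hub import InferenceClient

# provider="cohere" works the same way; the model ID below is illustrative
client = InferenceClient(provider="cerebras")
completion = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Why is chunk-level deduplication useful?"}],
)
print(completion.choices[0].message.content)
```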
- Add Cohere as an Inference Provider by @alexrs-cohere in #2888
- Add Cerebras provider by @Wauplin in #2901
- remove cohere from testing and fix quality by @hanouticelina in #2902
Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")
video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)
```
- [Inference Providers] Add text-to-video support for Novita by @hanouticelina in #2922
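The call returns the generated video as raw bytes, so persisting it is straightforward:

```python
# `text_to_video` returns the video as raw bytes
with open("video.mp4", "wb") as f:
    f.write(video)
```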
It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budgets and set limits at a team level. The organization must be subscribed to Enterprise Hub.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
```
Handling long-running inference tasks just got easier! To prevent request timeouts, we’ve introduced asynchronous calls for text-to-video inference. We expect more providers to adopt the same structure soon, ensuring better robustness and developer experience.
- [Inference Providers] Async calls for fal.ai by @hanouticelina in #2927
- update polling interval by @hanouticelina in #2937
- [Inference Providers] Fix status and response URLs when polling text-to-video results with fal-ai by @hanouticelina in #2943
Miscellaneous improvements:
- [Bot] Update inference types by @HuggingFaceInfra in #2832
- Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted by @abidlabs in #2853
- [Inference providers] Root-only base URLs by @Wauplin in #2918
- Add prompt in image_to_image type by @Wauplin in #2956
- [Inference Providers] fold OpenAI support into `provider` parameter by @hanouticelina in #2949
- clean up some inference stuff by @Wauplin in #2941
- regenerate cassettes by @hanouticelina in #2925
- Fix payload model name when model id is a URL by @hanouticelina in #2911
- [InferenceClient] Fix token initialization and add more tests by @hanouticelina in #2921
- [Inference Providers] check inference provider mapping for HF Inference API by @hanouticelina in #2948
✨ New Features and Improvements
This release also includes several other notable features and improvements.
It's now possible to pass a path with a wildcard to the upload command instead of using the `--include=...` option:

```sh
huggingface-cli upload my-cool-model *.safetensors
```
- Added support for Wildcards in huggingface-cli upload by @devesh-2002 in #2868
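The same filtering is available from the Python API via the existing `allow_patterns` argument; a minimal sketch (the repo ID is illustrative):

```python
from huggingface_hub import HfApi

api = HfApi()
# upload only the safetensors files from the current folder
api.upload_folder(
    folder_path=".",
    repo_id="username/my-cool-model",
    allow_patterns="*.safetensors",
)
```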
Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.
```python
from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)
```
The `ModelHubMixin` got two small updates:

- authors can provide a paper URL that will be added to all model cards pushed by the library.
- dataclasses are now supported for any init arg (previously only `config`).
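A minimal sketch combining both updates; the `paper_url` kwarg name is an assumption based on the PRs below, and the URL is a placeholder:

```python
from dataclasses import dataclass
from huggingface_hub import ModelHubMixin

@dataclass
class TrainingConfig:
    learning_rate: float = 1e-3
    batch_size: int = 32

class MyModel(
    ModelHubMixin,
    # assumed class kwarg from #2917; URL is a placeholder
    paper_url="https://arxiv.org/abs/xxxx.xxxxx",
):
    # dataclass init args are now (de)serialized automatically,
    # not only the special `config` argument
    def __init__(self, training: TrainingConfig):
        self.training = training
```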
- Add paper URL to hub mixin by @NielsRogge in #2917
- [HubMixin] handle dataclasses in all args, not only 'config' by @Wauplin in #2928
You can now sort by name, size, last updated, and last used when using the `delete-cache` command:

```sh
huggingface-cli delete-cache --sort=size
```
- feat: add `--sort` arg to `delete-cache` to sort by size by @AlpinDale in #2815
Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```
Warning
This is a power-user tool, to be used carefully. Deleting LFS files from a repo is an irreversible action.
💔 Breaking Changes
`labels` has been removed from the `InferenceClient.zero_shot_classification` and `InferenceClient.zero_shot_image_classification` tasks in favor of `candidate_labels`. This removal was preceded by a proper deprecation warning.
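Migrating is a matter of renaming the keyword argument:

```python
from huggingface_hub import InferenceClient

client = InferenceClient()
# Before (removed): client.zero_shot_classification(text, labels=[...])
# After:
result = client.zero_shot_classification(
    "I love the new Xet storage!",
    candidate_labels=["positive", "negative"],
)
```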
🛠️ Small Fixes and Maintenance
🐛 Bug and Typo Fixes
- Fix revision bug in _upload_large_folder.py by @yuantuo666 in #2879
- bug fix in inference_endpoint wait function for proper waiting on update by @Ajinkya-25 in #2867
- Update SpaceHardware enum by @Wauplin in #2891
- Fix: Restore sys.stdout in notebook_login after error by @LEEMINJOO in #2896
- Remove link to unmaintained model card app Space by @davanstrien in #2897
- Fixing a typo in chat_completion example by @Wauplin in #2910
- chore: Link to Authentication by @FL33TW00D in #2905
- Handle file-like objects in curlify by @hanouticelina in #2912
- Fix typos by @omahs in #2951
- Add expanduser and expandvars to path envvars by @FredHaa in #2945
🏗️ Internal
Thanks to the work previously introduced by the `diffusers` team, we've published a GitHub Action that runs code style tooling on demand on pull requests, making the lives of contributors and reviewers easier.
- add style bot GitHub action by @hanouticelina in #2898
- fix style bot GH action by @hanouticelina in #2906
- Fix bot style GH action (again) by @hanouticelina in #2909
Other minor updates:
- Fix prerelease CI by @Wauplin in #2877
- Update update-inference-types.yaml by @Wauplin in #2926
- [Internal] Fix check parameters script by @hanouticelina in #2957
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @Ajinkya-25
- bug fix in inference_endpoint wait function for proper waiting on update (#2867)
- @abidlabs
- Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted (#2853)
- @devesh-2002
- Added support for Wildcards in huggingface-cli upload (#2868)
- @alexrs-cohere
- Add Cohere as an Inference Provider (#2888)
- @NielsRogge
- Add paper URL to hub mixin (#2917)
- @AlpinDale
- feat: add `--sort` arg to `delete-cache` to sort by size (#2815)
- @FredHaa
- Add expanduser and expandvars to path envvars (#2945)
- @omahs
- Fix typos (#2951)