🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI
✨ The `huggingface_hub` library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the `InferenceClient` and provides a seamless way to connect LLMs to both local and remote tool servers!
```
pip install -U huggingface_hub[mcp]
```
In the following example, we use the `Qwen/Qwen2.5-72B-Instruct` model via the Nebius inference provider. We then add a remote MCP server, in this case an SSE server that exposes the Flux image generation tool to the LLM:
```python
import asyncio
import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient


async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")

        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]

        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")
            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )


if __name__ == "__main__":
    asyncio.run(main())
```
For even simpler development, we now also offer a higher-level `Agent` class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around `MCPClient`: a simple while loop built right on top of it.
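As a rough sketch of what that looks like in code (hedged: the exact `Agent` constructor, the shape of the `servers` entries, and the `load_tools()`/`run()` helpers below are based on the tiny-agents examples and should be treated as assumptions, not the definitive API):

```python
import asyncio
import os

from huggingface_hub import Agent


async def main():
    # Assumption: Agent mirrors MCPClient's inference arguments and adds
    # a list of MCP server specs (same shape as in an agent.json file).
    agent = Agent(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
        servers=[
            {"type": "sse", "config": {"url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}}
        ],
    )
    async with agent:
        await agent.load_tools()  # connect to the configured MCP servers
        async for chunk in agent.run("Generate a picture of a cat on the moon"):
            # Stream assistant text as it arrives; tool-call chunks are skipped here.
            if hasattr(chunk, "choices") and chunk.choices and chunk.choices[0].delta.content:
                print(chunk.choices[0].delta.content, end="")


if __name__ == "__main__":
    asyncio.run(main())
```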
You can run these Agents directly from the command line:
```
> tiny-agents run --help

 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

 Run the Agent in the CLI

╭─ Arguments ───────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a     │
│                     built-in agent stored in the 'tiny-agents/tiny-agents'        │
│                     Hugging Face dataset                                          │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)     │
╰───────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                       │
╰───────────────────────────────────────────────────────────────────────────────────╯
```
You can run these Agents using your own local configs or load them directly from the tiny-agents/tiny-agents Hugging Face dataset.
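A local agent is just a folder containing an `agent.json` file. Here is a minimal sketch that writes one mirroring the MCP example above; the `model`/`provider`/`servers` layout is an assumption based on the examples in the tiny-agents dataset, so check those for the canonical format:

```python
import json
from pathlib import Path

# Hypothetical local agent config (assumed layout; see the tiny-agents dataset).
config = {
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {"type": "sse", "config": {"url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"}}
    ],
}

Path("my-agent").mkdir(exist_ok=True)
(Path("my-agent") / "agent.json").write_text(json.dumps(config, indent=2))
# Then run it with: tiny-agents run ./my-agent
```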
This is an early version of the `MCPClient`, and community contributions are welcome 🤗
- [MCP] Add documentation by @hanouticelina in #3102
- [MCP] add support for SSE + HTTP by @Wauplin in #3099
- [MCP] Tiny Agents in Python by @hanouticelina in #3098
- PoC: `InferenceClient` is also a `MCPClient` by @julien-c in #2986
⚡ Inference Providers
Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!
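For example, a minimal sketch (the embedding model below is an arbitrary choice and assumes it is served by Nebius):

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius", api_key=os.environ["HF_TOKEN"])

# Returns a numpy array of embeddings for the input text.
embeddings = client.feature_extraction(
    "Today is a sunny day.",
    model="BAAI/bge-multilingual-gemma2",  # assumption: an embedding model available on Nebius
)
print(embeddings.shape)
```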
We're thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥
We also fixed compatibility issues with structured outputs across providers by ensuring the `InferenceClient` follows the OpenAI API specification for structured outputs.
- [Inference Providers] Fix structured output schema in chat completion by @hanouticelina in #3082
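Concretely, this means `response_format` accepts the OpenAI-style `json_schema` payload regardless of provider. A minimal sketch (the model and schema below are illustrative):

```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(provider="nebius", api_key=os.environ["HF_TOKEN"])

response = client.chat_completion(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Give me one fact about cats."}],
    # OpenAI-style structured output specification
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "cat_fact",
            "schema": {
                "type": "object",
                "properties": {"fact": {"type": "string"}},
                "required": ["fact"],
            },
        },
    },
)
print(response.choices[0].message.content)  # a JSON string matching the schema
```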
💾 Serialization
We've introduced a new `@strict` decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:
```python
from dataclasses import dataclass

from huggingface_hub.dataclasses import strict, as_validated_field


# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")


@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")


config = Config(model_type="bert", hidden_size=24)  # Valid
config = Config(model_type="bert", hidden_size=-1)  # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)  # Raises StrictDataclassClassValidationError
```
This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.
This release also brings support for `DTensor` in the `_get_unique_id` / `get_torch_storage_size` helpers, allowing `transformers` to seamlessly use `save_pretrained` with `DTensor`.
✨ HF API
When creating an Endpoint, the default for `scale_to_zero_timeout` is now `None`, meaning endpoints will no longer scale to zero by default unless explicitly configured.
- Dont set scale to zero as default when creating an Endpoint by @tomaarsen in #3062
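To keep the previous behavior, pass a timeout explicitly when creating the Endpoint. A minimal sketch (the endpoint name and hardware values are illustrative):

```python
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint-name",
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    scale_to_zero_timeout=15,  # opt back in: scale to zero after 15 minutes of inactivity
)
```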
We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.
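A minimal sketch of those helpers in a FastAPI app (hedged: the helpers are experimental, so names and behavior may still change):

```python
from fastapi import FastAPI, Request
from huggingface_hub import attach_huggingface_oauth, parse_huggingface_oauth

app = FastAPI()
attach_huggingface_oauth(app)  # registers the OAuth login/logout/callback routes on the app


@app.get("/")
def greet(request: Request):
    oauth_info = parse_huggingface_oauth(request)  # None if the user is not logged in
    if oauth_info is None:
        return "Not logged in!"
    return f"Hello, {oauth_info.user_info.preferred_username}!"
```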
📚 Documentation
We now have much more detailed documentation for Inference! It includes clearer explanations and examples showing that the `InferenceClient` can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).
- [Inference] Mention local endpoints inference + remove separate HF Inference API mentions by @hanouticelina in #3085
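For instance, the same client can target a locally running OpenAI-compatible server. A minimal sketch (the URL is an assumption; it presumes a llama.cpp or vLLM server listening on that port):

```python
from huggingface_hub import InferenceClient

# Point the client at a local server instead of a remote provider.
client = InferenceClient(base_url="http://localhost:8080")  # assumption: local server address

response = client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```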
🛠️ Small fixes and maintenance
😌 QoL improvements
- bump hf-xet min version by @hanouticelina in #3078
- Add `api.endpoint` to arguments for `_get_upload_mode` by @matthewgrossman in #3077
- surface 401 unauthorized errors more directly in snapshot_download by @hanouticelina in #3092
🐛 Bug and typo fixes
- [HfFileSystem] Fix end-of-file `read()` by @lhoestq in #3080
- [Inference Endpoints] fix inference endpoint creation with custom image by @hanouticelina in #3076
- Expand file lock scope to resolve concurrency issues during downloads by @humengyu2012 in #3063
- Documentation Issue by @thanosKivertzikidis in #3091
- Do not fetch /preupload if already done in upload-large-folder by @Wauplin in #3100
🏗️ internal
- [Internal] make hf-xet (again) a required dependency #3103
- fix conda by @hanouticelina in #3058
- fix create branch failure test by @hanouticelina in #3074
- [Internal] make `hf-xet` optional by @hanouticelina in #3079
Community contributions
- Refactor `huggingface-cli repo create` command by @Wauplin in #3094
- Update mypy to 1.15.0 (current latest) by @Wauplin in #3095
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- @diadorer
- [Inference Providers] Add feature extraction task for Nebius (#3057)
- @tomaarsen
- Dont set scale to zero as default when creating an Endpoint (#3062)
- @nbarr07
- 🌿 adding support for Nscale inference provider (#3068)
- @S1ro1
- Feat: support DTensor when saving (#3042)
- @humengyu2012
- Expand file lock scope to resolve concurrency issues during downloads (#3063)