What's New
Serializable Configuration for AgentChat
- Make FunctionTools Serializable (Declarative) by @victordibia in #5052
- Make AgentChat Team Config Serializable by @victordibia in #5071
- improve component config, add description support in dump_component by @victordibia in #5203
This new feature allows you to serialize an agent or a team to a JSON string and deserialize it back into an object. Make sure to also read about save_state and load_state: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/state.html.
You can now serialize and deserialize both the configurations and the state of agents and teams.
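The "declarative" serialization in these changes boils down to storing a provider path plus configuration as JSON instead of pickling objects. Here is a minimal plain-Python sketch of the idea (the registry and names are hypothetical, not the autogen implementation, which resolves provider strings to import paths):

```python
import json

# A tiny registry of known tools; real systems resolve the provider string
# to an import path instead of using an explicit dict like this.
TOOL_REGISTRY = {}

def register(fn):
    TOOL_REGISTRY[f"{fn.__module__}.{fn.__qualname__}"] = fn
    return fn

@register
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# Declarative serialization: record the tool's provider path and metadata
# as JSON, rather than pickling the function object itself.
spec = json.dumps({
    "provider": f"{add.__module__}.{add.__qualname__}",
    "description": add.__doc__,
})

# Deserialization: look the function back up by its provider path.
loaded = json.loads(spec)
restored = TOOL_REGISTRY[loaded["provider"]]
print(restored(2, 3))  # 5
```

The payoff is that the JSON spec is portable and human-editable, which is what makes the component configs below possible.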
For example, create a RoundRobinGroupChat, and serialize its configuration and state:
```python
import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team
from autogen_agentchat.ui import Console
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def dump_team_config() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent(
        "assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    critic = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message="Provide feedback. Reply with 'APPROVE' if the feedback has been addressed.",
    )
    termination = TextMentionTermination("APPROVE", sources=["critic"])
    group_chat = RoundRobinGroupChat(
        [assistant, critic], termination_condition=termination
    )
    # Run the group chat.
    await Console(group_chat.run_stream(task="Write a short poem about winter."))
    # Dump the team configuration to a JSON file.
    config = group_chat.dump_component()
    with open("team_config.json", "w") as f:
        f.write(config.model_dump_json(indent=4))
    # Dump the team state to a JSON file.
    state = await group_chat.save_state()
    with open("team_state.json", "w") as f:
        f.write(json.dumps(state, indent=4))


asyncio.run(dump_team_config())
```
This produces the serialized team configuration (truncated here for illustration purposes):

```json
{
    "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
    "component_type": "team",
    "version": 1,
    "component_version": 1,
    "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n to publish a message to all.",
    "label": "RoundRobinGroupChat",
    "config": {
        "participants": [
            {
                "provider": "autogen_agentchat.agents.AssistantAgent",
                "component_type": "agent",
                "version": 1,
                "component_version": 1,
                "description": "An agent that provides assistance with tool use.",
                "label": "AssistantAgent",
                "config": {
                    "name": "assistant",
                    "model_client": {
                        "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
                        "component_type": "model",
                        "version": 1,
                        "component_version": 1,
                        "description": "Chat completion client for OpenAI hosted models.",
                        "label": "OpenAIChatCompletionClient",
                        "config": {
                            "model": "gpt-4o"
                        }
```

And the serialized team state (also truncated):

```json
{
    "type": "TeamState",
    "version": "1.0.0",
    "agent_states": {
        "group_chat_manager/25763eb1-78b2-4509-8607-7224ae383575": {
            "type": "RoundRobinManagerState",
            "version": "1.0.0",
            "message_thread": [
                {
                    "source": "user",
                    "models_usage": null,
                    "content": "Write a short poem about winter.",
                    "type": "TextMessage"
                },
                {
                    "source": "assistant",
                    "models_usage": {
                        "prompt_tokens": 25,
                        "completion_tokens": 150
                    },
                    "content": "Amidst the still and silent air, \nWhere frost adorns the branches bare, \nThe world transforms in shades of white, \nA wondrous, shimmering, quiet sight.\n\nThe whisper of the wind is low, \nAs snowflakes drift and dance and glow. \nEach crystal, delicate and bright, \nFalls gently through the silver night.\n\nThe earth is hushed in pure embrace, \nA tranquil, glistening, untouched space. \nYet warmth resides in hearts that roam, \nFinding solace in the hearth of home.\n\nIn winter\u2019s breath, a promise lies, \nBeneath the veil of cold, clear skies: \nThat spring will wake the sleeping land, \nAnd life will bloom where now we stand.",
                    "type": "TextMessage"
```
Load the configuration and state back into objects:
```python
import asyncio
import json

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team


async def load_team_config() -> None:
    # Load the team configuration from a JSON file.
    with open("team_config.json", "r") as f:
        config = json.load(f)
    group_chat = Team.load_component(config)
    # Load the team state from a JSON file.
    with open("team_state.json", "r") as f:
        state = json.load(f)
    await group_chat.load_state(state)
    assert isinstance(group_chat, RoundRobinGroupChat)


asyncio.run(load_team_config())
```
This new feature allows you to manage persistent sessions across server-client based user interactions.
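One way to use this for persistent sessions is to keep a per-session store of saved state: dump the team's state when a session ends, and load it back when the same session returns. A minimal plain-Python sketch of the pattern (the `FakeTeam` class and function names are hypothetical stand-ins; the real `save_state`/`load_state` methods are async, as shown above):

```python
import json

# Hypothetical stand-in for a team: anything exposing save_state()/load_state(),
# mirroring the AgentChat pattern from the example above.
class FakeTeam:
    def __init__(self) -> None:
        self.history: list[str] = []

    def save_state(self) -> dict:
        return {"history": list(self.history)}

    def load_state(self, state: dict) -> None:
        self.history = list(state["history"])

# A per-session store: serialize state to JSON when a session ends,
# restore it when the same session comes back.
sessions: dict[str, str] = {}

def end_session(session_id: str, team: FakeTeam) -> None:
    sessions[session_id] = json.dumps(team.save_state())

def resume_session(session_id: str) -> FakeTeam:
    team = FakeTeam()
    if session_id in sessions:
        team.load_state(json.loads(sessions[session_id]))
    return team

team = FakeTeam()
team.history.append("Write a short poem about winter.")
end_session("user-123", team)

restored = resume_session("user-123")
print(restored.history)
```

In a real deployment the `sessions` dict would be a database or file store, as in the `team_state.json` example above.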
Azure AI Client for Azure-Hosted Models
- Feature/azure ai inference client by @lspinheiro and @rohanthacker in #5153
This allows you to use Azure and GitHub-hosted models, including Phi-4, Mistral models, and Cohere models.
```python
import asyncio
import os

from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential


async def main() -> None:
    client = AzureAIChatCompletionClient(
        model="Phi-4",
        endpoint="https://models.inference.ai.azure.com",
        # To authenticate with the model you will need to generate a personal access
        # token (PAT) in your GitHub settings. Create your PAT by following the
        # instructions here:
        # https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
        credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
        model_info={
            "json_output": False,
            "function_calling": False,
            "vision": False,
            "family": "unknown",
        },
    )
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result)


asyncio.run(main())
```
Rich Console UI for Magentic One CLI
You can now enable pretty-printed output for the m1 command-line tool by adding the --rich argument:

```shell
m1 --rich "Find information about AutoGen"
```
Default In-Memory Cache for ChatCompletionCache
- Implement default in-memory store for ChatCompletionCache by @srjoglekar246 in #5188
This allows you to cache model client calls without specifying an external cache service.
```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache


async def main() -> None:
    # Create a model client.
    client = OpenAIChatCompletionClient(model="gpt-4o")
    # Create a cached wrapper around the model client.
    cached_client = ChatCompletionCache(client)
    # Call the cached client for the first time.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)
    # Call the cached client again with the same message.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)


asyncio.run(main())
```
```
The capital of France is Paris. False
The capital of France is Paris. True
```
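The behavior above (first call misses the cache, second call hits) can be sketched in plain Python. This is a conceptual illustration with hypothetical names, not the autogen implementation: key the cache on the serialized request, and return the stored response with a cached flag on repeat calls.

```python
import json

class InMemoryCachedClient:
    """Wrap a model-client callable with an in-memory request cache."""

    def __init__(self, client) -> None:
        self._client = client
        self._store: dict[str, str] = {}

    def create(self, messages: list[str]) -> tuple[str, bool]:
        # Key the cache on the serialized request.
        key = json.dumps(messages)
        if key in self._store:
            return self._store[key], True  # cache hit: no model call
        response = self._client(messages)
        self._store[key] = response
        return response, False  # cache miss: stored for next time

calls = 0

def fake_model(messages: list[str]) -> str:
    global calls
    calls += 1
    return "The capital of France is Paris."

client = InMemoryCachedClient(fake_model)
print(client.create(["What is the capital of France?"]))  # cached=False
print(client.create(["What is the capital of France?"]))  # cached=True
```

The real ChatCompletionCache follows the same idea, with the in-memory store as the new default in place of an external cache service.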
Docs Update
- Update model client documentation add Ollama, Gemini, Azure AI models by @ekzhu in #5196
- Add Model Client Cache section to migration guide by @ekzhu in #5197
- docs: Enhance documentation for SingleThreadedAgentRuntime with usage examples and clarifications; undeprecate process_next by @ekzhu in #5230
- docs: Update user guide notebooks to enhance clarity and add structured output by @ekzhu in #5224
- docs: Core API doc update: split out model context from model clients; separate framework and components by @ekzhu in #5171
- docs: Add a helpful comment to swarm.ipynb by @withsmilo in #5145
Bug Fixes
- fix: update SK model adapter constructor by @lspinheiro in #5150. This allows the SK Model Client to be used inside an AssistantAgent.
- Fix function tool naming to avoid overriding the name input by @Pierrolo in #5165
- fix: Enhance OpenAI client to handle additional stop reasons and improve tool call validation in tests to address empty tool_calls list. by @ekzhu in #5223
Other Changes
- Make ChatAgent an ABC by @jackgerrits in #5129
- Update website for 0.4.3 by @jackgerrits in #5139
- Make Memory and Team an ABC by @victordibia in #5149
- Closes #5059 by @fbpazos in #5156
- Update proto to include remove sub, move to rpc based operations by @jackgerrits in #5168
- Add dependencies to distributed group chat example by @MohMaz in #5175
- Communicate client id via metadata in grpc runtime by @jackgerrits in #5185
- Fixed typo fixing issue #5186 by @raimondasl in #5187
- Improve grpc type checking by @jackgerrits in #5189
- Impl register and add sub RPC by @jackgerrits in #5191
- rysweet-unsubscribe-and-agent-tests-4744 by @rysweet in #4920
- make AssistantAgent and Handoff use BaseTool by @victordibia in #5193
- docs: s/Exisiting/Existing/g by @bih in #5202
- Rysweet 5201 refactor runtime interface by @rysweet in #5204
- Update model client documentation add Ollama, Gemini, Azure AI models by @ekzhu in #5196
- Rysweet 5207 net runtime interface to match python add registration to interface and inmemoryruntime by @rysweet in #5215
- Rysweet 5217 add send message by @rysweet in #5219
- Update literature-review.ipynb to fix possible copy-and-paste error by @xtophs in #5214
- Updated docs for _azure_ai_client.py by @rohanthacker in #5199
- Refactor Dotnet core to align with Python by @jackgerrits in #5225
- Remove channel based control plane APIs, cleanup proto by @jackgerrits in #5236
- update versions to 0.4.4 and m1 cli to 0.2.3 by @ekzhu in #5229
- feat: Enable queueing and step mode in InProcessRuntime by @lokitoth in #5239
- feat: Expose self-delivery for InProcessRuntime in AgentsApp by @lokitoth in #5240
- refactor: Reduce reflection calls when using HandlerInvoker by @lokitoth in #5241
- fix: Various fixes and cleanups to dotnet autogen core by @bassmang in #5242
- Start from just protos in core.grpc by @jackgerrits in #5243
New Contributors
- @fbpazos made their first contribution in #5156
- @withsmilo made their first contribution in #5145
- @Pierrolo made their first contribution in #5165
- @raimondasl made their first contribution in #5187
- @bih made their first contribution in #5202
- @xtophs made their first contribution in #5214
Full Changelog: v0.4.3...v0.4.4