What's New
AgentChat Message Types (Type Hint Changes)
Important
TL;DR: If you are not using custom agents or custom termination conditions, you don't need to change anything.
Otherwise, update `AgentEvent` to `BaseAgentEvent` and `ChatMessage` to `BaseChatMessage` in your type hints. This is a breaking change in type hints only, not in usage.
We updated the message types in AgentChat in this new release.
The purpose of this change is to support custom message types defined by applications.
Previously, message types were fixed, and we used the union types `ChatMessage` and `AgentEvent` to refer to all the concrete built-in message types.
Now, in the main branch, the message types are organized into a hierarchy: each existing built-in concrete message type subclasses either `BaseChatMessage` or `BaseAgentEvent`, depending on whether it was part of the `ChatMessage` or `AgentEvent` union. We refactored all message handlers `on_messages`, `on_messages_stream`, `run`, `run_stream`, and `TerminationCondition` to use the base classes in their type hints.
If you are subclassing `BaseChatAgent` to create custom agents, or subclassing `TerminationCondition` to create custom termination conditions, you need to update the method signatures to use `BaseChatMessage` and `BaseAgentEvent`.
If you are using the union types in your existing data structures for serialization and deserialization, then you can keep using those union types to ensure the messages are being handled as concrete types. However, this will not work with custom message types.
Otherwise, your code should just work, as the refactor only makes type hint changes.
This change allows us to support custom message types. For example, we introduced a new generic message type, `StructuredMessage[T]`, which can be used to create new message types with a `BaseModel` content. Ongoing work is to get `AssistantAgent` to respond with `StructuredMessage[T]`, where `T` is the structured output type for the model.
See the API doc on AgentChat message types: https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.messages.html
- Use class hierarchy to organize AgentChat message types and introduce StructuredMessage type by @ekzhu in #5998
- Rename to use BaseChatMessage and BaseAgentEvent. Bring back union types. by @ekzhu in #6144
Structured Output
We enhanced support for structured output in model clients and agents.
For model clients, use the `json_output` parameter to specify the structured output type as a Pydantic model. The model client will then return a JSON string that can be deserialized into the specified Pydantic model.
```python
from typing import Literal

from autogen_core.models import SystemMessage, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

# Generate a response using structured output.
response = await model_client.create(
    messages=[
        SystemMessage(content="Analyze input text sentiment using the tool provided."),
        UserMessage(content="I am happy.", source="user"),
    ],
    json_output=AgentResponse,
)
print(response.content)
# Should be a structured output.
# {"thoughts": "The user is happy.", "response": "happy"}
```
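Since the return value is a JSON string, it can be validated back into the Pydantic model. A minimal sketch, using a hard-coded string in place of a real model response:

```python
from typing import Literal

from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Stand-in for the JSON string returned in `response.content`.
raw = '{"thoughts": "The user is happy.", "response": "happy"}'

# Deserialize the JSON string into the Pydantic model.
result = AgentResponse.model_validate_json(raw)
print(result.response)  # happy
```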
For `AssistantAgent`, you can set `output_content_type` to the structured output type. The agent will automatically reflect on the tool call result and generate a `StructuredMessage` with the output content type.
```python
from typing import Literal

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance that supports structured output.
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

# Create an AssistantAgent instance that uses the tool and model client.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[tool],
    system_message="Use the tool to analyze sentiment.",
    output_content_type=AgentResponse,
)

stream = agent.on_messages_stream([TextMessage(content="I am happy today!", source="user")], CancellationToken())
await Console(stream)
```
```
---------- assistant ----------
[FunctionCall(id='call_tIZjAVyKEDuijbBwLY6RHV2p', arguments='{"text":"I am happy today!"}', name='sentiment_analysis')]
---------- assistant ----------
[FunctionExecutionResult(content='happy', call_id='call_tIZjAVyKEDuijbBwLY6RHV2p', is_error=False)]
---------- assistant ----------
{"thoughts":"The user expresses a clear positive emotion by stating they are happy today, suggesting an upbeat mood.","response":"happy"}
```
You can also pass a `StructuredMessage` to the `run` and `run_stream` methods of agents and teams as a task message. Agents will automatically convert the message to a string and place it in their model context. A `StructuredMessage` generated by an agent will also be passed to other agents in the team, and emitted as a message in the output stream.
- Add structured output to model clients by @ekzhu in #5936
- Support json schema for response format type in OpenAIChatCompletionClient by @ekzhu in #5988
- Add output_format to AssistantAgent for structured output by @ekzhu in #6071
Azure AI Search Tool
Added a new tool for agents to perform search using Azure AI Search.
See the documentation for more details.
- Add Azure AI Search tool implementation by @jay-thakur in #5844
SelectorGroupChat Improvements
- Implement 'candidate_func' parameter to filter down the pool of candidates for selection by @Ethan0456 in #5954
- Add async support for `selector_func` and `candidate_func` in `SelectorGroupChat` by @Ethan0456 in #6068
Code Executors Improvements
- Add cancellation support to docker executor by @ekzhu in #6027
- Move start() and stop() as interface methods for CodeExecutor by @ekzhu in #6040
- Changed Code Executors default directory to temporary directory by @federicovilla55 in #6143
Model Client Improvements
- Improve documentation around model client and tool and how it works under the hood by @ekzhu in #6050
- Add support for thought field in AzureAIChatCompletionClient by @jay-thakur in #6062
- Add a thought process analysis, and add a reasoning field in the ModelClientStreamingChunkEvent to distinguish the thought tokens. by @y26s4824k264 in #5989
- Add thought field support and fix LLM control parameters for OllamaChatCompletionClient by @jay-thakur in #6126
- Modular Transformer Pipeline and Fix Gemini/Anthropic Empty Content Handling by @SongChiYoung in #6063
- Doc/moudulor transform oai by @SongChiYoung in #6149
- Model family resolution to support non-prefixed names like Mistral by @SongChiYoung in #6158
TokenLimitedChatCompletionContext
Introduce `TokenLimitedChatCompletionContext` to limit the number of tokens in the context sent to the model.
This is useful for long-running agents that need to keep a long history of messages in the context.
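The idea can be illustrated with a simplified sketch (this is not the actual `TokenLimitedChatCompletionContext` API; a crude whitespace count stands in for real token counting):

```python
def trim_to_token_limit(messages: list[str], token_limit: int) -> list[str]:
    """Keep the most recent messages whose combined token estimate fits the budget."""
    kept: list[str] = []
    total = 0
    # Walk from newest to oldest, keeping messages while the budget allows.
    for message in reversed(messages):
        tokens = len(message.split())  # crude whitespace token estimate
        if total + tokens > token_limit:
            break
        total += tokens
        kept.append(message)
    kept.reverse()
    return kept


history = ["one two three", "four five", "six seven eight nine", "ten"]
print(trim_to_token_limit(history, 6))
```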
- [feat] token-limited message context by @bassmang in #6087
- Fix token limited model context by @ekzhu in #6137
Bug Fixes
- Fix logging error with ollama client by @ekzhu in #5917
- Fix: make sure system message is present in reflection call by @ekzhu in #5926
- Fixes an error that can occur when listing the contents of a directory. by @afourney in #5938
- Upgrade llama cpp to 0.3.8 to fix windows related error by @ekzhu in #5948
- Fix R1 reasoning parser for openai client by @ZakWork in #5961
- Filter invalid parameters in Ollama client requests by @federicovilla55 in #5983
- Fix AssistantAgent polymorphism bug by @ZacharyHuang in #5967
- Update minimum openai version to 1.66.5 as import path changed by @ekzhu in #5996
- Fix bytes in markdown converter playwright by @husseinmozannar in #6044
- FIX: Anthropic multimodal(Image) message for Anthropic >= 0.48 aware by @SongChiYoung in #6054
- FIX: Anthropic and Gemini could take multiple system message by @SongChiYoung in #6118
- Fix MCP tool bug by dropping unset parameters from input by @ekzhu in #6125
- Update mcp version to 1.6.0 to avoid bug in closing client. by @ekzhu in #6162
- Ensure message sent to LLMCallEvent for Anthropic is serializable by @victordibia in #6135
- Fix streaming + tool bug in Ollama by @ekzhu in #6193
- Fix/anthropic could not end with trailing whitespace at assistant content by @SongChiYoung in #6168
- Stop run when an error occurred in a group chat by @ekzhu in #6141
Other Python Related Changes
- update website for v0.4.9 by @ekzhu in #5906
- Revert Allow Voice Access to find clickable cards commit by @peterychang in #5911
- update ref for v0.4.9 website by @ekzhu in #5914
- Update MarkItDown. by @afourney in #5920
- bugfix: Workaround for pydantic/#7713 by @nissa-seru in #5893
- Update memory.ipynb - fixed typo chroma_user_memory by @yusufk in #5901
- Improve AgentChat Teams Doc by @victordibia in #5930
- Use SecretStr type for api key by @ekzhu in #5939
- Update AgentChat Docs for RAGAgent / Teachability by @victordibia in #5935
- Ensure SecretStr is cast to str on load for model clients by @victordibia in #5947
- Fix `poe check` on Windows by @nissa-seru in #5942
- Improve docs for model clients by @ekzhu in #5952
- Improvements to agbench by @ekzhu in #5776
- Added a flag to agbench to enable Azure identity. by @afourney in #5977
- Some pandas series were not being handled correctly by @afourney in #5972
- ci: Remove --locked from uv sync in Integration test project by @lokitoth in #5993
- redundancy package delete. by @zhanluxianshen in #5976
- Add API doc for save_state and load_state for SingleThreadedAgentRuntime by @ekzhu in #5984
- Fix issue #5946: changed code for ACASessionsExecutor _ensure_access_token to be https://dynamicsessions.io/.default by @EdwinInnovation in #6001
- Properly close model clients in documentation and samples by @federicovilla55 in #5898
- Limit what files and folders FileSurfer can access. by @afourney in #6024
- Announce current page on sidebar links, version by @peterychang in #5986
- Add linter to AGBench by @gagb in #6022
- add alt text to images by @peterychang in #6045
- Add alt text for clickable cards on website by @peterychang in #6043
- Correct README command examples for chess game sample. by @trevor211 in #6008
- Improve grammar of README.md by @ucg8j in #5999
- Update migration guide type name by @stuartleeks in #5978
- [Accessibility] fix screen reader is not announcing 'Copied' information by @cheng-tan in #6059
- Allow Docker-out-of-docker in AGBench by @afourney in #6047
- [Accessibility] Fix: screen reader does not announce theme change and nested nav label by @cheng-tan in #6061
- Add Tracing docs to agentchat by @victordibia in #5995
- Add model_context property to AssistantAgent by @jspv in #6072
- AssistantAgent.metadata for user/application identity information associated with the agent. #6048 by @tongyu0924 in #6057
- add utf encoding in websurfer read file by @victordibia in #6094
- Take the output of the tool and use that to create the HandoffMessage by @Kurok1 in #6073
- add stdio_read_timeout for create_mcp_server_session by @Septa2112 in #6080
- Add autogen user agent to azure openai requests by @jackgerrits in #6124
- FEAT: Add missing OpenAI-compatible models (GPT-4.5, Claude models) by @SongChiYoung in #6120
- Add suppress_result_output to ACADynamicSessionsCodeExecutor initializer by @stuartleeks in #6130
- code optimization by @zhanluxianshen in #5980
- Fix docs typos. by @zhanluxianshen in #5975
- fix: the installation instruction had a missing step by @dicaeffe in #6166
- Add session_id_param to ACADynamicSessionsCodeExecutor by @stuartleeks in #6171
- FIX:simple fix on tool calling test for anthropic by @SongChiYoung in #6181
- Update versions to 0.5.0 by @ekzhu in #6184
- Update version to 0.5.1 by @ekzhu in #6195
New Contributors
- @nissa-seru made their first contribution in #5893
- @yusufk made their first contribution in #5901
- @gunt3001 made their first contribution in #5932
- @ZakWork made their first contribution in #5961
- @Ethan0456 made their first contribution in #5954
- @federicovilla55 made their first contribution in #5983
- @ZacharyHuang made their first contribution in #5967
- @zhanluxianshen made their first contribution in #5976
- @EdwinInnovation made their first contribution in #6001
- @trevor211 made their first contribution in #6008
- @ucg8j made their first contribution in #5999
- @SongChiYoung made their first contribution in #6054
- @tongyu0924 made their first contribution in #6057
- @Kurok1 made their first contribution in #6073
- @y26s4824k264 made their first contribution in #5989
- @Septa2112 made their first contribution in #6080
- @dicaeffe made their first contribution in #6166
Full Changelog: python-v0.4.9.3...python-v0.5.1