github microsoft/autogen python-v0.5.1

What's New

AgentChat Message Types (Type Hint Changes)

Important

TL;DR: If you are not using custom agents or custom termination conditions, you don't need to change anything.
Otherwise, update AgentEvent to BaseAgentEvent and ChatMessage to BaseChatMessage in your type hints.

This is a breaking change on type hinting only, not on usage.

We updated the message types in AgentChat in this new release.
The purpose of this change is to support custom message types defined by applications.

Previously, the message types were fixed, and we used the union types ChatMessage and AgentEvent to refer to all the concrete built-in message types.

Now, in the main branch, the message types are organized into a hierarchy: each existing built-in concrete message type subclasses either BaseChatMessage or BaseAgentEvent, depending on whether it was part of the ChatMessage or AgentEvent union. We refactored all message handlers on_messages, on_messages_stream, run, run_stream and TerminationCondition to use the base classes in their type hints.

If you are subclassing BaseChatAgent to create custom agents, or subclassing TerminationCondition to create custom termination conditions, you need to update the method signatures to use BaseChatMessage and BaseAgentEvent.

If you are using the union types in your existing data structures for serialization and deserialization, you can keep using them to ensure the messages are handled as concrete types. However, this will not work with custom message types.

Otherwise, your code should just work, as the refactor only changes type hints.
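The difference can be illustrated with a plain-Python sketch (the class and function names below are stand-ins, not the real autogen types): a handler typed against a base class accepts any subclass, including application-defined ones, whereas a closed union of concrete types cannot.

```python
# Hypothetical stand-ins illustrating why base-class type hints admit
# application-defined message types while a closed union does not.
class BaseChatMessage:
    def __init__(self, content: str, source: str) -> None:
        self.content = content
        self.source = source


class TextMessage(BaseChatMessage):
    """Stand-in for a built-in concrete message type."""


class MyCustomMessage(BaseChatMessage):
    """Stand-in for an application-defined message type."""


def handle(message: BaseChatMessage) -> str:
    # Typed against the base class, so any subclass is accepted.
    return f"{message.source}: {message.content}"


print(handle(TextMessage("hi", "user")))     # user: hi
print(handle(MyCustomMessage("yo", "app")))  # app: yo
```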

This change allows us to support custom message types. For example, we introduced a new generic message type, StructuredMessage[T], which can be used to create message types with a BaseModel content. Ongoing work is to get AssistantAgent to respond with StructuredMessage[T], where T is the structured output type for the model.
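As a rough illustration of the idea behind a generic structured message, here is a standard-library-only sketch (StructuredMessageSketch and Sentiment are hypothetical names, not the autogen API):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")


@dataclass
class Sentiment:
    """Illustrative typed content, standing in for a pydantic BaseModel."""
    thoughts: str
    response: str


@dataclass
class StructuredMessageSketch(Generic[T]):
    """Sketch of a chat message whose content is a typed model, not a string."""
    content: T
    source: str


msg = StructuredMessageSketch[Sentiment](
    content=Sentiment(thoughts="The user is happy.", response="happy"),
    source="assistant",
)
print(msg.content.response)  # happy
```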

See the API doc on AgentChat message types: https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.messages.html

  • Use class hierarchy to organize AgentChat message types and introduce StructuredMessage type by @ekzhu in #5998
  • Rename to use BaseChatMessage and BaseAgentEvent. Bring back union types. by @ekzhu in #6144

Structured Output

We enhanced support for structured output in model clients and agents.

For model clients, use the json_output parameter to specify the structured output type
as a Pydantic model. The model client will then return a JSON string
that can be deserialized into the specified Pydantic model.

from typing import Literal

from autogen_core.models import SystemMessage, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel

# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

# Generate a response that conforms to the AgentResponse schema.
response = await model_client.create(
    messages=[
        SystemMessage(content="Analyze input text sentiment."),
        UserMessage(content="I am happy.", source="user"),
    ],
    json_output=AgentResponse,
)

print(response.content)
# Should be a structured output.
# {"thoughts": "The user is happy.", "response": "happy"}

For AssistantAgent, you can set output_content_type to the structured output type. The agent will automatically reflect on the tool call result and generate a StructuredMessage with the output content type.

from typing import Literal

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_agentchat.ui import Console
from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel

# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance that supports structured output.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
)

# Create an AssistantAgent instance that uses the tool and model client.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[tool],
    system_message="Use the tool to analyze sentiment.",
    output_content_type=AgentResponse,
)

stream = agent.on_messages_stream([TextMessage(content="I am happy today!", source="user")], CancellationToken())
await Console(stream)

---------- assistant ----------
[FunctionCall(id='call_tIZjAVyKEDuijbBwLY6RHV2p', arguments='{"text":"I am happy today!"}', name='sentiment_analysis')]
---------- assistant ----------
[FunctionExecutionResult(content='happy', call_id='call_tIZjAVyKEDuijbBwLY6RHV2p', is_error=False)]
---------- assistant ----------
{"thoughts":"The user expresses a clear positive emotion by stating they are happy today, suggesting an upbeat mood.","response":"happy"}

You can also pass a StructuredMessage to the run and run_stream methods of agents and teams as a task message. Agents will automatically serialize the message content to a string and place it in their model context. A StructuredMessage generated by an agent will also be passed to other agents in the team, and emitted as a message in the output stream.
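A minimal sketch of that flattening step, using only the standard library (the AgentResponse dataclass below is illustrative, not the real API):

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class AgentResponse:
    """Illustrative structured payload, standing in for a pydantic model."""
    thoughts: str
    response: str


payload = AgentResponse(thoughts="The user is happy.", response="happy")
# Flatten the structured content to a string before placing it in a
# model context, analogous to what agents do with a StructuredMessage.
as_text = json.dumps(asdict(payload))
print(as_text)  # {"thoughts": "The user is happy.", "response": "happy"}
```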

  • Add structured output to model clients by @ekzhu in #5936
  • Support json schema for response format type in OpenAIChatCompletionClient by @ekzhu in #5988
  • Add output_format to AssistantAgent for structured output by @ekzhu in #6071

Azure AI Search Tool

Added a new tool for agents to perform searches using Azure AI Search.

See the documentation for more details.

SelectorGroupChat Improvements

  • Implement 'candidate_func' parameter to filter down the pool of candidates for selection by @Ethan0456 in #5954
  • Add async support for selector_func and candidate_func in SelectorGroupChat by @Ethan0456 in #6068
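A hedged sketch of what a candidate_func might look like: given the conversation so far, it narrows the pool of agents eligible to speak next. The signature below is simplified to plain strings; the real function receives message objects.

```python
# Illustrative candidate filter: only let the critic speak right after
# the writer; otherwise all agents are eligible. Names are hypothetical.
def candidate_func(messages: list[str]) -> list[str]:
    if messages and messages[-1].startswith("writer:"):
        return ["critic"]
    return ["writer", "critic", "executor"]


print(candidate_func(["writer: draft done"]))  # ['critic']
print(candidate_func([]))                      # ['writer', 'critic', 'executor']
```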

Code Executors Improvements

  • Add cancellation support to docker executor by @ekzhu in #6027
  • Move start() and stop() as interface methods for CodeExecutor by @ekzhu in #6040
  • Changed Code Executors default directory to temporary directory by @federicovilla55 in #6143

Model Client Improvements

  • Improve documentation around model client and tool and how it works under the hood by @ekzhu in #6050
  • Add support for thought field in AzureAIChatCompletionClient by @jay-thakur in #6062
  • Add a thought process analysis, and add a reasoning field in the ModelClientStreamingChunkEvent to distinguish the thought tokens. by @y26s4824k264 in #5989
  • Add thought field support and fix LLM control parameters for OllamaChatCompletionClient by @jay-thakur in #6126
  • Modular Transformer Pipeline and Fix Gemini/Anthropic Empty Content Handling by @SongChiYoung in #6063
  • Doc/moudulor transform oai by @SongChiYoung in #6149
  • Model family resolution to support non-prefixed names like Mistral by @SongChiYoung in #6158

TokenLimitedChatCompletionContext

Introduce TokenLimitedChatCompletionContext to limit the number of tokens in the context
sent to the model.
This is useful for long-running agents that need to keep a long history of messages in the context.
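A simplified, stdlib-only sketch of the idea, assuming a whitespace-based token count (the real TokenLimitedChatCompletionContext uses a proper tokenizer and model message types; the class below is illustrative):

```python
# Sketch of a token-limited context: keep only the most recent messages
# whose combined (approximate) token count fits within a budget.
class TokenLimitedContext:
    def __init__(self, token_limit: int) -> None:
        self.token_limit = token_limit
        self.messages: list[str] = []

    def add(self, message: str) -> None:
        self.messages.append(message)

    def get(self) -> list[str]:
        kept: list[str] = []
        used = 0
        # Walk from newest to oldest, stopping once the budget is spent.
        for message in reversed(self.messages):
            tokens = len(message.split())  # crude whitespace token count
            if used + tokens > self.token_limit:
                break
            used += tokens
            kept.append(message)
        return list(reversed(kept))


ctx = TokenLimitedContext(token_limit=5)
for m in ["one two three", "four five", "six seven"]:
    ctx.add(m)
print(ctx.get())  # ['four five', 'six seven']
```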

Bug Fixes

  • Fix logging error with ollama client by @ekzhu in #5917
  • Fix: make sure system message is present in reflection call by @ekzhu in #5926
  • Fixes an error that can occur when listing the contents of a directory. by @afourney in #5938
  • Upgrade llama cpp to 0.3.8 to fix windows related error by @ekzhu in #5948
  • Fix R1 reasoning parser for openai client by @ZakWork in #5961
  • Filter invalid parameters in Ollama client requests by @federicovilla55 in #5983
  • Fix AssistantAgent polymorphism bug by @ZacharyHuang in #5967
  • Update minimum openai version to 1.66.5 as import path changed by @ekzhu in #5996
  • Fix bytes in markdown converter playwright by @husseinmozannar in #6044
  • FIX: Anthropic multimodal(Image) message for Anthropic >= 0.48 aware by @SongChiYoung in #6054
  • FIX: Anthropic and Gemini could take multiple system message by @SongChiYoung in #6118
  • Fix MCP tool bug by dropping unset parameters from input by @ekzhu in #6125
  • Update mcp version to 1.6.0 to avoid bug in closing client. by @ekzhu in #6162
  • Ensure message sent to LLMCallEvent for Anthropic is serializable by @victordibia in #6135
  • Fix streaming + tool bug in Ollama by @ekzhu in #6193
  • Fix/anthropic could not end with trailing whitespace at assistant content by @SongChiYoung in #6168
  • Stop run when an error occurred in a group chat by @ekzhu in #6141

Other Python Related Changes

New Contributors

Full Changelog: python-v0.4.9.3...python-v0.5.1
