langchain-ai/langchain: langchain==1.0.0a1

This is a pre-release for langchain v1!

Generally, we've reduced LangChain's surface area and are narrowing in on popular and essential abstractions.

We've also moved create_react_agent from langgraph to langchain! Some important changes on that front:

Enhanced Structured Output

create_react_agent supports coercion of outputs to structured data types like pydantic models, dataclasses, typed dicts, or JSON schema specifications.

Structural Changes

In langgraph < 1.0, create_react_agent implemented support for structured output via an additional LLM call to the model after the standard model / tool calling loop finished. This introduced extra expense and was unnecessary.

This new version implements structured output support in the main loop, allowing a model to choose between calling tools or generating structured output (or both).

The same basic pattern for generating structured output still works:

from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')

Advanced Configuration

The new API exposes two ways to configure how structured output is generated. If not explicitly specified, LangChain will attempt to pick the best approach under the hood: if provider-native support is available for a given model, it takes priority over artificial tool calling.

  1. Artificial tool calling (the default for most models)

LangChain generates a tool (or tools) under the hood that matches the schema of your response format. When the model calls those tools, LangChain coerces the args into the desired format. Note that LangChain does not validate that outputs adhere to JSON schema specifications.

Extended example
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather, tool_message_content="Final Weather result generated"
    ),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather

Final Weather result generated
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')

  2. Provider implementations (limited to OpenAI, Groq)

Some providers support generating structured output directly. For those cases, we offer the ProviderStrategy hint:

Extended example
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================

{"temperature":70,"condition":"sunny"}
Weather(temperature=70.0, condition='sunny')
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')

Note that in the ToolStrategy example, the final tool message contains the custom content provided by the developer via tool_message_content.

Prompted output, which was previously supported via the response_format argument to create_react_agent, is no longer supported. If there's significant demand for this, we'd be happy to engineer a solution.

Error Handling

create_react_agent now exposes an API for managing errors associated with structured output generation. There are two common problems with structured output generation using artificial tool calling:

  1. Parsing error -- the model generates data that doesn't match the desired structure for the output
  2. Multiple tool calls error -- the model generates 2 or more tool calls associated with structured output schemas

A developer can control the desired behavior for this via the handle_errors arg to ToolStrategy.

Extended example
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error


agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
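
Invoking this agent follows the same pattern as the earlier examples. As a minimal sketch (output will vary by model), if the model emits arguments that fail Weather validation, the handler above turns the error into a corrective message that is fed back to the model for another attempt:

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
print(repr(result["structured_response"]))
#> e.g. Weather(temperature=70.0, condition='sunny'), as in the earlier examples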

Error Handling for Tool Calling

Tools fail for two main reasons:

  1. Invocation failure -- the args generated by the model for the tool are incorrect (missing, incompatible data types, etc)
  2. Execution failure -- the tool execution itself fails due to a developer error, network error, or some other exception.

By default, when tool invocation fails, the react agent will return an artificial ToolMessage to the model asking it to correct its mistakes and retry.

Now, when tool execution fails, the react agent raises the ToolException by default instead of asking the model to retry. This helps avoid retry loops over failures (like developer or network errors) that the model cannot fix.

Developers can configure their desired behavior for retries / error handling via the handle_tool_errors arg to ToolNode, as sketched below.
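
A minimal sketch, assuming ToolNode keeps the handle_tool_errors behavior it had in langgraph (accepting a bool, a string, or a callable that maps the raised exception to the ToolMessage content returned to the model):

from langchain.agents import ToolNode, create_react_agent  # new import path, per "Import Changes" below


def flaky_weather_tool(city: str) -> str:
    """Get the weather for a city (may fail at runtime)."""
    raise RuntimeError("upstream weather service unavailable")


def handle_tool_error(error: Exception) -> str:
    # Instead of raising, return a string that becomes the ToolMessage content,
    # letting the model decide whether to retry or answer without the tool.
    return f"Tool call failed: {error!r}. Try again or answer without it."


tool_node = ToolNode([flaky_weather_tool], handle_tool_errors=handle_tool_error)
# A ToolNode instance can be passed where tools are expected (as in langgraph).
agent = create_react_agent("openai:gpt-4o-mini", tools=tool_node)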

Pre-Bound Models

create_react_agent no longer supports passing a model that has been pre-bound with tools or other configuration. To properly support structured output generation, the agent itself needs the power to bind tools + structured output kwargs.

This also makes the developer experience cleaner: model is always expected to be an instance of BaseChatModel (or a str that we coerce into a chat model instance).

Dynamic model functions can return a pre-bound model, provided structured output is not also used; in that case, the dynamic model function itself handles binding tools and other configuration, as sketched below.
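
A rough sketch of that exception, with the caveat that the exact dynamic model function signature shown here (a callable receiving the agent state) is an assumption carried over from the langgraph prebuilt convention:

from langchain.chat_models import init_chat_model


def select_model(state):
    # Hypothetical dynamic model function: pick a model per request and bind tools
    # itself. This is only allowed because response_format is NOT used below.
    model = init_chat_model("openai:gpt-4o-mini")
    return model.bind_tools([weather_tool])


agent = create_react_agent(select_model, tools=[weather_tool])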

Import Changes

Users should now import create_react_agent from langchain.agents instead of langgraph.prebuilt.
Other imports, such as ToolNode and AgentState, follow a similar migration path; see the sketch after the list below.

  • chat_agent_executor.py -> react_agent.py
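
A before/after sketch of the import migration (assuming ToolNode follows the same path described above):

# Before (langgraph < 1.0)
from langgraph.prebuilt import create_react_agent, ToolNode

# After (langchain 1.0)
from langchain.agents import create_react_agent, ToolNode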

Some notes:

  1. Disabled blockbuster + some linting in langchain/agents. Not ideal, but necessary to get this across the line for the alpha. We should re-enable before the official release.
