🎉 LangChain v1.0 is here! This release provides a focused, production-ready foundation for building agents. We've streamlined the framework around three core improvements: `createAgent`, standard content blocks, and a simplified package structure. See the release notes for complete details.
## ✨ Major Features
### `createAgent` - A new standard for building agents

`createAgent` is the new standard way to build agents in LangChain 1.0. It provides a simpler interface than `createReactAgent` from LangGraph while offering greater customization through middleware.
Key features:
- Clean, intuitive API: Build agents with minimal boilerplate
- Built on LangGraph: Get persistence, streaming, human-in-the-loop, and time travel out of the box
- Middleware-first design: Highly customizable through composable middleware
- Improved structured output: Generate structured outputs in the main agent loop without additional LLM calls
Example:
```ts
import { createAgent } from "langchain";

// `getWeather` is assumed to be a tool created elsewhere with the `tool()` helper.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5-20250929",
  tools: [getWeather],
  systemPrompt: "You are a helpful assistant.",
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "What is the weather in Tokyo?" }],
});

// The result is the agent state; the last message holds the final answer.
console.log(result.messages.at(-1)?.content);
```
Under the hood, `createAgent` runs the basic agent loop on LangGraph: call the model, let it choose tools to execute, and finish when it stops calling tools.
Built on LangGraph features (work out of the box):
- Persistence: Conversations automatically persist across sessions with built-in checkpointing (see the sketch after this list)
- Streaming: Stream tokens, tool calls, and reasoning traces in real-time
- Human-in-the-loop: Pause agent execution for human approval before sensitive actions
- Time travel: Rewind conversations to any point and explore alternate paths
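Persistence, for example, comes from LangGraph checkpointing. A minimal sketch, assuming `createAgent` accepts a LangGraph checkpointer via a `checkpointer` option and reusing the `getWeather` tool from above:

```ts
import { createAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";

// Assumption: createAgent forwards `checkpointer` to the underlying LangGraph graph.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5-20250929",
  tools: [getWeather],
  checkpointer: new MemorySaver(), // in-memory; use a database-backed saver in production
});

// A thread_id identifies a conversation; invoking again with the same id
// resumes from the checkpointed state.
const config = { configurable: { thread_id: "conversation-1" } };
await agent.invoke(
  { messages: [{ role: "user", content: "What is the weather in Tokyo?" }] },
  config
);
const followUp = await agent.invoke(
  { messages: [{ role: "user", content: "And in Osaka?" }] },
  config
);
console.log(followUp.messages.at(-1)?.content);
```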
Structured output improvements:
- Generate structured outputs in the main loop instead of requiring an additional LLM call
- Models can choose between calling tools or using provider-side structured output generation
- Significant cost reduction by eliminating extra LLM calls
Example:
```ts
import { createAgent } from "langchain";
import * as z from "zod";

// Schema describing the shape of the final answer.
const weatherSchema = z.object({
  temperature: z.number(),
  condition: z.string(),
});

const agent = createAgent({
  model: "openai:gpt-4o-mini",
  tools: [getWeather],
  responseFormat: weatherSchema,
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "What is the weather in Tokyo?" }],
});

// The parsed object, typed according to `weatherSchema`.
console.log(result.structuredResponse);
```
For more information, see the Agents documentation.
### Middleware
Middleware is what makes `createAgent` highly customizable, raising the ceiling for what you can build. Great agents require context engineering: getting the right information to the model at the right time. Middleware lets you control dynamic prompts, conversation summarization, selective tool access, state management, and guardrails through a composable abstraction.
Prebuilt middleware for common patterns:
```ts
import {
  createAgent,
  summarizationMiddleware,
  humanInTheLoopMiddleware,
  piiRedactionMiddleware,
} from "langchain";

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5-20250929",
  tools: [readEmail, sendEmail],
  middleware: [
    // Scrub sensitive values before they reach the model
    piiRedactionMiddleware({ patterns: ["email", "phone", "ssn"] }),
    // Summarize older turns once the history grows past the token budget
    summarizationMiddleware({
      model: "anthropic:claude-sonnet-4-5-20250929",
      maxTokensBeforeSummary: 500,
    }),
    // Pause for human approval before sending email
    humanInTheLoopMiddleware({
      interruptOn: {
        sendEmail: {
          allowedDecisions: ["approve", "edit", "reject"],
        },
      },
    }),
  ] as const,
});
```
Custom middleware with lifecycle hooks:
| Hook | When it runs | Use cases |
|---|---|---|
| `beforeAgent` | Before calling the agent | Load memory, validate input |
| `beforeModel` | Before each LLM call | Update prompts, trim messages |
| `wrapModelCall` | Around each LLM call | Intercept and modify requests/responses |
| `wrapToolCall` | Around each tool call | Intercept and modify tool execution |
| `afterModel` | After each LLM response | Validate output, apply guardrails |
| `afterAgent` | After the agent completes | Save results, cleanup |
Example custom middleware:
```ts
import { createAgent, createMiddleware } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import * as z from "zod";

const contextSchema = z.object({
  userExpertise: z.enum(["beginner", "expert"]).default("beginner"),
});

const expertiseBasedToolMiddleware = createMiddleware({
  wrapModelCall: async (request, handler) => {
    const userLevel = request.runtime.context.userExpertise;
    if (userLevel === "expert") {
      // Experts get the stronger model and the advanced tools
      return handler({
        ...request,
        model: new ChatOpenAI({ model: "gpt-5" }),
        tools: [advancedSearch, dataAnalysis],
      });
    }
    // Everyone else gets the lightweight model and simpler tools
    return handler({
      ...request,
      model: new ChatOpenAI({ model: "gpt-5-nano" }),
      tools: [simpleSearch, basicCalculator],
    });
  },
});

// simpleSearch, advancedSearch, basicCalculator, dataAnalysis are tools defined elsewhere.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5-20250929",
  tools: [simpleSearch, advancedSearch, basicCalculator, dataAnalysis],
  middleware: [expertiseBasedToolMiddleware],
  contextSchema,
});
```
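The table above also lists `wrapToolCall`, which follows the same request/handler pattern. A further sketch (the `request.toolCall` field name is an assumption, mirroring the model-request shape): a middleware that times every tool execution.

```ts
import { createMiddleware } from "langchain";

// Minimal wrapToolCall sketch; `request.toolCall.name` is assumed to carry
// the name of the tool being invoked.
const toolTimingMiddleware = createMiddleware({
  wrapToolCall: async (request, handler) => {
    const started = Date.now();
    const result = await handler(request); // run the actual tool
    console.log(`${request.toolCall.name} took ${Date.now() - started}ms`);
    return result;
  },
});
```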
For more information, see the complete middleware guide.
### Simplified Package
LangChain v1 streamlines the `langchain` package namespace to focus on essential building blocks for agents. The package exposes only the most useful and relevant functionality, much of it re-exported from `@langchain/core` for convenience.

What's in the core `langchain` package:
- `createAgent` and agent-related utilities
- Core message types and content blocks
- Middleware infrastructure
- Tool definitions and schemas
- Prompt templates
- Output parsers
- Base runnable abstractions
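For example (names are representative of the categories above, not an exhaustive export list), all of these come from the single `langchain` entry point:

```ts
// Representative imports; the exact export surface may differ slightly.
import {
  createAgent,      // agent construction
  createMiddleware, // middleware infrastructure
  tool,             // tool definitions
  HumanMessage,     // core message types, re-exported from @langchain/core
} from "langchain";
```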
## 🔄 Migration Notes
### `@langchain/classic` for Legacy Functionality

Legacy functionality has moved to `@langchain/classic` to keep the core package lean and focused.

What's in `@langchain/classic`:
- Legacy chains and chain implementations
- The indexing API
- `@langchain/community` exports
- Other deprecated functionality
To migrate legacy code:
- Install `@langchain/classic`:

  ```bash
  npm install @langchain/classic
  ```

- Update your imports:

  ```ts
  import { ... } from "langchain"; // [!code --]
  import { ... } from "@langchain/classic"; // [!code ++]
  import { ... } from "langchain/chains"; // [!code --]
  import { ... } from "@langchain/classic/chains"; // [!code ++]
  ```
### Upgrading to v1

Install the v1 packages:

```bash
npm install langchain@1.0.0 @langchain/core@1.0.0
```
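The provider-string shorthand used in the examples above (`anthropic:...`, `openai:...`) resolves through the corresponding provider packages, so also install the ones you use, for example:

```bash
npm install @langchain/anthropic @langchain/openai
```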