Highlights
Observational Memory for long-running agents
Observational Memory is a new Mastra Memory feature that makes small context windows behave like large ones while retaining long-term memory. It compresses conversations into dense observation logs (5–40x smaller than raw messages). When observations grow too long, they're condensed into reflections. It supports thread and resource scopes, and requires the latest versions of @mastra/core, @mastra/memory, mastra, and @mastra/pg, @mastra/libsql, or @mastra/mongodb.
Skills.sh ecosystem integration (server + UI + CLI)
@mastra/server adds skills.sh proxy endpoints (search/browse/preview/install/update/remove), Playground UI adds an “Add Skill” dialog for browsing/installing skills, and the CLI wizard can optionally install Mastra skills during create-mastra (with non-interactive --skills support).
Dynamic tool discovery with ToolSearchProcessor
Adds ToolSearchProcessor to let agents search and load tools on demand via built-in search_tools and load_tool meta-tools, dramatically reducing context usage for large tool libraries (e.g., MCP/integration-heavy setups).
New @mastra/editor: store, version, and resolve agents from a database
Introduces @mastra/editor for persisting complete agent configurations (instructions, models, tools, workflows, nested agents, processors, memory), managing versions/activation, and instantiating dependencies from the Mastra registry with caching and type-safe serialization.
Breaking Changes
@mastra/elasticsearch: vector document IDs now come from the Elasticsearch `_id`; stored `id` fields are no longer written (breaking if you relied on `source.id`).
Changelog
@mastra/core@1.2.0
- Update provider registry and model documentation with latest models and providers
Fixes: e6fc281
- Fixed processors returning `{ tools: {}, toolChoice: 'none' }` being ignored. Previously, when a processor returned empty tools with an explicit `toolChoice: 'none'` to prevent tool calls, the toolChoice was discarded and defaulted to 'auto'. This fix preserves the explicit 'none' value, enabling patterns like ensuring a final text response when `maxSteps` is reached.
Fixes: #12601
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
- Fixed moonshotai/kimi-k2.5 multi-step tool calling failing with "reasoning_content is missing in assistant tool call message"
- Changed moonshotai and moonshotai-cn (China version) providers to use Anthropic-compatible API endpoints instead of OpenAI-compatible:
  - moonshotai: https://api.moonshot.ai/anthropic/v1
  - moonshotai-cn: https://api.moonshot.cn/anthropic/v1
- This properly handles reasoning_content for the kimi-k2.5 model
Fixes: #12530
- Fixed custom input processors disabling workspace skill tools in generate() and stream(). Custom processors now replace only the processors you configured, while memory and skills remain available. Fixes #12612.
Fixes: #12676
- Fixed: workspace search index names now use underscores so they work with SQL-based vector stores (PgVector, LibSQL).
Added: you can now set a custom index name with `searchIndexName`.
Why: some SQL vector stores reject hyphens in index names.
Example:

```typescript
// Before - would fail with PgVector
new Workspace({ id: "my-workspace", vectorStore, embedder });

// After - works with all vector stores
new Workspace({ id: "my-workspace", vectorStore, embedder });

// Or use a custom index name
new Workspace({ vectorStore, embedder, searchIndexName: "my_workspace_vectors" });
```

Fixes: #12673
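The underscore rule can also be illustrated in isolation. This is a hypothetical sketch (the `toSearchIndexName` helper and the `workspace_` prefix are illustrative, not the actual Workspace internals):

```typescript
// Hypothetical sketch: derive a SQL-safe search index name from a workspace id.
// SQL-based vector stores (PgVector, LibSQL) reject hyphens in identifiers,
// so hyphens are replaced with underscores.
function toSearchIndexName(workspaceId: string): string {
  return `workspace_${workspaceId.replace(/-/g, "_")}`;
}

console.log(toSearchIndexName("my-workspace")); // workspace_my_workspace
```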
- Added logger support to Workspace filesystem and sandbox providers. Providers extending MastraFilesystem or MastraSandbox now automatically receive the Mastra logger for consistent logging of file operations and command executions.
Fixes: #12606
- Added ToolSearchProcessor for dynamic tool discovery.
Agents can now discover and load tools on demand instead of having all tools available upfront. This reduces context token usage by ~94% when working with large tool libraries.
New API:

```typescript
import { ToolSearchProcessor } from "@mastra/core/processors";
import { Agent } from "@mastra/core";

// Create a processor with searchable tools
const toolSearch = new ToolSearchProcessor({
  tools: {
    createIssue: githubTools.createIssue,
    sendEmail: emailTools.send
    // ... hundreds of tools
  },
  search: {
    topK: 5, // Return top 5 results (default: 5)
    minScore: 0.1 // Filter results below this score (default: 0)
  }
});

// Attach processor to agent
const agent = new Agent({
  name: "my-agent",
  inputProcessors: [toolSearch],
  tools: {
    /* always-available tools */
  }
});
```

How it works:
The processor automatically provides two meta-tools to the agent:
- `search_tools` - Search for available tools by keyword relevance
- `load_tool` - Load a specific tool into the conversation
The agent discovers what it needs via search and loads tools on demand. Loaded tools are available immediately and persist within the conversation thread.
Why:
When agents have access to 100+ tools (from MCP servers or integrations), including all tool definitions in the context can consume significant tokens (~1,500 tokens per tool). This pattern reduces context usage by giving agents only the tools they need, when they need them.
Fixes: #12290
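As a rough sanity check on the savings claim, here is back-of-envelope arithmetic using the ~1,500 tokens-per-tool figure from the entry above (the tool counts are illustrative):

```typescript
// Rough estimate of context savings from on-demand tool loading.
// TOKENS_PER_TOOL comes from the entry above; tool counts are illustrative.
const TOKENS_PER_TOOL = 1500;
const totalTools = 100;            // full library included upfront
const exposedTools = 2 + 5;        // search_tools + load_tool + topK loaded tools

const upfrontCost = totalTools * TOKENS_PER_TOOL;    // 150,000 tokens
const onDemandCost = exposedTools * TOKENS_PER_TOOL; // 10,500 tokens
const savings = 1 - onDemandCost / upfrontCost;

console.log(`~${Math.round(savings * 100)}% fewer tool-definition tokens`);
```

The estimate even charges the two meta-tools full definition cost, so the real-world figure lands in the same ballpark as the ~94% quoted above.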
- Brought evented workflows to parity with the default execution engine
Fixes: #12555
- Expose token usage from embedding operations:
  - `saveMessages` now returns `usage: { tokens: number }` with aggregated token count from all embeddings
  - `recall` now returns `usage: { tokens: number }` from the vector search query embedding
  - Updated abstract method signatures in `MastraMemory` to include optional `usage` in return types

This allows users to track embedding token usage when using the Memory class.
Fixes: #12556
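The aggregation can be illustrated with a self-contained sketch. The `EmbeddingResult` shape and `aggregateUsage` helper are hypothetical; only the `usage: { tokens: number }` return shape comes from the entry above:

```typescript
// Hypothetical sketch of aggregating embedding token usage,
// mirroring the usage: { tokens: number } shape returned by saveMessages.
interface EmbeddingResult {
  vector: number[];
  usage: { tokens: number };
}

function aggregateUsage(results: EmbeddingResult[]): { tokens: number } {
  // Sum the per-embedding token counts into one total.
  return { tokens: results.reduce((sum, r) => sum + r.usage.tokens, 0) };
}

const results: EmbeddingResult[] = [
  { vector: [0.1, 0.2], usage: { tokens: 12 } },
  { vector: [0.3, 0.4], usage: { tokens: 7 } },
];
console.log(aggregateUsage(results)); // { tokens: 19 }
```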
- Fixed a security issue where sensitive observability credentials (such as Langfuse API keys) could be exposed in tool execution error logs. The tracingContext is now properly excluded from logged data.
Fixes: #12669
- Fixed issue where some models incorrectly call skill names directly as tools instead of using skill-activate. Added clearer system instructions that explicitly state skills are NOT tools and must be activated via skill-activate with the skill name as the "name" parameter. Fixes #12654.
Fixes: #12677
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
- Improved workspace tool descriptions with clearer usage guidance for read_file, edit_file, and execute_command tools.
Fixes: #12640
- Fixed JSON parsing in agent network to handle malformed LLM output. Uses parsePartialJson from AI SDK to recover truncated JSON, missing braces, and unescaped control characters instead of failing immediately. This reduces unnecessary retry round-trips when the routing agent generates slightly malformed JSON for tool/workflow prompts. Fixes #12519.
Fixes: #12526
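For intuition, here is a toy version of truncated-JSON recovery. The real fix uses parsePartialJson from the AI SDK; this sketch only handles the missing-closing-brace case and is not the actual implementation:

```typescript
// Toy sketch: recover truncated JSON by appending closing braces until it parses.
// parsePartialJson from the AI SDK handles far more cases (strings, arrays, etc.).
function repairTruncatedJson(text: string): unknown {
  let candidate = text;
  for (let i = 0; i < 4; i++) {
    try {
      return JSON.parse(candidate);
    } catch {
      candidate += "}"; // try closing one more unbalanced brace
    }
  }
  return undefined;
}

console.log(repairTruncatedJson('{"tool": "search", "args": {"q": "docs"'));
```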
@mastra/client-js@1.2.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
@mastra/convex@1.0.2
- Fixed import path for storage constants in Convex server storage to use the correct @mastra/core/storage/constants subpath export
Fixes: #12560
@mastra/editor@0.2.0
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
@mastra/elasticsearch@1.1.0
- Added API key, basic, and bearer authentication options for Elasticsearch connections.
Changed: vector IDs now come from the Elasticsearch `_id`; stored `id` fields are no longer written (breaking if you relied on `source.id`).
Why: this aligns with Elasticsearch auth best practices and avoids duplicate IDs in stored documents.

Before:

```typescript
const store = new ElasticSearchVector({ url, id: "my-index" });
```

After:

```typescript
const store = new ElasticSearchVector({
  url,
  id: "my-index",
  auth: { apiKey: process.env.ELASTICSEARCH_API_KEY! }
});
```

Fixes: #11298
@mastra/evals@1.1.0
- Added `getContext` hook to hallucination scorer for dynamic context resolution at runtime. This enables live scoring scenarios where context (like tool results) is only available when the scorer runs. Also added `extractToolResults` utility function to help extract tool results from scorer output.

Before (static context):

```typescript
const scorer = createHallucinationScorer({
  model: openai("gpt-4o"),
  options: {
    context: ["The capital of France is Paris.", "France is in Europe."]
  }
});
```

After (dynamic context from tool results):

```typescript
import { extractToolResults } from "@mastra/evals/scorers";

const scorer = createHallucinationScorer({
  model: openai("gpt-4o"),
  options: {
    getContext: ({ run }) => {
      const toolResults = extractToolResults(run.output);
      return toolResults.map((t) => JSON.stringify({ tool: t.toolName, result: t.result }));
    }
  }
});
```

Fixes: #12639
@mastra/fastify@1.1.1
- Fixed missing cross-origin headers on streaming responses when using the Fastify adapter. Headers set by plugins (like @fastify/cors) are now preserved when streaming. See #12622
Fixes: #12633
@mastra/inngest@1.0.2
- Fixed long-running steps causing Inngest workflows to fail
Fixes: #12522
@mastra/libsql@1.2.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
@mastra/mcp-docs-server@1.1.0
- Restructure and tidy up the MCP Docs Server. It now focuses more on documentation and uses fewer tools.
Removed tools that sourced content from:
- Blog
- Package changelog
- Examples
The local docs source now uses the generated llms.txt files from the official documentation, making it more accurate and easier to maintain.
Fixes: #12623
@mastra/memory@1.1.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Expose token usage from embedding operations:
  - `saveMessages` now returns `usage: { tokens: number }` with aggregated token count from all embeddings
  - `recall` now returns `usage: { tokens: number }` from the vector search query embedding
  - Updated abstract method signatures in `MastraMemory` to include optional `usage` in return types

This allows users to track embedding token usage when using the Memory class.
Fixes: #12556
@mastra/mongodb@1.2.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
@mastra/observability@1.2.0
- Increased default serialization limits for AI tracing. The maxStringLength is now 128KB (previously 1KB) and maxDepth is 8 (previously 6). These changes prevent truncation of large LLM prompts and responses during tracing.
To restore the previous behavior, set serializationOptions in your observability config:

```typescript
serializationOptions: {
  maxStringLength: 1024,
  maxDepth: 6,
}
```

Fixes: #12579
- Fixed CloudFlare Workers deployment failure caused by `fileURLToPath` being called at module initialization time.
Moved the SNAPSHOTS_DIR calculation from top-level module code into a lazy getter function. In CloudFlare Workers (V8 runtime), import.meta.url is undefined during worker startup, causing the previous code to throw. The snapshot functionality is only used for testing, so deferring initialization has no impact on normal operation.
Fixes: #12540
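The lazy-getter pattern behind this fix looks roughly like the following generic sketch (not the actual @mastra/observability code; `computeSnapshotsDir` is a placeholder for the fileURLToPath-based logic):

```typescript
// Generic lazy-initialization sketch: defer environment-dependent work
// (like reading import.meta.url) until first use instead of module load.
let cachedDir: string | undefined;

function getSnapshotsDir(): string {
  if (cachedDir === undefined) {
    // At module init on CloudFlare Workers this work would throw;
    // deferred here, it only runs when snapshots are actually requested.
    cachedDir = computeSnapshotsDir();
  }
  return cachedDir;
}

function computeSnapshotsDir(): string {
  return "/tmp/snapshots"; // placeholder for the real path computation
}
```

Importing this module never triggers the path computation, so worker startup stays safe.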
@mastra/pg@1.2.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:

```typescript
import { Agent } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { openai } from "@ai-sdk/openai";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```

What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:

```typescript
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```

Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
@mastra/playground-ui@9.0.0
- Use EntryCell icon prop for source indicator in agent table
Fixes: #12515
- Add Observational Memory UI to the playground. Shows observation/reflection markers inline in the chat thread, and adds an Observational Memory panel to the agent info section with observations, reflection history, token usage, and config. All OM UI is gated behind a context provider that no-ops when OM isn't configured.
Fixes: #12599
- Added MultiCombobox component for multi-select scenarios, and JSONSchemaForm compound component for building JSON schema definitions visually. The Combobox component now supports description text on options and error states.
Fixes: #12616
- Added ContentBlocks, a reusable drag-and-drop component for building ordered lists of editable content. Also includes AgentCMSBlocks, a ready-to-use implementation for agent system prompts with add, delete, and reorder functionality.
Fixes: #12629
- Redesigned toast component with outline circle icons, left-aligned layout, and consistent design system styling
Fixes: #12618
- Updated Badge component styling: increased height to 28px, changed to pill shape with rounded-full, added border, and increased padding for better visual appearance.
Fixes: #12511
- Fixed custom gateway provider detection in Studio.
What changed:
- Studio now correctly detects connected custom gateway providers (e.g., providers registered as `acme/custom` are now found when the agent uses model `acme/custom/gpt-4o`)
- The model selector properly displays and updates models for custom gateway providers
- "Enhance prompt" feature works correctly with custom gateway providers
Why:
Custom gateway providers are stored with a gateway prefix (e.g., acme/custom), but the model router extracts just the provider part (e.g., custom). The lookups were failing because they only did exact matching. Now both backend and frontend use fallback logic to find providers with gateway prefixes.
Fixes: #11815
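A minimal sketch of the fallback lookup described above. The registry shape and the `findProvider` helper are illustrative assumptions for this changelog note, not the actual Studio code:

```typescript
// Illustrative only: registry keys may carry a gateway prefix like "acme/custom".
type ProviderRegistry = Record<string, { models: string[] }>;

function findProvider(registry: ProviderRegistry, providerId: string) {
  // Exact match first
  if (registry[providerId]) return registry[providerId];
  // Fall back to gateway-prefixed keys, e.g. resolve "custom" to "acme/custom"
  const key = Object.keys(registry).find((k) => k.endsWith(`/${providerId}`));
  return key ? registry[key] : undefined;
}
```

With a registry containing `acme/custom`, looking up either the full key or the bare provider part `custom` now resolves to the same provider instead of failing on exact-match-only lookups.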
- Fixed variable highlighting in markdown lists - variables like `{{name}}` now correctly display in orange inside list items.
Fixes: #12653
- Added markdown language support to CodeEditor with syntax highlighting for headings, emphasis, links, and code blocks. New `language` prop accepts 'json' (default) or 'markdown'. Added variable highlighting extension that visually distinguishes `{{variableName}}` patterns with orange styling when the `highlightVariables` prop is enabled.
Fixes: #12621
- Fixed the Tools page incorrectly displaying as empty when tools are defined inline in agent files.
Fixes: #12531
- Fixed rule engine bugs: type-safe comparisons for greater_than/less_than operators, array support for contains/not_contains, consistent path parsing for dot notation, and prevented Infinity/NaN strings from being converted to JS special values
Fixes: #12624
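A hedged sketch of the fixed comparison semantics; this is an illustration of the behaviors the note describes (type-safe comparisons, no coercion of "Infinity"/"NaN" strings), not the actual rule engine code:

```typescript
// Illustrative greater_than semantics: same-type comparisons only, and numeric
// strings that would coerce to Infinity/NaN are NOT treated as JS special values.
function greaterThan(a: unknown, b: unknown): boolean {
  if (typeof a === "number" && typeof b === "number") return a > b;
  if (typeof a === "string" && typeof b === "string") {
    const na = Number(a);
    const nb = Number(b);
    // Compare numerically only when both parse to finite numbers
    if (Number.isFinite(na) && Number.isFinite(nb)) return na > nb;
    return a > b; // otherwise fall back to lexicographic comparison
  }
  return false; // mixed or unsupported types never match
}
```

Note that `greaterThan("10", "9")` is true under numeric comparison, whereas a naive lexicographic compare would say "10" < "9"; and mixed-type inputs like `greaterThan(3, "2")` simply don't match rather than silently coercing.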
- Added Add Skill dialog for browsing and installing skills from skills.sh registry.
New features:
- Search skills or browse popular skills from skills.sh
- Preview skill content with rendered SKILL.md (tables, code blocks, etc.)
- Install, update, and remove skills directly from the UI
- Shows installed status for skills already in workspace
Fixes: #12492
- Fixed toast imports to use custom wrapper for consistent styling
Fixes: #12618
- Fixed sidebar tooltip styling in collapsed mode by removing hardcoded color overrides
Fixes: #12537
- Added CMS block conditional rules component and unified JsonSchema types across the codebase. The new AgentCMSBlockRules component allows content blocks to be displayed conditionally based on rules. Also added jsonSchemaToFields utility for bi-directional schema conversion.
Fixes: #12651
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
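A sketch of how the new `MastraClientError` might be consumed; the constructor signature below is assumed from the changelog description (status/body properties), not taken from the actual implementation:

```typescript
// Assumed shape: an Error subclass exposing the HTTP status and response body.
class MastraClientError extends Error {
  constructor(message: string, public status: number, public body: unknown) {
    super(message);
    this.name = "MastraClientError";
    // Keep instanceof working when TypeScript targets older ES versions
    Object.setPrototypeOf(this, MastraClientError.prototype);
  }
}

// Callers can branch on HTTP status instead of string-matching error messages.
function describeFsError(err: unknown): string {
  if (err instanceof MastraClientError && err.status === 404) {
    return "File not found in workspace";
  }
  return "Unexpected error";
}
```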
- Fixed combobox dropdowns in agent create/edit dialogs to render within the modal container, preventing z-index and scrolling issues.
Fixes: #12510
- Added warning toast and banner when installing skills that aren't discovered due to missing .agents/skills path configuration.
Fixes: #12547
@mastra/schema-compat@1.1.0
- Added Standard Schema support to `@mastra/schema-compat`. This enables interoperability with any schema library that implements the Standard Schema specification.
New exports:
- `toStandardSchema()` - Convert Zod, JSON Schema, or AI SDK schemas to Standard Schema format
- `StandardSchemaWithJSON` - Type for schemas implementing both validation and JSON Schema conversion
- `InferInput`, `InferOutput` - Utility types for type inference
Example usage:
```ts
import { toStandardSchema } from "@mastra/schema-compat/schema";
import { z } from "zod";

// Convert a Zod schema to Standard Schema
const zodSchema = z.object({ name: z.string(), age: z.number() });
const standardSchema = toStandardSchema(zodSchema);

// Use validation
const result = standardSchema["~standard"].validate({ name: "John", age: 30 });

// Get JSON Schema
const jsonSchema = standardSchema["~standard"].jsonSchema.output({ target: "draft-07" });
```
Fixes: #12527
@mastra/server@1.2.0
- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:
```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

const memory = new Memory({
  storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  options: {
    observationalMemory: true
  }
});

const agent = new Agent({
  name: "my-agent",
  model: openai("gpt-4o"),
  memory
});
```
What's new:
- `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
- Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
- Manual `observe()` API for triggering observation outside the normal agent loop
- New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
- `Agent.findProcessor()` method for looking up processors by ID
- `processorStates` for persisting processor state across loop iterations
- Abort signal propagation to processors
- `ProcessorStreamWriter` for custom stream events from processors
Fixes: #12599
- Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
- Agent Storage & Retrieval: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
- Version Management: Create and manage multiple versions of agents, with support for activating specific versions
- Dependency Resolution: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
- Caching: Built-in caching for improved performance when repeatedly accessing stored agents
- Type Safety: Full TypeScript support with proper typing for stored configurations
Usage Example:
```ts
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";

// Initialize editor with Mastra
const mastra = new Mastra({
  /* config */
  editor: new MastraEditor()
});

// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
  name: "customer-support",
  instructions: "Help customers with inquiries",
  model: { provider: "openai", name: "gpt-4" },
  tools: ["search-kb", "create-ticket"],
  workflows: ["escalation-flow"],
  memory: { vector: "pinecone-db" }
});

// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");

// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
```
Storage Improvements:
- Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
- Improved agent resolution queries to properly merge version data
- Enhanced type safety for serialized configurations
Fixes: #12631
- Fixed custom gateway provider detection in Studio.
What changed:
- Studio now correctly detects connected custom gateway providers (e.g., providers registered as `acme/custom` are now found when the agent uses model `acme/custom/gpt-4o`)
- The model selector properly displays and updates models for custom gateway providers
- "Enhance prompt" feature works correctly with custom gateway providers
Why:
Custom gateway providers are stored with a gateway prefix (e.g., acme/custom), but the model router extracts just the provider part (e.g., custom). The lookups were failing because they only did exact matching. Now both backend and frontend use fallback logic to find providers with gateway prefixes.
Fixes: #11815
- Added skills.sh proxy endpoints for browsing, searching, and installing skills from the community registry.
New endpoints:
- GET /api/workspaces/:id/skills-sh/search - Search skills
- GET /api/workspaces/:id/skills-sh/popular - Browse popular skills
- GET /api/workspaces/:id/skills-sh/preview - Preview skill SKILL.md content
- POST /api/workspaces/:id/skills-sh/install - Install a skill from GitHub
- POST /api/workspaces/:id/skills-sh/update - Update installed skills
- POST /api/workspaces/:id/skills-sh/remove - Remove an installed skill
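The route paths above compose predictably from the workspace ID and action. A small helper (hypothetical, not part of `@mastra/server`) makes the pattern explicit:

```typescript
// Hypothetical helper mirroring the endpoint list above.
type SkillsShAction = "search" | "popular" | "preview" | "install" | "update" | "remove";

function skillsShEndpoint(workspaceId: string, action: SkillsShAction): string {
  return `/api/workspaces/${encodeURIComponent(workspaceId)}/skills-sh/${action}`;
}
```

For example, `skillsShEndpoint("my-workspace", "search")` yields `/api/workspaces/my-workspace/skills-sh/search`. Per the list above, the search/popular/preview routes take GET while install/update/remove take POST.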
Fixes: #12492
- Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
mastra@1.2.0
- Fixed peer dependency checker fix command to suggest the correct package to upgrade:
- If peer dep is too old (below range) → suggests upgrading the peer dep (e.g., `@mastra/core`)
- If peer dep is too new (above range) → suggests upgrading the package requiring it (e.g., `@mastra/libsql`)
Fixes: #12529
- New feature: You can install the Mastra skill during the `create-mastra` wizard.
The wizard now asks whether to install the official Mastra skill. Choose your preferred agent, and your newly created project is set up with it.
For non-interactive setup, use the `--skills` flag, which accepts comma-separated agent names (e.g. `--skills claude-code`).
Fixes: #12582
- Pre-select Claude Code, Codex, OpenCode, and Cursor as default agents when users choose to install Mastra skills during project creation. Codex has been promoted to the popular agents list for better visibility.
Fixes: #12626
- Add `AGENTS.md` file (and optionally `CLAUDE.md`) during `create-mastra` project creation
Fixes: #12658