mastra-ai/mastra mastra@1.3.0
February 11, 2026

Highlights

Observational Memory Async Buffering (default-on) + New Streaming Events

Observational memory now buffers background observations/reflections by default to avoid blocking as conversations grow, and introduces structured streaming status/events (data-om-status plus buffering start/end/failed markers) for better UI/telemetry.

Workspace Mounts (CompositeFilesystem)

Workspaces can now mount multiple filesystem providers (S3/GCS/local/etc.) into a single unified directory tree via CompositeFilesystem, so agents and tools can access files across backends through one path structure.

Changelog

@mastra/core@1.3.0

Minor Changes

  • Added mount support to workspaces, so you can combine multiple storage providers (S3, GCS, local disk, etc.) under a single directory tree. This lets agents access files from different sources through one unified filesystem. (#12851)

    Why: Previously a workspace could only use one filesystem. With mounts, you can organize files from different providers under different paths — for example, S3 data at /data and GCS models at /models — without agents needing to know which provider backs each path.

    What's new:

    • Added CompositeFilesystem for combining multiple filesystems under one tree
    • Added descriptive error types for sandbox and mount failures (e.g., SandboxTimeoutError, MountError)
    • Improved MastraFilesystem and MastraSandbox base classes with safer concurrent lifecycle handling
    import { Workspace, CompositeFilesystem } from "@mastra/core/workspace";
    
    // Mount multiple filesystems under one tree
    const composite = new CompositeFilesystem({
      mounts: {
        "/data": s3Filesystem,
        "/models": gcsFilesystem
      }
    });
    
    const workspace = new Workspace({
      filesystem: composite,
      sandbox: e2bSandbox
    });
    

  • Added requestContextSchema and rule-based conditional fields for stored agents. (#12896)

Stored agent fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions) can now be configured as conditional variants with rule groups that evaluate against request context at runtime. All matching variants accumulate — arrays are concatenated and objects are shallow-merged — so agents dynamically compose their configuration based on the incoming request context.

New requestContextSchema field

Stored agents now accept an optional requestContextSchema (JSON Schema) that is converted to a Zod schema and passed to the Agent constructor, enabling request context validation.

Conditional field example

await agentsStore.create({
  agent: {
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
    model: { provider: "openai", name: "gpt-4" },
    tools: [
      { value: { "basic-tool": {} } },
      {
        value: { "premium-tool": {} },
        rules: {
          operator: "AND",
          conditions: [{ field: "tier", operator: "equals", value: "premium" }]
        }
      }
    ],
    requestContextSchema: {
      type: "object",
      properties: { tier: { type: "string" } }
    }
  }
});
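
For illustration, a minimal sketch of resolving the conditional tools at runtime. It assumes editor.agent.getById returns a runnable agent (mirroring the dynamic-instructions example in the @mastra/editor section below) and that RequestContext is available as shown there:

const editor = mastra.getEditor();
const agent = await editor.agent.getById("my-agent");

// With tier === "premium" in the request context, both "basic-tool" and
// "premium-tool" apply; without it, only "basic-tool" is available.
const result = await agent.generate("Help me upgrade my plan", {
  requestContext: new RequestContext([["tier", "premium"]])
});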
  • Added native @ai-sdk/groq support to the model router. Groq models now use the official AI SDK package instead of falling back to OpenAI-compatible mode. (#12741)
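
    As a hedged illustration, a model-router string with a groq/ prefix should now resolve through @ai-sdk/groq; the model id below is illustrative and may need to be swapped for one available to your account:

    import { Agent } from "@mastra/core/agent";

    // "groq/<model-id>" now routes through the official @ai-sdk/groq package
    // instead of the OpenAI-compatible fallback (model id is illustrative).
    const agent = new Agent({
      name: "groq-agent",
      instructions: "You are a helpful assistant.",
      model: "groq/llama-3.3-70b-versatile"
    });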

    • Added a new scorer-definitions storage domain for storing LLM-as-judge and preset scorer configurations in the database
    • Introduced a VersionedStorageDomain generic base class that unifies AgentsStorage, PromptBlocksStorage, and ScorerDefinitionsStorage with shared CRUD methods (create, getById, getByIdResolved, update, delete, list, listResolved)
    • Flattened stored scorer type system: replaced nested preset/customLLMJudge config with top-level type, instructions, scoreRange, and presetConfig fields
    • Refactored MastraEditor to use a namespace pattern (editor.agent.*, editor.scorer.*, editor.prompt.*) backed by a CrudEditorNamespace base class with built-in caching and an onCacheEvict hook
    • Added rawConfig support to MastraBase and MastraScorer via toRawConfig(), so hydrated primitives carry their stored configuration
    • Added prompt block and scorer registration to the Mastra class (addPromptBlock, removePromptBlock, addScorer, removeScorer)

    Creating a stored scorer (LLM-as-judge):

    const scorer = await editor.scorer.create({
      id: "my-scorer",
      name: "Response Quality",
      type: "llm-judge",
      instructions: "Evaluate the response for accuracy and helpfulness.",
      model: { provider: "openai", name: "gpt-4o" },
      scoreRange: { min: 0, max: 1 }
    });
    

    Retrieving and resolving a stored scorer:

    // Fetch the stored definition from DB
    const definition = await editor.scorer.getById("my-scorer");
    
    // Resolve it into a runnable MastraScorer instance
    const runnableScorer = editor.scorer.resolve(definition);
    
    // Execute the scorer
    const result = await runnableScorer.run({
      input: "What is the capital of France?",
      output: "The capital of France is Paris."
    });
    

    Editor namespace pattern (before/after):

    // Before
    const agent = await editor.getStoredAgentById("abc");
    const prompts = await editor.listPromptBlocks();
    
    // After
    const agent = await editor.agent.getById("abc");
    const prompts = await editor.prompt.list();
    

    Generic storage domain methods (before/after):

    // Before
    const store = storage.getStore("agents");
    await store.createAgent({ agent: input });
    await store.getAgentById({ id: "abc" });
    await store.deleteAgent({ id: "abc" });
    
    // After
    const store = storage.getStore("agents");
    await store.create({ agent: input });
    await store.getById("abc");
    await store.delete("abc");
    
  • Added mount status and error information to filesystem directory listings, so the UI can show whether each mount is healthy or has issues. Improved error handling when mount operations fail. Fixed tree formatter to use case-insensitive sorting to match native tree output. (#12605)

  • Added workspace registration and tool context support. (#12607)

    Why - Makes it easier to manage multiple workspaces at runtime and lets tools read/write files in the intended workspace.

    Workspace Registration - Added a workspace registry so you can list and fetch workspaces by id with addWorkspace(), getWorkspaceById(), and listWorkspaces(). Agent workspaces are auto-registered when adding agents.

    Before

    const mastra = new Mastra({ workspace: myWorkspace });
    // No way to look up workspaces by id or list all workspaces
    

    After

    const mastra = new Mastra({ workspace: myWorkspace });
    
    // Look up by id
    const ws = mastra.getWorkspaceById("my-workspace");
    
    // List all registered workspaces
    const allWorkspaces = mastra.listWorkspaces();
    
    // Register additional workspaces
    mastra.addWorkspace(anotherWorkspace);
    

    Tool Workspace Access - Tools can access the workspace through context.workspace during execution, enabling filesystem and sandbox operations.

    const myTool = createTool({
      id: "file-reader",
      execute: async ({ context }) => {
        const fs = context.workspace?.filesystem;
        const content = await fs?.readFile("config.json");
        return { content };
      }
    });
    

    Dynamic Workspace Configuration - Workspace can be configured dynamically via agent config functions. Dynamically created workspaces are auto-registered with Mastra, making them available via listWorkspaces().

    const agent = new Agent({
      workspace: ({ mastra, requestContext }) => {
        // Return workspace dynamically based on context
        const workspaceId = requestContext?.get("workspaceId") || "default";
        return mastra.getWorkspaceById(workspaceId);
      }
    });
    
    • Changed stored agent tools field from string[] to Record<string, { description?: string }> to allow per-tool description overrides (before/after sketch below)
    • When a stored agent specifies a custom description for a tool, the override is applied at resolution time
    • Updated server API schemas, client SDK types, and editor resolution logic accordingly
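
    A before/after sketch of the stored tools field (tool ids are illustrative):

    // Before: a flat list of tool ids
    tools: ["basic-tool", "premium-tool"]

    // After: per-tool config objects with an optional description override,
    // applied at resolution time
    tools: {
      "basic-tool": {},
      "premium-tool": { description: "Tools available to premium-tier users." }
    }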
  • Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)

    If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:

    // Before
    const result = await agent.cloneAgent({ newId: "my-clone" });
    
    // After
    const editor = mastra.getEditor();
    const result = await editor.agent.clone(agent, { newId: "my-clone" });
    

    Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.

    Added getConfiguredProcessorIds() to the Agent class, which returns raw input/output processor IDs for the agent's configuration.

Patch Changes

  • Update provider registry and model documentation with latest models and providers (717ffab)

  • Fixed observational memory progress bars resetting to zero after agent responses finish. (#12934)

  • Fixed issues with stored agents (#12790)

  • Fixed sub-agent tool approval and suspend events not being surfaced to the parent agent stream. This enables proper suspend/resume workflows and approval handling when nested agents require tool approvals. (#12732)

    Related to issue #12552.

  • Fixed stale agent data in CMS pages by adding removeAgent method to Mastra and updating clearStoredAgentCache to clear both Editor cache and Mastra registry when stored agents are updated or deleted (#12693)

  • Fixed stored scorers not being registered on the Mastra instance. Scorers created via the editor are now automatically discoverable through mastra.getScorer() and mastra.getScorerById(), matching the existing behavior of stored agents. Previously, stored scorers could only be resolved inline but were invisible to the runtime registry, causing lookups to fail. (#12903)

  • Fixed generateTitle running on every conversation turn instead of only the first, which caused redundant title generation calls. This happened when lastMessages was disabled or set to false. Titles are now correctly generated only on the first turn. (#12890)

  • Fixed workflow step errors not being propagated to the configured Mastra logger. The execution engine now properly propagates the Mastra logger through the inheritance chain, and the evented step executor logs errors with structured MastraError context (matching the default engine behavior). Closes #12793 (#12834)

  • Update memory config and exports: (#12704)

    • Updated SerializedMemoryConfig to allow embedder?: EmbeddingModelId | string for flexibility
    • Exported EMBEDDING_MODELS and EmbeddingModelInfo for use in server endpoints
  • Fixed a catch-22 where third-party AI SDK providers (like ollama-ai-provider-v2) were rejected by both stream() and streamLegacy() due to unrecognized specificationVersion values. (#12856)

    When a model has a specificationVersion that isn't 'v1', 'v2', or 'v3' (e.g., from a third-party provider), two fixes now apply:

    1. Auto-wrapping in resolveModelConfig(): Models with unknown spec versions that have doStream/doGenerate methods are automatically wrapped as AI SDK v5 models, preventing the catch-22 entirely.
    2. Improved error messages: If a model still reaches the version check, error messages now show the actual unrecognized specificationVersion instead of creating circular suggestions between stream() and streamLegacy().
  • Fixed routing output so users only see the final answer when routing handles a request directly. Previously, an internal routing explanation appeared before the answer and was duplicated. Fixes #12545. (#12786)

  • Supporting changes for async buffering in observational memory, including new config options, streaming events, and UI markers. (#12891)

  • Fixed an issue where processor retry (via abort({ retry: true }) in processOutputStep) would send the rejected assistant response back to the LLM on retry. This confused models and often caused empty text responses. The rejected response is now removed from the message list before the retry iteration. (#12799)

  • Fixed Moonshot AI (moonshotai and moonshotai-cn) models using the wrong base URL. The Anthropic-compatible endpoint was not being applied, causing API calls to fail with an upstream LLM error. (#12750)

  • Fixed messages not being persisted to the database when using the stream-legacy endpoint. The thread is now saved to the database immediately when created, preventing a race condition where storage backends like PostgreSQL would reject message inserts because the thread didn't exist yet. Fixes #12566. (#12774)

  • When calling mastra.setLogger(), memory instances were not being updated with the new logger. This caused memory-related errors to be logged via the default ConsoleLogger instead of the configured logger. (#12905)

  • Fixed tool input validation failing when LLMs return stringified JSON for array or object parameters. Some models (e.g., GLM4.7) send "[\"file.py\"]" instead of ["file.py"] for array fields, which caused Zod validation to reject the input. The validation pipeline now automatically detects and parses stringified JSON values when the schema expects an array or object. (GitHub #12757) (#12771)

  • Fixed working memory tools being injected when no thread or resource context is provided. Made working memory tool execute scope-aware: thread-scoped requires threadId, resource-scoped requires resourceId (previously both were always required regardless of scope). (#12831)

  • Fixed a crash when using agent workflows that have no input schema. Input now passes through on first invocation, so workflows run instead of failing. (#12739) (#12785)

  • Fixed an issue where client tools could not be used with agent.network(). Client tools configured in an agent's defaultOptions will now be available during network execution. (#12821)

    Fixes #12752

  • Steps now support an optional metadata property for storing arbitrary key-value data. This metadata is preserved through step serialization and is available in the workflow graph, enabling use cases like UI annotations or custom step categorization. (#12861)

    import { createStep } from "@mastra/core/workflows";
    import { z } from "zod";
    
    const step = createStep({
      //...step information
    +  metadata: {
    +    category: "orders",
    +    priority: "high",
    +    version: "1.0.0",
    +  },
    });
    

    Metadata values must be serializable (no functions or circular references).

  • Fixed: You can now pass workflows with a requestContextSchema to the Mastra constructor without a type error. Related: #12773. (#12857)

  • Fixed TypeScript type errors when using .optional().default() in workflow input schemas. Workflows with default values in their schemas no longer produce false type errors when chaining steps with .then(). Fixes #12634 (#12778)

  • Fix setLogger to update workflow loggers (#12889)

    When calling mastra.setLogger(), workflows were not being updated with the new logger. This caused workflow errors to be logged via the default ConsoleLogger instead of the configured logger (e.g., PinoLogger with HttpTransport), resulting in missing error logs in Cloud deployments.

@mastra/agent-builder@1.0.3

Patch Changes

@mastra/deployer@1.3.0

Minor Changes

  • Added support for request context presets in Mastra Studio. You can now define a JSON file with named requestContext presets and pass it via the --request-context-presets CLI flag to both mastra dev and mastra studio commands. A dropdown selector appears in the Studio Playground, allowing you to quickly switch between preset configurations. (#12501)

    Usage:

    mastra dev --request-context-presets ./presets.json
    mastra studio --request-context-presets ./presets.json
    

    Presets file format:

    {
      "development": { "userId": "dev-user", "env": "development" },
      "production": { "userId": "prod-user", "env": "production" }
    }
    

    When presets are loaded, a dropdown appears above the JSON editor on the Request Context page. Selecting a preset populates the editor, and manually editing the JSON automatically switches back to "Custom".

Patch Changes

@mastra/editor@0.3.0

Minor Changes

  • Added requestContextSchema and rule-based conditional fields for stored agents. (#12896)

    Stored agent fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions) can now be configured as conditional variants with rule groups that evaluate against request context at runtime. All matching variants accumulate — arrays are concatenated and objects are shallow-merged — so agents dynamically compose their configuration based on the incoming request context.

    New requestContextSchema field

    Stored agents now accept an optional requestContextSchema (JSON Schema) that is converted to a Zod schema and passed to the Agent constructor, enabling request context validation.

    Conditional field example

    await agentsStore.create({
      agent: {
        id: "my-agent",
        name: "My Agent",
        instructions: "You are a helpful assistant",
        model: { provider: "openai", name: "gpt-4" },
        tools: [
          { value: { "basic-tool": {} } },
          {
            value: { "premium-tool": {} },
            rules: {
              operator: "AND",
              conditions: [{ field: "tier", operator: "equals", value: "premium" }]
            }
          }
        ],
        requestContextSchema: {
          type: "object",
          properties: { tier: { type: "string" } }
        }
      }
    });
    
  • Added dynamic instructions for stored agents. Agent instructions can now be composed from reusable prompt blocks with conditional rules and variable interpolation, enabling a prompt-CMS-like editing experience. (#12861)

    Instruction blocks can be mixed in an agent's instructions array:

    • text — static text with {{variable}} interpolation
    • prompt_block_ref — reference to a versioned prompt block stored in the database
    • prompt_block — inline prompt block with optional conditional rules

    Creating a prompt block and using it in a stored agent:

    // Create a reusable prompt block
    const block = await editor.createPromptBlock({
      id: "security-rules",
      name: "Security Rules",
      content: "You must verify the user's identity. The user's role is {{user.role}}.",
      rules: {
        operator: "AND",
        conditions: [{ field: "user.isAuthenticated", operator: "equals", value: true }]
      }
    });
    
    // Create a stored agent that references the prompt block
    await editor.createStoredAgent({
      id: "support-agent",
      name: "Support Agent",
      instructions: [
        { type: "text", content: "You are a helpful support agent for {{company}}." },
        { type: "prompt_block_ref", id: "security-rules" },
        {
          type: "prompt_block",
          content: "Always be polite.",
          rules: { operator: "AND", conditions: [{ field: "tone", operator: "equals", value: "formal" }] }
        }
      ],
      model: { provider: "openai", name: "gpt-4o" }
    });
    
    // At runtime, instructions resolve dynamically based on request context
    const agent = await editor.getStoredAgentById("support-agent");
    const result = await agent.generate("Help me reset my password", {
      requestContext: new RequestContext([
        ["company", "Acme Corp"],
        ["user.isAuthenticated", true],
        ["user.role", "admin"],
        ["tone", "formal"]
      ])
    });
    

    Prompt blocks are versioned — updating a block's content takes effect immediately for all agents referencing it, with no cache clearing required.

    • Added a new scorer-definitions storage domain for storing LLM-as-judge and preset scorer configurations in the database
    • Introduced a VersionedStorageDomain generic base class that unifies AgentsStorage, PromptBlocksStorage, and ScorerDefinitionsStorage with shared CRUD methods (create, getById, getByIdResolved, update, delete, list, listResolved)
    • Flattened stored scorer type system: replaced nested preset/customLLMJudge config with top-level type, instructions, scoreRange, and presetConfig fields
    • Refactored MastraEditor to use a namespace pattern (editor.agent.*, editor.scorer.*, editor.prompt.*) backed by a CrudEditorNamespace base class with built-in caching and an onCacheEvict hook
    • Added rawConfig support to MastraBase and MastraScorer via toRawConfig(), so hydrated primitives carry their stored configuration
    • Added prompt block and scorer registration to the Mastra class (addPromptBlock, removePromptBlock, addScorer, removeScorer)

    Creating a stored scorer (LLM-as-judge):

    const scorer = await editor.scorer.create({
      id: "my-scorer",
      name: "Response Quality",
      type: "llm-judge",
      instructions: "Evaluate the response for accuracy and helpfulness.",
      model: { provider: "openai", name: "gpt-4o" },
      scoreRange: { min: 0, max: 1 }
    });
    

    Retrieving and resolving a stored scorer:

    // Fetch the stored definition from DB
    const definition = await editor.scorer.getById("my-scorer");
    
    // Resolve it into a runnable MastraScorer instance
    const runnableScorer = editor.scorer.resolve(definition);
    
    // Execute the scorer
    const result = await runnableScorer.run({
      input: "What is the capital of France?",
      output: "The capital of France is Paris."
    });
    

    Editor namespace pattern (before/after):

    // Before
    const agent = await editor.getStoredAgentById("abc");
    const prompts = await editor.listPromptBlocks();
    
    // After
    const agent = await editor.agent.getById("abc");
    const prompts = await editor.prompt.list();
    

    Generic storage domain methods (before/after):

    // Before
    const store = storage.getStore("agents");
    await store.createAgent({ agent: input });
    await store.getAgentById({ id: "abc" });
    await store.deleteAgent({ id: "abc" });
    
    // After
    const store = storage.getStore("agents");
    await store.create({ agent: input });
    await store.getById("abc");
    await store.delete("abc");
    
    • Changed stored agent tools field from string[] to Record<string, { description?: string }> to allow per-tool description overrides
    • When a stored agent specifies a custom description for a tool, the override is applied at resolution time
    • Updated server API schemas, client SDK types, and editor resolution logic accordingly
  • Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)

    If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:

    // Before
    const result = await agent.cloneAgent({ newId: "my-clone" });
    
    // After
    const editor = mastra.getEditor();
    const result = await editor.agent.clone(agent, { newId: "my-clone" });
    

    Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.

    Added getConfiguredProcessorIds() to the Agent class, which returns raw input/output processor IDs for the agent's configuration.

Patch Changes

@mastra/evals@1.1.1

Patch Changes

@mastra/mcp@1.0.1

Patch Changes

@mastra/mcp-docs-server@1.1.1

Patch Changes

@mastra/memory@1.2.0

Minor Changes

  • Async buffering for observational memory is now enabled by default. Observations are pre-computed in the background as conversations grow — when the context window fills up, buffered observations activate instantly with no blocking LLM call. This keeps agents responsive during long conversations. (#12939)

    Default settings:

    • observation.bufferTokens: 0.2 — buffer every 20% of messageTokens (~6k tokens with the default 30k threshold)
    • observation.bufferActivation: 0.8 — on activation, retain 20% of the message window
    • reflection.bufferActivation: 0.5 — start background reflection at 50% of the observation threshold

    Disabling async buffering:

    Set observation.bufferTokens: false to disable async buffering for both observations and reflections:

    const memory = new Memory({
      options: {
        observationalMemory: {
          model: "google/gemini-2.5-flash",
          observation: {
            bufferTokens: false
          }
        }
      }
    });
    

    Model is now required when passing an observational memory config object. Use observationalMemory: true for the default (google/gemini-2.5-flash), or set a model explicitly:

    // Uses default model (google/gemini-2.5-flash)
    observationalMemory: true
    
    // Explicit model
    observationalMemory: {
      model: "google/gemini-2.5-flash",
    }
    

    shareTokenBudget requires bufferTokens: false (temporary limitation). If you use shareTokenBudget: true, you must explicitly disable async buffering:

    observationalMemory: {
      model: "google/gemini-2.5-flash",
      shareTokenBudget: true,
      observation: { bufferTokens: false },
    }
    

    New streaming event: data-om-status replaces data-om-progress with a structured status object containing active window usage, buffered observation/reflection state, and projected activation impact.

    Buffering markers: New data-om-buffering-start, data-om-buffering-end, and data-om-buffering-failed streaming events for UI feedback during background operations.
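
    A minimal sketch of consuming these events from an agent stream for UI feedback. It assumes the events surface as typed parts on the full stream and that the status payload is carried on a data field (both assumptions); renderStatus and setBuffering are hypothetical UI helpers:

    const stream = await agent.stream("Summarize what you know about me");

    for await (const part of stream.fullStream) {
      // Assumption: observational-memory events arrive as data parts keyed by type.
      if (part.type === "data-om-status") {
        renderStatus(part.data); // active window usage, buffered state, projected impact
      } else if (part.type === "data-om-buffering-start") {
        setBuffering(true);
      } else if (
        part.type === "data-om-buffering-end" ||
        part.type === "data-om-buffering-failed"
      ) {
        setBuffering(false);
      }
    }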

Patch Changes

@mastra/playground-ui@10.0.0

Minor Changes

  • Added new agent creation page with CMS-style layout featuring Identity, Capabilities, and Revisions tabs. The page includes a prompt editor with Handlebars template support, partials management, and instruction diff viewing for revisions. (#12569)

  • Added support for request context presets in Mastra Studio. You can now define a JSON file with named requestContext presets and pass it via the --request-context-presets CLI flag to both mastra dev and mastra studio commands. A dropdown selector appears in the Studio Playground, allowing you to quickly switch between preset configurations. (#12501)

    Usage:

    mastra dev --request-context-presets ./presets.json
    mastra studio --request-context-presets ./presets.json
    

    Presets file format:

    {
      "development": { "userId": "dev-user", "env": "development" },
      "production": { "userId": "prod-user", "env": "production" }
    }
    

    When presets are loaded, a dropdown appears above the JSON editor on the Request Context page. Selecting a preset populates the editor, and manually editing the JSON automatically switches back to "Custom".

  • Added multi-block instruction editing for agents. Instructions can now be split into separate blocks that are reorderable via drag-and-drop, each with optional conditional display rules based on agent variables. Includes a preview dialog to test how blocks compile with different variable values. (#12759)

  • Update peer dependencies to match core package version bump (1.1.0) (#12508)

Patch Changes

@mastra/server@1.3.0

Minor Changes

  • Added stored scorer CRUD API and updated editor namespace calls (#12846)

    • Added server routes for stored scorer definitions: create, read, update, delete, list, and list resolved
    • Added StoredScorer resource to the client SDK with full CRUD support
    • Updated all server handlers to use the new editor namespace pattern (editor.agent.getById, editor.agent.list, editor.prompt.preview) and generic storage domain methods (store.create, store.getById, store.delete)
  • Update peer dependencies to match core package version bump (1.1.0) (#12508)

    • Changed stored agent tools field from string[] to Record<string, { description?: string }> to allow per-tool description overrides
    • When a stored agent specifies a custom description for a tool, the override is applied at resolution time
    • Updated server API schemas, client SDK types, and editor resolution logic accordingly

Patch Changes

  • Fixed observational memory progress bars resetting to zero after agent responses finish. (#12934)

  • Fixed issues with stored agents (#12790)

  • Added requestContextSchema and conditional field validation to stored agent API schemas. The stored agent create, update, and version endpoints now accept conditional variants for dynamically-configurable fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions). (#12896)

  • Fix stored agents functionality: (#12704)

    • Fixed auto-versioning bug where activeVersionId wasn't being updated when creating new versions
    • Added GET /vectors endpoint to list available vector stores
    • Added GET /embedders endpoint to list available embedding models
    • Added validation for memory configuration when semantic recall is enabled
    • Fixed version comparison in handleAutoVersioning to use the active version instead of latest
    • Added proper cache clearing after agent updates
  • Added POST /stored/agents/preview-instructions endpoint for resolving instruction blocks against a request context. This enables UI previews of how agent instructions will render with specific variables and rule conditions. Updated Zod schemas to support the new AgentInstructionBlock union type (text, prompt_block_ref, inline prompt_block) in agent version and stored agent responses. (#12776)

  • Improved workspace lookup performance while keeping backwards compatibility. (#12607)

    The workspace handlers now use Mastra's workspace registry (getWorkspaceById()) for faster lookup when available, and fall back to iterating through agents for older @mastra/core versions.

    This change is backwards compatible - newer @mastra/server works with both older and newer @mastra/core versions.

  • Route server errors through Mastra logger instead of console.error (#12888)

    Server adapter errors (handler errors, parsing errors, auth errors) now use the configured Mastra logger instead of console.error. This ensures errors are properly formatted as structured logs and sent to configured transports like HttpTransport.

  • Fixed Swagger UI not including the API prefix (e.g., /api) in request URLs. The OpenAPI spec now includes a servers field with the configured prefix, so Swagger UI correctly generates URLs like http://localhost:4111/api/agents instead of http://localhost:4111/agents. (#12847)

  • Fixed sort direction parameters being silently ignored in Thread Messages API when using bracket notation query params (e.g., orderBy[field]=createdAt&orderBy[direction]=DESC). The normalizeQueryParams function now reconstructs nested objects from bracket-notation keys, so both JSON format and bracket notation work correctly for orderBy, filter, metadata, and other complex query parameters. (Fixes #12816) (#12832)

  • Supporting work to enable workflow step metadata (#12508)

  • Made description and instructions required fields in the scorer edit form (#12897)

  • Added mount metadata to the workspace file listing response. File entries now include provider, icon, display name, and description for mounted filesystems. (#12851)

  • Added source repository info to workspace skill listings so clients can distinguish identically named skills installed from different repos. (#12678)

  • Improved error messages when skill installation fails, now showing the actual error instead of a generic message. (#12605)

  • Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)

    If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:

    // Before
    const result = await agent.cloneAgent({ newId: "my-clone" });
    
    // After
    const editor = mastra.getEditor();
    const result = await editor.agent.clone(agent, { newId: "my-clone" });
    

    Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.

    Added getConfiguredProcessorIds() to the Agent class, which returns raw input/output processor IDs for the agent's configuration.

  • Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:

    • @mastra/core@1.3.0
