Highlights
Background Tasks (Async Tool Execution + APIs + Storage Support)
Agents can now dispatch slow tool calls as background tasks while the main conversation keeps streaming, then inject results back into the loop when they finish. This comes with new `/api/background-tasks` endpoints (list/get/SSE stream), client methods (`listBackgroundTasks`, `getBackgroundTask`, `streamBackgroundTasks`), and new `BackgroundTasksStorage` domain implementations across major storage adapters.
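A hedged sketch of polling one of these tasks from the client. The method names (`getBackgroundTask`, `listBackgroundTasks`, `streamBackgroundTasks`) come from this release; the task shape, status values, and filter parameters below are illustrative stand-ins, not the published types:

```typescript
// Illustrative stand-ins for the background-task surface; only the method
// names are taken from this changelog, the shapes are assumptions.
type BackgroundTask = {
  id: string;
  status: 'pending' | 'running' | 'completed' | 'failed';
  result?: unknown;
};

interface BackgroundTaskClient {
  listBackgroundTasks(filters?: { status?: BackgroundTask['status'] }): Promise<BackgroundTask[]>;
  getBackgroundTask(id: string): Promise<BackgroundTask>;
}

// A task no longer needs polling once it reaches a terminal status.
function isSettled(task: BackgroundTask): boolean {
  return task.status === 'completed' || task.status === 'failed';
}

// Poll until the task settles; the SSE stream method would avoid polling
// entirely by pushing status updates instead.
async function waitForTask(
  client: BackgroundTaskClient,
  id: string,
  intervalMs = 1_000,
): Promise<BackgroundTask> {
  for (;;) {
    const task = await client.getBackgroundTask(id);
    if (isSettled(task)) return task;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```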
New Redis Storage Adapter (@mastra/redis)
Introduces `@mastra/redis`, a Redis-backed Mastra storage provider (memory/workflows/scores) using the official node-redis client, with flexible connection options including connection strings or injected preconfigured clients.
Netlify Edge Deployment Target
`NetlifyDeployer` adds a `target: 'edge'` option to deploy as Netlify Edge Functions (Deno at the edge) with CPU-time limits instead of hard wall-clock timeouts, making it a better fit for longer-running AI workflows than the 60s serverless limit.
Observability: RAG Runs in Traces + Lightweight Trace/Span Fetching
RAG ingestion runs now appear in observability traces alongside agents/workflows, and traces can be filtered by `traceId`. New lightweight schemas and endpoints (including `GET /observability/traces/:traceId/light` and storage `getTraceLight`) reduce timeline payloads dramatically by omitting heavy span fields until details are requested.
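A sketch of how a client might use the two fetch granularities. The endpoint paths are from this release; the base URL and response handling are assumptions:

```typescript
// Build URLs for the lightweight timeline fetch and the on-demand span fetch.
// '/observability/traces/:traceId/spans/:spanId' is the companion
// full-detail endpoint added in @mastra/server.
function lightTraceUrl(baseUrl: string, traceId: string): string {
  return `${baseUrl}/observability/traces/${encodeURIComponent(traceId)}/light`;
}

function spanUrl(baseUrl: string, traceId: string, spanId: string): string {
  return `${baseUrl}/observability/traces/${encodeURIComponent(traceId)}/spans/${encodeURIComponent(spanId)}`;
}

// Hypothetical usage: render the timeline from light spans first, then fetch
// one span's full input/output/attributes only when the user expands it.
async function loadTimeline(baseUrl: string, traceId: string): Promise<unknown> {
  const res = await fetch(lightTraceUrl(baseUrl, traceId));
  if (!res.ok) throw new Error(`trace fetch failed: ${res.status}`);
  return res.json();
}
```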
Security & Governance for Telemetry (Credential Leak Fix + Per-request Redaction/Tags)
Span serialization is hardened to prevent LLM/API credentials and auth headers from leaking into telemetry across routers, gateways, and model wrappers. Additionally, server calls can now set `tracingOptions` (`tags`, `hideInput`, `hideOutput`) per request to control span labeling and redaction.
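The per-request knobs might be passed like this; the option names are from this release, while the call shape and tag values are illustrative:

```typescript
// Stand-in for the per-request tracing options described above. Only the
// field names (tags, hideInput, hideOutput) come from the changelog.
type TracingOptions = {
  tags?: string[];
  hideInput?: boolean;
  hideOutput?: boolean;
};

// e.g. passed alongside the prompt in a server-side agent or workflow call
const tracingOptions: TracingOptions = {
  tags: ['tenant:acme', 'billing'],
  hideInput: true,   // redact the prompt from the emitted span
  hideOutput: false, // keep the response visible for debugging
};
```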
Breaking Changes
- None called out in the provided changelog (no consolidated breaking-change section for these versions).
Changelog
@mastra/core@1.26.0
Minor Changes
- RAG ingestion runs now appear in observability traces, next to your agents, workflows, and scorers. (#15512)

  You can now filter traces by `traceId` when listing them.

  Added lightweight span and trace schemas (`LightSpanRecord`, `GetTraceLightResponse`) that exclude heavy fields like `input`, `output`, `attributes`, and `metadata`, reducing per-span payload by ~97% for timeline rendering.

- Fixed potential credential leakage in observability spans. LLM API keys, authentication headers, and gateway tokens could previously appear in span input or output data sent to telemetry backends. (#15489)

  What's fixed

  The model router, AI SDK model wrappers (v4 legacy, v5, v6), built-in gateways (Mastra, Netlify, Models.dev, Azure OpenAI), and the voice provider base class now restrict what they expose to spans. Only public identity fields (model ID, provider, gateway ID, voice name) are included. Private configuration such as API keys, `Authorization` headers, OAuth tokens, and proxy credentials is no longer serialized into spans.

  Legacy AI SDK v4 models passed to `resolveModelConfig` were previously returned unwrapped. They are now wrapped in `AISDKV4LegacyLanguageModel`, which applies the same `serializeForSpan()` safety as the v5/v6 wrappers while preserving the `LanguageModelV1` interface so existing consumers continue to work.

  The `SensitiveDataFilter` span output processor already redacted values under common field names (`apiKey`, `token`, `authorization`, etc.) when enabled. This fix closes the gap for users who did not have it configured, and for cases where credentials were nested under custom field names that the filter's exact-match list did not cover.

  Recommended action

  - Review existing telemetry data for leaked credentials and rotate any keys that may have been captured.
  - Custom gateways extending `MastraModelGateway` and custom voice providers extending `MastraVoice` are automatically covered; they inherit the new safe default. Override `serializeForSpan()` only if you want to expose additional non-sensitive fields.
  - For any other class you pass into a span (e.g. as `input`, `output`, `attributes`, or `metadata`) that holds enumerable fields with credentials or other sensitive state, add a `serializeForSpan()` method. TypeScript `private` properties are still walked by span serialization because `private` is compile-time only.

  ```ts
  class MyServiceClient {
    constructor(private config: { apiKey: string; endpoint: string }) {}

    // Without this, spans carrying a MyServiceClient instance would
    // serialize `config.apiKey` through every enumerable property.
    serializeForSpan() {
      return { endpoint: this.config.endpoint };
    }
  }
  ```
- Added support for sub-agent version overrides in core execution. Global defaults can be set on the Mastra instance and overridden per `generate()`/`stream()` call, with cascading propagation via `requestContext`. (#15373)

- Added per-entry `modelSettings`, `providerOptions`, and `headers` to agent model fallback arrays. Each entry can now specify its own temperature, topP, provider-specific options, and HTTP headers, either statically or as a function of `requestContext`. Closes #15421. (#15429)

  Example

  ```ts
  const agent = new Agent({
    model: [
      {
        model: 'google/gemini-2.5-flash',
        maxRetries: 2,
        modelSettings: { temperature: 0.3 },
        providerOptions: { google: { thinkingConfig: { thinkingBudget: 0 } } },
      },
      {
        model: 'openai/gpt-5-mini',
        maxRetries: 2,
        modelSettings: { temperature: 0.7 },
        providerOptions: { openai: { reasoningEffort: 'low' } },
      },
    ],
  });
  ```

  Precedence:

  - `modelSettings` and `providerOptions`: per-fallback entry > call-time `stream()`/`generate()` options > agent `defaultOptions`. `modelSettings` shallow-merges by key; `providerOptions` deep-merges recursively, preserving sibling and nested keys.
  - `headers`: call-time `modelSettings.headers` > per-fallback `headers` > model-router-extracted headers. This preserves the existing Mastra contract from #11275, where runtime headers (typically tracing, auth, tenancy) intentionally override model-level headers.
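The documented merge behavior can be modeled with a small sketch (a simplified model, not the actual implementation):

```typescript
// Simplified model of the documented precedence: modelSettings shallow-merges
// by key, providerOptions deep-merges recursively. Not the real implementation.
type Dict = Record<string, unknown>;

function deepMerge(base: Dict, override: Dict): Dict {
  const out: Dict = { ...base };
  for (const [k, v] of Object.entries(override)) {
    const prev = out[k];
    if (prev && v && typeof prev === 'object' && typeof v === 'object'
        && !Array.isArray(prev) && !Array.isArray(v)) {
      out[k] = deepMerge(prev as Dict, v as Dict);
    } else {
      out[k] = v;
    }
  }
  return out;
}

// modelSettings: shallow merge — later sources win whole keys.
const modelSettings = {
  ...{ temperature: 0.7, topP: 0.9 }, // agent defaultOptions (lowest)
  ...{ temperature: 0.5 },            // call-time options
  ...{ temperature: 0.3 },            // per-fallback entry (highest)
};

// providerOptions: deep merge — sibling and nested keys are preserved.
const providerOptions = deepMerge(
  { google: { thinkingConfig: { thinkingBudget: 1024 }, safety: 'default' } },
  { google: { thinkingConfig: { thinkingBudget: 0 } } },
);
```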
- Added `activateAfterIdle` setting for observational memory so buffered observations can activate after idle time before the next prompt. (#15365)

  Example

  Set `activateAfterIdle: 300_000` (or `"5m"`) on the `observationalMemory` config to activate buffered context after 5 minutes of inactivity. This helps long-running threads reuse compressed context after prompt cache TTLs expire instead of sending a larger raw message window on the next request.
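A small model of the idle rule, assuming the config accepts either milliseconds or a shorthand duration string as shown above (support for units other than `m` is a guess here):

```typescript
// Illustrative helper mirroring the "300_000 or '5m'" forms the config accepts.
// Units besides minutes are assumptions, not documented behavior.
function parseIdle(value: number | string): number {
  if (typeof value === 'number') return value;
  const m = /^(\d+)(ms|s|m|h)$/.exec(value);
  if (!m) throw new Error(`unrecognized idle duration: ${value}`);
  const unit = { ms: 1, s: 1_000, m: 60_000, h: 3_600_000 }[m[2] as 'ms' | 's' | 'm' | 'h'];
  return Number(m[1]) * unit;
}

// Buffered observations become eligible once the thread has been idle
// for at least activateAfterIdle.
function shouldActivate(lastMessageAt: number, now: number, activateAfterIdle: number | string): boolean {
  return now - lastMessageAt >= parseIdle(activateAfterIdle);
}
```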
- You can now opt into parent-agent reuse for the separate structured-output pass with `structuredOutput: { schema, model, useAgent: true }`, which lets the structuring request reuse the parent agent config, including memory. (#15318)

- Added unique IDs (`logId`, `metricId`, `scoreId`, `feedbackId`) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (`logger.info()`, `metrics.emit()`, `addScore()`, `addFeedback()`) are unchanged. (#15242)

  For existing ClickHouse and DuckDB observability signal tables, run `npx mastra migrate` before initializing the store so the new signal-ID schema is applied.

- Processor traces now store hook-specific inputs and only include changed outputs, reducing payload size while keeping traces more replayable. If you consume `PROCESSOR_RUN` payloads directly, update any dashboards or parsers that depend on the previous shape. (#15493)
Patch Changes
- Fixed `CompositeAuth` types so typed auth providers, such as `SimpleAuth<MyUser>` or `MastraAuthClerk`, can be combined without casts. (#15556)

- Update provider registry and model documentation with latest models and providers (`3d83d06`)

- Fixed browser context reminders breaking prompt cache. Browser reminders are now added as new user messages instead of modifying existing message history. (#15417)
- Fixed Harness subagent tracing so delegated runs keep the parent tracing context and show up in the same trace in observability exporters. Fixes #15461. (#15473)
- Refactored how assistant messages are constructed during streaming. Messages are now built from the complete chunk sequence after each step instead of being assembled mid-stream. This fixes duplicate OpenAI item IDs (`rs_*`, `msg_*`), eliminates empty text parts from streaming artifacts, and ensures provider metadata is correctly attributed. (#15454)

- Fixed nested workflows dropping `resourceId` when executed as a step of a parent workflow. Child workflow snapshots now preserve the parent run's resource association, so tenant-scoped persistence works end-to-end. Closes #15246. (#15447)

  ```ts
  const run = await parent.createRun({
    runId: 'run-1',
    resourceId: 'workspace-1',
  });
  await run.start({ inputData: { ok: true } });
  // Before: child snapshots persisted with resourceId: undefined
  // After: child snapshots persisted with resourceId: 'workspace-1'
  ```
- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
- Fixed messages not being persisted when multiple memory processors are used together. Processor state is now correctly passed between chained workflow steps, ensuring all messages are saved. (#14884)
- Fix prototype pollution in `setNestedValue` (`@mastra/core/utils`) and `generateOpenAPIDocument` (`@mastra/server`). (#15565)

  `setNestedValue` now rejects dot-path segments named `__proto__`, `constructor`, or `prototype`, preventing attacker-controlled field paths passed to `selectFields` from polluting `Object.prototype`. `generateOpenAPIDocument` builds its `paths` map with `Object.create(null)` so a route path of `__proto__` cannot poison the prototype chain.

- Fixed assistant model attribution so provider and model information is preserved more reliably in stored assistant messages. (#15462)

  Loop runs now keep the resolved model on the first `step-start`, already-attributed `step-start` parts are left alone, and post-tool assistant continuations preserve their incoming metadata when they merge into an existing assistant message. This keeps downstream features working with the correct model identity instead of falling back to incomplete metadata or losing it during merge.
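The `setNestedValue` path guard from the prototype-pollution fix above can be modeled roughly like this (a simplified sketch; the real helper's behavior on rejection may differ):

```typescript
// Simplified model of the guard: reject dot-path segments that could
// pollute Object.prototype. Not the actual @mastra/core implementation.
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

function safeSetNestedValue(target: Record<string, any>, path: string, value: unknown): void {
  const segments = path.split('.');
  if (segments.some(s => FORBIDDEN.has(s))) {
    // The real helper "rejects" such paths; throwing here is an assumption.
    throw new Error(`unsafe path segment in "${path}"`);
  }
  let obj = target;
  for (const key of segments.slice(0, -1)) {
    if (typeof obj[key] !== 'object' || obj[key] === null) obj[key] = {};
    obj = obj[key];
  }
  obj[segments[segments.length - 1]] = value;
}
```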
- Fixed channel webhook handling in Node.js when no execution context is available. (#15441)
- Recalled V4 messages now preserve `data-*` message parts (e.g. `data-tool-call-suspended`) after a page refresh, so suspended HITL workflows can resume correctly. (#14211)

- Fixed structured output to keep persisted assistant text behavior aligned with existing memory recall paths. (#15318)
- Fixed `processOutputStep` not receiving token usage data. Output processors now receive usage (`inputTokens`, `outputTokens`, `totalTokens`) for the current LLM step, enabling per-step cost tracking and token budget enforcement. (#15068)
- Fixed `requireApproval` on tools to accept a function in addition to a boolean. Previously, passing a function for `requireApproval` on a tool created with `createTool` was silently ignored and approval was never required. (#15346)

  ```ts
  import { createTool } from '@mastra/core/tools';
  import { z } from 'zod';

  createTool({
    id: 'delete-file',
    description: 'Delete a file',
    inputSchema: z.object({ path: z.string() }),
    // Now works: only require approval for paths outside /tmp
    requireApproval: input => !input.path.startsWith('/tmp/'),
    execute: async ({ context }) => {
      // ...
    },
  });
  ```
- Fixed resume errors for suspended agent runs: `resumeStream()` and `resumeGenerate()` now return a clear message when storage is missing or the `runId` is invalid. (#15514)

- Fixed OpenAI tool strict mode when requests pass through the model router. `strict: true` on function tools now survives compatibility prep, so OpenAI Responses models receive strict tool definitions instead of silently downgrading them to non-strict. (#15397)

- Added multi-select choices to the Harness `ask_user` tool. (#15485)
- Fixed noisy browser reminders being added to non-browser turns. Browser reminders are now added only when browser context exists (for example, current page URL or title). (#15416)
- Fixed `dataset.startExperiment` hanging forever when `targetType` is `'workflow'`. Workflow experiments now complete normally, honour `itemTimeout`, and surface failures. Fixes #15453. (#15570)

- Fixed `PrefillErrorHandler` to recover from Qwen/llama.cpp prefill rejections with `enable_thinking`, so agents retry with a continue reminder instead of failing after skill/tool turns. (#15518)
- Add background task execution for agents. Agents can dispatch slow tool calls to run asynchronously while the conversation keeps streaming, and results are injected back into the loop when they complete. (#15307)
- Fixed fallback model attribution in agent traces. When an agent fell back after the primary model failed, token usage and cost were reported against the primary model instead of the fallback that actually served the response (e.g. in Langfuse). Fixes #13547. (#15503)
- Fixed agent stream errors when providers end a stream without an error payload. (#15435)
- Fixed provider-defined tools with custom execute callbacks (e.g. `openai.tools.applyPatch`) being incorrectly skipped during execution. Previously, all provider-defined tools were assumed to be provider-executed, which meant user-supplied `execute` functions were never called. Now, provider tools with a custom `execute` are correctly identified as client-executed. (#14819)
- Added model metadata to `step-start` parts so model changes can be detected across steps, including within a single assistant message. (#15420)
- Fixed message serialization to preserve millisecond precision in `createdAt` timestamps. (#15500)
@mastra/ai-sdk@1.4.1
Patch Changes
- Fixed workflow streaming in @mastra/ai-sdk so intermediate `data-workflow` parts stop repeating every completed step output. Added `data-workflow-step` parts with the full payload for the step that just changed, which reduces stream size for long-running workflows while preserving final workflow outputs. (#15218)

  If your UI reads live step outputs during workflow execution, it should now consume `data-workflow-step` parts in addition to `data-workflow`. Final workflow snapshots still include the full step outputs.

- Fix AI SDK v6 approval replay so ordinary user follow-up turns do not resume stale approval responses. (#15480)

- Fixed tool call approvals in AI SDK v6: `handleChatStream` now automatically routes to `resumeStream` when the AI SDK v6 native approval flow is used on the client (no extra server-side wiring required). The v6 stream now emits native `tool-approval-request` parts so `useChat` can surface approval UI and call `addToolApprovalResponse()`, while also emitting the existing `data-tool-call-approval` chunk for backwards compatibility. (#15345)

- Fixed AI SDK v6 tool approval streams so `requireApproval` works with `handleChatStream` and `AssistantChatTransport`. (#15345)
@mastra/arthur@0.2.4
Patch Changes
- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
@mastra/clickhouse@1.5.0
Minor Changes
- Added unique IDs (`logId`, `metricId`, `scoreId`, `feedbackId`) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (`logger.info()`, `metrics.emit()`, `addScore()`, `addFeedback()`) are unchanged. (#15242)

  For existing ClickHouse and DuckDB observability signal tables, run `npx mastra migrate` before initializing the store so the new signal-ID schema is applied.
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)

- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/client-js@1.14.0
Minor Changes
- Added `forEachIndex` option to `run.resume()`, `run.resumeAsync()`, and `run.resumeStream()`. Use it to resume a single iteration of a suspended `.foreach()` step while leaving the other iterations suspended. (#15563)

  ```ts
  await client
    .getWorkflow('myWorkflow')
    .createRun(runId)
    .resume({
      step: 'approve',
      resumeData: { ok: true },
      forEachIndex: 1, // only resume the second iteration
    });
  ```
Patch Changes
- Add `/api/background-tasks` routes (SSE stream, list with filters + pagination, get by ID) and matching `MastraClient` methods (`listBackgroundTasks`, `getBackgroundTask`, `streamBackgroundTasks`). (#15307)

- Fixed @mastra/client-js to re-export `RequestContext` so client SDK users can import it from @mastra/client-js. (#15413)

- Added `observabilityRuntimeStrategy` to `GetSystemPackagesResponse` so clients can read the active observability tracing strategy (`realtime`, `batch-with-updates`, `insert-only`, or `event-sourced`) reported by the server. (#15512)
@mastra/cloudflare@1.3.2
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
@mastra/cloudflare-d1@1.0.5
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
@mastra/convex@1.0.8
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
@mastra/deployer-netlify@1.1.0
Minor Changes
- Added `target` option to `NetlifyDeployer` for deploying as Netlify Edge Functions. (#13103)

  ```ts
  export const mastra = new Mastra({
    deployer: new NetlifyDeployer({
      target: 'edge',
    }),
  });
  ```

  Edge functions run on Deno at the network edge, closer to users, with no hard execution timeout (only a CPU time limit). This makes them a better fit for longer-running AI workflows that may exceed the 60s serverless function timeout.

  The default target remains `'serverless'`, so existing usage is unaffected.
@mastra/docker@0.1.0
Minor Changes
- Added @mastra/docker, a Docker container sandbox provider for Mastra workspaces. Executes commands inside local Docker containers using long-lived containers with `docker exec`. Supports bind mounts, environment variables, container reconnection by label, custom images, and network configuration. Targets local development, CI/CD, air-gapped deployments, and cost-sensitive scenarios where cloud sandboxes are unnecessary. (#14500)

  Usage

  ```ts
  import { Agent } from '@mastra/core/agent';
  import { Workspace } from '@mastra/core/workspace';
  import { DockerSandbox } from '@mastra/docker';

  const workspace = new Workspace({
    sandbox: new DockerSandbox({
      image: 'node:22-slim',
      timeout: 60_000,
    }),
  });

  const agent = new Agent({
    name: 'dev-agent',
    model: 'anthropic/claude-opus-4-6',
    workspace,
  });
  ```
Patch Changes
- Fixed process kill to target the entire process group (negative PID) with fallback, ensuring child processes spawned inside the container are properly cleaned up. Tracked process handles are now cleared after container stop or destroy to prevent stale references. (#14500)
@mastra/duckdb@1.2.0
Minor Changes
- Added unique IDs (`logId`, `metricId`, `scoreId`, `feedbackId`) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (`logger.info()`, `metrics.emit()`, `addScore()`, `addFeedback()`) are unchanged. (#15242)

  For existing ClickHouse and DuckDB observability signal tables, run `npx mastra migrate` before initializing the store so the new signal-ID schema is applied.
Patch Changes
- Fixed DuckDB installs by using a resolvable `@duckdb/node-api` version range. (#15419)
- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/dynamodb@1.0.4
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
@mastra/google-cloud-pubsub@1.0.3
Patch Changes
- Add `nack` support and `deliveryAttempt` tracking on the subscriber callback, and enable exactly-once delivery on grouped subscriptions. (#15307)
@mastra/laminar@1.0.17
Patch Changes
- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
@mastra/lance@1.0.5
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
@mastra/langfuse@1.2.0
Minor Changes
- Added new attribute mappings to the Langfuse exporter so more Mastra attributes are filterable in Langfuse's UI. (#15445)

  Observation-level metadata: `gen_ai.agent.id`, `gen_ai.agent.name`, `mastra.span.type`, and `gen_ai.operation.name` are now mapped to `langfuse.observation.metadata.*`, making them top-level filterable keys on each observation. This lets you scope Langfuse evaluators to specific agents or span types.

  Trace-level attributes: `mastra.metadata.traceName` and `mastra.metadata.version` are now mapped to `langfuse.trace.name` and `langfuse.trace.version`, enabling custom trace names and version-based filtering.
Patch Changes
- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)

- Improved Langfuse trace batching for streamed runs by adding `flushAt` and `flushInterval` controls. (#15460)
@mastra/libsql@1.9.0
Minor Changes
- Use DiskANN `vector_top_k()` index for faster vector queries when available (#14913)

  `LibSQLVector.query()` now automatically uses the existing DiskANN index for approximate nearest neighbor search instead of brute-force full table scans, providing 10-25x query speedups on larger datasets. Falls back to brute force when no index exists.
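The two query paths can be illustrated as SQL shapes (illustrative only, not the adapter's actual statements; the table and index names are made up):

```typescript
// Illustrative SQL for the two paths. 'vec_idx' and 'vectors' are made-up names.
const k = 10;

// With a DiskANN index: approximate nearest neighbors via the index function.
const annQuery = `SELECT id FROM vector_top_k('vec_idx', vector(?), ${k})`;

// Without an index: brute-force scan ordered by cosine distance.
const scanQuery = `
  SELECT id FROM vectors
  ORDER BY vector_distance_cos(embedding, vector(?))
  LIMIT ${k}`;
```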
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)

- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/mcp@1.5.1
Patch Changes
- Fixed MCP tool strict mode propagation. MCP servers now expose Mastra tool strictness in MCP metadata, and the MCP client restores that flag when rebuilding tools so strict OpenAI tool calling works for MCP-backed tools too. (#15397)

- Fixed MCP tools with recursive JSON Schema refs so they stay serializable when loaded. (#15400)
@mastra/memory@1.16.0
Minor Changes
- Added `activateAfterIdle` setting for observational memory so buffered observations can activate after idle time before the next prompt. (#15365)

  Example

  Set `activateAfterIdle: 300_000` (or `"5m"`) on the `observationalMemory` config to activate buffered context after 5 minutes of inactivity. This helps long-running threads reuse compressed context after prompt cache TTLs expire instead of sending a larger raw message window on the next request.

- Added `activateOnProviderChange` so observational memory can activate buffered observations and reflections before switching to a different provider or model. (#15420)

  ```ts
  const memory = new Memory({
    options: {
      observationalMemory: {
        model: 'google/gemini-2.5-flash',
        activateOnProviderChange: true,
      },
    },
  });
  ```

  This helps keep prompt-cache savings when the next step cannot reuse the previous provider's cache.
Patch Changes
- Fixed early observational memory activations so buffered reflections are only activated when they still leave a healthy active observation set. (#15462)

  Before this change, idle-timeout (`activateAfterIdle`) and model/provider-change (`activateOnProviderChange`) activations could swap in a buffered reflection too early. In bad cases, that replaced a large raw observation tail with a much smaller mostly-compressed result, which hurt reflection quality.

  Early activations now stay buffered unless both of these checks pass:

  - The unreflected observation tail is at least as large as the buffered reflection, so the activated result is not dominated by compressed content.
  - The combined post-activation size is at least 75% of what a normal threshold activation would produce, so early activations do not cliff far below the regular target.

  This update also fixes false `provider_change` activations when older persisted messages only contain a bare model id like `gpt-5.4` while newer turns use the fully qualified `provider/modelId` form.

- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)

- Fixed other-thread context filtering falling back to the observational memory record timestamp when thread metadata is missing. (#15269)
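The two early-activation checks described above can be modeled as a predicate (sizes in abstract units such as tokens; interpreting "combined post-activation size" as tail plus reflection is an assumption):

```typescript
// Simplified model of the two early-activation checks; not the real code.
function canActivateEarly(opts: {
  unreflectedTailSize: number;   // raw observations not yet compressed
  bufferedReflectionSize: number;
  normalActivationSize: number;  // what a threshold activation would produce
}): boolean {
  const { unreflectedTailSize, bufferedReflectionSize, normalActivationSize } = opts;
  // Check 1: the raw tail must be at least as large as the buffered reflection.
  const tailLargeEnough = unreflectedTailSize >= bufferedReflectionSize;
  // Check 2: the combined result must be at least 75% of a normal activation.
  const combined = unreflectedTailSize + bufferedReflectionSize;
  const nearNormalTarget = combined >= 0.75 * normalActivationSize;
  return tailLargeEnough && nearNormalTarget;
}
```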
@mastra/mongodb@1.7.2
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)

- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/mssql@1.2.1
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)

- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/observability@1.10.0
Minor Changes
- Changed `MODEL_CHUNK` tool-result span `output` handling. (#15495)

  What changed

  - `MODEL_CHUNK` spans for `tool-result` now omit `output` for locally executed tools. `TOOL_CALL` remains the canonical span for locally executed tool result payloads.
  - `MODEL_CHUNK` spans for provider-executed `tool-result` chunks still include `output`.
  - `MODEL_CHUNK` metadata still includes `toolCallId`, `toolName`, and `providerExecuted`.

  Why

  This reduces duplicate tool result payloads in traces without dropping provider-emitted tool results that may not have a matching `TOOL_CALL` span.

- Added unique IDs (`logId`, `metricId`, `scoreId`, `feedbackId`) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (`logger.info()`, `metrics.emit()`, `addScore()`, `addFeedback()`) are unchanged. (#15242)

  For existing ClickHouse and DuckDB observability signal tables, run `npx mastra migrate` before initializing the store so the new signal-ID schema is applied.
Patch Changes
- Fixed span serialization replacing tool parameter JSON schemas with lossy summaries like `"unknown (required)"`. JSON schemas in span data are now preserved as-is, keeping full type information for debugging in observability tools like Datadog. Also fixed MODEL_STEP span input showing only a keys summary instead of actual messages for AI SDK v5 providers. (#15404)

- Fixed CloudExporter to default to observability.mastra.ai for Mastra platform exports. (#15418)

- Improved tracing overhead when filtering spans. Spans dropped by `excludeSpanTypes` or the internal-span filter (`includeInternalSpans: false`) now skip payload serialization and retention entirely instead of paying the cost and discarding at export time. (#15487)
@mastra/otel-bridge@1.0.17
Patch Changes
- Return `undefined` from `OtelBridge.createSpan` when no OpenTelemetry SDK is registered, so core generates valid span/trace IDs instead of reusing the OTEL no-op all-zero IDs. This prevents downstream trace exporters from dropping spans and stops the infinite-loop CPU spike in parent-matching. Fixes #15589. (#15591)
@mastra/pg@1.9.2
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)

- Added `getTraceLight` method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like `input`, `output`, `attributes`, and `metadata` when they are not needed. (#15574)
@mastra/playground-ui@23.0.0
Minor Changes
- Added `ErrorBoundary` component to catch and display runtime errors in the studio. Wraps routes in the local playground so a crash on one page (e.g. an agent editor referencing an unresolved workspace skill) surfaces a friendly recovery UI with Try again (in-place React reset), Reload page (full browser refresh), and Report issue (opens the Mastra GitHub issues page in a new tab) actions, plus a collapsible stack trace, instead of a blank screen. (#15561)

  The fallback is spatially aware: it fills its parent, and the icon, heading, and body text scale up on wider containers via Tailwind container queries. Scope the boundary to a single widget to keep the rest of the UI interactive while one panel fails.

  Usage

  ```tsx
  import { ErrorBoundary } from '@mastra/playground-ui';
  import { useLocation } from 'react-router';

  // Route-level: wrap the router outlet, reset when the path changes
  function Layout({ children }) {
    const { pathname } = useLocation();
    return <ErrorBoundary resetKeys={[pathname]}>{children}</ErrorBoundary>;
  }

  // Scoped: contain the crash to one panel, leave the rest of the tree alone
  <ErrorBoundary variant="inline" title="The editor failed to render">
    <AgentEditor />
  </ErrorBoundary>;
  ```

  Props: `fallback` (node or render prop with `{ error, errorInfo, reset }`), `onError` for reporting, `resetKeys` for automatic reset, `variant` (`'section'` fills available space and is the default; `'inline'` stays compact), and `title`/`description` overrides.

- Added BrandLoader, a branded pulse-wave loader component for brand moments like app boot or agent thinking. Complements Spinner, which remains the inline utility loader. (#15490)
- Added new Logo component to the playground-ui design system. Supports two sizes (`sm`, `md`), uses `currentColor` for theming, and includes an optional outline-on-hover animation that respects `prefers-reduced-motion`. (#15513)
Patch Changes
- Added a dedicated trace details page at `/traces/:traceId`, plus the design-system changes that support it: (#15392)

  - `Button`: new `link` variant (inline, no padding/background/border).
  - `DataKeysAndValues`: `numOfCol` now accepts `3`.
  - `DataPanel.Header`: minimum height so heading-only headers match the height of ones with button actions.

- Fix unhandled `TypeError` in `getFileContentType` when the URL is relative or malformed. The `catch` block now falls back to inferring the MIME type from the raw string's file extension and strips query/hash fragments so inputs like `/files/report.pdf`, `https://x.dev/a.pdf?token=1`, and `/files/report.pdf#page=2` all resolve to `application/pdf` instead of rejecting. Closes #15432. (#15433)
- Refactored `DataKeysAndValues.ValueLink` to use the standard `as` prop for custom link components, replacing the previous `LinkComponent` prop (#15391)

- Added a Foundations/Tokens page to the @mastra/playground-ui Storybook so you can browse all typography, color, spacing, radius, shadow, and animation tokens in one place. (#15475)

- New filter UX on the studio's Traces and Logs pages. Click + Add Filter to pick a property and narrow by value; active filters render as editable pills. Filter state lives in the URL so filtered views survive reloads and can be shared by link. Save filters for next time remembers a default; Clear and Remove all filters are one click away. (#15512)

- Align BrandLoader geometry with the Mastra logo: match disk positions to the logo path, introduce per-size stroke widths and bubble radii (sm/md/lg), and rebalance the gooey filter for rounder ridge↔disk fillets. Shift the size scale so sm stays, md is now w-8, lg is now w-10, and the old w-16 size is removed. (#15531)
-
Added
ScoresDataListfor rendering lists of score evaluation results. (#15339) -
Updated PageHeader.Description styling to use text color (neutral2) and simplified top margin (#15389)
- Improved visual consistency across Chip, DropdownMenu, Notification, Popover, and toast components — unified radius and border scale. Deduplicated dropdown menu item classes and added max-height scroll handling for long menus. (#15440)
@mastra/rag@2.2.1
Patch Changes
- Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
@mastra/redis@1.0.1
Patch Changes
- Add Redis storage provider. (#11795)

  Introduces `@mastra/redis`, a Redis-backed storage implementation for Mastra built on the official `redis` (node-redis) client. Includes support for the core storage domains (memory, workflows, scores) and multiple connection options: `connectionString`; `host`/`port`/`db`/`password`; or injecting a pre-configured client for advanced setups (e.g. custom socket/retry settings, Sentinel/Cluster via a custom client).
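The connection options above might look like the following sketch. The `RedisStore` name and option shapes are assumptions for illustration (check the package's own docs); only the injected-client variant uses the real node-redis `createClient` API:

```typescript
import { createClient } from 'redis';
// Hypothetical export name, shown for illustration only:
import { RedisStore } from '@mastra/redis';

// 1. Connection string
const fromUrl = new RedisStore({ connectionString: 'redis://localhost:6379' });

// 2. Discrete host/port/db/password options
const fromParts = new RedisStore({ host: 'localhost', port: 6379, db: 0, password: 'secret' });

// 3. Inject a pre-configured node-redis client for advanced setups
//    (custom socket/retry settings, Sentinel/Cluster via a custom client).
const client = createClient({
  url: 'redis://localhost:6379',
  socket: { reconnectStrategy: retries => Math.min(retries * 50, 2000) },
});
const fromClient = new RedisStore({ client });
```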
@mastra/schema-compat@1.2.9
Patch Changes
- Fixed MCP tool validation failures when tools use JSON Schema draft 2020-12. Tools from providers like Firecrawl that declare `$schema: "https://json-schema.org/draft/2020-12/schema"` now validate correctly instead of throwing "no schema with key or ref" errors. (#14530)
- Fixed MCP tools with recursive JSON Schema refs so they stay serializable when loaded. (#15400)
@mastra/server@1.26.0
Minor Changes
- You can now tag spans and redact sensitive input or output per request by passing `tags`, `hideInput`, or `hideOutput` in `tracingOptions` when calling an agent or workflow. (#15512)

  Added a lightweight trace endpoint (`GET /observability/traces/:traceId/light`) that returns only timeline-relevant span fields, dramatically reducing payload size when rendering trace timelines. Also added a dedicated span endpoint (`GET /observability/traces/:traceId/spans/:spanId`) to fetch full span details on demand.
- Added `forEachIndex` to the workflow resume request body schema. The `/workflows/:workflowId/resume`, `/resume-async`, and `/resume-stream` endpoints (including their agent-builder equivalents) now accept an optional zero-based `forEachIndex` so clients can target a specific iteration of a suspended `.foreach()` step. (#15563)

  ```
  // POST /workflows/:workflowId/resume
  {
    step: 'approve',
    resumeData: { ok: true },
    forEachIndex: 1, // resume only the second iteration; others stay suspended
  }
  ```
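As an illustration of the per-request `tracingOptions` and the lightweight trace endpoints above (the agent handle, call shape, and base URL are assumptions; only the option field names and endpoint paths come from this changelog):

```typescript
// Placeholder for any agent or workflow handle that accepts per-request options.
declare const agent: { generate(prompt: string, opts?: unknown): Promise<unknown> };

const result = await agent.generate('Summarize this contract', {
  tracingOptions: {
    tags: ['billing', 'pii-review'], // label the resulting spans
    hideInput: true,                 // redact the prompt from telemetry
    hideOutput: false,
  },
});

// Lightweight timeline payload first, full span details only on demand:
const traceId = 'trace-123';
const spanId = 'span-456';
const light = await fetch(`http://localhost:4111/observability/traces/${traceId}/light`);
const span = await fetch(`http://localhost:4111/observability/traces/${traceId}/spans/${spanId}`);
```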
Patch Changes
- Add `/api/background-tasks` routes (SSE stream, list with filters + pagination, get by ID) and matching `MastraClient` methods (`listBackgroundTasks`, `getBackgroundTask`, `streamBackgroundTasks`). (#15307)
- Added support for a `versions` field in agent generate and stream request bodies, enabling per-request sub-agent version overrides that propagate through delegation. (#15373)
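The new background-task client methods above might be used as in this sketch. The method names come from the changelog; the constructor options, filter and pagination fields, and the task ID are assumptions:

```typescript
import { MastraClient } from '@mastra/client-js';

const client = new MastraClient({ baseUrl: 'http://localhost:4111' });

// List with filters + pagination (field names here are assumed).
const tasks = await client.listBackgroundTasks({ status: 'running', page: 0, perPage: 20 });

// Fetch a single task by ID.
const task = await client.getBackgroundTask('task_abc123');

// Subscribe to the SSE stream of task updates.
const stream = await client.streamBackgroundTasks();
```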
- Fix prototype pollution in `setNestedValue` (`@mastra/core/utils`) and `generateOpenAPIDocument` (`@mastra/server`). (#15565)

  `setNestedValue` now rejects dot-path segments named `__proto__`, `constructor`, or `prototype`, preventing attacker-controlled field paths passed to `selectFields` from polluting `Object.prototype`. `generateOpenAPIDocument` builds its `paths` map with `Object.create(null)` so a route path of `__proto__` cannot poison the prototype chain.
- Fixed noisy 'Background task manager not available' error log in studio when background tasks are not enabled. The list endpoint now returns an empty list, the get-by-id endpoint returns 404, and the SSE stream endpoint returns an empty stream that closes on disconnect — instead of throwing an HTTP 400 that gets logged as an error. (#15600)
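The `setNestedValue` guard described above can be sketched as follows. This is an illustrative re-implementation of the idea, not the actual `@mastra/core` code:

```typescript
// Path segments that would let a dot-path walk onto the prototype chain.
const FORBIDDEN_SEGMENTS = new Set(['__proto__', 'constructor', 'prototype']);

function safeSetNestedValue(target: Record<string, unknown>, path: string, value: unknown): void {
  const segments = path.split('.');
  // Reject dangerous segments up front, so "__proto__.polluted" can never
  // reach Object.prototype.
  if (segments.some(s => FORBIDDEN_SEGMENTS.has(s))) {
    throw new Error(`Refusing to set unsafe path: ${path}`);
  }
  let node = target;
  for (const segment of segments.slice(0, -1)) {
    if (typeof node[segment] !== 'object' || node[segment] === null) {
      // Object.create(null) yields a prototype-less container — the same
      // defense the changelog describes for generateOpenAPIDocument's paths map.
      node[segment] = Object.create(null);
    }
    node = node[segment] as Record<string, unknown>;
  }
  node[segments[segments.length - 1]] = value;
}
```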
- Added unique IDs (`logId`, `metricId`, `scoreId`, `feedbackId`) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (`logger.info()`, `metrics.emit()`, `addScore()`, `addFeedback()`) are unchanged. (#15242)

  For existing ClickHouse and DuckDB observability signal tables, run `npx mastra migrate` before initializing the store so the new signal-ID schema is applied.
@mastra/tavily@1.0.0
Major Changes
- Added the `@mastra/tavily` integration with first-class Mastra tools for Tavily web search, extract, crawl, and map APIs, and migrated `mastra code`'s web search tools to use it. (#15448)
@mastra/upstash@1.0.5
Patch Changes
- Add `BackgroundTasksStorage` domain implementation so `@mastra/core` background task execution works with any storage adapter. (#15307)
- Fixed slow Upstash message saves by using the message index and treating unindexed messages as new, avoiding full database scans. Also adds index-first lookups to `updateMessages`. Addresses #15386. (#15393)
Other updated packages
The following packages were updated with dependency changes only:
- @mastra/agent-builder@1.0.27
- @mastra/arize@1.0.18
- @mastra/braintrust@1.0.18
- @mastra/datadog@1.0.18
- @mastra/deployer@1.26.0
- @mastra/deployer-cloud@1.26.0
- @mastra/deployer-cloudflare@1.1.24
- @mastra/deployer-vercel@1.1.18
- @mastra/editor@0.7.17
- @mastra/express@1.3.10
- @mastra/fastify@1.3.10
- @mastra/hono@1.4.5
- @mastra/inngest@1.2.2
- @mastra/koa@1.4.10
- @mastra/langsmith@1.1.15
- @mastra/longmemeval@1.0.29
- @mastra/mcp-docs-server@1.1.26
- @mastra/opencode@0.0.26
- @mastra/otel-exporter@1.0.17
- @mastra/posthog@1.0.18
- @mastra/react@0.2.27
- @mastra/s3@0.3.1
- @mastra/s3vectors@1.0.3
- @mastra/sentry@1.0.17
- @mastra/voice-google-gemini-live@0.11.4
- @mastra/voice-openai-realtime@0.12.3