Changelog
@mastra/agent-builder
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/ai-sdk
- Pass original messages in chatRoute to fix uiMessages duplication #8830 (#8904)
- network routing agent text delta ai-sdk streaming (#8979)
- Support writing custom top level stream chunks (#8922)
- Refactor workflow stream into workflow output with fullStream property (#9048)
- Update peerdeps to 0.23.0-0 (#9043)
- Fix streaming of custom chunks, workflow & network support (#9109)
@mastra/arize
- feat(otel-exporter): Add a customizable `exporter` constructor parameter. You can now pass an instantiated class inheriting from `TraceExporter` into `OtelExporter`. This bypasses the default package detection: `OtelExporter` no longer instantiates a `TraceExporter` automatically when one is passed to its constructor.
  feat(arize): Initial release of the @mastra/arize observability package. The `@mastra/arize` package exports an `ArizeExporter` class that can be used to easily send AI traces from Mastra to Arize AX, Arize Phoenix, or any OpenInference-compatible collector. It sends traces using a `BatchSpanProcessor` over OTLP connections, and leverages the `@mastra/otel-exporter` package, reusing `OtelExporter` for transmission and span management. See the README in `observability/arize/README.md` for more details; a usage sketch follows below. (#8827)
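  A minimal usage sketch of wiring `ArizeExporter` into Mastra's observability config. The `endpoint`/`apiKey` option names and the `exporters` array are assumptions for illustration, not confirmed by this changelog; see `observability/arize/README.md` for the real options.

  ```ts
  import { Mastra } from '@mastra/core';
  import { ArizeExporter } from '@mastra/arize';

  export const mastra = new Mastra({
    observability: {
      configs: {
        default: {
          // Assumption: exporters are registered per observability config.
          exporters: [
            new ArizeExporter({
              // Illustrative connection options; consult the package README.
              endpoint: process.env.PHOENIX_COLLECTOR_ENDPOINT,
              apiKey: process.env.ARIZE_API_KEY,
            }),
          ],
        },
      },
    },
  });
  ```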
- fix(observability): Add `ParentSpanContext` to `MastraSpan`s with parentage (#9085)
@mastra/astra
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/braintrust
- Update peerdeps to 0.23.0-0 (#9043)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide (a before/after sketch follows below):
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
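  A before/after sketch of the rename, limited to the identifiers listed above; it assumes `AISpanType` and `InternalSpans` are exported from the same `@mastra/core/ai-tracing` entry point as the attribute types, which this changelog does not confirm.

  ```ts
  // Before (pre-#9105):
  // import { LLMGenerationAttributes } from '@mastra/core/ai-tracing';
  // const isLLMSpan = span.type === AISpanType.LLM_GENERATION;

  // After:
  import { AISpanType, InternalSpans, type ModelGenerationAttributes } from '@mastra/core/ai-tracing';

  // Attribute types keep the same shape; only the names change.
  const attrs: Partial<ModelGenerationAttributes> = {};

  function isModelGeneration(type: AISpanType): boolean {
    return type === AISpanType.MODEL_GENERATION;
  }

  // Internal span filtering uses the renamed enum member.
  const internalModelSpans = InternalSpans.MODEL;
  ```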
@mastra/chroma
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/clickhouse
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/client-js
- Add tool call approval (#8649)
- Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
@mastra/cloud
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/cloudflare
- Support for custom resume labels mapping to step to be resumed (#8941)
- Update peer dependencies to match core package version bump (0.21.2) (#8941)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/cloudflare-d1
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/core
- Update provider registry and model documentation with latest models and providers (c67ca32)
- Update provider registry and model documentation with latest models and providers (efb5ed9)
- Add deprecation warnings for format:ai-sdk (#9018)
- network routing agent text delta ai-sdk streaming (#8979)
- Support writing custom top level stream chunks (#8922)
- Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
- Fix incorrect type assertions in the Tool class. Created a `MastraToolInvocationOptions` type to properly extend AI SDK's `ToolInvocationOptions` with Mastra-specific properties (`suspend`, `resumeData`, `writableStream`). Removed unsafe type assertions from tool execution code. (#8510)
- fix(core): Fix Gemini message ordering validation errors (#7287, #8053)
  Fixes the Gemini API "single turn requests" validation error by ensuring the first non-system message is from the user role. This resolves errors when:
  - Messages start with assistant role (e.g., from memory truncation)
  - Tool-call sequences begin with assistant messages
  Breaking Change: Empty or system-only message lists now throw an error instead of adding a placeholder user message, preventing confusing LLM responses.
  This fix handles both issue #7287 (tool-call ordering) and #8053 (single-turn validation) by inserting a placeholder user message when needed. (#7287)
- Add support for external trace and parent span IDs in `TracingOptions`. This enables integration with external tracing systems by allowing new AI traces to be started with existing `traceId` and `parentSpanId` values. The implementation includes OpenTelemetry-compatible ID validation (32 hex chars for trace IDs, 16 hex chars for span IDs). A usage sketch follows below. (#9053)
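  A minimal sketch of linking a new AI trace to an external tracing system; it assumes `traceId` and `parentSpanId` are set directly on `tracingOptions`, and mirrors the `agent.generate()` call shape used in the tracing example later in this section.

  ```ts
  // `agent` and `messages` are defined elsewhere; the IDs would typically come
  // from an incoming OpenTelemetry context (e.g. a traceparent header).
  await agent.generate({
    messages,
    tracingOptions: {
      traceId: '4bf92f3577b34da6a3ce929d0e0e4736', // 32 hex chars (OTel trace ID)
      parentSpanId: '00f067aa0ba902b7',            // 16 hex chars (OTel span ID)
    },
  });
  ```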
- Updated `watch` and `watchAsync` methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
- Fix tracing context propagation to agent steps in workflows
  When creating a workflow step from an agent using `createStep(myAgent)`, the tracing context was not being passed to the agent's `stream()` and `streamLegacy()` methods. This caused tracing spans to break in the workflow chain. This fix ensures that `tracingContext` is properly propagated to both `agent.stream()` and `agent.streamLegacy()` calls, matching the behavior of tool steps, which already propagate `tracingContext` correctly. (#9074)
- Fixes how reasoning chunks are stored in memory to prevent data loss and ensure they are consolidated as single message parts rather than split into word-level fragments. (#9041)
- Fixes an issue where input processors couldn't add system or assistant messages. Previously all messages from input processors were forced to be user messages, causing an error when trying to add other role types. (#8835)
- fix(core): Validate structured output at text-end instead of flush
  Fixes structured output validation for Bedrock and LMStudio by moving validation from `flush()` to the `text-end` chunk. Eliminates `finishReason` heuristics, adds special token extraction for LMStudio, and validates at the correct point in the stream lifecycle. (#8934)
- Fix model.loop.test.ts tests to use structuredOutput.schema and add assertions (#8926)
- Add `initialState` as an option to `.streamVNext()` (#9071)
- Added resourceId and runId to workflow_run metadata in AI tracing (#9031)
- When using OpenAI models with JSON response format, automatically enable strict schema validation. (#8924)
- Fix custom metadata preservation in UIMessages when loading threads. The `getMessagesHandler` now converts `messagesV2` (V2 format with metadata) instead of `messages` (V1 format without metadata) to AIV5.UI format. Also updates the abstract `MastraMemory.query()` return type to include `messagesV2` for proper type safety. (#8938)
- Fix TypeScript type errors when using provider-defined tools from external AI SDK packages. Agents can now accept provider tools like `google.tools.googleSearch()` without type errors. Creates a new `@internal/external-types` package to centralize AI SDK type re-exports and adds a `ProviderDefinedTools` structural type to handle tools from different package versions/instances due to TypeScript's module path discrimination. A usage sketch follows below. (#8940)
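  A minimal sketch of passing a provider-defined tool to an agent. The agent name, instructions, model string, and tool key are hypothetical; `google.tools.googleSearch()` comes from the external `@ai-sdk/google` package referenced above.

  ```ts
  import { Agent } from '@mastra/core/agent';
  import { google } from '@ai-sdk/google';

  export const searchAgent = new Agent({
    name: 'search-agent', // hypothetical
    instructions: 'Answer questions, using web search when helpful.',
    model: 'google/gemini-2.5-flash', // assumption: model router string form
    tools: {
      // Provider-defined tool from an external AI SDK package now type-checks.
      webSearch: google.tools.googleSearch({}),
    },
  });
  ```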
- feat(ai-tracing): Add automatic metadata extraction from RuntimeContext to spans
  Enables automatic extraction of RuntimeContext values as metadata for AI tracing spans across entire traces.
  Key features:
  - Configure `runtimeContextKeys` in TracingConfig to extract specific keys from RuntimeContext
  - Add per-request keys via `tracingOptions.runtimeContextKeys` for trace-specific additions
  - Supports dot notation for nested values (e.g., 'user.id', 'session.data.experimentId')
  - TraceState computed once at root span and inherited by all child spans
  - Explicit metadata in span options takes precedence over extracted metadata
  Example:
  ```ts
  const mastra = new Mastra({
    observability: {
      configs: {
        default: {
          runtimeContextKeys: ['userId', 'environment', 'tenantId'],
        },
      },
    },
  });

  await agent.generate({
    messages,
    runtimeContext,
    tracingOptions: {
      runtimeContextKeys: ['experimentId'], // Adds to configured keys
    },
  });
  ```
  (#9072)
- Fix provider tools for popular providers and add support for anthropic/claude skills. (#9038)
- Refactor workflow stream into workflow output with fullStream property (#9048)
- Added the ability to use model router configs for embedders (e.g. "openai/text-embedding-ada-002"); see the sketch below (#8992)
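  A minimal sketch of using a model router string as an embedder, assuming Memory's `embedder` option accepts the same "provider/model" form used for chat models; other Memory options are omitted.

  ```ts
  import { Memory } from '@mastra/memory';

  export const memory = new Memory({
    // Assumption: a router string can replace an AI SDK embedding model instance here.
    embedder: 'openai/text-embedding-ada-002',
  });
  ```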
- Always set `supportsStructuredOutputs` to true for the OpenAI-compatible provider. (#8933)
- Support for custom resume labels mapping to step to be resumed (#8941)
- Added tracing of LLM steps & chunks (#9058)
- Fixed an issue where a custom URL in the model router still validated unknown providers against the known-providers list. A custom URL means we don't necessarily know the provider, so this allows local providers like Ollama to work properly (#8989)
- Show agent tool output better in playground (#9021)
- feat: inject schema context into main agent for processor mode structured output (#8886)
- Added providerOptions types to generate/stream for the main builtin model router providers (openai/anthropic/google/xai); a sketch follows below (#8995)
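  A minimal sketch of passing typed providerOptions through generate; the specific option shown (`reasoningEffort`) is for illustration only and may not apply to every model.

  ```ts
  // Assumption: providerOptions is accepted alongside messages and is keyed by
  // provider name (openai/anthropic/google/xai), with per-provider typed shapes.
  await agent.generate({
    messages,
    providerOptions: {
      openai: { reasoningEffort: 'low' }, // illustrative option only
    },
  });
  ```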
- Generate a title for Agent.network() threads (#8853)
- Fix nested workflow events and networks (#9132)
- Update provider registry and model documentation with latest models and providers (f743dbb)
- Add tool call approval (#8649)
- Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide:
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
@mastra/couchbase
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/deployer
- use mastra logger in error handler (#9037)
- Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
- Fix edge case bug around transitive dependencies in monorepos (#8977)
- Update peer dependencies to match core package version bump (0.22.0) (#9092)
- Improve error related to finding possible binary dependencies (#9056)
- Update peerdeps to 0.23.0-0 (#9043)
- Update peer dependencies to match core package version bump (0.22.1) (#8649)
- Add tool call approval (#8649)
- Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
- Update peer dependencies to match core package version bump (0.22.3) (#9192)
@mastra/deployer-cloud
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/deployer-cloudflare
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/deployer-netlify
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/deployer-vercel
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/dynamodb
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/evals
- Updates the types for scorers to use the new `MastraModelConfig` type. Also updates relevant docs to reference this type, as well as the new router model signature. (#8932)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/google-cloud-pubsub
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/inngest
- Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
- Updated `watch` and `watchAsync` methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
- Update peer dependencies to match core package version bump (0.22.0) (#9092)
- Support for custom resume labels mapping to step to be resumed (#8941)
- Update peer dependencies to match core package version bump (0.21.2) (#8941)
- Update peerdeps to 0.23.0-0 (#9043)
- Update peer dependencies to match core package version bump (0.22.1) (#8649)
- Add tool call approval (#8649)
@mastra/lance
- Update peerdeps to 0.23.0-0 (#9043)
- dependencies updates:
  - Updated dependency `@lancedb/lancedb@^0.22.2` (from `^0.21.2`, in `dependencies`) (#8693)
  - Updated dependency
@mastra/langfuse
- Update peerdeps to 0.23.0-0 (#9043)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide:
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
@mastra/langsmith
- Update peerdeps to 0.23.0-0 (#9043)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide:
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
@mastra/libsql
- Support for custom resume labels mapping to step to be resumed (#8941)
- Update peer dependencies to match core package version bump (0.21.2) (#8941)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/loggers
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/mcp
- Remove deprecated mcp options MastraMCPClient/MCPConfigurationOptions/MCPConfiguration (#9084)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/mcp-registry-registry
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/memory
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/mongodb
- Adds MongoDB Observability support, and MongoDB Storage documentation, examples, and telemetry. (#8426)
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/mssql
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/observability
- Create new @mastra/observability package at version 0.0.1. This empty package serves as a placeholder for AI tracing and scorer code that will be migrated from other packages, allowing users to add it as a dependency before the code migration. (#9051)
@mastra/opensearch
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/otel-exporter
- feat(otel-exporter): Add a customizable `exporter` constructor parameter. You can now pass an instantiated class inheriting from `TraceExporter` into `OtelExporter`. This bypasses the default package detection: `OtelExporter` no longer instantiates a `TraceExporter` automatically when one is passed to its constructor; see the sketch below.
  feat(arize): Initial release of the @mastra/arize observability package. The `@mastra/arize` package exports an `ArizeExporter` class that can be used to easily send AI traces from Mastra to Arize AX, Arize Phoenix, or any OpenInference-compatible collector. It sends traces using a `BatchSpanProcessor` over OTLP connections, and leverages the `@mastra/otel-exporter` package, reusing `OtelExporter` for transmission and span management. See the README in `observability/arize/README.md` for more details. (#8827)
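  A minimal sketch of supplying a pre-built exporter, assuming the OTLP HTTP trace exporter from OpenTelemetry satisfies the expected `TraceExporter` shape; any other `OtelExporter` options are omitted for brevity.

  ```ts
  import { OtelExporter } from '@mastra/otel-exporter';
  import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

  // Passing `exporter` skips the default package detection entirely.
  const otel = new OtelExporter({
    exporter: new OTLPTraceExporter({
      url: 'http://localhost:4318/v1/traces',
      headers: { 'x-api-key': process.env.OTLP_API_KEY ?? '' },
    }),
  });
  ```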
- fix(observability): Add `ParentSpanContext` to `MastraSpan`s with parentage (#9085)
- Update peerdeps to 0.23.0-0 (#9043)
- Rename LLM span types and attributes to use Model prefix
  BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
  - `AISpanType.LLM_GENERATION` → `AISpanType.MODEL_GENERATION`
  - `AISpanType.LLM_STEP` → `AISpanType.MODEL_STEP`
  - `AISpanType.LLM_CHUNK` → `AISpanType.MODEL_CHUNK`
  - `LLMGenerationAttributes` → `ModelGenerationAttributes`
  - `LLMStepAttributes` → `ModelStepAttributes`
  - `LLMChunkAttributes` → `ModelChunkAttributes`
  - `InternalSpans.LLM` → `InternalSpans.MODEL`
  This change better reflects that these span types apply to all AI models, not just Large Language Models.
  Migration guide:
  - Update all imports: `import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'`
  - Update span type references: `AISpanType.MODEL_GENERATION`
  - Update InternalSpans usage: `InternalSpans.MODEL` (#9105)
@mastra/pg
- Use the tz version of the timestamp column when fetching messages. (#8944)
- Avoid conflicts in pg fn naming when creating new tables on storage.init() (#8946)
- Update peerdeps to 0.23.0-0 (#9043)
- Fixes "invalid input syntax for type json" error in AI tracing with PostgreSQL. (#9181)
@mastra/pinecone
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/playground-ui
- Handle nested optional objects in dynamic form (#9059)
- Fix threads not refreshing correctly after generate / stream / network (#9015)
- Update peer dependencies to match core package version bump (0.21.2) (#9021)
- Move "Playground" to "Studio" in UI only (#9052)
- fix template background image overflow (#9011)
- Show agent tool output better in playground (#9021)
- Update peerdeps to 0.23.0-0 (#9043)
- Move all the fetching hooks that should be shared with cloud into playground-ui (#9133)
- Update peer dependencies to match core package version bump (0.22.1) (#8649)
- Add tool call approval (#8649)
@mastra/qdrant
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/rag
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/react
- Fix perf issue: removed flush sync (#9014)
- Fix tool result in playground (#9087)
- Show agent tool output better in playground (#9021)
- Add tool call approval (#8649)
@mastra/s3vectors
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/server
- Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
- Updated `watch` and `watchAsync` methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
- Update peer dependencies to match core package version bump (0.21.2) (#9038)
- Generate a title for Agent.network() threads (#8853)
- Fix custom metadata preservation in UIMessages when loading threads. The `getMessagesHandler` now converts `messagesV2` (V2 format with metadata) instead of `messages` (V1 format without metadata) to AIV5.UI format. Also updates the abstract `MastraMemory.query()` return type to include `messagesV2` for proper type safety. (#8938)
- Fix provider tools for popular providers and add support for anthropic/claude skills. (#9038)
- Update peer dependencies to match core package version bump (0.22.0) (#9092)
- Update peer dependencies to match core package version bump (0.21.2) (#9021)
- Show agent tool output better in playground (#9021)
- Update peerdeps to 0.23.0-0 (#9043)
- Update peer dependencies to match core package version bump (0.22.1) (#8649)
- Add tool call approval (#8649)
@mastra/turbopuffer
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/upstash
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/vectorize
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-azure
@mastra/voice-cloudflare
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-deepgram
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-elevenlabs
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-gladia
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-google
- dependencies updates:
  - Updated dependency `@google-cloud/text-to-speech@^6.3.1` (from `^6.3.0`, in `dependencies`) (#8936)
  - Updated dependency
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-google-gemini-live
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-murf
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-openai
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-openai-realtime
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-playai
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-sarvam
- Update peerdeps to 0.23.0-0 (#9043)
@mastra/voice-speechify
- Update peerdeps to 0.23.0-0 (#9043)
create-mastra
- Add scorers to the default weather agent in the create command. (#9042)
- Fix tool result in playground (#9087)