Highlights
Stream nested execution context from Workflows and Networks to your UI
Agent responses now stream live through workflows and networks, with complete execution metadata flowing to your UI.
In workflows, pipe agent streams directly through steps:
```ts
const planActivities = createStep({
  execute: async ({ mastra, writer }) => {
    const agent = mastra?.getAgent('weatherAgent');
    const response = await agent.stream('Plan activities');
    await response.fullStream.pipeTo(writer);
    return { activities: await response.text };
  },
});
```

In networks, each step is now tracked properly: unique IDs, iteration counts, task info, and agent handoffs all flow through with correct sequencing. No more duplicated steps or missing metadata.
Both surface text chunks, tool calls, and results as they happen, so users see progress in real time instead of waiting for the full response.
AI-SDK voice models are now supported
CompositeVoice now accepts AI SDK voice models directly—use OpenAI for transcription, ElevenLabs for speech, or any combination you want.
```ts
import { CompositeVoice } from "@mastra/core/voice";
import { openai } from "@ai-sdk/openai";
import { elevenlabs } from "@ai-sdk/elevenlabs";

const voice = new CompositeVoice({
  input: openai.transcription('whisper-1'),
  output: elevenlabs.speech('eleven_turbo_v2'),
});

const audio = await voice.speak("Hello from AI SDK!");
const transcript = await voice.listen(audio);
```

Works with OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and more. AI SDK models are automatically wrapped, so you can swap providers without changing your code.
Changelog
@mastra/ai-sdk
- Support streaming agent text chunks from `workflow-step-output`

  Adds support for streaming text and tool-call chunks from agents running inside workflows via the `workflow-step-output` event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.

  Features:

  - Added `includeTextStreamParts` option to `WorkflowStreamToAISDKTransformer` (defaults to `true`)
  - Added `isMastraTextStreamChunk` type guard to identify Mastra chunks with text streaming data
  - Support for streaming text chunks: `text-start`, `text-delta`, `text-end`
  - Support for streaming tool calls: `tool-call`, `tool-result`
  - Comprehensive test coverage in `transformers.test.ts`
  - Updated documentation for workflow streaming and `workflowRoute()`
Example:
  ```ts
  const planActivities = createStep({
    execute: async ({ mastra, writer }) => {
      const agent = mastra?.getAgent('weatherAgent');
      const response = await agent.stream('Plan activities');
      await response.fullStream.pipeTo(writer);
      return { activities: await response.text };
    },
  });
  ```

  When served via `workflowRoute()`, the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience. (#10568)
- Fix chat route to use agent ID instead of agent name for resolution. The `/chat/:agentId` endpoint now correctly resolves agents by their ID property (e.g., `weather-agent`) instead of requiring the camelCase variable name (e.g., `weatherAgent`). This fixes issue #10469, where URLs like `/chat/weather-agent` would return 404 errors. (#10565)
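  The resolution change above can be sketched as follows. This is a hypothetical helper, not the actual route code; the registry shape and function name are assumptions for illustration.

  ```typescript
  // Hypothetical sketch of the fix: resolve by the agent's `id` property,
  // not only by the registry key (the camelCase variable name).
  type AgentLike = { id: string; name: string };

  const registry: Record<string, AgentLike> = {
    weatherAgent: { id: 'weather-agent', name: 'Weather Agent' },
  };

  // Before: only `registry[param]` was checked, so '/chat/weather-agent' returned 404.
  // After: fall back to scanning registered agents for a matching `id`.
  function resolveAgent(param: string): AgentLike | undefined {
    return registry[param] ?? Object.values(registry).find(a => a.id === param);
  }
  ```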
- Fixes propagation of custom data chunks from nested workflows in branches to the root stream when using `toAISdkV5Stream` with `{ from: 'workflow' }`.

  Previously, when a nested workflow within a branch used `writer.custom()` to write `data-*` chunks, those chunks were wrapped in `workflow-step-output` events and not extracted, causing them to be dropped from the root stream.

  Changes:

  - Added handling for `workflow-step-output` chunks in `transformWorkflow()` to extract and propagate `data-*` chunks
  - When a `workflow-step-output` chunk contains a `data-*` chunk in its `payload.output`, the transformer now extracts it and returns it directly to the root stream
  - Added comprehensive test coverage for nested workflows with branches and custom data propagation

  This ensures that custom data chunks written via `writer.custom()` in nested workflows (especially those within branches) are properly propagated to the root stream, allowing consumers to receive progress updates, metrics, and other custom data from nested workflow steps. (#10447)
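  The unwrapping rule described above can be sketched in isolation. The `Chunk` shape and function name here are simplified assumptions, not the actual transformer implementation.

  ```typescript
  // Illustrative sketch: when a workflow-step-output chunk carries a data-*
  // chunk in payload.output, unwrap it so it reaches the root stream directly.
  type Chunk = { type: string; payload?: { output?: Chunk } };

  function unwrapStepOutput(chunk: Chunk): Chunk {
    const inner = chunk.payload?.output;
    if (chunk.type === 'workflow-step-output' && inner?.type.startsWith('data-')) {
      return inner; // propagate the custom data chunk as-is
    }
    return chunk; // everything else stays wrapped
  }
  ```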
- Fix network data step formatting in AI SDK stream transformation

  Previously, network execution steps were not tracked correctly in the AI SDK stream transformation. Steps were duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.

  Changes:

  - Enhanced step tracking in `AgentNetworkToAISDKTransformer` to properly maintain step state throughout the execution lifecycle
  - Steps are now identified by unique IDs and updated in place rather than creating duplicates
  - Added proper iteration and task metadata to each step in the network execution flow
  - Fixed agent, workflow, and tool execution events to correctly populate step data
  - Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
  - Added test coverage for network custom data chunks with comprehensive validation

  This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
- [0.x] Make the `workflowRoute` `includeTextStreamParts` option default to `false` (#10574)
- Add support for `tool-call-approval` and `tool-call-suspended` events in `chatRoute` (#10205)
- Backports the `messageMetadata` and `onError` support from PR #10313 to the 0.x branch, adding these features to the `toAISdkFormat` function.

  - Added `messageMetadata` parameter to `toAISdkFormat` options
    - The function receives the current stream part and returns metadata to attach to start and finish chunks
    - Metadata is included in `start` and `finish` chunks when provided
  - Added `onError` parameter to `toAISdkFormat` options
    - Allows custom error handling during stream conversion
    - Falls back to the `safeParseErrorObject` utility when not provided
  - Added `safeParseErrorObject` utility function for error parsing
  - Updated `AgentStreamToAISDKTransformer` to accept and use `messageMetadata` and `onError`
  - Updated JSDoc documentation with parameter descriptions and usage examples
  - Added comprehensive test suite for `messageMetadata` functionality (6 test cases)
  - Fixed existing test file to use `toAISdkFormat` instead of the removed `toAISdkV5Stream`
    - All existing tests pass (14 tests across 3 test files)

  New tests verify:

  - `messageMetadata` is called with the correct part structure
  - Metadata is included in start and finish chunks
  - Proper handling when `messageMetadata` is not provided or returns null/undefined
  - The function is called for each relevant part in the stream
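  The attach behavior described above can be sketched as a standalone function. The `Part` type and function name are simplified assumptions for illustration, not the transformer's real internals.

  ```typescript
  // Minimal sketch of the messageMetadata behavior: the callback is invoked
  // per part, and a non-null return value is attached to start/finish chunks.
  type Part = { type: string; messageMetadata?: unknown };
  type MetadataFn = (part: Part) => unknown;

  function attachMetadata(part: Part, messageMetadata?: MetadataFn): Part {
    if (!messageMetadata) return part;                            // option not provided
    if (part.type !== 'start' && part.type !== 'finish') return part;
    const metadata = messageMetadata(part);
    return metadata == null ? part : { ...part, messageMetadata: metadata };
  }
  ```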
Fixed workflow routes to properly receive request context from middleware. This aligns the behavior of
workflowRoutewithchatRoute, ensuring that context set in middleware is consistently forwarded to workflows.When both middleware and request body provide a request context, the middleware value now takes precedence, and a warning is emitted to help identify potential conflicts.
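  The precedence rule can be sketched as a small merge helper. This is a hypothetical illustration of the documented behavior; the function name and context shape are assumptions, not the actual route code.

  ```typescript
  // Sketch: middleware-provided context wins over the request body,
  // with a warning emitted for each conflicting key.
  function mergeRequestContext(
    fromMiddleware: Record<string, unknown>,
    fromBody: Record<string, unknown>,
    warn: (msg: string) => void = console.warn,
  ): Record<string, unknown> {
    for (const key of Object.keys(fromBody)) {
      if (key in fromMiddleware && fromMiddleware[key] !== fromBody[key]) {
        warn(`Request context key "${key}" set by both middleware and body; using middleware value`);
      }
    }
    return { ...fromBody, ...fromMiddleware }; // middleware takes precedence
  }
  ```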
@mastra/astra
@mastra/auth-clerk
- remove organization requirement from default authorization (#10551)
@mastra/chroma
@mastra/clickhouse
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
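The sorting fix above (dedupe, then explicit sort by `createdAt`) can be sketched in isolation. The message shape and helper name are simplified assumptions, not the store implementation.

```typescript
// Sketch: after combining paginated messages with semantically recalled
// ("include") messages, dedupe by id and sort chronologically by createdAt.
type Msg = { id: string; createdAt: Date };

function combineAndSort(paginated: Msg[], included: Msg[]): Msg[] {
  const byId = new Map<string, Msg>();
  for (const m of [...paginated, ...included]) byId.set(m.id, m); // dedupe by id
  return [...byId.values()].sort(
    (a, b) => a.createdAt.getTime() - b.createdAt.getTime(),
  );
}
```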
@mastra/cloudflare
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in listMessages when using semantic recall (include parameter). Messages are now always sorted by createdAt instead of storage order, ensuring correct chronological ordering of conversation history. (#10545)
@mastra/cloudflare-d1
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/core
- Fix base64 encoded images with threads (issue #10480)

  Fixed an "Invalid URL" error when using base64 encoded images (without the `data:` prefix) in agent calls with threads and resources. Raw base64 strings are now automatically converted to proper data URIs before being processed.

  Changes:

  - Updated `attachments-to-parts.ts` to detect and convert raw base64 strings to data URIs
  - Fixed `MessageList` image processing to handle raw base64 in two locations:
    - Image part conversion in `aiV4CoreMessageToV1PromptMessage`
    - File part to `experimental_attachments` conversion in `mastraDBMessageToAIV4UIMessage`
  - Added comprehensive tests for base64 images, data URIs, and HTTP URLs with threads

  Breaking Change: None. This is a bug fix that maintains backward compatibility while adding support for raw base64 strings. (#10483)
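  The conversion described above can be sketched as follows. The detection heuristic, default MIME type, and function name are illustrative assumptions, not the library's actual code.

  ```typescript
  // Sketch: leave data URIs and http(s) URLs untouched;
  // wrap a raw base64 string in a proper data URI.
  function toImageUrl(value: string, mimeType = 'image/png'): string {
    if (value.startsWith('data:') || /^https?:\/\//.test(value)) return value;
    return `data:${mimeType};base64,${value}`;
  }
  ```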
- SimpleAuth and improved CloudAuth (#10569)
- Fixed OpenAI schema compatibility when using `agent.generate()` or `agent.stream()` with `structuredOutput`.

  Changes:

  - Automatic transformation: Zod schemas are now automatically transformed for OpenAI strict mode compatibility when using OpenAI models (including reasoning models like o1, o3, o4)
  - Optional field handling: `.optional()` fields are converted to `.nullable()` with a transform that converts `null` → `undefined`, preserving optional semantics while satisfying OpenAI's strict mode requirements
  - Preserves nullable fields: intentionally `.nullable()` fields remain unchanged
  - Deep transformation: handles `.optional()` fields at any nesting level (objects, arrays, unions, etc.)
  - JSON Schema objects: not transformed; only Zod schemas

  Example:

  ```ts
  const agent = new Agent({
    name: 'data-extractor',
    model: { provider: 'openai', modelId: 'gpt-4o' },
    instructions: 'Extract user information',
  });

  const schema = z.object({
    name: z.string(),
    age: z.number().optional(),
    deletedAt: z.date().nullable(),
  });

  // Schema is automatically transformed for OpenAI compatibility
  const result = await agent.generate('Extract: John, deleted yesterday', {
    structuredOutput: { schema },
  });
  // Result: { name: 'John', age: undefined, deletedAt: null }
  ```

  (#10454)
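  The `null` → `undefined` post-processing that the transform applies to optional fields boils down to the following sketch (a plain-object illustration of the semantics, not the schema-compat implementation):

  ```typescript
  // Sketch: OpenAI strict mode returns null for missing optional fields;
  // map those back to undefined, while intentionally nullable fields keep null.
  function nullToUndefined(
    obj: Record<string, unknown>,
    optionalKeys: string[],
  ): Record<string, unknown> {
    const out = { ...obj };
    for (const key of optionalKeys) {
      if (out[key] === null) out[key] = undefined; // restore optional semantics
    }
    return out;
  }
  ```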
- deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
- Fix `generateTitle` model type to accept AI SDK `LanguageModelV2`

  Updated the `generateTitle.model` config option to accept `MastraModelConfig` instead of `MastraLanguageModel`. This allows users to pass raw AI SDK `LanguageModelV2` models (e.g., `anthropic.languageModel('claude-3-5-haiku-20241022')`) directly without type errors.

  Previously, passing a standard `LanguageModelV2` would fail because `MastraLanguageModelV2` has different `doGenerate`/`doStream` return types. Now `MastraModelConfig` is used consistently across:

  - `memory/types.ts`: `generateTitle.model` config
  - `agent.ts`: `genTitle`, `generateTitleFromUserMessage`, `resolveTitleGenerationConfig`
  - `agent-legacy.ts`: `AgentLegacyCapabilities` interface (#10567)
- Fix message metadata not persisting when using the simple message format. Previously, custom metadata passed in messages (e.g., `{role: 'user', content: 'text', metadata: {userId: '123'}}`) was not being saved to the database. This occurred because the CoreMessage conversion path didn't preserve metadata fields.

  Now metadata is properly preserved for all message input formats:

  - Simple CoreMessage format: `{role, content, metadata}`
  - Full UIMessage format: `{role, content, parts, metadata}`
  - AI SDK v5 ModelMessage format with metadata
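  The intent of the fix can be sketched with simplified types (the shapes and helper name below are illustrative assumptions, not the conversion path's real code):

  ```typescript
  // Sketch: whichever input shape a message arrives in, carry its `metadata`
  // field through conversion instead of dropping it.
  type InMsg = { role: string; content?: string; metadata?: Record<string, unknown> };
  type DbMsg = { role: string; content: string; metadata?: Record<string, unknown> };

  function toDbMessage(msg: InMsg): DbMsg {
    return {
      role: msg.role,
      content: msg.content ?? '',
      // only include the key when metadata was actually provided
      ...(msg.metadata !== undefined && { metadata: msg.metadata }),
    };
  }
  ```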
- feat: Composite auth implementation (#10359)
- Fix the `requireApproval` property being ignored for tools passed via the `toolsets`, `clientTools`, and `memoryTools` parameters. The `requireApproval` flag now correctly propagates through all tool conversion paths, ensuring tools requiring approval properly request user approval before execution. (#10562)
- Fix Azure Foundry rate limit handling for -1 values (#10409)
- Fix model headers not being passed through the gateway system

  Previously, custom headers specified in `MastraModelConfig` were not being passed through the gateway system to model providers. This affected:

  - OpenRouter (preventing activity tracking with `HTTP-Referer` and `X-Title`)
  - Custom providers using custom URLs (headers not passed to `createOpenAICompatible`)
  - Custom gateway implementations (headers not available in `resolveLanguageModel`)

  Now headers are correctly passed through the entire gateway system:

  - Base `MastraModelGateway` interface updated to accept headers
  - `ModelRouterLanguageModel` passes headers from config to all gateways
  - OpenRouter receives headers for activity tracking
  - Custom URL providers receive headers via `createOpenAICompatible`
  - Custom gateways can access headers in their `resolveLanguageModel` implementation

  Example usage:

  ```ts
  // Works with OpenRouter
  const agent = new Agent({
    name: 'my-agent',
    instructions: 'You are a helpful assistant.',
    model: {
      id: 'openrouter/anthropic/claude-3-5-sonnet',
      headers: {
        'HTTP-Referer': 'https://myapp.com',
        'X-Title': 'My Application',
      },
    },
  });

  // Also works with custom providers
  const customAgent = new Agent({
    name: 'custom-agent',
    instructions: 'You are a helpful assistant.',
    model: {
      id: 'custom-provider/model',
      url: 'https://api.custom.com/v1',
      apiKey: 'key',
      headers: {
        'X-Custom-Header': 'custom-value',
      },
    },
  });
  ```
- fix(agent): persist messages before tool suspension

  Fixes issues where the thread and messages were not saved before suspension when tools require approval or call `suspend()` during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

  Backend changes (@mastra/core):

  - Add assistant messages to `messageList` immediately after LLM execution
  - Flush messages synchronously before suspension to persist state
  - Create the thread if it doesn't exist before flushing
  - Add metadata helpers to persist and remove tool approval state
  - Pass `saveQueueManager` and memory context through the workflow for immediate persistence

  Frontend changes (@mastra/react):

  - Extract `runId` from pending approvals to enable resumption after refresh
  - Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
  - Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
  - Change `runId` from the hardcoded `agentId` to a unique `uuid()`

  UI changes (@mastra/playground-ui):

  - Handle tool calls awaiting approval in message initialization
  - Convert approval metadata format when loading initial messages
- Fix race condition in parallel tool stream writes

  Introduces a write queue to `ToolStream` to serialize access to the underlying stream, preventing "writer locked" errors (#10463)
- Remove unneeded console warning when flushing messages and no `threadId` or `saveQueueManager` is found. (#10369)
- Fixes GPT-5 reasoning, which was failing on subsequent tool calls with the error: `Item 'fc_xxx' of type 'function_call' was provided without its required 'reasoning' item: 'rs_xxx'` (#10489)
- Add optional `includeRawChunks` parameter to agent execution options, allowing users to include raw chunks in stream output where supported by the model provider. (#10456)
- When `mastra dev` runs, multiple processes can write to `provider-registry.json` concurrently (auto-refresh, `syncGateways`, `syncGlobalCacheToLocal`). This caused file corruption where the end of the JSON appeared twice, making it unparseable.

  The fix uses atomic writes via the write-to-temp-then-rename pattern. Instead of:

  ```ts
  fs.writeFileSync(filePath, content, 'utf-8');
  ```

  We now do:

  ```ts
  const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomSuffix}.tmp`;
  fs.writeFileSync(tempPath, content, 'utf-8');
  fs.renameSync(tempPath, filePath); // atomic on POSIX
  ```

  `fs.rename()` is atomic on POSIX systems when both paths are on the same filesystem, so concurrent writes each complete fully rather than interleaving. (#10529)
- Ensures that data chunks written via `writer.custom()` always bubble up directly to the top-level stream, even when nested in sub-agents. This allows tools to emit custom progress updates, metrics, and other data that can be consumed at any level of the agent hierarchy.

  - Added bubbling logic in sub-agent execution: when sub-agents execute, data chunks (chunks with a type starting with `data-`) are detected and written via `writer.custom()` instead of `writer.write()`, ensuring they bubble up directly without being wrapped in `tool-output` chunks.
  - Added comprehensive tests:
    - Test for `writer.custom()` with direct tool execution
    - Test for `writer.custom()` with sub-agent tools (nested execution)
    - Test for mixed usage of `writer.write()` and `writer.custom()` in the same tool

  When a sub-agent's tool uses `writer.custom()` to write data chunks, those chunks appear in the sub-agent's stream. The parent agent's execution logic now detects these chunks and uses `writer.custom()` to bubble them up directly, preserving their structure and making them accessible at the top level.

  This ensures that:

  - Data chunks from tools always appear directly in the stream (not wrapped)
  - Data chunks bubble up correctly through nested agent hierarchies
  - Regular chunks continue to be wrapped in `tool-output` as expected (#10309)
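  The routing rule can be sketched with the writer simplified to two sinks (the interface below is an illustrative assumption, not the real `writer` API):

  ```typescript
  // Sketch: chunks whose type starts with "data-" are forwarded via the
  // custom path (not wrapped in tool-output); everything else goes through
  // the regular write path as before.
  type Chunk = { type: string };
  interface WriterLike { written: Chunk[]; custom: Chunk[] }

  function forwardSubAgentChunk(writer: WriterLike, chunk: Chunk): void {
    if (chunk.type.startsWith('data-')) {
      writer.custom.push(chunk); // bubbles up unwrapped
    } else {
      writer.written.push(chunk); // will be wrapped in tool-output
    }
  }
  ```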
- Adds the ability to create custom `MastraModelGateway`s that can be added to the `Mastra` class instance under the `gateways` property, giving you TypeScript autocompletion in any model picker string.

  ```ts
  import { MastraModelGateway, type ProviderConfig } from '@mastra/core/llm';
  import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
  import type { LanguageModelV2 } from '@ai-sdk/provider';

  class MyCustomGateway extends MastraModelGateway {
    readonly id = 'custom';
    readonly name = 'My Custom Gateway';

    async fetchProviders(): Promise<Record<string, ProviderConfig>> {
      return {
        'my-provider': {
          name: 'My Provider',
          models: ['model-1', 'model-2'],
          apiKeyEnvVar: 'MY_API_KEY',
          gateway: this.id,
        },
      };
    }

    buildUrl(modelId: string, envVars?: Record<string, string>): string {
      return 'https://api.my-provider.com/v1';
    }

    async getApiKey(modelId: string): Promise<string> {
      const apiKey = process.env.MY_API_KEY;
      if (!apiKey) throw new Error('MY_API_KEY not set');
      return apiKey;
    }

    async resolveLanguageModel({
      modelId,
      providerId,
      apiKey,
    }: {
      modelId: string;
      providerId: string;
      apiKey: string;
    }): Promise<LanguageModelV2> {
      const baseURL = this.buildUrl(`${providerId}/${modelId}`);
      return createOpenAICompatible({
        name: providerId,
        apiKey,
        baseURL,
      }).chatModel(modelId);
    }
  }

  new Mastra({
    gateways: {
      myGateway: new MyCustomGateway(),
    },
  });
  ```

  (#10535)
- Support AI SDK voice models

  Mastra now supports AI SDK transcription and speech models directly in `CompositeVoice`, enabling seamless integration with a wide range of voice providers through the AI SDK ecosystem. This allows you to use models from OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and many more for both speech-to-text (transcription) and text-to-speech capabilities.

  AI SDK models are automatically wrapped when passed to `CompositeVoice`, so you can mix and match AI SDK models with existing Mastra voice providers for maximum flexibility.

  Usage example:

  ```ts
  import { CompositeVoice } from "@mastra/core/voice";
  import { openai } from "@ai-sdk/openai";
  import { elevenlabs } from "@ai-sdk/elevenlabs";

  // Use AI SDK models directly with CompositeVoice
  const voice = new CompositeVoice({
    input: openai.transcription('whisper-1'),     // AI SDK transcription model
    output: elevenlabs.speech('eleven_turbo_v2'), // AI SDK speech model
  });

  // Convert text to speech
  const audioStream = await voice.speak("Hello from AI SDK!");

  // Convert speech to text
  const transcript = await voice.listen(audioStream);
  console.log(transcript);
  ```
- Fix network data step formatting in AI SDK stream transformation

  Previously, network execution steps were not tracked correctly in the AI SDK stream transformation. Steps were duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.

  Changes:

  - Enhanced step tracking in `AgentNetworkToAISDKTransformer` to properly maintain step state throughout the execution lifecycle
  - Steps are now identified by unique IDs and updated in place rather than creating duplicates
  - Added proper iteration and task metadata to each step in the network execution flow
  - Fixed agent, workflow, and tool execution events to correctly populate step data
  - Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
  - Added test coverage for network custom data chunks with comprehensive validation

  This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
- Fix generating provider-registry.json (#10535)
- Fix message-list conversion issues when persisting messages before tool suspension: filter internal metadata fields (`__originalContent`) from UI messages, keep the reasoning field empty for consistent cache keys during message deduplication, and only include `providerMetadata` on parts when defined. (#10552)
- Fix `agent.generate()` to use the model's `doGenerate` method instead of `doStream`

  When calling `agent.generate()`, the model's `doGenerate` method is now correctly invoked instead of always using `doStream`. This aligns the non-streaming generation path with the intended behavior, where providers can implement optimized non-streaming responses. (#10572)
@mastra/couchbase
@mastra/deployer
- Rename "Playground" to "Studio" (#10443)
- Fixed a bug where imports that were not used in the main entry point were tree-shaken during analysis, causing bundling errors. Tree-shaking now only runs during the bundling step. (#10470)
@mastra/deployer-cloud
- SimpleAuth and improved CloudAuth (#10569)
- Do not initialize local storage when using mastra cloud storage instead (#10495)
@mastra/dynamodb
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/inngest
- Emit `workflow-step-result` and `workflow-step-finish` when a step fails in an Inngest workflow (#10555)
@mastra/lance
- deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
@mastra/libsql
- deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/mcp
- Fix MCP client to return structuredContent directly when tools define an outputSchema, ensuring output validation works correctly instead of failing with "expected X, received undefined" errors. (#10442)
@mastra/mcp-docs-server
- Ensure changelog truncation includes at least 2 versions before cutting off (#10496)
@mastra/mongodb
- deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/mssql
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/opensearch
@mastra/pg
- deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
- fix: ensure score responses match saved payloads for Mastra Stores. (#10570)
- Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#10573)
@mastra/playground-ui
- fix(agent): persist messages before tool suspension

  Fixes issues where the thread and messages were not saved before suspension when tools require approval or call `suspend()` during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

  Backend changes (@mastra/core):

  - Add assistant messages to `messageList` immediately after LLM execution
  - Flush messages synchronously before suspension to persist state
  - Create the thread if it doesn't exist before flushing
  - Add metadata helpers to persist and remove tool approval state
  - Pass `saveQueueManager` and memory context through the workflow for immediate persistence

  Frontend changes (@mastra/react):

  - Extract `runId` from pending approvals to enable resumption after refresh
  - Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
  - Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
  - Change `runId` from the hardcoded `agentId` to a unique `uuid()`

  UI changes (@mastra/playground-ui):

  - Handle tool calls awaiting approval in message initialization
  - Convert approval metadata format when loading initial messages
@mastra/rag
@mastra/react
- Configurable `resourceId` in React `useChat` (#10561)
- fix(agent): persist messages before tool suspension

  Fixes issues where the thread and messages were not saved before suspension when tools require approval or call `suspend()` during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

  Backend changes (@mastra/core):

  - Add assistant messages to `messageList` immediately after LLM execution
  - Flush messages synchronously before suspension to persist state
  - Create the thread if it doesn't exist before flushing
  - Add metadata helpers to persist and remove tool approval state
  - Pass `saveQueueManager` and memory context through the workflow for immediate persistence

  Frontend changes (@mastra/react):

  - Extract `runId` from pending approvals to enable resumption after refresh
  - Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
  - Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
  - Change `runId` from the hardcoded `agentId` to a unique `uuid()`

  UI changes (@mastra/playground-ui):

  - Handle tool calls awaiting approval in message initialization
  - Convert approval metadata format when loading initial messages
@mastra/schema-compat
- Fixed OpenAI schema compatibility when using `agent.generate()` or `agent.stream()` with `structuredOutput`.

  Changes:

  - Automatic transformation: Zod schemas are now automatically transformed for OpenAI strict mode compatibility when using OpenAI models (including reasoning models like o1, o3, o4)
  - Optional field handling: `.optional()` fields are converted to `.nullable()` with a transform that converts `null` → `undefined`, preserving optional semantics while satisfying OpenAI's strict mode requirements
  - Preserves nullable fields: intentionally `.nullable()` fields remain unchanged
  - Deep transformation: handles `.optional()` fields at any nesting level (objects, arrays, unions, etc.)
  - JSON Schema objects: not transformed; only Zod schemas

  Example:

  ```ts
  const agent = new Agent({
    name: 'data-extractor',
    model: { provider: 'openai', modelId: 'gpt-4o' },
    instructions: 'Extract user information',
  });

  const schema = z.object({
    name: z.string(),
    age: z.number().optional(),
    deletedAt: z.date().nullable(),
  });

  // Schema is automatically transformed for OpenAI compatibility
  const result = await agent.generate('Extract: John, deleted yesterday', {
    structuredOutput: { schema },
  });
  // Result: { name: 'John', age: undefined, deletedAt: null }
  ```

  (#10454)
@mastra/upstash
- Fix message sorting in listMessages when using semantic recall (include parameter). Messages are now always sorted by createdAt instead of storage order, ensuring correct chronological ordering of conversation history. (#10545)
@mastra/voice-deepgram
- feat(voice-deepgram): add speaker diarization support for STT (#10536)
mastra
- Rename "Playground" to "Studio" (#10443)