## Patch Changes
- #967 `c128447` Thanks @threepointone! - Follow-up to #956. Allow `addToolOutput` to work with tools in the `approval-requested` and `approval-responded` states, not just `input-available`. Also adds support for `state: "output-error"` and `errorText` fields, enabling custom denial messages when rejecting tool approvals (addresses remaining items from #955). Additionally, tool approval rejections (`approved: false`) now auto-continue the conversation when `autoContinue` is set, so the LLM sees the denial and can respond naturally (e.g. suggest alternatives).

  This enables the Vercel AI SDK recommended pattern for client-side tool denial:

  ```ts
  addToolOutput({
    toolCallId: invocation.toolCallId,
    state: "output-error",
    errorText: "User declined: insufficient permissions",
  });
  ```
- #958 `f70a8b9` Thanks @whoiskatrin! - Fix duplicate assistant message persistence when clients resend full history with local assistant IDs that differ from server IDs. `AIChatAgent.persistMessages()` now reconciles non-tool assistant messages against the existing server history by content and order, reusing the server ID instead of inserting duplicate rows.
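  The reconciliation step described above can be sketched roughly as follows. This is an illustrative simplification, not the actual `persistMessages()` implementation; the `Message` shape and the `reconcileIds` helper are hypothetical:

  ```ts
  // Hypothetical sketch: match resent assistant messages against server
  // history by content and order, reusing the server ID so a client
  // resend with different local IDs does not insert duplicate rows.
  type Message = { id: string; role: "user" | "assistant"; content: string };

  function reconcileIds(incoming: Message[], serverHistory: Message[]): Message[] {
    let cursor = 0; // walk server history in order, never backwards
    return incoming.map((msg) => {
      if (msg.role !== "assistant") return msg;
      for (let i = cursor; i < serverHistory.length; i++) {
        const existing = serverHistory[i];
        if (existing.role === "assistant" && existing.content === msg.content) {
          cursor = i + 1;
          return { ...msg, id: existing.id }; // reuse the server ID
        }
      }
      return msg; // no match: genuinely new message, keep the client ID
    });
  }
  ```

  A resent assistant message whose content already exists in server history comes back carrying the server's ID, so persisting it updates the existing row instead of inserting a duplicate.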
- #977 `5426b6f` Thanks @dmmulroy! - Expose `requestId` in `OnChatMessageOptions` so handlers can send properly-tagged error responses for pre-stream failures. Also fix `saveMessages()` to pass the full options object (`requestId`, `abortSignal`, `clientTools`, `body`) to `onChatMessage` and use a consistent request ID for `_reply`.
- #973 `969fbff` Thanks @threepointone! - Update dependencies.
- #983 `2785f10` Thanks @threepointone! - Fix abort/cancel support for streaming responses. The framework now properly cancels the reader loop when the abort signal fires and sends a done signal to the client. Added a warning log when cancellation arrives but the stream has not closed (indicating the user forgot to pass `abortSignal` to their LLM call). Also fixed vitest project configs to scope test-file discovery and prevent e2e/react tests from being picked up by the wrong runner.
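  The warning fires when cancellation arrives but the provider stream keeps producing because the signal was never forwarded. A minimal sketch of the forwarding pattern, with the LLM call mocked (real code would pass the signal through to the provider call, e.g. via the AI SDK's `abortSignal` option):

  ```ts
  // Mock LLM stream that respects an AbortSignal, standing in for a
  // real provider call. Forwarding the signal is what lets a client
  // cancellation actually close the stream instead of only logging
  // the warning described above.
  async function mockLLMStream(signal: AbortSignal): Promise<string[]> {
    const chunks: string[] = [];
    for (const chunk of ["Hello", ", ", "world"]) {
      if (signal.aborted) break; // stop producing once the client cancels
      chunks.push(chunk);
    }
    return chunks;
  }

  const controller = new AbortController();
  controller.abort(); // simulate the client cancelling the request
  mockLLMStream(controller.signal).then((chunks) => {
    console.log(chunks.length); // 0 — nothing streamed after abort
  });
  ```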
- #979 `23c90ea` Thanks @mattzcarey! - Fix a "jsonSchema not initialized" error when calling `getAITools()` in `onChatMessage`.
- #980 `00c576d` Thanks @threepointone! - Fix `_sanitizeMessageForPersistence` stripping Anthropic `redacted_thinking` blocks. The sanitizer now strips OpenAI ephemeral metadata first, then filters out only reasoning parts that are truly empty (no text and no remaining `providerMetadata`). This preserves Anthropic's `redacted_thinking` blocks (stored as empty-text reasoning parts with `providerMetadata.anthropic.redactedData`) while still removing OpenAI placeholders. Fixes #978.
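  The filtering rule described above can be illustrated with a simplified predicate. The types and the `keepReasoningPart` name are illustrative, not the actual sanitizer internals:

  ```ts
  // Keep a reasoning part unless it is truly empty: no text AND no
  // remaining providerMetadata. Anthropic redacted_thinking blocks are
  // empty-text reasoning parts that still carry provider metadata, so
  // they survive; bare OpenAI placeholders do not.
  type ReasoningPart = {
    type: "reasoning";
    text: string;
    providerMetadata?: Record<string, unknown>;
  };

  function keepReasoningPart(part: ReasoningPart): boolean {
    const hasText = part.text.length > 0;
    const hasMetadata =
      part.providerMetadata !== undefined &&
      Object.keys(part.providerMetadata).length > 0;
    return hasText || hasMetadata;
  }

  // OpenAI placeholder (empty, no metadata): dropped.
  console.log(keepReasoningPart({ type: "reasoning", text: "" })); // false
  // Anthropic redacted_thinking: preserved via its provider metadata.
  console.log(
    keepReasoningPart({
      type: "reasoning",
      text: "",
      providerMetadata: { anthropic: { redactedData: "opaque-blob" } },
    })
  ); // true
  ```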
- #953 `bd22d60` Thanks @mattzcarey! - Moved `/get-messages` endpoint handling from a prototype `onRequest()` override to a constructor wrapper. This ensures the endpoint always works, even when users override `onRequest` without calling `super.onRequest()`.
- #956 `ab401a0` Thanks @whoiskatrin! - Fix denied tool approvals (`CF_AGENT_TOOL_APPROVAL` with `approved: false`) to transition tool parts to `output-denied` instead of `approval-responded`. This ensures `convertToModelMessages` emits a `tool_result` for denied approvals, which is required by providers like Anthropic. Also adds regression tests for denied-approval behavior, including rejection from the `approval-requested` state.
- #982 `5a851be` Thanks @threepointone! - Undeprecate client tool APIs (`createToolsFromClientSchemas`, `clientTools`, `AITool`, `extractClientToolSchemas`, and the `tools` option on `useAgentChat`) for SDK/platform use cases where tools are defined dynamically at runtime. Fix a spurious `detectToolsRequiringConfirmation` deprecation warning when using the `tools` option.