4.146.0
Minor Changes
- #4865 d9e65fe Thanks @kevinvandijk! - Include changes from Roo Code v3.36.7-v3.38.3
- Feat: Add option in Context settings to recursively load .kilocode/rules and AGENTS.md from subdirectories (PR #10446 by @mrubens)
- Fix: Stop frequent Claude Code sign-ins by hardening OAuth refresh token handling (PR #10410 by @hannesrudolph)
- Fix: Add maxConcurrentFileReads limit to native read_file tool schema (PR #10449 by @app/roomote)
- Fix: Add type check for lastMessage.text in TTS useEffect to prevent runtime errors (PR #10431 by @app/roomote)
- Align skills system with Agent Skills specification (PR #10409 by @hannesrudolph)
- Prevent write_to_file from creating files at truncated paths (PR #10415 by @mrubens and @daniel-lxs)
- Fix rate limit wait display (PR #10389 by @hannesrudolph)
- Remove human-relay provider (PR #10388 by @hannesrudolph)
- Fix: Flush pending tool results before condensing context (PR #10379 by @daniel-lxs)
- Fix: Revert mergeToolResultText for OpenAI-compatible providers (PR #10381 by @hannesrudolph)
- Fix: Enforce maxConcurrentFileReads limit in read_file tool (PR #10363 by @roomote)
- Fix: Improve feedback message when read_file is used on a directory (PR #10371 by @roomote)
- Fix: Handle custom tool use similarly to MCP tools for IPC schema purposes (PR #10364 by @jr)
- Add support for npm packages and .env files to custom tools, allowing custom tools to import dependencies and access environment variables (PR #10336 by @cte); an illustrative sketch of such a tool follows this list
- Remove simpleReadFileTool feature, streamlining the file reading experience (PR #10254 by @app/roomote)
- Remove OpenRouter Transforms feature (PR #10341 by @app/roomote)
- Fix: Send native tool definitions by default for OpenAI to ensure proper tool usage (PR #10314 by @hannesrudolph)
- Fix: Preserve reasoning_details shape to prevent malformed responses when processing model output (PR #10313 by @hannesrudolph)
- Fix: Drain queued messages while waiting for ask to prevent message loss (PR #10315 by @hannesrudolph)
- Feat: Add grace retry for empty assistant messages to improve reliability (PR #10297 by @hannesrudolph)
- Feat: Enable mergeToolResultText for all OpenAI-compatible providers for better tool result handling (PR #10299 by @hannesrudolph)
- Feat: Strengthen native tool-use guidance in prompts for improved model behavior (PR #10311 by @hannesrudolph)
- Add MiniMax M2.1 and improve environment_details handling for Minimax thinking models (PR #10284 by @hannesrudolph)
- Add GLM-4.7 model with thinking mode support for Zai provider (PR #10282 by @hannesrudolph)
- Add experimental custom tool calling - define custom tools that integrate seamlessly with your AI workflow (PR #10083 by @cte)
- Deprecate XML tool protocol selection and force native tool format for new tasks (PR #10281 by @daniel-lxs)
- Fix: Emit tool_call_end events in OpenAI handler when streaming ends (#10275 by @torxeon, PR #10280 by @daniel-lxs)
- Fix: Emit tool_call_end events in BaseOpenAiCompatibleProvider (PR #10293 by @hannesrudolph)
- Fix: Disable strict mode for MCP tools to preserve optional parameters (PR #10220 by @daniel-lxs)
- Fix: Move array-specific properties into anyOf variant in normalizeToolSchema (PR #10276 by @daniel-lxs); see the anyOf sketch after this list
- Fix: Add graceful fallback for model parsing in Chutes provider (PR #10279 by @hannesrudolph)
- Fix: Enable Requesty refresh models with credentials (PR #10273 by @daniel-lxs)
- Fix: Improve reasoning_details accumulation and serialization (PR #10285 by @hannesrudolph)
- Fix: Preserve reasoning_content in condense summary for DeepSeek-reasoner (PR #10292 by @hannesrudolph)
- Refactor Zai provider to merge environment_details into tool result instead of system message (PR #10289 by @hannesrudolph)
- Remove parallel_tool_calls parameter from litellm provider (PR #10274 by @roomote)
- Fix: Normalize tool schemas for VS Code LM API to resolve error 400 when using VS Code Language Model API providers (PR #10221 by @hannesrudolph)
- Add 1M context window beta support for Claude Sonnet 4 on Vertex AI, enabling significantly larger context for complex tasks (PR #10209 by @hannesrudolph)
- Add native tool call defaults for OpenAI-compatible providers, expanding native function calling across more configurations (PR #10213 by @hannesrudolph)
- Enable native tool calls for Requesty provider (PR #10211 by @daniel-lxs)
- Improve API error handling and visibility with clearer error messages and better user feedback (PR #10204 by @brunobergher)
- Add downloadable error diagnostics from chat errors, making it easier to troubleshoot and report issues (PR #10188 by @brunobergher)
- Fix refresh models button not properly flushing the cache, ensuring model lists update correctly (#9682 by @tl-hbk, PR #9870 by @pdecat)
- Fix additionalProperties handling for strict mode compatibility, resolving schema validation issues with certain providers (PR #10210 by @daniel-lxs)
- Add native tool calling support for Claude models on Vertex AI, enabling more efficient and reliable tool interactions (PR #10197 by @hannesrudolph)
- Fix JSON Schema format value stripping for OpenAI compatibility, resolving issues with unsupported format values (PR #10198 by @daniel-lxs)
- Improve "no tools used" error handling with graceful retry mechanism for better reliability when tools fail to execute (PR #10196 by @hannesrudolph)
- Change default tool protocol from XML to native for improved reliability and performance (PR #10186 by @mrubens)
- Add native tool support for VS Code Language Model API providers (PR #10191 by @daniel-lxs)
- Lock task tool protocol for consistent task resumption, ensuring tasks resume with the same protocol they started with (PR #10192 by @daniel-lxs)
- Replace edit_file tool alias with actual edit_file tool for improved diff editing capabilities (PR #9983 by @hannesrudolph)
- Fix LiteLLM router models by merging default model info for native tool calling support (PR #10187 by @daniel-lxs)
- Fix: Add userAgentAppId to Bedrock embedder for code indexing (#10165 by @jackrein, PR #10166 by @roomote)
- Update OpenAI and Gemini tool preferences for improved model behavior (PR #10170 by @hannesrudolph)
- Add support for Claude Code Provider native tool calling, improving tool execution performance and reliability (PR #10077 by @hannesrudolph)
- Enable native tool calling by default for Z.ai models for better model compatibility (PR #10158 by @app/roomote)
- Enable native tools by default for OpenAI compatible provider to improve tool calling support (PR #10159 by @daniel-lxs)
- Fix: Normalize MCP tool schemas for Bedrock and OpenAI strict mode to ensure proper tool compatibility (PR #10148 by @daniel-lxs)
- Fix: Remove dots and colons from MCP tool names for Bedrock compatibility (PR #10152 by @daniel-lxs); see the name-sanitization sketch after this list
- Fix: Convert tool_result to XML text when native tools disabled for Bedrock (PR #10155 by @daniel-lxs)
- Fix: Support AWS GovCloud and China region ARNs in Bedrock provider for expanded regional support (PR #10157 by @app/roomote)
- Implement interleaved thinking mode for DeepSeek Reasoner, enabling streaming reasoning output (PR #9969 by @hannesrudolph)
- Fix: Preserve reasoning_content during tool call sequences in DeepSeek (PR #10141 by @hannesrudolph)
- Fix: Correct token counting for context truncation display (PR #9961 by @hannesrudolph)
- Fix: Normalize tool call IDs for cross-provider compatibility via OpenRouter, ensuring consistent handling across different AI providers (PR #10102 by @daniel-lxs)
- Fix: Add additionalProperties: false to nested MCP tool schemas, improving schema validation and preventing unexpected properties (PR #10109 by @daniel-lxs); see the schema sketch after this list
- Fix: Validate tool_result IDs in delegation resume flow, preventing errors when resuming delegated tasks (PR #10135 by @daniel-lxs)
- Feat: Add full error details to streaming failure dialog, providing more comprehensive information for debugging streaming issues (PR #10131 by @roomote)
- Implement incremental token-budgeted file reading for smarter, more efficient file content retrieval (PR #10052 by @jr); see the token-budget sketch after this list
- Enable native tools by default for multiple providers including OpenAI, Azure, Google, Vertex, and more (PR #10059 by @daniel-lxs)
- Enable native tools by default for Anthropic and add telemetry tracking for tool format usage (PR #10021 by @daniel-lxs)
- Fix: Prevent race condition from deleting wrong API messages during streaming (PR #10113 by @hannesrudolph)
- Fix: Prevent duplicate MCP tools error by deduplicating servers at source (PR #10096 by @daniel-lxs)
- Remove strict ARN validation for Bedrock custom ARN users allowing more flexibility (#10108 by @wisestmumbler, PR #10110 by @roomote)
- Add metadata to error details dialog for improved debugging (PR #10050 by @roomote)
- Remove description from Bedrock service tiers for cleaner UI (PR #10118 by @mrubens)
- Improve tool configuration for OpenAI models in OpenRouter (PR #10082 by @hannesrudolph)
- Capture more detailed provider-specific error information from OpenRouter for better debugging (PR #10073 by @jr)
- Add Amazon Nova 2 Lite model to Bedrock provider (#9802 by @Smartsheet-JB-Brown, PR #9830 by @roomote)
- Add AWS Bedrock service tier support (#9874 by @Smartsheet-JB-Brown, PR #9955 by @roomote)
- Remove auto-approve toggles for to-do and retry actions to simplify the approval workflow (PR #10062 by @hannesrudolph)
- Move isToolAllowedForMode out of shared directory for better code organization (PR #10089 by @cte)
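For the custom-tools items above (experimental custom tool calling plus npm package and .env support), the following is a hypothetical example of a tool module that imports an npm dependency and reads an environment variable. The file location, export shape, and loader behavior shown here are assumptions for illustration only, not the documented custom-tools API.

```ts
// Hypothetical custom tool module; export shape and loading are assumptions.
import { z } from "zod"; // npm dependency resolved for the tool

export const name = "post_webhook";
export const description = "POST a JSON payload to the webhook configured in .env";
export const parameters = z.object({ payload: z.string() });

export async function execute({ payload }: { payload: string }): Promise<string> {
  const url = process.env.WEBHOOK_URL; // value supplied via the tool's .env file (assumed)
  if (!url) return "WEBHOOK_URL is not configured";
  const response = await fetch(url, { method: "POST", body: payload });
  return `Webhook responded with ${response.status}`;
}
```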
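For the normalizeToolSchema anyOf fix above, the before/after below illustrates the general idea on a generic JSON Schema: strict validators reject array keywords such as `items` sitting beside `anyOf`, so they are moved into the `anyOf` branch typed as an array. The real normalizer handles more cases than this sketch.

```ts
// Before: array-specific keywords live next to anyOf at the top level.
const before = {
  anyOf: [{ type: "array" }, { type: "null" }],
  items: { type: "string" },
};

// After: the array keywords are folded into the array branch of anyOf.
const after = {
  anyOf: [{ type: "array", items: { type: "string" } }, { type: "null" }],
};
```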
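For the Bedrock tool-name fix above, this is a minimal sketch of the kind of character stripping described; the function name and the replacement character are assumptions, not the shipped code.

```ts
// Bedrock rejects tool names containing "." and ":", which MCP
// server-qualified names often include; map them to a safe character.
function sanitizeToolNameForBedrock(name: string): string {
  return name.replace(/[.:]/g, "_");
}

// e.g. "github.com:search_issues" -> "github_com_search_issues"
```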
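For the MCP schema fixes above (strict-mode normalization and `additionalProperties: false` on nested schemas), the sketch below shows the general shape of such a pass: every nested object schema gets closed so strict-mode providers accept it. The type and function names are assumptions, not the extension's actual normalizer.

```ts
// Minimal JSON Schema shape used by this sketch.
type JsonSchema = {
  type?: string;
  properties?: Record<string, JsonSchema>;
  items?: JsonSchema;
  additionalProperties?: boolean | JsonSchema;
  [key: string]: unknown;
};

// Recursively add additionalProperties: false to every object schema.
function closeObjectSchemas(schema: JsonSchema): JsonSchema {
  const out: JsonSchema = { ...schema };
  if (out.type === "object" && out.properties) {
    out.additionalProperties = false;
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([key, value]) => [key, closeObjectSchemas(value)] as const)
    );
  }
  if (out.items) {
    out.items = closeObjectSchemas(out.items);
  }
  return out;
}
```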
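For the incremental token-budgeted file reading item above, this is a rough sketch of the idea under assumed details (line-by-line chunking, a four-characters-per-token estimate, a simple return shape); it is not the shipped algorithm.

```ts
import { promises as fs } from "node:fs";

// Crude token estimate used only for this sketch.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Read a file line by line until the estimated token budget is spent,
// reporting whether the content was truncated.
async function readWithinTokenBudget(
  path: string,
  tokenBudget: number
): Promise<{ content: string; truncated: boolean }> {
  const lines = (await fs.readFile(path, "utf8")).split("\n");
  const kept: string[] = [];
  let spent = 0;
  for (const line of lines) {
    const cost = estimateTokens(line + "\n");
    if (spent + cost > tokenBudget) {
      return { content: kept.join("\n"), truncated: true };
    }
    kept.push(line);
    spent += cost;
  }
  return { content: kept.join("\n"), truncated: false };
}
```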
Patch Changes
- #4950 4b31180 Thanks @markijbema! - Fix chat autocomplete to only show suggestions when the textarea has focus and the text hasn't changed, and to clear suggestions on paste
- #4995 95e9b6d Thanks @kevinvandijk! - fix: use correct api url for some endpoints
- #5008 a86cd0c Thanks @markijbema! - Minor improvement to markdown autocomplete suggestions
- #4445 91f9aa3 Thanks @chriscool! - fix: configure husky hooks for reliable execution