## 5.9.0

### Minor Changes

- #5560 `84ab615` Thanks @marciepeters! - Add Poe as a supported API provider
### Patch Changes

- #5860 `baeb3f4` Thanks @Neonsy! - Fix OpenAI Responses Azure URL normalization so Azure v1 endpoints avoid unsupported `api-version` parameters.
- #5968 `053775e` Thanks @Olusammytee! - Normalize legacy Claude Code model IDs so dated aliases resolve to canonical model metadata and preserve capabilities (including image support).
- #5995 `a2c7c49` Thanks @mujtaba93! - Fix: Prevent terminal focus stealing in Agent Manager (fixes #5946)
- #5990 `7d23f2c` Thanks @hdcodedev! - Fix OpenAI-compatible Responses fallback requests when custom base URLs already include `/v1` (#5979).
- #5916 `8cceb67` Thanks @Githubguy132010! - Fix blank messages and UI not updating when canceling a task in Agent Manager
- #5634 `be40801` Thanks @Patel230! - Fix context condensing prompt not saving properly
- #5864 `c92c6b1` Thanks @Githubguy132010! - Fixed organization selector overlapping with "Recent" text in the chat pane header
- #5267 `1467783` Thanks @maywzh! - fix: preserve `extra_content` for Gemini 3 `thought_signature` support
- #5569 `30eb061` Thanks @romeoscript! - Retry Amazon Bedrock "network connection lost" errors up to 3 times
- #5992 `23a083a` Thanks @shssoichiro! - fix: allow dropdowns in the Modes modal to be changed
- #5648 `af395f8` Thanks @DDU1222! - Feature: add new provider AIHubmix
- #5993 `bedf59d` Thanks @saneroen! - Use OpenAI Codex OAuth credentials in Agent Manager so ChatGPT Plus/Pro works in agent mode
- #5942 `d266500` Thanks @Githubguy132010! - Fixed resumed agent runtime orchestrator tasks so previous task history is preserved.
- #5893 `e29306d` Thanks @evanjacobson! - Fix duplicate text output when using OpenAI-compatible providers with streaming disabled.
- #5969 `902a33f` Thanks @Olusammytee! - Normalize Vertex Claude Opus 4.6 legacy aliases to the canonical model ID to prevent invalid API calls and keep UI/runtime model capabilities consistent.
- #5568 `82ba1a8` Thanks @romeoscript! - fix: override context window for MiniMax/Kimi free models
- #5885 `c7d5865` Thanks @Olusammytee! - fix: cap qwen3-max-thinking `max_tokens` to the provider limit