## 0.6.0 (2026-04-22)

### Features
- add model caching and benchmarking utilities (#671) (2b15e16)
- add thinking token extraction (#798) (268c039)
- api: compress old tool_result content for small-context providers (#801) (a6a3de5)
- api: improve local provider reliability with readiness and self-healing (#738) (4cb963e)
- api: smart model routing primitive (cheap-for-simple, strong-for-hard) (#785) (e908864)
- enable 15 additional feature flags in open build (#667) (6a62e3f)
- native Anthropic API mode for Claude models on GitHub Copilot (#579) (fdef4a1)
- provider: expose Atomic Chat in /provider picker with autodetect (#810) (ee19159)
- provider: zero-config autodetection primitive (#784) (a5bfcbb)
### Bug Fixes
- api: ensure strict role sequence and filter empty assistant messages after interruption (#745 regression) (#794) (06e7684)
- collapse all-text arrays to a string for DeepSeek compatibility (#806) (761924d)
- model: codex/nvidia-nim/minimax now read OPENAI_MODEL env (#815) (4581208)
- provider: saved profile was ignored when a stale CLAUDE_CODE_USE_* variable remained in the shell (#807) (13de4e8)
- rename .claude.json to .openclaude.json with legacy fallback (#582) (4d4fb28)
- replace discontinued gemini-2.5-pro-preview-03-25 with stable gemini-2.5-pro (#802) (64582c1), closes #398
- security: harden project settings trust boundary + MCP sanitization (#789) (ae3b723)
- test: make autoCompact floor assertion flag-sensitive (#816) (c13842e)
- ui: prevent provider manager lag by deferring sync I/O (#803) (85eab27)
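The `.claude.json` → `.openclaude.json` rename (#582) keeps a legacy fallback so existing installs continue to work. A minimal sketch of that lookup pattern — illustrative only, with a hypothetical `load_config` helper; the actual resolution logic in #582 may differ:

```python
import json
import os

def load_config(home: str) -> dict:
    """Load .openclaude.json, falling back to the legacy .claude.json.

    Hypothetical illustration of the fallback pattern; not the
    project's real loader.
    """
    # Try the new filename first, then the legacy one.
    for name in (".openclaude.json", ".claude.json"):
        path = os.path.join(home, name)
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
    return {}  # no config file found
```

When both files exist, the new name wins; a directory with only the legacy file still loads.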