New Features
- Extension terminal input interception via `terminal_input`, allowing extensions to consume or transform raw input before normal TUI handling. See docs/extensions.md.
- Expanded CLI model selection: `--model` now supports `provider/id`, fuzzy matching, and `:<thinking>` suffixes. See README.md and docs/models.md.
- Safer package source handling with stricter git source parsing and improved local path normalization. See docs/packages.md.
- New built-in model definition `gpt-5.3-codex-spark` for OpenAI and OpenAI Codex providers.
- Improved OpenAI stream robustness for malformed trailing tool-call JSON in partial chunks.
- Added built-in GLM-5 model support via z.ai and OpenRouter provider catalogs.
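The expanded `--model` syntax above can be sketched as a small parser. This is a minimal illustration, not the actual implementation; the function and type names here are hypothetical, and real resolution also involves fuzzy matching against the provider catalogs (see docs/models.md).

```typescript
// Hypothetical parse of a --model argument: an optional "provider/" prefix
// and an optional ":<thinking>" suffix, e.g. "sonnet:high", "openai/gpt-4o".
interface ModelSpec {
  provider: string | null; // null means: resolve across all providers
  model: string;
  thinking: string | null; // e.g. "high" from "sonnet:high"
}

function parseModelArg(arg: string): ModelSpec {
  let rest = arg;
  let thinking: string | null = null;
  // Split off a trailing :<thinking> suffix, if present.
  const colon = rest.lastIndexOf(":");
  if (colon !== -1) {
    thinking = rest.slice(colon + 1);
    rest = rest.slice(0, colon);
  }
  // Split an optional provider/ prefix from the model id.
  const slash = rest.indexOf("/");
  const provider = slash !== -1 ? rest.slice(0, slash) : null;
  const model = slash !== -1 ? rest.slice(slash + 1) : rest;
  return { provider, model, thinking };
}
```

The sketch deliberately ignores fuzzy matching: after parsing, the remaining `model` string would still be matched approximately against known model ids.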
Breaking Changes
- `ContextUsage.tokens` and `ContextUsage.percent` are now `number | null`. After compaction, the context token count is unknown until the next LLM response, so these fields return `null`. Extensions that read `ContextUsage` must handle the `null` case. Removed the `usageTokens`, `trailingTokens`, and `lastUsageIndex` fields from `ContextUsage` (implementation details that should not have been public). (#1382 by @ferologics)
- Git source parsing is now strict without the `git:` prefix: only protocol URLs are treated as git (`https://`, `http://`, `ssh://`, `git://`). Shorthand sources like `github.com/org/repo` and `git@github.com:org/repo` now require the `git:` prefix. (#1426)
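An extension that displays context usage now has to branch on the `null` case. A minimal sketch, assuming only the two fields named above (the interface and helper here are illustrative, not the library's actual API):

```typescript
// Hypothetical shape limited to the fields described in this changelog.
interface ContextUsage {
  tokens: number | null;  // null right after compaction
  percent: number | null; // null right after compaction
}

// Format a footer-style usage string. After compaction both fields are
// null until the next LLM response, so fall back to a "?" placeholder.
function formatUsage(usage: ContextUsage, windowTokens: number): string {
  const windowK = `${Math.round(windowTokens / 1000)}k`;
  if (usage.tokens === null || usage.percent === null) {
    return `?/${windowK}`;
  }
  return `${usage.tokens}/${windowK} (${usage.percent.toFixed(0)}%)`;
}
```

With strict null checks enabled, TypeScript will flag any extension code that still reads these fields as plain `number`.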
Added
- Added extension event forwarding for message and tool execution lifecycles (`message_start`, `message_update`, `message_end`, `tool_execution_start`, `tool_execution_update`, `tool_execution_end`). (#1375 by @sumeet)
- Added the `terminal_input` extension event to intercept, consume, or transform raw terminal input before normal TUI handling.
- Added the `gpt-5.3-codex-spark` model definition for OpenAI and OpenAI Codex providers (research preview).
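A `terminal_input` handler can consume input outright, rewrite it, or let it pass through. The event payload and return contract below are hypothetical placeholders chosen for illustration; consult docs/extensions.md for the actual extension API.

```typescript
// Hypothetical event payload and result contract for terminal_input.
interface TerminalInputEvent {
  data: string; // raw terminal input, decoded as a string
}

type TerminalInputResult =
  | { action: "consume" }                  // swallow the input entirely
  | { action: "transform"; data: string }  // replace it before TUI handling
  | { action: "pass" };                    // forward unchanged

// Example handler: consume Ctrl+G (BEL), pass everything else unchanged.
function onTerminalInput(event: TerminalInputEvent): TerminalInputResult {
  if (event.data === "\x07") {
    return { action: "consume" };
  }
  return { action: "pass" };
}
```

Note the related fix below: handlers like this are now cleared on session reset, so extensions no longer need to deregister them manually across resets.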
Changed
- Routed GitHub Copilot Claude 4.x models through Anthropic Messages API, with updated Copilot header handling for Claude model requests.
Fixed
- Fixed context usage percentage in the footer showing stale pre-compaction values. After compaction the footer now shows `?/200k` until the next LLM response provides accurate usage. (#1382 by @ferologics)
- Fixed `_checkCompaction()` using the first compaction entry instead of the latest, which could cause incorrect overflow detection with multiple compactions. (#1382 by @ferologics)
- `--model` now works without `--provider`, and supports `provider/id` syntax, fuzzy matching, and a `:<thinking>` suffix (e.g., `--model sonnet:high`, `--model openai/gpt-4o`). (#1350 by @mitsuhiko)
- Fixed local package path normalization for extension sources while tightening git source parsing rules. (#1426)
- Fixed extension terminal input listeners not being cleared during session resets, which could leave stale handlers active.
- Fixed the Termux bootstrap package name for `fd` installation. (#1433)
- Fixed `@file` autocomplete fuzzy matching to prioritize path-prefix and segment matches for nested paths. (#1423)
- Fixed OpenAI streaming tool-call parsing to tolerate malformed trailing JSON in partial chunks. (#1424)
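The stricter git source rules from the Breaking Changes section reduce to a simple check: an explicit `git:` prefix, or one of four protocol schemes. A sketch of that rule (the helper name is hypothetical; the real parser lives in the package source handling code, see docs/packages.md):

```typescript
// Only these protocol URLs are treated as git sources without a prefix.
const GIT_PROTOCOLS = ["https://", "http://", "ssh://", "git://"];

function isGitSource(source: string): boolean {
  // Explicit prefix always wins, e.g. git:github.com/org/repo.
  if (source.startsWith("git:")) {
    return true;
  }
  // Without the prefix, only full protocol URLs count; shorthands like
  // github.com/org/repo or git@github.com:org/repo are no longer git.
  return GIT_PROTOCOLS.some((p) => source.startsWith(p));
}
```

Under this rule, sources that previously matched the shorthand forms now fall through to local path handling unless the `git:` prefix is added.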