New Features

- Searchable auth provider login flow: the `/login` provider selector now supports fuzzy search/filtering, making it faster to find providers when many are configured. See docs/providers.md. (#3572 by @mitsuhiko)
- GPT-5.5 Codex support: `openai-codex/gpt-5.5` is available as a model option, including `xhigh` reasoning support and corrected priority-tier pricing.
- Terminal progress indicators are now opt-in: OSC 9;4 progress reporting during streaming/compaction is off by default and can be toggled via `terminal.showTerminalProgress` in `/settings` (#3588)
- `--no-builtin-tools` / `createAgentSession({ noTools: "builtin" })` now correctly disables only built-in tools while keeping extension tools active. See docs/extensions.md and README.md (#3592)
Breaking Changes
- Disabled OSC 9;4 terminal progress indicators by default. Set `terminal.showTerminalProgress` to `true` in `/settings` to re-enable (#3588)
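For users who manage settings as a file rather than through the `/settings` command, the opt-in might look roughly like the sketch below. This is an illustrative guess at the shape: the dotted key `terminal.showTerminalProgress` is assumed to map to a nested JSON object, and the file's location and exact schema may differ, so `/settings` remains the documented path.

```json
{
  "terminal": {
    "showTerminalProgress": true
  }
}
```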
Added
- Added searchable auth provider login flow with fuzzy filtering in the provider selector (#3572 by @mitsuhiko)
- Added GPT-5.5 Codex model
- Added auth source labels in `/login` so provider entries can show when auth comes from `--api-key`, an environment variable, or custom provider fallback without exposing secrets.
Changed
- Updated default model selection across providers to current recommended models.
- Improved stale extension context errors after session replacement or reload to tell extension authors to avoid captured `pi`/`command` `ctx` and use `withSession` for post-replacement work.
Fixed
- Fixed `/model` selector cancellation to request render instead of incorrectly triggering login selector.
- Changed login, OAuth, and extension selectors for more consistent styling.
- Added Amazon Bedrock setup guidance to `/login` and updated `/model` copy to refer to configured providers instead of only API keys.
- Improved no-model and missing-auth warnings to point users to `/login` for OAuth or API key setup.
- Fixed `/quit` shutdown ordering to stop the TUI before extension UI teardown can repaint, preserving the final rendered frame while still emitting `session_shutdown` before process exit.
- Fixed `SettingsManager.inMemory()` initial settings being lost after reloads triggered by SDK resource loading (#3616)
- Fixed `models.json` provider compatibility to accept `compat.supportsLongCacheRetention`, allowing proxies to opt out of long-retention cache fields when needed while long retention is enabled by default when requested (#3543)
- Fixed `--thinking xhigh` for `openai-codex/gpt-5.5` so it is no longer downgraded to `high`.
- Fixed git package installs with custom `npmCommand` values such as `pnpm` by avoiding npm-specific production flags in that compatibility path (#3604)
- Fixed first user messages rendering without spacing after existing notices such as compaction summaries or status messages (#3613)
- Fixed the handoff extension example to use the replacement-session context after creating a new session, avoiding stale `ctx` errors when it installs the generated prompt (#3606)
- Fixed session replacement and `/quit` teardown ordering to run host-owned extension UI cleanup synchronously after `session_shutdown` handlers complete but before invalidating the old extension context, preventing stale extension UI from rendering against a disposed session (#3597 by @vegarsti)
- Fixed crash on `/quit` when an extension registers a custom footer whose `render()` accesses `ctx`, by tearing down extension-provided UI before invalidating the extension runner during shutdown (#3595)
- Fixed auto-retry to treat Bedrock/Smithy HTTP/2 transport failures like `http2 request did not get a response` as transient errors, so the agent retries automatically instead of waiting for a manual nudge (#3594)
- Fixed the CLI/SDK tool-selection split so `--no-builtin-tools` and `createAgentSession({ noTools: "builtin" })` disable only built-in default tools while keeping extension/custom tools enabled, instead of falling through to the same "disable everything" path as `--no-tools` (#3592)
- Fixed remaining hardcoded `pi`/`.pi` branding to route through `APP_NAME` and `CONFIG_DIR_NAME` extension points, so SDK rebrands get consistent naming in the `/quit` description, `process.title`, and the project-local extensions directory (#3583 by @jlaneve)
- Fixed `pi-coding-agent` shipping `uuid@11`, which triggered `npm audit` moderate vulnerability reports for downstream installs; the package now depends on `uuid@14` (#3577)
- Fixed `openai-completions` streamed tool-call assembly to coalesce deltas by stable tool index when OpenAI-compatible gateways mutate tool call IDs mid-stream, preventing malformed Kimi K2.6/OpenCode tool streams from splitting one call into multiple bogus tool calls (#3576)
- Fixed `ctx.ui.setWorkingMessage()` to persist across loader recreation, matching the behavior of `ctx.ui.setWorkingIndicator()` (#3566)
- Fixed coding-agent `fs.watch` error handling for theme and git-footer watchers to retry after transient watcher failures such as `EMFILE`, avoiding startup crashes in large repos (#3564)
- Fixed built-in `kimi-coding` model generation to attach the expected `User-Agent` header so direct Kimi Coding requests use the provider's expected client identity (#3586)
- Fixed extension shortcut conflict diagnostics to display at startup instead of only on reload, so extension authors discover reserved keybinding conflicts immediately rather than later through user feedback (#3617)
- Fixed `models.json` Anthropic-compatible provider configuration to accept `compat.supportsEagerToolInputStreaming`, allowing proxies that reject per-tool `eager_input_streaming` to use the legacy fine-grained tool streaming beta header instead (#3575)
- Fixed startup banner extension labels to strip trailing `index.js`/`index.ts` suffixes (#3596 by @aliou)
- Fixed OSC 9;4 terminal progress updates to stay alive in terminals such as Ghostty during long-running agent work (#3610)
- Fixed OpenAI-compatible completion usage parsing to avoid double-counting reasoning tokens already included in `completion_tokens` (#3581)
- Fixed `openai-responses` compatibility for strict OpenAI-compatible proxies by allowing `models.json` to disable the underscore-containing `session_id` header with `compat.sendSessionIdHeader: false` (#3579)
- Fixed GPT-5.5 Codex capability handling to clamp unsupported minimal reasoning to `low` and apply the model's 2.5x priority service-tier pricing multiplier (#3618 by @markusylisiurunen)
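Several of the fixes above add `compat` flags to `models.json` for proxies and OpenAI-compatible gateways. A provider entry combining them might look roughly like the sketch below; only the three `compat` keys come from this release, while the surrounding provider name and fields (`baseUrl`, `api`) are illustrative placeholders, not the exact schema.

```json
{
  "providers": {
    "my-strict-proxy": {
      "baseUrl": "https://proxy.example.com/v1",
      "api": "openai-responses",
      "compat": {
        "supportsLongCacheRetention": false,
        "sendSessionIdHeader": false,
        "supportsEagerToolInputStreaming": false
      }
    }
  }
}
```

Each flag opts a provider out of a default behavior: long-retention cache fields (#3543), the underscore-containing `session_id` header (#3579), and per-tool eager input streaming (#3575).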