## New Features

- Hugging Face provider: Access Hugging Face models via the OpenAI-compatible Inference Router. Set the `HF_TOKEN` environment variable. See README.md#hugging-face.
- Extended prompt caching: `PI_CACHE_RETENTION=long` enables 1-hour caching for Anthropic (vs. the 5-minute default) and 24-hour caching for OpenAI (vs. the in-memory default). Only applies to direct API calls. See README.md#prompt-caching.
- Configurable autocomplete height: the `autocompleteMaxVisible` setting (3-20 items, default 5) controls the dropdown size. Adjust via `/settings` or `settings.json`.
- Shell-style keybindings: `alt+b`/`alt+f` for word navigation, `ctrl+d` to delete the character forward. See docs/keybindings.md.
- RPC `get_commands`: Headless clients can now list available commands programmatically. See docs/rpc.md.
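The two environment-driven features above can be enabled together in one shell session. A minimal sketch; `hf_xxxxxxxx` is a placeholder, not a real token:

```shell
# Placeholder value; substitute your real Hugging Face token.
export HF_TOKEN=hf_xxxxxxxx
# Opt in to extended prompt caching (1h for Anthropic, 24h for OpenAI).
export PI_CACHE_RETENTION=long
# Confirm what pi will see at startup.
echo "cache retention: $PI_CACHE_RETENTION"
```

Both variables must be set in the environment that launches pi (e.g. exported in your shell profile).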
## Added

- Added Hugging Face provider support via OpenAI-compatible Inference Router (#994)
- Added `PI_CACHE_RETENTION` environment variable to control cache TTL for Anthropic (5m vs 1h) and OpenAI (in-memory vs 24h). Set to `long` for extended retention. (#967)
- Added `autocompleteMaxVisible` setting for configurable autocomplete dropdown height (3-20 items, default 5) (#972 by @masonc15)
- Added `/files` command to list all file operations (read, write, edit) in the current session
- Added shell-style keybindings: `alt+b`/`alt+f` for word navigation, `ctrl+d` for delete character forward (when editor has text) (#1043 by @jasonish)
- Added `get_commands` RPC method for headless clients to list available commands (#995 by @dnouri)
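As a sketch, the new `autocompleteMaxVisible` setting can be set directly in `settings.json`; any value in the documented 3-20 range is accepted, and `10` below is just an illustrative choice:

```json
{
  "autocompleteMaxVisible": 10
}
```

The same value can also be changed interactively via `/settings`.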
## Changed

- Improved `extractCursorPosition` performance in TUI: scans lines in reverse order and early-outs when the cursor is above the viewport (#1004 by @can1357)
- Autocomplete improvements: better handling of partial matches and edge cases (#1024 by @Perlence)
## Fixed

- External edits to `settings.json` are now preserved when pi reloads or saves unrelated settings. Previously, editing settings.json directly (e.g., removing a package from the `packages` array) would be silently reverted on the next pi startup when automatic setters like `setLastChangelogVersion()` triggered a save.
- Fixed custom header not displaying correctly with `quietStartup` enabled (#1039 by @tudoroancea)
- Empty array in package filter now disables all resources instead of falling back to manifest defaults (#1044)
- Auto-retry counter now resets after each successful LLM response instead of accumulating across tool-use turns (#1019)
- Fixed incorrect `.md` file names in warning messages (#1041 by @llimllib)
- Fixed provider name hidden in footer when terminal is narrow (#981 by @Perlence)
- Fixed backslash input buffering causing delayed character display in editor (#1037 by @Perlence)
- Fixed markdown table rendering with proper row dividers and minimum column width (#997 by @tmustier)
- Fixed OpenAI completions `toolChoice` handling (#998 by @williamtwomey)
- Fixed cross-provider handoff failing when switching from OpenAI Responses API providers due to pipe-separated tool call IDs (#1022)
- Fixed 429 rate limit errors incorrectly triggering auto-compaction instead of retry with backoff (#1038)
- Fixed Anthropic provider to handle the `sensitive` stop_reason returned by the API (#978)
- Fixed DeepSeek API compatibility by detecting `deepseek.com` URLs and disabling the unsupported `developer` role (#1048)
- Fixed Anthropic provider to preserve input token counts when proxies omit them in `message_delta` events (#1045)
- Fixed `autocompleteMaxVisible` setting not persisting to `settings.json`