## [v4.105.0]

### Minor Changes

- #3005 b87ae9c Thanks @kevinvandijk! - Improve the edit chat area to allow context and file drag-and-drop when editing messages, aligning more closely with upstream edit functionality
### Patch Changes
- #2983 93e8243 Thanks @jrf0110! - Adds project usage tracking for Teams and Enterprise customers. Organization members can view and filter usage by project. The project identifier is automatically inferred from `.git/config`. It can be overridden by writing a `.kilocode/config.json` file with the following contents: `{ "project": { "id": "my-project-id" } }`
- #3057 69f5a18 Thanks @chrarnoldus! - Thanks to Roo, support for Claude Haiku 4.5 was added for the Anthropic, Bedrock, and Vertex providers
- #3046 1bd934f Thanks @chrarnoldus! - A warning is now shown when webview memory usage crosses 90% of the limit (gray-screen territory)
- #2885 a34dab0 Thanks @shameez-struggles-to-commit! - Update VS Code Language Model API provider metadata to reflect current model limits:
  - Align context windows, prompt/input limits, and max output tokens with the latest provider data for matching models: gpt-3.5-turbo, gpt-4o-mini, gpt-4, gpt-4-0125-preview, gpt-4o, o3-mini, claude-3.5-sonnet, claude-sonnet-4, gemini-2.0-flash-001, gemini-2.5-pro, o4-mini-2025-04-16, gpt-4.1, gpt-5-mini, gpt-5.
  - Fixes an issue where a default 128k context was assumed for all models.
  - Notable: the GPT-5 family now uses a 264k context; the o3-mini/o4-mini, Gemini, Claude, and 4o families have updated output and image-support flags. GPT-5-mini max output is explicitly set to 127,805.

  This ensures Kilo Code correctly enforces model token budgets with the VS Code LM integration.
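To illustrate what "enforcing model token budgets" means in practice, here is a minimal sketch of clamping a requested output budget against per-model metadata. The `contextWindow` and `maxOutputTokens` values for gpt-5-mini are the ones stated in the entry above; the `ModelInfo` shape and function name are illustrative assumptions, not Kilo Code's actual API.

```typescript
// Hypothetical model-metadata shape; field names are assumptions for illustration.
interface ModelInfo {
  contextWindow: number;   // total tokens the model can attend to
  maxOutputTokens: number; // hard cap on generated tokens
}

// Values for gpt-5-mini as stated in the changelog entry above.
const gpt5Mini: ModelInfo = { contextWindow: 264_000, maxOutputTokens: 127_805 };

// Clamp the caller's requested output budget so that prompt + output never
// exceed the context window, and output never exceeds the per-model cap.
function clampOutputBudget(
  model: ModelInfo,
  promptTokens: number,
  requestedOutput: number,
): number {
  const roomInContext = Math.max(0, model.contextWindow - promptTokens);
  return Math.min(requestedOutput, model.maxOutputTokens, roomInContext);
}

console.log(clampOutputBudget(gpt5Mini, 10_000, 200_000)); // 127805
```

With the old behavior (a default 128k context assumed for every model), the `roomInContext` term would be computed from the wrong window, which is the class of bug this metadata update fixes.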