### ✨ Minor Changes
- 74f6fb3: Add `GET /api/v1/public/agent-tokens` public endpoint. Mirrors the shape of `/provider-tokens` but groups daily-token usage by `(agent_category, agent_platform)` instead of by LLM provider, so the marketing site can show per-agent (OpenClaw, Claude Code, OpenAI SDK, etc.) charts alongside the existing per-provider ones. Excludes the `other` platform bucket and `custom:*` models server-side. Gated by `MANIFEST_PUBLIC_STATS` and cached for 24h, the same posture as the rest of the public-stats endpoints.
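Illustratively, the grouping and server-side exclusions behave like the sketch below. The row shape and function name here are assumptions for illustration, not Manifest's actual schema:

```typescript
// Hypothetical shape of a raw daily-usage row (field names assumed).
type UsageRow = {
  day: string;
  agent_category: string;
  agent_platform: string;
  model: string;
  tokens: number;
};

// Group daily token usage by (agent_category, agent_platform),
// skipping the `other` platform bucket and `custom:*` models.
function groupAgentTokens(rows: UsageRow[]): Map<string, number> {
  const out = new Map<string, number>();
  for (const r of rows) {
    if (r.agent_platform === "other") continue; // excluded bucket
    if (r.model.startsWith("custom:")) continue; // excluded models
    const key = `${r.day}|${r.agent_category}|${r.agent_platform}`;
    out.set(key, (out.get(key) ?? 0) + r.tokens);
  }
  return out;
}
```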
### 🐛 Patch Changes
- c1fe19a: Fix GitHub Copilot routing for GPT-5 Codex models. Copilot serves Codex variants (`gpt-5-codex`, `gpt-5.2-codex`, `gpt-5.3-codex`) only via `/responses`, so chat-completions requests now swap to that endpoint instead of returning "Unsupported API for model". Also rewrites `max_tokens` to `max_completion_tokens` for the GPT-5 / o-series family on Copilot, fixing the "Unsupported parameter: 'max_tokens'" error reported alongside.
- 786dd76: Preserve Responses stream classification during stream warm-up.
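The Copilot rewrite in c1fe19a above can be sketched as follows; the function names and the model-family regex are illustrative assumptions, not Manifest's internals:

```typescript
type ChatRequest = {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  [k: string]: unknown;
};

const CODEX_MODELS = new Set(["gpt-5-codex", "gpt-5.2-codex", "gpt-5.3-codex"]);

// Codex variants are only served via /responses on Copilot.
function copilotEndpoint(model: string): string {
  return CODEX_MODELS.has(model) ? "/responses" : "/chat/completions";
}

// GPT-5 / o-series on Copilot reject `max_tokens`; rename it.
function rewriteMaxTokens(req: ChatRequest): ChatRequest {
  const needsRename = /^(gpt-5|o\d)/.test(req.model);
  if (!needsRename || req.max_tokens === undefined) return req;
  const { max_tokens, ...rest } = req;
  return { ...rest, max_completion_tokens: max_tokens };
}
```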
- ae56a30: fix(proxy): preserve Anthropic server tools through /v1/messages double-conversion (#1886)

  Claude Code requests routed through `POST /v1/messages` to an Anthropic upstream failed with `tools.N.custom.input_schema: Field required` because server tools (`web_search`, `bash`, `text_editor`, `computer`, `code_execution`) lost their `type` tag during the Anthropic → OpenAI → Anthropic translation and were re-emitted as custom tools missing the required `input_schema`. Server tools are now stashed on the translated body and re-emitted unchanged when the upstream is Anthropic.
- d25320a: Preserve DeepSeek `reasoning_content` on every follow-up turn, regardless of which provider proxies it (OpenCode Go, custom providers, future aggregators). Fixes a hard failure on OpenCode Go's `deepseek-v4-pro` ("The reasoning_content in the thinking mode must be passed back to the API"); issue #1862.
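The d25320a behavior amounts to not dropping `reasoning_content` when assistant turns are re-serialized for a follow-up request. A minimal sketch, assuming an OpenAI-compat message shape (the helper name is made up):

```typescript
type ChatMessage = {
  role: "user" | "assistant" | "system";
  content: string;
  reasoning_content?: string;
};

// Re-serialize an assistant turn for a follow-up request. A naive
// whitelist of { role, content } would silently drop `reasoning_content`;
// some thinking models hard-fail unless it is passed back.
function toFollowUpTurn(m: ChatMessage): ChatMessage {
  const turn: ChatMessage = { role: m.role, content: m.content };
  if (m.reasoning_content !== undefined) {
    turn.reasoning_content = m.reasoning_content; // must round-trip
  }
  return turn;
}
```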
- e7cdfa1: Strip the non-standard `ref` JSON Schema keyword (no `$` prefix) from Google Gemini tool parameters. Some tool emitters drop the `$` prefix because Protobuf and similar parsers reject dollar-prefixed field names; without this fix Manifest forwarded `ref` verbatim and Google rejected the request with `Invalid JSON payload received. Unknown name "ref".`
- f21584a: Fix prompt-caching token counters on `/v1/messages`. `cache_control` markers always reached Anthropic (caching was working server-side), but the chat → Anthropic-Messages conversion into `AnthropicUsage` hardcoded `cache_creation_input_tokens: 0`, and the `parseUsageObject` Anthropic branch read cache reads from the wrong key. Result: client responses lost cache creation counts, and `agent_messages` rows recorded `0` for both cache creation and cache reads even when Anthropic actually hit the cache.

  Also fixes the recorder's duplicate-write detector, which summed `input_tokens + cache_read_tokens + cache_creation_tokens` when computing a row's total prompt tokens; `input_tokens` already stored the chat-shape total, so the sum double-counted caches and caused legitimate duplicates to bypass dedup. `toAnthropicUsage` now also reads the OpenAI-compat nested `prompt_tokens_details.cached_tokens` as a fallback, so `/v1/messages` requests routed to OpenAI / DeepSeek / Z.AI / MiniMax / Mistral surface their cached-input counts too.
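The `ref` stripping in e7cdfa1 above can be sketched as a recursive schema sanitizer; the function name is an assumption, not the actual helper:

```typescript
// Recursively drop the non-standard `ref` keyword (no `$` prefix) from a
// JSON Schema object before forwarding tool parameters to Gemini.
function stripRef(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(stripRef);
  if (schema !== null && typeof schema === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(schema as Record<string, unknown>)) {
      if (k === "ref") continue; // Google rejects: Unknown name "ref"
      out[k] = stripRef(v);
    }
    return out;
  }
  return schema;
}
```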
- 9f64594: Update the MiniMax "Where to get an API key" link to point to the actual key page (`/user-center/basic-information/interface-key`) instead of the API docs overview.