## Verify Docker Image Signature
All LiteLLM Docker images are signed with cosign. Every release is signed with the same key, introduced in commit 0112e53.
### Verify using the pinned commit hash (recommended)

A commit hash is cryptographically immutable, so this is the strongest way to ensure you are using the original signing key:

```shell
cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/0112e53046018d726492c814b3644b7d376029d0/cosign.pub \
  ghcr.io/berriai/litellm:v1.83.14.rc.1
```

### Verify using the release tag (convenience)

Tags are protected in this repository and resolve to the same key. This option is easier to read but relies on tag protection rules:

```shell
cosign verify \
  --key https://raw.githubusercontent.com/BerriAI/litellm/v1.83.14.rc.1/cosign.pub \
  ghcr.io/berriai/litellm:v1.83.14.rc.1
```

Expected output:

```
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
```
## What's Changed
- fix: preserve tool_use input args in Anthropic adapter streaming by @Chesars in #24355
- fix: preserve role='assistant' in Azure streaming with include_usage by @Chesars in #24354
- fix: map Zhipu GLM non-standard finish_reason values by @Chesars in #24373
- fix(responses-api): apply GPT-5 temperature validation by @Chesars in #24371
- fix(bedrock): sort assistant content blocks so text precedes toolUse by @Chesars in #24368
- fix(gemini): filter params from embedding requests by @Chesars in #24370
- fix(gemini): read web search cost from model_info instead of hardcode by @Chesars in #24372
- fix(gemini): include DOCUMENT modality tokens in cost calculation by @Chesars in #24410
- docs: add missing observability integrations to View All page by @Chesars in #24420
- fix(vertex_ai): forward dimensions parameter in multimodalembedding requests by @Chesars in #24415
- refactor(responses): extract shared format mapping between Responses API and Chat Completions bridges by @Chesars in #24417
- fix(model-prices): migrate 38 models from legacy max_tokens to max_input_tokens/max_output_tokens by @Chesars in #24422
- feat(bedrock): add GLM-5 and Minimax M2.5 with regional aliases by @Chesars in #24423
- fix: update bedrock claude sonnet/opus 4.6 above 200k token pricing and sonnet 4.6 max_input_tokens to 1M by @dongyu-turo in #24164
- merge litellm_internal_staging by @Sameerlite in #25942
- merge litellm_internal_staging by @Sameerlite in #25945
- Sync litellm_staging_03_22_2026 with litellm_internal_staging by @Chesars in #26283
- merge main by @Sameerlite in #26301
- merge main by @Sameerlite in #26303
- fix(router): restore BYOK key injection for vector store endpoints with team-scoped deployments by @shivamrawat1 in #25746
- [Infra] Remove CCI/GHA test duplication and semantically shard proxy DB tests by @yuneng-berri in #26356
- merge main by @Sameerlite in #26381
- Split MCP routes into inference vs management (unblock Admin UI on DISABLE_LLM_API_ENDPOINTS nodes) by @ryan-crabbe-berri in #26367
- feat(responses): add use_chat_completions_api flag for openai/ models with custom api_base by @Sameerlite in #25346
- fix(team_endpoints): auto-add SSO team members to org on move (proxy admin only) by @ishaan-berri in #26377
- sync litellm_staging_03_22_2026 with litellm_internal_staging by @Chesars in #26440
- fix(proxy): respect object-level permissions for managed vector store endpoints by @shivamrawat1 in #26351
- feat(pricing): gemini-embedding-2 GA cost map, blog, and test by @Sameerlite in #26391
- fix(responses): normalize bridged object field by @Sameerlite in #26327
- feat(models): add versioned GPT-5.4 mini/nano snapshots by @Sameerlite in #26115
- fix(proxy): preserve anthropic_messages call type for /v1/messages logging by @Sameerlite in #26248
- feat(responses): strip custom_tool_call namespace for all providers by @Sameerlite in #26221
- fix(anthropic): strip Gemini thought suffix from streaming tool_use id by @Sameerlite in #25935
- feat(docs): align fenced code block padding on blog and doc pages by @Sameerlite in #25932
- docs(gemini): Gemini 3 thinking_level defaults and release note by @Sameerlite in #25842
- docs(proxy): clarify x-litellm-model-group vs provider model id by @Sameerlite in #25497
- [Fix] Tests - Proxy: Isolate master_key/prisma_client module globals between tests by @yuneng-berri in #26362
- feat(openai): add route_all_chat_openai_to_responses global flag by @Sameerlite in #25359
- Litellm staging 03 22 2026 by @Chesars in #24374
- chore(packaging): declare MIT license in litellm-proxy-extras metadata by @stuxf in #26369
- chore(deps): bump vulnerable dependencies by @stuxf in #26365
- fix(auth): centralize common_checks to close authorization bypass by @stuxf in #26279
- fix(mcp): harden OAuth authorize/token endpoints (BYOK + discoverable) by @stuxf in #26274
- [Feat] Day-0 support for GPT-5.5 and GPT-5.5 Pro by @mateo-berri in #26449
- [Infra] Remove docs/my-website, point contributors to litellm-docs repo by @yuneng-berri in #26454
- fix(vertex passthrough): log :embedContent and :batchEmbedContents responses by @ishaan-berri in #26146
- fix(jwt-auth): apply team TPM/RPM + attribution for admins using x-litellm-team-id by @ryan-crabbe-berri in #26438
- [Infra] Declare proprietary license in litellm-enterprise metadata by @yuneng-berri in #26457
- feat(guardrails): LLM-as-a-Judge guardrail by @ishaan-berri in #26360
- [Fix] Guardrail param handling in list and submission endpoints by @yuneng-berri in #26390
- [Feature] UI - Users: Add Send Invitation Email Toggle by @yuneng-berri in #25808
- [Refactor] Proxy: move projects management to enterprise package by @yuneng-berri in #25677
- fix(proxy): single-team DB fallback when JWT has no team_id by @milan-berri in #26418
- [Fix] Harden team metadata handling in /team/new and /team/update by @yuneng-berri in #26464
- [Feat] Add azure/gpt-5.5 + azure/gpt-5.5-pro entries (+ dated variants) by @mateo-berri in #26361
- feat(proxy): add /v1/memory CRUD endpoints by @krrish-berri-2 in #26218
- [Fix] Harden pass-through target URL construction by @yuneng-berri in #26467
- [Fix] Tighten caller-permission checks on key route fields by @yuneng-berri in #26492
- [Fix] Extend caller-permission checks to service-account + tighten raw-body acceptance by @yuneng-berri in #26493
- feat: UI setting to disable /key/generate for org admins by @ryan-crabbe-berri in #26442
- fix(ui): stop injecting $0 cost on model edit by @ryan-crabbe-berri in #26001
- fix: preserve service_account_id in metadata on /key/update by @ryan-crabbe-berri in #26004
- [Feature] UI - Spend Logs: sortable Model and TTFT columns by @yuneng-berri in #26488
- [Fix] Restrict /global/spend/* routes to admin roles by @yuneng-berri in #26490
- [Infra] Merge dev branch by @yuneng-berri in #26496
- Sync litellm_staging_03_23_2026 with litellm_internal_staging by @Chesars in #26510
- Litellm staging 03 23 2026 by @Chesars in #24428
- ci: add supply-chain guard to block fork PRs that modify dependencies by @krrish-berri-2 in #26511
- [Fix] Align MCP OAuth proxy endpoints with per-server access policy by @yuneng-berri in #26516
- feat(mcp): resolve team/key MCP permissions by name or alias by @ryan-crabbe-berri in #26338
- [Fix] bind RAG ingestion config to stored credential values by @yuneng-berri in #26512
- fix(key_management): enforce upperbound_key_generate_params on /key/regenerate by @michelligabriele in #26340
- [Fix] Harden /model/info redaction for plural credential field names by @yuneng-berri in #26513
- fix(content_filter): log guardrail_information on streaming post-call by @michelligabriele in #26448
- fix(model_management): refresh router after POST /model/update by @michelligabriele in #26427
- [Infra] Merge dev branch by @yuneng-berri in #26522
- fix(bedrock guardrail): dedupe post-call log entry when only post_call is configured by @shivamrawat1 in #26474
- fix(guardrails): team-level guardrails and global policy guardrails can run together by @shivamrawat1 in #26466
- [Feat] Add "My User" tab to team info page by @ryan-crabbe-berri in #26520
- [Fix] broaden RAG ingestion credential cleanup to AWS endpoint/identity fields by @yuneng-berri in #26525
- [Fix] Reseed enforcement read path from DB on counter miss by @Michael-RZ-Berri in #26459
- fix(proxy): suppress deferred success log when post-call guardrail blocks by @shivamrawat1 in #26528
- [Infra] Build UI by @yuneng-berri in #26532
- fix(memory): jsonify metadata before Prisma writes on /v1/memory by @krrish-berri-2 in #26536
- Litellm memory improvements v2 by @krrish-berri-2 in #26541
- [Infra] Rebuild UI by @yuneng-berri in #26542
- [Infra] Bump Versions by @yuneng-berri in #26543
- [Infra] Promote Internal Staging to main by @yuneng-berri in #26545
## New Contributors
- @dongyu-turo made their first contribution in #24164
**Full Changelog**: v1.83.13-nightly...v1.83.14.rc.1