BerriAI/litellm v1.73.6.rc-draft

Pre-release

What's Changed

  • Fix SambaNova 'created' field validation error - handle float timestamps by @neubig in #11971
  • Docs - Add Recommended Machine Specifications by @ishaan-jaff in #11980
  • fix: make Responses API support Azure authentication method by @hsuyuming in #11941
  • feat: add Last Success column to health check table by @colesmcintosh in #11903
  • Add GitHub Actions workflow for LLM translation testing artifacts by @colesmcintosh in #11780
  • Fix markdown table not rendering properly by @mukesh-dream11 in #11969
  • [Fix] - Check HTTP_PROXY vars in networking requests by @ishaan-jaff in #11947
  • Proxy UI MCP Auth passthrough by @wagnerjt in #11968
  • fix unrecognised parameter reasoning_effort by @Shankyg in #11838
  • Fixing watsonx error: 'model_id' or 'model' cannot be specified in the request body for models in a deployment space by @cbjuan in #11854
  • [Bug Fix] Perplexity - LiteLLM doesn't support 'web_search_options' for Perplexity's Sonar Pro model by @ishaan-jaff in #11983
  • feat: implement Perplexity citation tokens and search queries cost calculation by @colesmcintosh in #11938
  • [Feat] Enterprise - Allow dynamically disabling callbacks in request headers by @ishaan-jaff in #11985 (see the header sketch after this list)
  • Add Mistral 3.2 24B to model mapping by @colesmcintosh in #11926
  • [Feat] Add List Callbacks API Endpoint by @ishaan-jaff in #11987
  • fix: fix test_get_azure_ad_token_with_oidc_token testcase issue by @hsuyuming in #11988
  • [Bug Fix] Bedrock Guardrail - Don't raise exception on intervene action by @ishaan-jaff in #11875
  • VertexAI Anthropic passthrough cost calc fixes + Filter litellm params from request sent to passthrough endpoint by @krrishdholakia in #11992
  • Fix custom pricing logging + Gemini - only use accepted format values + Gemini - cache tools if passing alongside cached content by @krrishdholakia in #11989
  • Fix unpack_defs handling of nested $ref inside anyOf items by @colesmcintosh in #11964
  • NVIDIA NIM - add response_format to OpenAI parameters … by @shagunb-acn in #12003
  • Add Azure o3-pro Pricing by @marty-sullivan in #11990
  • [Bug Fix] SCIM - Ensure new user roles are applied by @ishaan-jaff in #12015
  • [Fix] Magistral small system prompt diverges too much from the official recommendation by @ishaan-jaff in #12007
  • Refactor unpack_defs to use iterative approach instead of recursion by @colesmcintosh in #12017
  • [Feat] Add OpenAI Search Vector Store Operation by @ishaan-jaff in #12018
  • [Feat] OpenAI/Azure OpenAI - Add support for creating vector stores on LiteLLM by @ishaan-jaff in #12021
  • docs(CLAUDE.md): add development guidance and architecture overview for Claude Code by @colesmcintosh in #12011
  • Teams - Support default key expiry + UI - support enforcing access for members of specific SSO Group by @krrishdholakia in #12023
  • Anthropic /v1/messages - Custom LLM Server support + Azure Responses api via chat completion support by @krrishdholakia in #12016
  • Update mistral 'supports_response_schema' field + Fix ollama embedding by @krrishdholakia in #12024
  • [Fix] Router - allow using a dynamic cooldown time for a specific deployment by @ishaan-jaff in #12037 (config sketch after this list)
  • Usage Page: Aggregate the data across all pages by @NANDINI-star in #12033
  • [Feat] Add initial endpoints for using Gemini SDK (gemini-cli) with LiteLLM by @ishaan-jaff in #12040
  • Add Elasticsearch Logging Tutorial by @colesmcintosh in #11761
  • [Feat] Add Support for calling Gemini/Vertex models in their native format by @ishaan-jaff in #12046
  • [Feat] Add gemini-cli support - call VertexAI models through LiteLLM Native gemini routes by @ishaan-jaff in #12053
  • Managed Files + Batches - filter deployments to only those where file was written + save all model file id mappings in DB (prev just 1st one) by @krrishdholakia in #12048
  • Filter team-only models from routing logic for non-team calls + Support List Batches with target model name specified by @krrishdholakia in #12049
  • [Feat] gemini-cli integration - Add Logging + Cost tracking for stream + non-stream Vertex / Google AI Studio routes by @ishaan-jaff in #12058
  • Fix Elasticsearch tutorial image rendering by @colesmcintosh in #12050
  • [Fix] Allow using HTTP_ Proxy settings with trust_env by @ishaan-jaff in #12066
  • fix(proxy): Fix test_mock_create_audio_file by adding managed_files hook by @colesmcintosh in #12072
  • Enhance CircleCI integration in LLM translation testing workflow by @colesmcintosh in #12041
  • Inkeep searchbar and chat added to the Docs by @NANDINI-star in #12030
  • [Fix] Redis - Add better debugging to see what variables are set by @ishaan-jaff in #12073
  • Fix today selector date mutation bug in dashboard components by @colesmcintosh in #12042
  • Responses API - Add reasoning content support for non-OpenAI providers by @ryan-castner in #12055
  • Litellm dev 06 26 2025 p1 by @krrishdholakia in #12087
  • Refactor: bedrock passthrough fixes - migrate to Passthrough SDK by @krrishdholakia in #12089
  • Fix Azure-OpenAI Vision API Compliance by @davis-featherstone in #12075
  • [Bug Fix] Bedrock Guardrails - Ensure PII Masking is applied on response streaming or non streaming content when using post call by @ishaan-jaff in #12086
  • fix(docs): Remove unused dotenv dependency from docusaurus config by @colesmcintosh in #12102
  • [Fix] MCP - Ensure internal users can access /mcp and /mcp/ routes by @ishaan-jaff in #12106
  • fix: handle provider_config type error in passthrough error handler by @colesmcintosh in #12101
  • Add o3 and o4-mini deep research models by @krrishdholakia in #12109
  • [Bug Fix] Anthropic - Token Usage Null Handling in calculate_usage by @Gum-Joe in #12068
  • fix: change cost calculation logs from INFO to DEBUG level by @colesmcintosh in #12112
  • fix: set logger levels based on LITELLM_LOG environment variable by @colesmcintosh in #12111
  • [Feat] Add Bridge from generateContent <> /chat/completions by @ishaan-jaff in #12081 (request sketch after this list)
  • [Docs] - Show how to use fallbacks with audio transcriptions endpoints by @ishaan-jaff in #12115
  • [Bug Fix] Fix handling str, bool types for mock_testing_fallbacks on router using /audio endpoints by @ishaan-jaff in #12117
  • Adding Feature: Palo Alto Networks Prisma AIRS Guardrail by @jroberts2600 in #12116
  • [Bug Fix] Exception mapping for context window exceeded - should catch anthropic exceptions by @ishaan-jaff in #12113
  • docs(GEMINI.md): add development guidelines and architecture overview by @colesmcintosh in #12035
  • [Bug fix] Router - handle cooldown_time = 0 for deployments by @ishaan-jaff in #12108
  • [Feat] Add Eleven Labs - Speech To Text Support on LiteLLM by @ishaan-jaff in #12119 (usage sketch after this list)
  • Revert "fix: set logger levels based on LITELLM_LOG environment variable" by @ishaan-jaff in #12122
  • Fix Braintrust integration: Adds model to metadata to calculate cost and corrects docs by @ohmeow in #12022
  • [Fix] Change Message init type annotation to support other roles by @amarrella in #11942
  • Add "Get Code" Feature by @NANDINI-star in #11629
  • Bedrock Passthrough cost tracking (/invoke + /converse routes - streaming + non-streaming) by @krrishdholakia in #12123
  • feat: add local LLM translation testing with artifact generation by @colesmcintosh in #12120
  • [Feat] introduce new environment variable NO_REDOC to opt-out Redoc by @zhangyoufu in #12092
  • Fix user-team association issues in LiteLLM proxy by @colesmcintosh in #12082
  • feat: enhance redaction functionality for EmbeddingResponse by @bougou in #12088
  • De-duplicate models in team settings dropdown by @NANDINI-star in #12074
  • Add Azure OpenAI assistant features cost tracking by @colesmcintosh in #12045
  • Remove duplicated entry in logs on key cache update by @Mte90 in #12032
  • Update model_prices_and_context_window.json by @codeugar in #11972
  • Litellm batch api background cost calc by @krrishdholakia in #12125
  • Fix 'Test Connect' resetting the public model name when selecting an Azure model by @NANDINI-star in #11713
  • [Bug Fix] Invite links email should contain the correct invite id by @ishaan-jaff in #12130
  • fix example config.yaml in claude code tutorial by @glgh in #12133
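
For the per-request callback disabling added in #11985, a hypothetical client-side illustration; the header name x-litellm-disable-callbacks is an assumption based on the PR title, so verify the exact key against the enterprise docs:

```python
# Hypothetical sketch: disable a logging callback for a single request via a
# request header. The header name "x-litellm-disable-callbacks" is an assumption.
import openai

client = openai.OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
    extra_headers={"x-litellm-disable-callbacks": "langfuse"},  # assumed header name
)
```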
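For the router cooldown changes in #12037 and #12108, a minimal sketch of a per-deployment cooldown, assuming the field lives in litellm_params and that 0 disables cooldown for that deployment:

```python
# Sketch: per-deployment cooldown_time (assumed placement in litellm_params).
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "sk-...",  # placeholder credential
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder
                "cooldown_time": 0,  # assumed: 0 = never cool this deployment down
            },
        }
    ]
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hello"}],
)
```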
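For the generateContent bridge in #12081 (and the native Gemini routes in #12040, #12046, and #12053), a hedged sketch of calling the proxy with Google's native request shape; the route path and auth header are assumptions inferred from the PR titles:

```python
# Assumed route: the proxy exposes a Google-AI-Studio-style generateContent path.
import requests

response = requests.post(
    "http://localhost:4000/gemini/v1beta/models/gemini-2.0-flash:generateContent",  # assumed path
    headers={"x-goog-api-key": "sk-1234"},  # assumed: LiteLLM key sent as the Google auth header
    json={"contents": [{"parts": [{"text": "hello"}]}]},
)
print(response.json())
```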
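For the Eleven Labs speech-to-text support in #12119, a usage sketch; the provider prefix elevenlabs/ and the model name scribe_v1 are assumptions to check against the provider docs:

```python
# Sketch: speech-to-text through litellm.transcription with an assumed model id.
import litellm

with open("sample.wav", "rb") as audio_file:
    transcript = litellm.transcription(
        model="elevenlabs/scribe_v1",  # assumed model identifier
        file=audio_file,
    )

print(transcript.text)
```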

New Contributors

Full Changelog: v1.73.0-stable...v1.73.6.rc-draft
