BerriAI/litellm v1.80.15.dev1


What's Changed

  • [test] stabilize db_migration_disable_update_check test log check by @uc4w6c in #18882
  • [fix] respect pangea guardrail default_on during initialization by @uc4w6c in #18912
  • [fix] prevent duplicate MCP reload scheduler registration by @uc4w6c in #18934
  • fix: correct pricing for openrouter/openai/gpt-oss-20b by @Chesars in #18899
  • fix: missing completion_tokens_details in gemini 3 flash when reasoning_effort is not used (#18896) by @yogeshwaran10 in #18898
  • fix: add created_at/updated_at fields to LiteLLM_ProxyModelTable by @theonlypal in #18937
  • fix(google_genai): forward extra_headers in generateContent adapter by @jonmagic in #18935
  • fix(guardrails): fix SerializationIterator error and pass tools to guardrail by @eagle-p in #18932
  • fix(anthropic): prevent dropping thinking when any message has thinking_blocks by @rsp2k in #18929
  • docs: add Redis requirement warning for high-traffic deployments by @Harshit28j in #18892
  • doc: update load balancing and routing with enable_pre_call_checks by @Harshit28j in #18888
  • doc: updated pass_through with guided param by @Harshit28j in #18886
  • add better error handling for anthropic by @Harshit28j in #18955
  • fix: include IMAGE token count in cost calculation for Gemini models by @Chesars in #18876
  • fix(gemini): fix negative text_tokens when using cache with images by @Chesars in #18768
  • fix: sync Helm chart versioning with production standards and Docker versions by @Chesars in #18868
  • fix(oci): handle OpenAI-style image_url object in multimodal messages by @Chesars in #18272
  • docs: update message content types link and add content types table by @Chesars in #18209
  • fix(gemini): add presence_penalty support for Google AI Studio by @Chesars in #18154
  • fix(anthropic): preserve web_fetch_tool_result in multi-turn conversations by @Chesars in #18142
  • feat(bedrock): add OpenAI-compatible service_tier parameter translation by @Chesars in #18091
  • fix(text_completion): support token IDs (list of integers) as prompt by @Chesars in #18011
  • [Bug]: Add Custom CA certificates to boto3 clients by @Sameerlite in #18942
  • Fix: model id encoding for bedrock passthrough by @Sameerlite in #18944
  • Fix: respect max_completion_tokens in thinking feat by @Sameerlite in #18946
  • Fix: [Bug]: Gemini Image Generation: imageConfig parameters by @Sameerlite in #18948
  • [Feat] Add steps-to-reproduce section in bug report by @Sameerlite in #18949
  • [Feat] Add all chat replicate models support by @Sameerlite in #18954
  • Fix: guardrail moderation support with responses API by @Sameerlite in #18957
  • Litellm staging 01 12 2026 by @krrishdholakia in #18956
  • Fix: Header forwarding for embeddings endpoint by @Sameerlite in #18960
  • [fix] include proxy/prisma_migration.py in non root by @ishaan-jaff in #18971
  • Add: missing anthropic tool results in response by @Sameerlite in #18945
  • fix: add allowClear to dropdown components for better UX by @Jetemple in #18778
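Several of the fixes above touch token accounting, e.g. #18898 (missing completion_tokens_details in Gemini responses when reasoning_effort is unset). A minimal sketch of how a client might read that field defensively, assuming the OpenAI-compatible usage schema that litellm returns; the `reasoning_tokens` helper below is illustrative, not part of litellm:

```python
def reasoning_tokens(usage: dict) -> int:
    """Return the reasoning token count from a usage dict, defaulting
    to 0 when the provider omits completion_tokens_details entirely
    (the symptom addressed by #18898)."""
    details = usage.get("completion_tokens_details") or {}
    return details.get("reasoning_tokens", 0)

# Usage dicts shaped like litellm/OpenAI responses (hypothetical values):
usage_with_details = {
    "completion_tokens": 120,
    "completion_tokens_details": {"reasoning_tokens": 40},
}
usage_without_details = {"completion_tokens": 120}

print(reasoning_tokens(usage_with_details))     # 40
print(reasoning_tokens(usage_without_details))  # 0
```

Guarding with `or {}` also covers providers that send the key explicitly set to `null`, which a plain `.get(..., {})` would not.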

New Contributors

Full Changelog: v1.80.15-nightly...v1.80.15.dev1
