BerriAI/litellm v1.73.2-nightly

What's Changed

  • VertexAI Anthropic passthrough cost calc fixes + Filter litellm params from request sent to passthrough endpoint by @krrishdholakia in #11992
  • Fix custom pricing logging + Gemini - only use accepted format values + Gemini - cache tools if passing alongside cached content by @krrishdholakia in #11989
  • Fix unpack_defs handling of nested $ref inside anyOf items by @colesmcintosh in #11964
  • NVIDIA NIM - add response_format to OpenAI parameters … by @shagunb-acn in #12003
  • Add Azure o3-pro Pricing by @marty-sullivan in #11990
  • [Bug Fix] SCIM - Ensure new user roles are applied by @ishaan-jaff in #12015
  • [Fix] Magistral small system prompt diverges too much from the official recommendation by @ishaan-jaff in #12007
  • Refactor unpack_defs to use iterative approach instead of recursion by @colesmcintosh in #12017
  • [Feat] Add OpenAI Search Vector Store Operation by @ishaan-jaff in #12018
  • [Feat] OpenAI/Azure OpenAI - Add support for creating vector stores on LiteLLM by @ishaan-jaff in #12021
  • docs(CLAUDE.md): add development guidance and architecture overview for Claude Code by @colesmcintosh in #12011
  • Teams - Support default key expiry + UI - support enforcing access for members of specific SSO Group by @krrishdholakia in #12023
  • Anthropic /v1/messages - Custom LLM Server support + Azure Responses api via chat completion support by @krrishdholakia in #12016
  • Update mistral 'supports_response_schema' field + Fix ollama embedding by @krrishdholakia in #12024
  • [Fix] Router - cooldown time, allow using dynamic cooldown time for a specific deployment by @ishaan-jaff in #12037 (see the sketch after this list)
  • Usage Page: Aggregate the data across all pages by @NANDINI-star in #12033
  • [Feat] Add initial endpoints for using Gemini SDK (gemini-cli) with LiteLLM by @ishaan-jaff in #12040
  • Add Elasticsearch Logging Tutorial by @colesmcintosh in #11761
  • [Feat] Add Support for calling Gemini/Vertex models in their native format by @ishaan-jaff in #12046
  • [Feat] Add gemini-cli support - call VertexAI models through LiteLLM Native gemini routes by @ishaan-jaff in #12053
  • Managed Files + Batches - filter deployments to only those where file was written + save all model file id mappings in DB (prev just 1st one) by @krrishdholakia in #12048
  • Filter team-only models from routing logic for non-team calls + Support List Batches with target model name specified by @krrishdholakia in #12049
  • [Feat] gemini-cli integration - Add Logging + Cost tracking for stream + non-stream Vertex / Google AI Studio routes by @ishaan-jaff in #12058
  • Fix Elasticsearch tutorial image rendering by @colesmcintosh in #12050
  • [Fix] Allow using HTTP_ Proxy settings with trust_env by @ishaan-jaff in #12066
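
One router change above worth illustrating is the per-deployment cooldown from #12037. A minimal sketch, assuming the override is read from a deployment's litellm_params; the deployment name and env var below are placeholders, not values from this release:

```python
import os

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",  # placeholder deployment
                "api_key": os.getenv("AZURE_API_KEY"),
                "cooldown_time": 30,  # assumed per-deployment override (#12037)
            },
        },
    ],
    cooldown_time=60,  # router-wide default: seconds a failing deployment sits out
)
```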

Full Changelog: v1.73.1-nightly...v1.73.2-nightly

Docker Run LiteLLM Proxy

```shell
# Start the proxy on port 4000; STORE_MODEL_IN_DB persists models
# added via the UI/API in the proxy's database.
docker run \
    -e STORE_MODEL_IN_DB=True \
    -p 4000:4000 \
    ghcr.io/berriai/litellm:main-v1.73.2-nightly
```
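
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal client sketch, assuming you have already added a model alias (here "gpt-4o") and a virtual key (here "sk-1234") on the proxy; both values are placeholders, not defaults shipped with this release:

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM proxy started above.
client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy
    api_key="sk-1234",                 # placeholder virtual key
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model alias configured on the proxy
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy!"}],
)
print(response.choices[0].message.content)
```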

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed ✅ | 210.0 | 227.43 | 6.2 | 0.0 | 1853 | 0 | 184.98 | 1558.98 |
| Aggregated | Passed ✅ | 210.0 | 227.43 | 6.2 | 0.0 | 1853 | 0 | 184.98 | 1558.98 |
