BerriAI/litellm v1.37.5


What's Changed

  • add additional models from openrouter by @Merlinvt in #3545
  • Initial OIDC support (Google/GitHub/CircleCI -> Amazon Bedrock & Azure OpenAI) by @Manouchehri in #3507
  • Fix tool calls tracking with Lunary by @vincelwt in #3424
  • ✨ feat: Add Azure Content-Safety Proxy hooks by @Lunik in #3407
  • fix(exceptions.py): import openai Exceptions by @nobu007 in #3399
  • Clarifai-LiteLLM: Added Clarifai as an LLM provider by @mogith-pn in #3369
  • (fix) Fixed linting and other bugs with watsonx provider by @simonsanvil in #3561
  • feat(router.py): allow setting model_region in litellm_params by @krrishdholakia in #3582 (see the Router sketch after this list)
  • [UI] Show Token ID/Hash on Admin UI by @ishaan-jaff in #3583
  • [Litellm Proxy + litellm.Router] - Pass the same message/prompt to N models by @ishaan-jaff in #3585
  • [Feat] - log metadata on traces + allow users to log metadata when existing_trace_id exists by @ishaan-jaff in #3581
  • Set fake env vars for client_no_auth fixture by @msabramo in #3588
  • [Feat] Proxy + Router - Retry on RateLimitErrors when fallbacks or other deployments exist by @ishaan-jaff in #3590 (see the Router sketch after this list)
  • Make test_load_router_config pass by @msabramo in #3589
  • feat(bedrock_httpx.py): Make Bedrock-Cohere calls Async + Command-R support by @krrishdholakia in #3586
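
Two of the Router changes above lend themselves to a short example. The sketch below is illustrative only, not code from this release: the deployment names, API keys, and endpoints are placeholders. It configures a Router with the new per-deployment model_region key from #3582 and a fallback deployment, so that RateLimitErrors are retried against other deployments per #3590:

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/my-azure-deployment",  # placeholder deployment
                "api_key": "...",                      # placeholder
                "api_base": "https://example.openai.azure.com",  # placeholder
                "model_region": "eu",                  # new in this release (#3582)
            },
        },
        {
            # second deployment, used only as a fallback
            "model_name": "claude-fallback",
            "litellm_params": {"model": "claude-3-haiku-20240307"},
        },
    ],
    # on a RateLimitError, retries can now go to the fallback deployments (#3590)
    fallbacks=[{"gpt-3.5-turbo": ["claude-fallback"]}],
    num_retries=2,
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)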

Full Changelog: v1.37.3-stable...v1.37.5

Docker Run LiteLLM Proxy

docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.37.5
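
Once the container is up, the proxy exposes an OpenAI-compatible API on port 4000. A minimal smoke test with the openai Python client (the api_key value here is a placeholder; use whatever key your proxy is configured to accept):

import openai

client = openai.OpenAI(
    base_url="http://localhost:4000",  # the proxy started above
    api_key="sk-1234",                 # placeholder; match your proxy's key settings
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(response.choices[0].message.content)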

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
