BerriAI/litellm v1.48.16-stable

What's Changed

  • (feat) add azure o1 models to model cost map by @ishaan-jaff in #6075
  • (feat) add cost tracking for OpenAI prompt caching by @ishaan-jaff in #6055
  • (docs) add links / sections for router settings, general settings on proxy config.yaml by @ishaan-jaff in #6078 (example config sketched after this list)
  • (feat) add azure openai cost tracking for prompt caching by @ishaan-jaff in #6077
  • openrouter/openai's litellm_provider should be openrouter, not openai by @GTonehour in #6079
  • (code clean up) use a folder for gcs bucket logging + add readme in folder by @ishaan-jaff in #6080
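
Two of the changes above concern the proxy's config.yaml: #6078 documents the router_settings and general_settings sections, and #6079 fixes provider attribution for openrouter/ models. Below is a minimal sketch of a config that touches both; the model entries, routing strategy, and master key are illustrative assumptions, not values taken from this release.

# Sketch only: write a minimal proxy config.
# router_settings / general_settings are the sections documented in #6078;
# the concrete values here are assumptions.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: openrouter-gpt-4o
    litellm_params:
      # with the #6079 fix, this model is attributed to provider
      # "openrouter" rather than "openai"
      model: openrouter/openai/gpt-4o
      api_key: os.environ/OPENROUTER_API_KEY

router_settings:
  routing_strategy: simple-shuffle

general_settings:
  master_key: sk-1234
EOF

The file can then be mounted into the container from the Docker section below (e.g. -v $(pwd)/litellm_config.yaml:/app/config.yaml plus a --config /app/config.yaml argument).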

New Contributors

  • @GTonehour made their first contribution in #6079

Full Changelog: v1.48.15...v1.48.16-stable

Docker Run LiteLLM Proxy

# STORE_MODEL_IN_DB=True lets the proxy persist model configs added via the UI/API in its database
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.16-stable
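
Once the container is running, the proxy serves an OpenAI-compatible API on port 4000. A quick smoke test against the /chat/completions endpoint (the model name and the sk-1234 key below are placeholders for whatever your deployment actually configures):

# Assumes a model named "gpt-4o" is configured on the proxy and that
# sk-1234 is its master key; both are placeholder assumptions.
curl http://localhost:4000/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from the LiteLLM proxy"}]
  }'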

Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|------|--------|---------------------------|----------------------------|------------|------------|---------------|---------------|------------------------|------------------------|
| /chat/completions | Passed ✅ | 150.0 | 278.78 | 6.11 | 0.0033 | 1830 | 1 | 89.60 | 38293.60 |
| Aggregated | Passed ✅ | 150.0 | 278.78 | 6.11 | 0.0033 | 1830 | 1 | 89.60 | 38293.60 |
