github BerriAI/litellm v1.27.7


Use ClickHouse for low-latency LLM Analytics / Spend Reports

(sub 1s analytics, with 100M logs)

Getting started with ClickHouse DB + LiteLLM Proxy

Docs + Docker Compose for getting started with ClickHouse: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---clickhouse

Step 1: Create a config.yaml file and set litellm_settings: success_callback

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  success_callback: ["clickhouse"]

Step 2: Set the required environment variables for ClickHouse

Environment variables for self-hosted ClickHouse

CLICKHOUSE_HOST = "localhost"
CLICKHOUSE_PORT = "8123"
CLICKHOUSE_USERNAME = "admin"
CLICKHOUSE_PASSWORD = "admin"
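In a shell, these can be exported before starting the proxy (the values below are the same placeholders as above; substitute your own host and credentials):

```shell
# Placeholder values for a self-hosted ClickHouse instance
export CLICKHOUSE_HOST="localhost"
export CLICKHOUSE_PORT="8123"       # ClickHouse's default HTTP interface port
export CLICKHOUSE_USERNAME="admin"
export CLICKHOUSE_PASSWORD="admin"
```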

Step 3: Start the proxy, make a test request
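A minimal sketch of a test request against the running proxy, assuming it is listening locally on port 4000 (the default port may differ by proxy version):

```shell
# Send one chat completion through the proxy; with success_callback: ["clickhouse"]
# configured, this request will be logged to ClickHouse.
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello from litellm"}]
  }'
```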

New Models

Mistral on Azure AI Studio

Sample Usage

Ensure your api_base includes the /v1 suffix

from litellm import completion

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)

[LiteLLM Proxy] Using Mistral Models

Set this in your LiteLLM proxy config.yaml

Ensure your api_base includes the /v1 suffix

model_list:
  - model_name: mistral
    litellm_params:
      model: mistral/Mistral-large-dfgfj
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1
      api_key: JGbKodRcTp****
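With that config loaded, a request to the proxy can reference the model by its model_name alias ("mistral"). A sketch, assuming the proxy is listening locally on port 4000 (the default port may differ by proxy version):

```shell
# The "model" field matches the model_name alias from config.yaml;
# the proxy routes it to mistral/Mistral-large-dfgfj on Azure AI Studio.
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "messages": [{"role": "user", "content": "hello from litellm"}]
  }'
```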

What's Changed

Full Changelog: v1.27.6...v1.27.7
