Use ClickHouse for low-latency LLM analytics / spend reports
(sub-1s analytics with 100M logs)
Getting started with ClickHouse DB + LiteLLM Proxy
Docs + Docker compose for getting started with clickhouse: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---clickhouse
Step 1: Create a config.yaml file and set litellm_settings: success_callback
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  success_callback: ["clickhouse"]
```
Step 2: Set required env variables for ClickHouse
Env variables for a self-hosted ClickHouse server:
```shell
CLICKHOUSE_HOST="localhost"
CLICKHOUSE_PORT="8123"
CLICKHOUSE_USERNAME="admin"
CLICKHOUSE_PASSWORD="admin"
```
Step 3: Start the proxy, make a test request
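A minimal sketch of a test request body for the proxy's OpenAI-compatible `/chat/completions` endpoint. The proxy URL shown in the comment (`http://0.0.0.0:4000`) is LiteLLM's default and is an assumption here:

```python
import json

# Test request payload for the LiteLLM proxy. The "model" value matches
# the model_name alias defined in config.yaml above.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}],
}
print(json.dumps(payload))

# Send it with any HTTP client, e.g.:
#   curl http://0.0.0.0:4000/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "what llm are you"}]}'
```

A successful response confirms the request was logged to ClickHouse via the `success_callback`.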
New Models
Mistral on Azure AI Studio
Sample Usage
Ensure you have the `/v1` in your `api_base`.
```python
from litellm import completion

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)
```
[LiteLLM Proxy] Using Mistral Models
Set this in your LiteLLM proxy config.yaml. Ensure you have the `/v1` in your `api_base`.
```yaml
model_list:
  - model_name: mistral
    litellm_params:
      model: mistral/Mistral-large-dfgfj
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1
      api_key: JGbKodRcTp****
```
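Clients then call the proxy with the `model_name` alias ("mistral"), not the underlying Azure AI deployment name. A sketch of that alias-to-deployment lookup, mirroring the config above (the dict shape is illustrative, not LiteLLM's internal router representation):

```python
# Illustrative alias lookup: clients request "mistral"; the proxy routes
# to the litellm_params of the matching model_list entry.
model_list = [
    {
        "model_name": "mistral",
        "litellm_params": {
            "model": "mistral/Mistral-large-dfgfj",
            "api_base": "https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
        },
    }
]

def resolve(alias: str) -> dict:
    # Return the deployment params for the requested model alias.
    for entry in model_list:
        if entry["model_name"] == alias:
            return entry["litellm_params"]
    raise KeyError(alias)

print(resolve("mistral")["model"])  # the underlying deployment
```

So a request for `"model": "mistral"` against the proxy is forwarded to the Azure AI Studio endpoint configured above.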
What's Changed
- [Docs] use azure ai studio + mistral large by @ishaan-jaff in #2205
- [Feat] Start Self hosted clickhouse server by @ishaan-jaff in #2206
- [FEAT] Admin UI - View /spend/logs from clickhouse data by @ishaan-jaff in #2210
- [Docs] Use Clickhouse DB + Docker compose by @ishaan-jaff in #2211
Full Changelog: v1.27.6...v1.27.7