confident-ai/deepeval v0.20.27
Continuous Evaluation

Pre-release · 22 months ago

Automatically integrated with Confident AI for continuous evaluation throughout the lifetime of your LLM (app):

- log evaluation results and analyze metric passes / failures (see the first sketch after this list)
- compare and pick the optimal hyperparameters (e.g. prompt templates, chunk size, models used, etc.) based on evaluation results
- debug evaluation results via LLM traces
- manage evaluation test cases / datasets in one place
- track events to identify live LLM responses in production (see the second sketch below)
- add production events to existing evaluation datasets to strengthen evals over time
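
As a rough illustration of the logging workflow, the sketch below runs a single test case through deepeval's `evaluate` function. Assuming a Confident AI API key has been configured via `deepeval login`, results are pushed to the platform automatically; the metric name and constructor defaults reflect the docs around this release and may differ in other versions.

```python
# Minimal sketch: evaluate one test case and log results to Confident AI.
# Assumes `deepeval login` has been run with a Confident AI API key;
# metric/constructor details may vary across deepeval versions.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    # Replace with your LLM app's actual output for this input
    actual_output="We offer a 30-day full refund at no extra cost.",
)

# Scores answer relevancy; per-metric pass / fail results are logged
# to Confident AI for analysis and hyperparameter comparison.
evaluate([test_case], [AnswerRelevancyMetric()])
```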
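
For the production side, here is a hedged sketch of event tracking. The `track` function and these parameter names are taken from the deepeval docs around this release and are an assumption for later versions; tracked events appear on Confident AI, where they can be added to evaluation datasets.

```python
# Hedged sketch: track a live production response so it can be
# identified on Confident AI and added to an evaluation dataset.
# `track` and its parameters follow the deepeval docs circa v0.20.x
# and may have changed in later releases.
import deepeval

deepeval.track(
    event_name="chatbot-response",      # groups events on Confident AI
    model="gpt-4",                      # hypothetical model identifier
    input="What's your refund policy?", # the live user query
    response="We offer a 30-day full refund at no extra cost.",
)
```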
