✨ Native support for LLM prompt / response pairs
```python
from phoenix import EmbeddingColumnNames, Schema

schema = Schema(
    tag_column_names=[
        "bleu_score",
        "rouge_score",
    ],
    prompt_column_names=EmbeddingColumnNames(
        vector_column_name="document_vector", raw_data_column_name="document"
    ),
    response_column_names=EmbeddingColumnNames(
        vector_column_name="summary_vector", raw_data_column_name="summary"
    ),
)
```

Tutorials coming soon!
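For context, here is a minimal sketch of a DataFrame that would match the schema above. The column names come from the schema; the row values (texts, vectors, scores) are purely illustrative:

```python
import pandas as pd

# Illustrative dataframe matching the schema: each row pairs a raw prompt
# ("document") and response ("summary") with their embedding vectors and
# the tag columns used for evaluation scores. Values are made up.
df = pd.DataFrame(
    {
        "document": ["Summarize: The quick brown fox jumps over the lazy dog."],
        "document_vector": [[0.1, 0.2, 0.3]],
        "summary": ["A fox jumps over a dog."],
        "summary_vector": [[0.2, 0.1, 0.4]],
        "bleu_score": [0.41],
        "rouge_score": [0.52],
    }
)
print(df.columns.tolist())
# → ['document', 'document_vector', 'summary', 'summary_vector', 'bleu_score', 'rouge_score']
```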
What's Changed
- feat(embeddings): support rendering grid previews of llm prompts and responses by @mikeldking in #599
- feat(embeddings): show prompt and response for LLMs in event details by @mikeldking in #600
- fix: incorrect actual label shown in UI by @mikeldking in #601
- feat(embeddings): display prompt response pairs in selection table by @mikeldking in #602
- feat: Enable generative prompt/response pair by @fjcasti1 in #598
- chore: llm summarization fixture by @axiomofjoy in #606
- feat(embeddings): prompt/response pairs on inference events by @mikeldking in #605
- fix: exclude bool from numeric by @RogerHYang in #607
- feat(embeddings): plumb through prompt / response pairs to the UI by @mikeldking in #608
- fix: handle prompt response pairs in feature discovery by @axiomofjoy in #609
Full Changelog: v0.0.13...v0.0.14