✨ What's new ✨
LLM Traces
We now support reranker spans so you can view how your documents are being reranked in RAG use cases! (s/o @RogerHYang). We've also added search and filtering: for example, you can now search within the IO of your spans and find spans with high token counts.
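As an illustration, span filters take Python-like boolean expressions over span fields. The exact field names below are assumptions for the sake of the sketch, not taken from these release notes:

```
# hypothetical filter expressions over span attributes
"needle" in input.value               # search within the IO of a span
cumulative_token_count.total > 1000   # find spans with high token counts
```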
LLM Evals
We now support gpt-3.5-turbo-instruct (s/o @RogerHYang) as well as a verbose mode for your evaluations so you can easily debug the progress of a run (s/o @anticorrelator).
What's Changed
- chore: v0.0.44 by @axiomofjoy in #1582
- docs: OpenAIInstrumentor tutorial notebook by @axiomofjoy in #1580
- chore: openai notebook clean by @axiomofjoy in #1583
- feat: Verbose evals by @anticorrelator in #1558
- fix: precision at k calculation in notebooks by @axiomofjoy in #1592
- fix(evals): Set client type in bedrock.py for constructor by @lou-k in #1595
- fix(trace): remove unnecessary exclusion of semantic conventions for filter validation by @RogerHYang in #1597
- docs: remove langchain from llama_index tutorial by @mikeldking in #1601
- fix: remove quiet flags by @axiomofjoy in #1602
- feat(tracing): trace and span filter / search field by @mikeldking in #1577
- feat(traces): add reranking span kind for document reranking in llama index by @RogerHYang in #1588
- docs: contributing guide by @mikeldking in #1605
- feat: Add a general purpose LlamaIndex debug callback handler by @anticorrelator in #1608
- feat: add computed values (e.g. latency) to filterable quantities by @RogerHYang in #1609
- fix: LlamaIndex callback handler should fail gracefully by @anticorrelator in #1606
- feat(eval): add gpt-3.5-turbo-instruct to OpenAIModel by @RogerHYang in #1613
- fix: Remove trivial warnings by @anticorrelator in #1616
Full Changelog: v0.0.44...v0.0.45