llama-index-core [0.10.28]
- Support indented code block fences in markdown node parser (#12393)
- Pass in output parser to guideline evaluator (#12646)
- Added example of query pipeline + memory (#12654)
- Add missing node postprocessor in CondensePlusContextChatEngine async mode (#12663)
- Added `return_direct` option to tools / tool metadata (#12587)
- Add retry for batch eval runner (#12647)
- Thread-safe instrumentation (#12638)
- Coroutine-safe instrumentation Spans (#12589)
- Add in-memory loading for non-default filesystems in PDFReader (#12659)
- Remove redundant tokenizer call in sentence splitter (#12655)
- Add SynthesizeComponent import to shortcut imports (#12655)
- Improved truncation in SimpleSummarize (#12655)
- Added error handling in eval_utils default_parser for correctness (#12624)
- Added async_postprocess_nodes to the RankGPT node postprocessor (#12620)
- Fix MarkdownNodeParser ref_doc_id (#12615)
llama-index-embeddings-openvino [0.1.5]
- Added initial support for OpenVINO embeddings (#12643)
llama-index-llms-anthropic [0.1.9]
- Added Anthropic tool calling (#12591)
llama-index-llms-ipex-llm [0.1.1]
llama-index-llms-openllm [0.1.4]
- Proper PrivateAttr usage in OpenLLM (#12655)
llama-index-multi-modal-llms-anthropic [0.1.4]
- Bumped anthropic dep version (#12655)
llama-index-multi-modal-llms-gemini [0.1.5]
- Bumped generativeai dep (#12645)
llama-index-packs-dense-x-retrieval [0.1.4]
- Add streaming support for DenseXRetrievalPack (#12607)
llama-index-readers-mongodb [0.1.4]
- Improve efficiency of MongoDB reader (#12664)
llama-index-readers-wikipedia [0.1.4]
- Added multilingual support for the Wikipedia reader (#12616)
llama-index-storage-index-store-elasticsearch [0.1.3]
- Removed invalid chars from default collection name (#12672)