NOTE: Updating to v0.12.0 will require bumping every other `llama-index-*` package, since every package has had a version bump. Only notable changes are listed below.
### llama-index-core [0.12.0]
### llama-index-indices-managed-llama-cloud [0.6.1]
- Add ID support for LlamaCloudIndex, update from_documents logic, and modernize APIs (#16927)
- Allow skipping the wait for ingestion when uploading files (#16934)
- Add support for files endpoints (#16933)
### llama-index-indices-managed-vectara [0.3.0]
- Add Custom Prompt Parameter (#16976)
### llama-index-llms-bedrock [0.3.0]
- Minor fix for messages/completion-to-prompt conversion (#15729)
### llama-index-llms-bedrock-converse [0.4.0]
- Fix async streaming with Bedrock Converse (#16942)
### llama-index-multi-modal-llms-nvidia [0.2.0]
- Add VLM support (#16751)
### llama-index-readers-confluence [0.3.0]
- Permit passing parameters to the Confluence client (#16961)
### llama-index-readers-github [0.5.0]
- Add base URL extraction method to GithubRepositoryReader (#16926)
### llama-index-vector-stores-weaviate [1.2.0]
- Allow passing in Weaviate vector store kwargs (#16954)