run-llama/llama_index v0.12.0
2024-11-17

NOTE: Updating to v0.12.0 requires bumping every other llama-index-* package, since every package has received a version bump. Only notable changes are listed below.

llama-index-core [0.12.0]

  • Dropped Python 3.8 support; unpinned numpy (#16973)
  • Dynamic property graph (knowledge graph) triplet retrieval limit (#16928)

llama-index-indices-managed-llama-cloud [0.6.1]

  • Add ID support for LlamaCloudIndex, update from_documents logic, and modernize APIs (#16927) (see the sketch below)
  • Allow skipping the wait for ingestion when uploading files (#16934)
  • Add support for files endpoints (#16933)
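
A minimal usage sketch for the LlamaCloudIndex changes above. The index-ID lookup and the flag for skipping the ingestion wait are assumptions inferred from the PR titles, not confirmed parameter names:

```python
# Hedged sketch: the commented-out arguments are assumptions based on the PR
# titles above (#16927, #16934), not confirmed API surface.
from llama_index.core import Document
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

# Build a managed index from local documents (requires a LlamaCloud API key).
index = LlamaCloudIndex.from_documents(
    [Document(text="hello world")],
    name="my-index",
    project_name="Default",
    # Assumption: per #16934, uploads can skip blocking until ingestion finishes.
    # wait_for_ingestion=False,
)

# Assumption: per #16927, an existing managed index can be referenced by its ID.
# index = LlamaCloudIndex(id="<index-id>", project_name="Default")

retriever = index.as_retriever()
nodes = retriever.retrieve("What does the document say?")
```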

llama-index-indices-managed-vectara [0.3.0]

  • Add Custom Prompt Parameter (#16976)
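
A hedged sketch of how the new Vectara prompt parameter might be used; the keyword name `prompt_text` is an assumption, since the note above only mentions a custom prompt parameter:

```python
# Hedged sketch: `prompt_text` is an assumed name for the custom prompt
# parameter added in #16976; check the package docs for the exact keyword.
from llama_index.indices.managed.vectara import VectaraIndex

# VectaraIndex reads VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY
# from the environment.
index = VectaraIndex()

query_engine = index.as_query_engine(
    summary_enabled=True,
    # Assumption: a custom prompt forwarded to Vectara's summarizer.
    # prompt_text='[{"role": "system", "content": "Answer tersely."}]',
)
print(query_engine.query("What topics does the corpus cover?"))
```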

llama-index-llms-bedrock [0.3.0]

  • Minor fix for messages/completion-to-prompt conversion (#15729)

llama-index-llms-bedrock-converse [0.4.0]

  • Fix async streaming with Bedrock Converse (#16942)
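
Async streaming is part of the standard llama-index LLM interface, so after this fix a call like the following should stream correctly; the model ID is only an example:

```python
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

async def main() -> None:
    # Example model ID; any Converse-supported model should work.
    llm = BedrockConverse(model="anthropic.claude-3-haiku-20240307-v1:0")
    # astream_chat yields incremental ChatResponse chunks as they arrive.
    stream = await llm.astream_chat([ChatMessage(role="user", content="Hi!")])
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)

asyncio.run(main())
```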

llama-index-multi-modal-llms-nvidia [0.2.0]

llama-index-readers-confluence [0.3.0]

  • Permit passing parameters through to the underlying Confluence client (#16961)

llama-index-readers-github [0.5.0]

  • Add base URL extraction method to GithubRepositoryReader (#16926)

llama-index-vector-stores-weaviate [1.2.0]

  • Allow passing in Weaviate vector store kwargs (#16954)
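
A hedged sketch of the Weaviate change; the notes do not say exactly which call accepts the extra kwargs, so the constructor pass-through shown here is an assumption:

```python
import weaviate
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.weaviate import WeaviateVectorStore

client = weaviate.connect_to_local()  # weaviate-client v4 connection helper

vector_store = WeaviateVectorStore(
    weaviate_client=client,
    index_name="LlamaIndexDocs",
    # Assumption: per #16954, additional keyword arguments are now forwarded
    # to the underlying Weaviate operations; exact names are not listed here.
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="hello weaviate")], storage_context=storage_context
)
```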
