What's Changed
- Bump to v0.11.1 by @rhatdan in #1726
- konflux: add pipelines for ramalama-vllm and layered images by @mikebonnet in #1717
- Don't override image when using rag if user specified it by @rhatdan in #1727
- Re-enable passing chat template to model by @engelmi in #1732
- No virglrenderer in RHEL by @ericcurtin in #1728
- Add stale GitHub workflow to maintain older issues and PRs by @rhatdan in #1733
- konflux: build -rag images on bigger instances with large disks by @mikebonnet in #1737
- musa: upgrade musa sdk to rc4.2.0 by @yeahdongcn in #1697
- Remove GGUF version check when parsing by @engelmi in #1738
- Define image within container with full name by @rhatdan in #1734
- musa: disable build of whisper.cpp, and update llama.cpp by @mikebonnet in #1745
- Include mmproj mount in quadlet by @olliewalsh in #1742
- Adds docs site by @ieaves in #1736
- Fix listing models by @engelmi in #1748
- fix(deps): update dependency huggingface-hub to ~=0.34.0 by @renovate[bot] in #1747
- chore(deps): update dependency typescript to ~5.8.0 by @renovate[bot] in #1746
- Use blobs directory as context directory on convert by @engelmi in #1739
- konflux: push images to the quay.io/ramalama org after integration testing by @mikebonnet in #1743
- CUDA vLLM variant by @ericcurtin in #1741
- Add setuptools_scm by @ericcurtin in #1749
- Fixes docsite page linking by @ieaves in #1752
- Fix kube volume mount for hostpaths and add mmproj by @olliewalsh in #1751
- More CUDA vLLM enablement by @ericcurtin in #1750
- Fix assembling URLs for big models by @engelmi in #1756
Full Changelog: v0.11.1...v0.11.2