containers/ramalama v0.12.2
What's Changed

  • Add Docker Compose generator by @abhibongale in #1839
  • Catch KeyError exceptions by @rhatdan in #1867
  • Fallback to default image when CUDA version is out of date by @rhatdan in #1871
  • Changed from google-chrome to firefox by @AlexonOliveiraRH in #1876
  • Revert back to ollama granite-code models by @olliewalsh in #1875
  • chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1756799158 by @renovate[bot] in #1887
  • fix(deps): update dependency @mdx-js/react to v3.1.1 by @renovate[bot] in #1885
  • Don't print llama stack api endpoint info unless --debug is passed by @booxter in #1881
  • feat(script): add browser override and improve service startup flow by @AlexonOliveiraRH in #1879
  • tests: generate tmpdir store for ollama pull testcase by @booxter in #1891
  • chore(deps): update konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #1884
  • [skip-ci] Update actions/stale action to v10 by @renovate[bot] in #1896
  • chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1756915113 by @red-hat-konflux-kflux-prd-rh03[bot] in #1895
  • Added the GGUF field tokenizer.chat_template for getting chat template by @engelmi in #1890
  • Suppress stderr when chatting without container by @booxter in #1880
  • konflux: stop building unnecessary images by @mikebonnet in #1897
  • Readme updates and python classifiers by @ieaves in #1894
  • Update versions of llama.cpp and whisper.cpp by @rhatdan in #1874
  • chore(deps): update konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #1906
  • Extended inspect command by --get option with auto-complete by @engelmi in #1889
  • Allow running ramalama without a GPU by @kpouget in #1909
  • Add tests for --device none by @kpouget in #1911
  • Bump to latest version of llama.cpp by @rhatdan in #1910
  • Initial model swap work by @engelmi in #1807
  • Fix ramalama run with prompt index error by @engelmi in #1913
  • Fix the application of codespell in "make validate". by @jwieleRH in #1904
  • Do not set the ctx-size by default by @rhatdan in #1915
  • Use Hugging Face models for tinylama and smollm:135 by @olliewalsh in #1916
  • build_rag.sh: install mistral-common for convert_hf_to_gguf.py by @mikebonnet in #1925
  • Bump to v0.12.2 by @rhatdan in #1912
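Two of the changes above are user-visible CLI additions: running without a GPU (#1909, #1911) and the extended inspect command (#1889). A minimal usage sketch, assuming the flag names match the PR descriptions; the model name and the field passed to --get are illustrative, not taken from the release notes:

```shell
# Run a model on CPU only, skipping GPU device passthrough (#1909, #1911)
ramalama run --device none tinyllama

# Query a single metadata field instead of the full inspect output (#1889);
# tokenizer.chat_template is the GGUF field added in #1890
ramalama inspect --get tokenizer.chat_template tinyllama
```

Both commands are sketches against this release; check `ramalama run --help` and `ramalama inspect --help` for the exact option names shipped in v0.12.2.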

Full Changelog: v0.12.1...v0.12.2