containers/ramalama v0.14.0


What's Changed

  • Docsite builds remove extraneous manpage number labels by @ieaves in #2037
  • Bump to latest llama.cpp and whisper.cpp by @rhatdan in #2039
  • Added inference specification files to info command by @engelmi in #2049
  • Update docusaurus monorepo to v3.9.2 by @red-hat-konflux-kflux-prd-rh03[bot] in #2055
  • Pin macos CI to python <3.14 until mlx is updated by @olliewalsh in #2051
  • Added --max-tokens to llama.cpp inference spec by @engelmi in #2057
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2056
  • Prefer the embedded chat template for ollama models by @olliewalsh in #2040
  • Set gguf quantization default to Q4_K_M by @engelmi in #2050
  • Update dependency huggingface-hub to ~=0.36.0 by @red-hat-konflux-kflux-prd-rh03[bot] in #2059
  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2044
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2060
  • docker: fix list command for oci images when running in a non-UTC timezone by @mikebonnet in #2067
  • Update dependency huggingface-hub to v1 by @renovate[bot] in #2066
  • Fix AMD GPU image selection on arm64 for issue #2045 by @rhatdan in #2048
  • run RAG operations in a separate container by @mikebonnet in #2053
  • konflux: merge before building/testing PRs by @mikebonnet in #2069
  • fix "ramalama rag" under docker by @mikebonnet in #2068
  • chore(deps): lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2070
  • Renaming huggingface-cli -> hf by @Yarboa in #2047
  • Added Speaking and Advocacy heading for CONTRIBUTING.md by @dominikkawka in #2073
  • Fix the rpm name in docs by @olliewalsh in #2083
  • Update SECURITY.md. Use github issues for security vulnerabilities by @rhatdan in #2077
  • Improving ramalama rag section in README.md by @jpodivin in #2076
  • chore(deps): lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2074
  • fix up type checking and add it to GitHub CI by @mikebonnet in #2075
  • konflux: disable builds on s390x by @mikebonnet in #2087
  • Bump llama.cpp and whisper.cpp by @rhatdan in #2071
  • chore(deps): lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2088
  • Add --port flag to ramalama run command by @rhatdan in #2082
  • rag: keep the versions of gguf and convert_hf_to_gguf.py in sync by @mikebonnet in #2092
  • Bump to v0.14.0 by @rhatdan in #2093
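A few of the user-facing changes above can be sketched as hypothetical invocations. The model names and values below are placeholders for illustration, not taken from the release notes, and the exact CLI surface for `--max-tokens` is an assumption (the PR adds it to the llama.cpp inference spec):

```shell
# Serve a model on a chosen host port (new --port flag on "ramalama run", #2082).
# "smollm:135m" is a placeholder model reference.
ramalama run --port 8080 smollm:135m

# Cap generation length via the new --max-tokens inference option (#2057);
# shown here as a run-time flag, which is an assumption.
ramalama run --max-tokens 256 smollm:135m

# The Hugging Face CLI was renamed upstream from huggingface-cli to hf (#2047),
# so model pulls from Hugging Face now go through the "hf" command.
hf download TinyLlama/TinyLlama-1.1B-Chat-v1.0
```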

New Contributors

Full Changelog: v0.13.0...v0.14.0
