containers/ramalama v0.18.0

What's Changed

  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2414
  • [skip-ci] Update step-security/harden-runner action to v2.14.2 by @renovate[bot] in #2413
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2415
  • Migrate to ruff and improve make command times by @ieaves in #2362
  • Handle listing models with missing created dates by @ieaves in #2361
  • Update dependency huggingface-hub to ~=1.4.1 by @renovate[bot] in #2416
  • Use "git check-ignore" to generate the cleanup list in the Makefile. by @jwieleRH in #2421
  • CI: run lint/format on minimum supported Python and cleanup make lint command by @ieaves in #2419
  • Restores previous llama.cpp jinja behavior by @ieaves in #2422
  • Fix test_help_command_flags on python 3.14.3 by @olliewalsh in #2423
  • e2e: install git-core by @mikebonnet in #2428
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2426
  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2424
  • Fixes API Transport regression resulting from eager call to _get_entry_model_path by @ieaves in #2430
  • chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.7-1771346757 by @red-hat-konflux-kflux-prd-rh03[bot] in #2435
  • Trigger ci jobs on all branches for forks by @olliewalsh in #2409
  • Nominating Oliver Walsh as a Maintainer by @mikebonnet in #2441
  • Nominating Michael Engel and Brian Mahabir as Reviewers by @mikebonnet in #2442
  • chore(deps): update konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2443
  • chore(deps): update pre-commit hook pycqa/isort to v8 by @red-hat-konflux-kflux-prd-rh03[bot] in #2439
  • Add /clear command to reset conversation history by @rhatdan in #2417
  • chore(deps): lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2450
  • Fix to ruff preventing auto import sorting by @ieaves in #2429
  • Chore/ctx size docs by @ieaves in #2454
  • Adds ability to inspect shortnames by @ieaves in #2355
  • Fix vllm to omit ctx size by default allowing for auto detection by @ieaves in #2452
  • Fix formatting in README.md Diagram by @rhatdan in #2457
  • [skip-ci] Update step-security/harden-runner action to v2.15.0 by @renovate[bot] in #2461
  • Add missing XDG environment variable checks by @alyssais in #2445
  • Resolves mlx regression identified in issue #2465 by @ieaves in #2466
  • Redirect inference runtime std filehandles to null when nocontainer by @olliewalsh in #2464
  • Fix reading from a pipe on windows by @olliewalsh in #2458
  • Artifact Pulling by @ieaves in #2043
  • fixed ctx_size being wired to max_tokens in mlx by @ieaves in #2451
  • Revert "rebase" (PR #2043) by @olliewalsh in #2470
  • Add basic test for ramalama run with mlx by @olliewalsh in #2468
  • Update dependency huggingface-hub to ~=1.5.0 by @renovate[bot] in #2472
  • [skip-ci] Update GitHub Artifact Actions (major) by @renovate[bot] in #2474
  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2478
  • Update pre-commit hook pycqa/isort to v8.0.1 by @red-hat-konflux-kflux-prd-rh03[bot] in #2479
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2481
  • Resolves bad rebase for artifact pulling by @ieaves in #2473
  • README: fix description of "version" sub-command by @ktdreyer in #2460
  • Major refactor to use pluggable modules for inference runtimes by @olliewalsh in #2456
  • [skip-ci] Update step-security/harden-runner action to v2.15.1 by @renovate[bot] in #2490
  • Update pre-commit hook codespell-project/codespell to v2.4.2 by @red-hat-konflux-kflux-prd-rh03[bot] in #2489
  • Use hardlink for local model files to save disk space by @rhatdan in #2455
  • AI-Assisted Contributions guidance by @dominikkawka in #2484
  • Revert #2455 "Use hardlink for local model files to save disk space" by @olliewalsh in #2493
  • Fix selinux denial when building container images by @olliewalsh in #2492
  • two minor fixes to get CI running successfully again by @mikebonnet in #2502
  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2499
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2503
  • Update dependency huggingface-hub to ~=1.6.0 by @red-hat-konflux-kflux-prd-rh03[bot] in #2496
  • Bump llama.cpp version by @olliewalsh in #2507
  • Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.7-1773204657 by @red-hat-konflux-kflux-prd-rh03[bot] in #2512
  • Remove unmaintained bats tests by @mikebonnet in #2510
  • konflux: enable caching proxy, and adjust instance sizes by @mikebonnet in #2508
  • Resolves tighter mlx message validation for multiturn discussions by @ieaves in #2482
  • [skip-ci] Update actions/download-artifact action to v8.0.1 by @renovate[bot] in #2513
  • Merge all CDI configs so NVIDIA is found with multiple CDI files by @rhatdan in #2487
  • Adds compose generator for llama-stack by @mkristian in #2501
  • Fixes the llama-stack for ROCM based GPUs by @mkristian in #2448
  • Update dependency huggingface-hub to ~=1.7.1 by @red-hat-konflux-kflux-prd-rh03[bot] in #2514
  • Update Konflux references by @red-hat-konflux-kflux-prd-rh03[bot] in #2516
  • Lock file maintenance by @red-hat-konflux-kflux-prd-rh03[bot] in #2520
  • [skip-ci] Update step-security/harden-runner action to v2.16.0 by @renovate[bot] in #2522
  • container-images: update mesa version by @slp in #2531
  • Bump llama.cpp to b8401 by @olliewalsh in #2530
  • Support using the upstream llama.cpp container images by @olliewalsh in #2525
  • Update all dependencies in the -rag images to their latest versions by @mikebonnet in #2509
  • Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.7-1773895171 by @red-hat-konflux-kflux-prd-rh03[bot] in #2533
  • Bump version to v0.18.0 by @olliewalsh in #2535
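
The `git check-ignore` change (#2421) uses a generally useful trick: instead of hand-maintaining a list of generated files for `make clean`, ask git which files its ignore rules match. A hedged sketch of the idea (not the exact Makefile rule from the PR — the temp repo, file names, and pipeline here are illustrative):

```shell
# Build a throwaway repo to demonstrate: `git check-ignore --stdin` reads
# candidate paths on stdin and echoes back only those matched by .gitignore
# rules -- exactly the set of files a clean target should remove.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '*.log\n' > .gitignore
touch build.log notes.txt
# List working-tree files, strip the leading "./", and filter to ignored ones.
find . -type f -not -path './.git/*' | sed 's|^\./||' | git check-ignore --stdin
```

In a Makefile, a `clean` target would typically pipe this list to `xargs -r rm -f`, so the cleanup set tracks `.gitignore` automatically.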

Full Changelog: v0.17.1...v0.18.0
