containers/ramalama v0.6.3


What's Changed

  • Check if terminal is compatible with emojis before using them by @ericcurtin in #878 (see the first sketch after this list)
  • Use vllm-openai upstream image by @ericcurtin in #880
  • The package available via dnf is in a good place by @ericcurtin in #879
  • Add Ollama to CI and system tests for its caching by @kush-gupt in #881
  • Moved pruning protocol from model to factory by @engelmi in #882
  • Remove emoji usage until linenoise.cpp and llama-run are compatible by @ericcurtin in #884
  • Inject config to cli functions by @engelmi in #889
  • Switch from tiny to smollm:135m by @ericcurtin in #891
  • benchmark failing because of lack of flag by @ericcurtin in #888
  • Update the README.md to point people at ramalama.ai web site by @rhatdan in #894
  • fix: handling of date with python 3.8/3.9/3.10 by @benoitf in #897
  • readme: fix artifactory link by @alaviss in #903
  • Added support for mac cpu and clear warning message by @bmahabirbu in #902
  • Use python variable instead of environment variable by @ericcurtin in #907
  • Update llama.cpp by @ericcurtin in #908
  • Build a non-kompute Vulkan container image by @ericcurtin in #910
  • Reintroduce emoji prompts by @ericcurtin in #913
  • Add new ramalama-*-core executables by @ericcurtin in #909
  • Detect & get info on hugging face repos, fix sizing of symlinked directories by @kush-gupt in #901
  • Add ramalama image built on Fedora using Fedora's rocm packages by @maxamillion in #596
  • Add new model store by @engelmi in #905
  • Add support for llama.cpp engine to use ascend NPU device by @leo-pony in #911
  • Extend make validate check to do more by @ericcurtin in #916
  • Modify GPU detection to match against env var value instead of prefix by @cgruver in #919
  • Add Intel ARC 155H to list of supported hardware by @cgruver in #920
  • Try to choose a free port on serve if default one is not available by @andreadecorte in #898 (see the second sketch after this list)
  • Add passing of environment variables to ramalama commands by @rhatdan in #922
  • Allow user to specify the images to use per hardware by @rhatdan in #921
  • fix: CHAT_FORMAT variable should be expanded by @benoitf in #926
  • Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1741600006 by @renovate in #928
  • Bump to v0.6.3 by @rhatdan in #931
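
The emoji-related entries (#878, #884, #913) revolve around only printing emoji when the terminal can actually render them. The following is a minimal sketch of that kind of check in plain Python; the helper name and the llama prompt string are illustrative assumptions, not ramalama's actual code:

```python
import sys


def terminal_supports_emoji() -> bool:
    """Heuristic: emit emoji only on an interactive terminal whose encoding can handle them."""
    if not sys.stdout.isatty():
        return False
    try:
        # If stdout's encoding cannot represent an emoji, fall back to ASCII output.
        "🦙".encode(sys.stdout.encoding or "ascii")
    except (UnicodeEncodeError, LookupError):
        return False
    return True


# Use a plain prompt when emoji would come out as mojibake.
prompt = "🦙 > " if terminal_supports_emoji() else "> "
```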
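
For #898, the general technique is to probe the default serve port and fall back to an OS-assigned free one when it is busy. A rough illustration of the approach follows; the 8080 default and the function name are assumptions for the sketch, not ramalama's implementation:

```python
import socket


def pick_serve_port(preferred: int = 8080) -> int:
    """Return the preferred port if it is free, otherwise let the kernel pick a free one."""
    for candidate in (preferred, 0):  # binding to port 0 asks the OS for any free port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                sock.bind(("127.0.0.1", candidate))
            except OSError:
                continue  # preferred port is taken, fall back to an ephemeral one
            return sock.getsockname()[1]
    raise RuntimeError("no free TCP port available")
```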

Full Changelog: v0.6.2...v0.6.3
