containers/ramalama v0.5.5


What's Changed

  • Add perplexity subcommand to RamaLama CLI by @ericcurtin in #637
  • Throw an exception when there is a failure in http_client.init by @jhjaggars in #647
  • Add container image to support Intel ARC GPU by @cgruver in #644
  • Guide users to install huggingface-cli to login to huggingface by @pbabinca in #645
  • Update intel-gpu Containerfile to reduce the size of the builder image by @cgruver in #657
  • Look for configs also in /usr/local/share/ramalama by @jistr in #672
  • remove ro as an option when mounting images by @kush-gupt in #676
  • Add generated man pages for section 7 into gitignore by @jistr in #673
  • Revert "Added --jinja to llama-run command" by @ericcurtin in #683
  • Pull the source model if it isn't already in local storage for the convert and push functions by @kush-gupt in #680
  • bump llama.cpp to latest release hash aa6fb13 by @maxamillion in #692
  • Introduce a mode so one can install from git by @ericcurtin in #690
  • Add ramalama gpu_detector by @dougsland in #670
  • Bump to v0.5.5 by @rhatdan in #701

Full Changelog: v0.5.4...v0.5.5
