What's Changed
- Add perplexity subcommand to RamaLama CLI by @ericcurtin in #637
- Throw an exception when there is a failure in http_client.init by @jhjaggars in #647
- Add container image to support Intel ARC GPU by @cgruver in #644
- Guide users to install huggingface-cli to login to huggingface by @pbabinca in #645
- Update intel-gpu Containerfile to reduce the size of the builder image by @cgruver in #657
- Look for configs also in /usr/local/share/ramalama by @jistr in #672
- Remove ro as an option when mounting images by @kush-gupt in #676
- Add generated man pages for section 7 into gitignore by @jistr in #673
- Revert "Added --jinja to llama-run command" by @ericcurtin in #683
- Pull the source model if it isn't already in local storage for the convert and push functions by @kush-gupt in #680
- Bump llama.cpp to latest release hash aa6fb13 by @maxamillion in #692
- Introduce a mode so one can install from git by @ericcurtin in #690
- Add ramalama gpu_detector by @dougsland in #670
- Bump to v0.5.5 by @rhatdan in #701
New Contributors
- @cgruver made their first contribution in #644
- @pbabinca made their first contribution in #645
- @jistr made their first contribution in #672
- @kush-gupt made their first contribution in #676
- @maxamillion made their first contribution in #692
- @dougsland made their first contribution in #670
Full Changelog: v0.5.4...v0.5.5