## What's Changed
- Shorten URL in README.md by @ericcurtin in #1392
- Bump to 0.8.3 by @rhatdan in #1391
- This script is not macOS-only by @ericcurtin in #1393
- Using perror in cli.main by @ieaves in #1395
- Fix builds by @ericcurtin in #1396
- Remove all path additions to this file by @ericcurtin in #1398
- Fix issues reported by pylint for cli.py by @sarroutbi in #1402
- Increase cli.py coverage by @sarroutbi in #1403
- Add minor CONTRIBUTING.md enhancements by @sarroutbi in #1404
- Include additional information in CONTRIBUTING.md by @sarroutbi in #1406
- Fix CUDA builds' installation of python3.11 by @rhatdan in #1399
- Added a docling OCR (text image recognition) flag to address a RAM issue by @bmahabirbu in #1400
- fix: removed OCR print statement and updated OCR description by @bmahabirbu in #1408
- Support Moore Threads GPU #2 by @yeahdongcn in #1410
- Add more debug output for non-starting servers with "ramalama run" by @ericcurtin in #1415
- Small typo by @ericcurtin in #1418
- Multimodal/vision support by @olliewalsh in #1416
- Added host:container port mapping to quadlet generation by @engelmi in #1409
- Don't throw Exceptions, be more specific by @rhatdan in #1420
- Normalize hf repo quant/tag by @olliewalsh in #1422
- Support Moore Threads GPU #1 by @yeahdongcn in #1407
- Add SmolVLM vision models by @ericcurtin in #1424
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.6-1747219013 by @renovate in #1423
- Bump llama.cpp to fix ROCm bug by @afazekas in #1427
- Remove unused parameters from ollama_repo_utils.py by @sarroutbi in #1428
- Add support for Hugging Face token authentication by @olliewalsh in #1425
- Split/big model support for llama.cpp by @afazekas in #1426
- Don't use jinja in the multimodal case by @ericcurtin in #1435
- Support Moore Threads GPU #3 by @yeahdongcn in #1436
## New Contributors
- @olliewalsh made their first contribution in #1416
**Full Changelog**: v0.8.3...v0.8.5