What's Changed
- Removing git breaks rocm images by @rhatdan in #1148
- Exercise image detection in tests by @ueno in #1142
- Give "image" config option precedence over hardware-based defaults by @ueno in #1150
- feat: Add ramalama client command with basic implementation by @ericcurtin in #1151
- Scripts currently used for releasing images by @rhatdan in #1153
- Use -rag images when using --rag commands by @rhatdan in #1154
- chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.5-1744101466 by @renovate in #1149
- Docling on certain platforms needs the accelerate package by @rhatdan in #1161
- Do not list OCI containers when running with nocontainer by @rhatdan in #1164
- Bump all images to f42 by @rhatdan in #1167
- Update llama.cpp, add Llama 4 by @ericcurtin in #1166
- Build images for llama-stack by @rhatdan in #1169
- docs: fix broken link in CONTRIBUTING.md by @nathan-weinberg in #1175
- docs: fix python version guidance in CONTRIBUTING.md by @nathan-weinberg in #1179
- github: add issue templates by @nathan-weinberg in #1177
- fix: add 'pipx' install to 'make install-requirements' by @nathan-weinberg in #1176
- Improve model store by @engelmi in #1180
- Add ability to pull via hf://user/repo:tag syntax by @edmcman in #1123 (usage sketch after this list)
- Fix failover to OCI image on push by @rhatdan in #1171
- Improve performance for certain workloads by @rhatdan in #1173
- [Misc] update install script by @reidliu41 in #1182
- Disable ARM NEON for now in CUDA builds by @ericcurtin in #1184
- Add check for toolbox by @ericcurtin in #1174
- Fix cann build by @ericcurtin in #1185
- Bump version to v0.7.4 by @rhatdan in #1186
- More fixes to get release out by @rhatdan in #1191
- Fix llama.cpp CANN backend x86 build failure by @leo-pony in #1194
- Add macOS tip to install Homebrew by @ericcurtin in #1195
- Also hardcode version into version.py as fallback by @ericcurtin in #1160
- feat: add CTX_SIZE env config to container-images llama-server.sh by @manusa in #1193
- Set up /venv for running llama-stack by @rhatdan in #1197
- Add missing entrypoint.sh by @rhatdan in #1199
- Default to --pull=newer for ramalama rag command by @rhatdan in #1196
- Add newver.sh script by @rhatdan in #1203
- Fix doc2rag warning by @rhatdan in #1201
- Quote strings with spaces in debug mode by @rhatdan in #1207
- Create openvino model server image and add it to quay.io/ramalama by @bmahabirbu in #1183
- Update registry.access.redhat.com/ubi9/ubi Docker tag to v9.5-1744101466 by @renovate in #1210
- Only use nvidia-container-runtime if it is installed by @rhatdan in #1211
- Add --gguf option to convert Safetensors using llama.cpp scripts and functionality by @kush-gupt in #1209
- Ship nvidia and cann man pages by @rhatdan in #1208
- Fix link to ramalama-cuda by @Ferenc- in #1215
- rocm-ubi repo path fix by @afazekas in #1214
- llama-stack relative COPY by @afazekas in #1213
- Fix release scripts for openvino by @rhatdan in #1216
- Fixes for llama-stack image to build and install by @rhatdan in #1217
- Improve shell completions for all arguments by @rhatdan in #1218
- Tag images on push with digests, so they are permanent by @rhatdan in #1219
- Packit: use latest version for rpm by @lsm5 in #1223
- fix intel-gpu container build by @afazekas in #1224
- Optimized doc2rag for reduced RAM and fixed batch size by @bmahabirbu in #1225
- Enable llama.cpp rpc feature in containers by @afazekas in #1227
- fix: intel-gpu-rag build by @afazekas in #1226
- Fix bug in login_cli and update huggingface/hf registry behavior by @melodyliu1986 in #1232
- Refactor exception handling of huggingface pull operation by @edmcman in #1230
- Switch all f41 containers to f42 by @afazekas in #1231
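
A minimal usage sketch of the new Hugging Face pull syntax from #1123; only the hf://user/repo:tag form itself comes from the PR title, and the placeholders below are illustrative:

```shell
# New in #1123: pull a model with the hf://user/repo:tag syntax.
# <user>, <repo>, and <tag> are placeholders; substitute a real Hugging Face repository.
ramalama pull hf://<user>/<repo>:<tag>
```
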
New Contributors
- @ueno made their first contribution in #1142
- @reidliu41 made their first contribution in #1182
- @manusa made their first contribution in #1193
- @Ferenc- made their first contribution in #1215
- @afazekas made their first contribution in #1214
- @melodyliu1986 made their first contribution in #1232
Full Changelog: v0.7.3...v0.7.5