What's Changed (this repo branch)
- Sync to v0.12.6
- Enable the early Vulkan backend for AMD APUs not supported by ROCm. Bugs in the Vulkan backend are expected; please report them.
What's Changed (from Ollama)
- Ollama's app now supports searching when running DeepSeek-V3.1, Qwen3, and other models that support tool calling (a tool-calling request sketch follows this list)
- Flash attention is now enabled by default for Gemma 3, improving performance and memory utilization
- Fixed issue where Ollama would hang while generating responses
- Fixed issue where `qwen3-coder` would act in raw mode when using `/api/generate` or `ollama run qwen3-coder <prompt>` (see the raw-mode sketch after this list)
- Fixed `qwen3-embedding` providing invalid results
- Ollama will now evict models correctly when `num_gpu` is set (see the `num_gpu` sketch after this list)
- Fixed issue where `tool_index` with a value of `0` would not be sent to the model
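For models that support tool calling, the request supplies tool definitions alongside the messages. Below is a minimal sketch against a local Ollama server, assuming the default port (11434), an already-pulled `qwen3` model, and a hypothetical `web_search` tool definition (not a built-in):

```python
# Tool-calling sketch: POST /api/chat with a "tools" array.
# The web_search tool below is a hypothetical example definition.
import json
import urllib.request

request = {
    "model": "qwen3",
    "messages": [{"role": "user", "content": "What changed in Ollama v0.12.6?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical tool name
            "description": "Search the web for a query",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

# If the model decides to call a tool, the calls appear on the message.
print(reply["message"].get("tool_calls"))
```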
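The raw-mode fix concerns the `raw` parameter of `/api/generate`: by default the server wraps the prompt in the model's chat template, while `"raw": true` sends it verbatim. A minimal sketch of both request shapes, assuming a local server with `qwen3-coder` pulled:

```python
# Raw-mode sketch: the same prompt sent templated (default) and raw.
import json
import urllib.request

def generate(payload):
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Normal request: the server applies qwen3-coder's prompt template.
print(generate({"model": "qwen3-coder", "prompt": "Write a haiku about Go.",
                "stream": False}))

# Raw request: no templating; the caller handles any formatting itself.
print(generate({"model": "qwen3-coder", "prompt": "Write a haiku about Go.",
                "raw": True, "stream": False}))
```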
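`num_gpu` is a request option that controls how many model layers are offloaded to the GPU, with `0` forcing CPU-only inference; the fix above ensures models loaded with an explicit `num_gpu` are still evicted correctly. A minimal payload sketch (model name is an assumed example; POST it to `/api/generate` as in the raw-mode sketch):

```python
# num_gpu sketch: an explicit offload count in the request options.
import json

payload = {
    "model": "gemma3",          # assumed example model
    "prompt": "Hello",
    "stream": False,
    "options": {"num_gpu": 0},  # 0 layers offloaded, i.e. CPU-only
}
print(json.dumps(payload, indent=2))
```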
Experimental Vulkan Support
Experimental support for Vulkan is now available when building locally from source. This enables additional GPUs from AMD and Intel that are not currently supported by Ollama. To build locally, install the Vulkan SDK, set `VULKAN_SDK` in your environment, and follow the developer instructions. In a future release, Vulkan support will be included in the binary release as well. Please file issues if you run into any problems.
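As a quick sanity check before building, you can verify that `VULKAN_SDK` is actually set and points at an installed SDK. This preflight snippet is purely illustrative and not part of Ollama's build tooling:

```python
# Preflight sketch: confirm VULKAN_SDK is set before a local source build.
import os
import sys

sdk = os.environ.get("VULKAN_SDK")
if not sdk or not os.path.isdir(sdk):
    sys.exit("VULKAN_SDK is unset or does not point at an installed Vulkan SDK")
print(f"Vulkan SDK found at {sdk}; proceed with the developer instructions")
```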
New Contributors
- @yajianggroup made their first contribution in ollama#12377
- @inforithmics made their first contribution in ollama#11835
- @sbhavani made their first contribution in ollama#12619