GitHub: rjmalagon/ollama-linux-amd-apu v0.7.0

What's Changed (this repo branch)

Sync to Ollama main v0.7.0

What's Changed (from Ollama)

Ollama now supports multimodal models via Ollama’s new engine, starting with new vision models.
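
As a rough illustration of how image input reaches a multimodal model, here is a minimal sketch that base64-encodes a local image (a WebP file, which this release now accepts) and sends it to Ollama's /api/generate endpoint. The model name (gemma3) and file path are placeholders; any locally pulled vision-capable model should work.

```python
import base64
import json
import urllib.request

# Read a local image and base64-encode it, as the Ollama API expects.
# The path and model name below are illustrative placeholders.
with open("photo.webp", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "model": "gemma3",              # any locally pulled vision model
    "prompt": "Describe this image in one sentence.",
    "images": [image_b64],          # list of base64-encoded images
    "stream": False,                # return a single JSON response
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```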

What's Changed

  • Ollama now supports providing WebP images as input to multimodal models
  • Fixed issue where a blank terminal window would appear when running models on Windows
  • Fixed error that would occur when running llama4 on NVIDIA GPUs
  • Reduced log level of key not found message
  • Ollama will now correctly remove quotes from image paths when sending images as input with ollama run
  • Improved performance of importing safetensors models via ollama create
  • Improved prompt processing speeds of Qwen3 MoE on macOS
  • Fixed issue where providing large JSON schemas in structured output requests would result in an error (see the structured output sketch after this list)
  • Ollama's API will now return code 405 instead of 404 for methods that are not allowed
  • Fixed issue where ollama processes would continue to run after a model was unloaded
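
For context on the structured output fix above, the sketch below shows a structured output request through /api/chat, which accepts a JSON schema in the format field. The schema and model name (llama3.2) are illustrative placeholders; real-world schemas can be much larger, which is the case the fix addresses.

```python
import json
import urllib.request

# Illustrative JSON schema; production schemas are often much larger.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "capital": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "capital", "population"],
}

payload = {
    "model": "llama3.2",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Tell me about Canada."}
    ],
    "format": schema,     # constrain the reply to this JSON schema
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())["message"]["content"]
    print(json.loads(reply))  # the reply content should itself be valid JSON
```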

New Contributors

Full Changelog: v0.6.8...v0.7.0
