rjmalagon/ollama-linux-amd-apu v0.11.11

What's Changed (this repo branch)

  • Sync to v0.11.11 (see the version-check sketch after this list)
  • Support for Ryzen 2200G Radeon Vega APU
  • Upgrade Ubuntu container base to 25.04
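
For anyone deploying this branch's container image, a quick sanity check is to ask the running server which upstream version it is serving, which after this sync should report 0.11.11. The snippet below is a minimal sketch assuming Ollama's default endpoint on localhost:11434; adjust the URL if the container maps the port elsewhere.

```python
import json
import urllib.request

# Assumed default endpoint; change this if the container exposes a different host/port.
OLLAMA_URL = "http://localhost:11434"

# GET /api/version returns the upstream Ollama version the server was built from.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/version") as resp:
    version = json.load(resp)["version"]

print(f"Ollama server reports version {version}")
```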

What's Changed (from Ollama)

  • Support for CUDA 13
  • Improved memory usage when using gpt-oss in Ollama's app
  • Better scrolling in Ollama's app when submitting long prompts
  • Cmd +/- will now zoom and shrink text in Ollama's app
  • Assistant messages can now be copied in Ollama's app
  • Fixed error that would occur when attempting to import safetensors files by @rick-github in ollama#12176
  • Improved memory estimates for hybrid and recurrent models by @gabe-l-hart in ollama#12186
  • Fixed error that would occur when batch size was greater than context length
  • Flash attention & KV cache quantization validation fixes by @jessegross in ollama#12231
  • Add dimensions field to embed requests by @mxyng in ollama#12242 (see the sketch after this list)
  • Enable new memory estimates in Ollama's new engine by default by @jessegross in ollama#12252
  • Ollama will no longer load split vision models in the Ollama engine by @jessegross in ollama#12241
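
The new dimensions field on embed requests (ollama#12242) is exposed through the /api/embed endpoint. The sketch below assumes the default local endpoint and an illustrative embedding model name; whether a model actually returns vectors truncated to the requested size depends on the model, so treat this as an example of the request shape rather than a guarantee.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default endpoint

payload = {
    "model": "embeddinggemma",  # illustrative; substitute any embedding model you have pulled
    "input": "The quick brown fox jumps over the lazy dog.",
    "dimensions": 256,  # new in this release: request embeddings truncated to 256 dimensions
}

req = urllib.request.Request(
    f"{OLLAMA_URL}/api/embed",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    embeddings = json.load(resp)["embeddings"]

print(f"Received {len(embeddings)} embedding(s) of length {len(embeddings[0])}")
```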

New Contributors

Full Changelog: v0.11.10...v0.11.11
