github rjmalagon/ollama-linux-amd-apu v0.6.6-rc2

Pre-release, 9 months ago

What's Changed (this repo branch)

Sync to Ollama main v0.6.6-rc2

What's Changed (from Ollama)

New models

  • IBM Granite 3.3: 2B and 8B models with 128K context length that have been fine-tuned for improved reasoning and instruction-following capabilities.
  • DeepCoder: a fully open-source 14B coder model at o3-mini level, with a 1.5B version also available.

What's Changed

  • New, faster model downloading: run OLLAMA_EXPERIMENT=client2 ollama serve to try the new downloader, which improves the performance and reliability of ollama pull. Please share feedback!
  • Fixed memory leak issues when running Gemma 3, Mistral Small 3.1, and other models
  • Improved performance of ollama create when importing models from Safetensors
  • Ollama will now allow tool function parameters with either a single type or an array of types by @rozgo
  • Fixed certain out-of-memory issues caused by not reserving enough memory at startup
  • Fixed nondeterministic model unload order by @IreGaddr
  • Included the items and $defs fields to properly handle array types in the API by @sheffler
  • OpenAI-Beta headers are now included in the CORS safelist by @drifkin
  • Fixed issue where model tensor data would be corrupted when importing models from Safetensors
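To illustrate the tool-parameter change above: a function parameter's JSON-Schema "type" may now be either a single string or an array of strings. A minimal sketch of such a tool definition, with a hypothetical function name and fields:

```python
# Sketch of a tool definition whose "country" parameter accepts either a
# string or null -- i.e. an array of types, which Ollama now accepts.
tool = {
    "type": "function",
    "function": {
        "name": "get_capital",  # hypothetical example function
        "description": "Look up the capital city of a country.",
        "parameters": {
            "type": "object",
            "properties": {
                "country": {
                    # "type" may be one string or a list of strings
                    "type": ["string", "null"],
                    "description": "Country name, or null for the caller's locale.",
                },
            },
            "required": ["country"],
        },
    },
}

# Such a tool would be passed in the "tools" field of a chat request body.
payload = {"model": "llama3.2", "messages": [], "tools": [tool]}
```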
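Similarly, the items and $defs change means array-typed tool parameters can carry their per-element schemas through the API. A sketch under the same assumptions (field names are hypothetical):

```python
# Sketch of an array-typed tool parameter: "items" describes each element,
# and "$defs" holds a reusable sub-schema that "items" references.
parameters = {
    "type": "object",
    "$defs": {
        "city": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
        }
    },
    "properties": {
        "cities": {
            "type": "array",                   # array type ...
            "items": {"$ref": "#/$defs/city"}  # ... with an element schema
        }
    },
}
```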

New Contributors

Full Changelog: v0.6.5...v0.6.6-rc2
