GitHub ggml-org/llama.cpp release b7707


Vulkan: Optimize Matmul parameters for AMD GPUs with Coopmat support (#18749)

  • vulkan: Enable and optimize large matmul parameter combinations for AMD

  • Limit tuning to AMD GPUs with coopmat support

  • Use tx_m values instead of _l

