github ggml-org/llama.cpp b7707


Vulkan: Optimize Matmul parameters for AMD GPUs with Coopmat support (#18749)

  • vulkan: Enable and optimize large matmul parameter combinations for AMD

  • Limit tuning to AMD GPUs with coopmat support

  • Use tx_m values instead of _l

