ggml-org/llama.cpp b7549


vulkan: preprocess mul_mat_id experts and discard workgroups more quickly (#18352)

Run a preprocessing pass to count how many times each expert is used, and use the
counts to quickly discard workgroups that aren't needed.
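
As a rough illustration of the idea only (not the actual Vulkan shader code; all names and data below are hypothetical), here is a minimal host-side C++ sketch of the counting-then-discard pattern described above:

```cpp
// Hypothetical sketch: a preprocessing pass counts how many rows the MoE
// router assigned to each expert, and the main pass skips (discards) the
// per-expert workgroups that received no rows instead of launching a full
// matrix multiply for them.
#include <cstdio>
#include <vector>

int main() {
    const int n_experts = 8;
    // Per-row expert ids produced by the router (illustrative example data).
    const std::vector<int> expert_ids = {2, 2, 5, 2, 5, 7};

    // Preprocess: count how many times each expert is used.
    std::vector<int> expert_counts(n_experts, 0);
    for (int id : expert_ids) {
        expert_counts[id]++;
    }

    // Main pass: one "workgroup" per expert; discard the ones with no work.
    for (int e = 0; e < n_experts; e++) {
        if (expert_counts[e] == 0) {
            continue; // workgroup discarded immediately, no multiply launched
        }
        printf("expert %d: multiply %d row(s)\n", e, expert_counts[e]);
    }
    return 0;
}
```

In the real backend this check runs on the GPU, so workgroups assigned to unused experts can exit almost immediately rather than scanning the expert id list themselves.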

Prebuilt binaries are provided for macOS/iOS, Linux, Windows, and openEuler.
