ggml-org/llama.cpp b8089

vulkan: split mul_mat into multiple dispatches to avoid overflow (#19509)

  • vulkan: split mul_mat into multiple dispatches to avoid overflow

The batch dimensions can be greater than the max workgroup count limit,
in which case we need to split the work into multiple dispatches and pass
the base batch index through a push constant.

Fall back for the less common p021 and nc variants.
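A minimal host-side sketch of the splitting scheme follows. The names here (`dispatch_split_batches`, `PushConstants`, `base_batch`) are illustrative, not the actual ggml-vulkan code, and the sketch assumes the compute shader adds `base_batch` to `gl_WorkGroupID.z` to recover the true batch index:

```c
// Sketch only: split a large dispatch along the batch (z) axis so that no
// single vkCmdDispatch exceeds the device's workgroup count limit.
// Assumes the pipeline layout declares a push-constant range for the
// compute stage, and that the shader adds `base_batch` to
// gl_WorkGroupID.z when selecting which batch to process.
#include <vulkan/vulkan.h>

typedef struct {
    uint32_t base_batch; /* offset added to gl_WorkGroupID.z in the shader */
} PushConstants;

static void dispatch_split_batches(VkCommandBuffer cmd,
                                   VkPipelineLayout layout,
                                   uint32_t groups_x,
                                   uint32_t groups_y,
                                   uint32_t total_batches,
                                   uint32_t max_groups_z) {
    for (uint32_t base = 0; base < total_batches; base += max_groups_z) {
        /* Clamp this chunk to the device limit. */
        uint32_t count = total_batches - base;
        if (count > max_groups_z) {
            count = max_groups_z;
        }
        PushConstants pc = { base };
        vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_COMPUTE_BIT,
                           0, sizeof(pc), &pc);
        vkCmdDispatch(cmd, groups_x, groups_y, count);
    }
}
```

The limit itself comes from `VkPhysicalDeviceLimits::maxComputeWorkGroupCount[2]`, which Vulkan only guarantees to be at least 65535, so large batch counts can genuinely exceed it.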

  • address feedback

Downloads: macOS/iOS, Linux, Windows, openEuler
