ggml-org/llama.cpp release b7632


vulkan: handle quantize_q8_1 overflowing the max workgroup count (#18515)

  • vulkan: handle quantize_q8_1 overflowing the max workgroup count

  • vulkan: Fix small tile size matmul on lavapipe

  • fix mul_mat_id failures
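The headline fix concerns a compute dispatch whose workgroup count can exceed the Vulkan device limit (`VkPhysicalDeviceLimits::maxComputeWorkGroupCount`). A common way to handle this, sketched below, is to split one oversized dispatch into several smaller ones, passing each chunk's base workgroup index to the shader (e.g. via a push constant). This is an illustrative sketch only, not llama.cpp's actual implementation; `Dispatch` and `split_dispatch` are hypothetical names.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One chunk of an oversized compute dispatch.
struct Dispatch {
    uint32_t first_group;  // base workgroup index (hypothetically a push constant)
    uint32_t group_count;  // workgroups in this dispatch, <= the device limit
};

// Split `total_groups` workgroups into dispatches that each stay within
// `max_groups` (the device's maxComputeWorkGroupCount limit on that axis).
std::vector<Dispatch> split_dispatch(uint64_t total_groups, uint32_t max_groups) {
    std::vector<Dispatch> out;
    uint64_t done = 0;
    while (done < total_groups) {
        uint32_t n = (uint32_t)std::min<uint64_t>(total_groups - done, max_groups);
        out.push_back({(uint32_t)done, n});
        done += n;
    }
    return out;
}
```

With the guaranteed Vulkan minimum limit of 65535 workgroups per axis, a dispatch of 100000 workgroups would be issued as two chunks: 65535 groups starting at 0, then 34465 starting at 65535.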
