github ggml-org/llama.cpp b8417


CANN: support flash attention for head dim not multiple of 16, fix ALiBi slope offset (#20031)

  • Allow FLASH_ATTN_EXT when head dimension D is not a multiple of 16 by
    padding Q/K/V to D_padded = GGML_PAD(D, 16), running FusedInferAttentionScoreV2,
    then slicing the output back to D (ggml-cann.cpp + aclnn_ops.cpp).
  • Fix the second-part offset in aclnn_get_slope: use ggml_type_size(dtype)
    instead of sizeof(float) so ALiBi slopes are correct when dtype is F16
    (e.g. GQA with 48 heads); fixes a buffer overflow and large numerical
    errors in those cases.
