ggml-org/llama.cpp release b8317


vulkan: add GATED_DELTA_NET op support (#20334)

  • vulkan: add GATED_DELTA_NET op support

Implements the fused gated delta net recurrence as a Vulkan compute
shader with full support for scalar gate, KDA vector gate, GQA
broadcast, multi-token sequences, and permuted (non-contiguous) q/k
inputs. Specialization constants select head size (32/64/128) and
KDA mode at pipeline creation time.

Passes all 13 test-backend-ops cases on AMD Radeon 890M (RADV GFX1150).
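The recurrence the shader fuses can be sketched on the CPU. The following single-head reference is illustrative only (it is not the Vulkan shader, and the function name and layout are hypothetical); it assumes the standard scalar-gate gated delta rule: decay the state by exp(g_t), apply a rank-1 delta-rule update with strength beta_t, then read out with q_t.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical single-head CPU reference for the gated delta net
// recurrence (scalar-gate form, assumed standard formulation):
//   S <- exp(g_t) * S                        (gated decay)
//   S <- S + beta_t * (v_t - S k_t) k_t^T    (rank-1 delta update)
//   o_t = S q_t                              (readout)
// State S is d_v x d_k, row-major; q/k have d_k channels, v/out have d_v.
void gated_delta_net_ref(int d_k, int d_v, int n_tokens,
                         const float *q, const float *k, const float *v,
                         const float *g, const float *beta,
                         float *S, float *out) {
    std::vector<float> err(d_v);
    for (int t = 0; t < n_tokens; ++t) {
        const float *qt = q + t * d_k;
        const float *kt = k + t * d_k;
        const float *vt = v + t * d_v;
        const float  a  = std::exp(g[t]);   // scalar gate
        // decay the state and compute the error term err = v_t - (a*S) k_t
        for (int i = 0; i < d_v; ++i) {
            float acc = 0.0f;
            for (int j = 0; j < d_k; ++j) {
                S[i * d_k + j] *= a;
                acc += S[i * d_k + j] * kt[j];
            }
            err[i] = vt[i] - acc;
        }
        // rank-1 update, then readout o_t = S q_t
        for (int i = 0; i < d_v; ++i) {
            float o = 0.0f;
            for (int j = 0; j < d_k; ++j) {
                S[i * d_k + j] += beta[t] * err[i] * kt[j];
                o += S[i * d_k + j] * qt[j];
            }
            out[t * d_v + i] = o;
        }
    }
}
```

The shader additionally handles the KDA case, where the scalar gate exp(g_t) is replaced by a per-channel (vector) decay.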

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

  • vulkan: optimize GATED_DELTA_NET shader (Phase 1)
  • vec4 dot products on all inner loops (dp4 hardware intrinsic)
  • Cache exp(g) in shared memory for KDA path, eliminating ~32K
    redundant global reads and ~16K redundant exp() calls per token
  • vec4 fused decay + rank-1 update (3 vec4 ops vs 12 scalar ops)
  • Add perf benchmark cases for GATED_DELTA_NET to test-backend-ops

KDA TG: +5.4% throughput. Non-KDA: no regressions.
13/13 test-backend-ops passing on AMD Radeon 890M (RADV GFX1150).
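The exp(g) caching amounts to hoisting the exponential out of the per-row inner loop. A hypothetical CPU analogue of the KDA-path decay (names are illustrative; a `std::vector` stands in for the shader's shared memory):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical CPU analogue of the KDA-path optimization: rather than
// recomputing exp(g[j]) inside the state-update loop for every row i
// (d_v * d_k exp() calls per token), compute each channel's decay once
// into a cache (d_k calls) and reuse it -- the shared-memory idea.
void decay_state_cached(int d_k, int d_v, const float *g, float *S) {
    std::vector<float> eg(d_k);        // plays the role of shared memory
    for (int j = 0; j < d_k; ++j) {
        eg[j] = std::exp(g[j]);        // per-channel (vector) gate, once
    }
    for (int i = 0; i < d_v; ++i) {
        for (int j = 0; j < d_k; ++j) {
            S[i * d_k + j] *= eg[j];   // reuse cached exp(g)
        }
    }
}
```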

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

  • vulkan: address review feedback for GATED_DELTA_NET

Refactor the pipeline array to [3][2], adopt the A_TYPE/D_TYPE/FLOAT_TYPE
shader macros, pass the scale via push constants, fix supports_op, and
restructure the dispatch logic.

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

  • vulkan: use FLOAT_TYPE for buffer/shared declarations, align formatting

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

  • vulkan: add explicit FLOAT_TYPE casts for buffer loads

Wrap data_q, data_k, and data_g buffer reads with FLOAT_TYPE() casts
to ensure correct behavior across all Vulkan configurations.
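The idiom can be illustrated in C++ (the macro values below are assumptions for the demo, not the shader's actual configuration): when the buffer element type and the compute type are independently configurable macros, wrapping each load in FLOAT_TYPE() makes the promotion explicit instead of relying on implicit conversion.

```cpp
#include <cassert>

// Hypothetical illustration of the FLOAT_TYPE cast idiom. In the real
// shaders these macros are set per pipeline; here we pick arbitrary
// types just to show the pattern.
#define A_TYPE     float   // storage type of the buffer (assumed)
#define FLOAT_TYPE double  // compute/accumulation type (assumed)

FLOAT_TYPE dot_cast(const A_TYPE *a, const A_TYPE *b, int n) {
    FLOAT_TYPE acc = FLOAT_TYPE(0);
    for (int i = 0; i < n; ++i) {
        // explicit casts on each buffer load, as in the commit
        acc += FLOAT_TYPE(a[i]) * FLOAT_TYPE(b[i]);
    }
    return acc;
}
```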

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com

  • vulkan: fix Q/K broadcast for interleaved head layout

Adapt to the interleaved broadcast convention from #20340:
head_id / rq1 → head_id % neq1
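The two head-to-KV-head mappings can be contrasted in a small sketch (function names are hypothetical; neq1 is the number of K/V heads, and the Q-head count is a multiple of it):

```cpp
#include <cassert>

// Hypothetical sketch of the two GQA broadcast conventions.
// Blocked layout: consecutive q heads share one kv head, so the kv
// head is head_id divided by the group size rq1 = n_head / neq1.
int kv_head_blocked(int head_id, int n_head, int neq1) {
    int rq1 = n_head / neq1;   // q heads per kv head
    return head_id / rq1;
}

// Interleaved layout (the convention from #20340): q heads cycle
// through the kv heads, so the kv head is head_id modulo neq1.
int kv_head_interleaved(int head_id, int /*n_head*/, int neq1) {
    return head_id % neq1;
}
```

With 8 Q heads and 2 K/V heads, Q head 1 maps to KV head 0 under the blocked convention but to KV head 1 under the interleaved one, which is why the shader had to switch formulas.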

Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com


Co-authored-by: Progeny Alpha ProgenyAlpha@users.noreply.github.com
Co-authored-by: Claude Opus 4.6 noreply@anthropic.com

