Details
llama : enable chunked fused GDN path (#20340)
- llama : enable chunked fused GDN path
- models : avoid Q and K repeats when using fused GDA
- cont : fix comment
Co-authored-by: Aman Gupta amangupta052@gmail.com
- cont : fix the fix
Co-authored-by: Aman Gupta amangupta052@gmail.com
- cont : fix
- metal : add GDN kernel (#20361)
- metal : add Metal backend for GGML_OP_GATED_DELTA_NET
Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.
Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU.
Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
tg128: 170 -> 213 t/s (+25%)
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
- metal : validate contiguity of all input tensors in supports_op
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
- metal : add algorithm equivalence comment for GDA decay path
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
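The recurrence that the fused GGML_OP_GATED_DELTA_NET kernel implements can be sketched per head and per timestep as a plain CPU reference. This is a minimal illustration of the math only; the exact tensor layouts, gate conventions, and signs in ggml may differ, and `gdn_step` is a hypothetical helper, not ggml's API:

```cpp
#include <vector>
#include <cstddef>

// Naive per-timestep reference for a gated delta rule recurrence.
// State S is d_v x d_k, row-major. With a scalar gate g (the GDA mode):
//   S <- g * S                       (decay)
//   S <- S + beta * (v - S k) k^T    (delta-rule rank-1 correction)
//   o  = S q                         (readout)
void gdn_step(std::vector<float> &S, std::size_t d_v, std::size_t d_k,
              const float *q, const float *k, const float *v,
              float g, float beta, float *o) {
    for (float &s : S) s *= g;                      // apply gate/decay
    std::vector<float> err(d_v);
    for (std::size_t i = 0; i < d_v; ++i) {         // err = v - S k
        float acc = 0.0f;
        for (std::size_t j = 0; j < d_k; ++j) acc += S[i*d_k + j] * k[j];
        err[i] = v[i] - acc;
    }
    for (std::size_t i = 0; i < d_v; ++i)           // S += beta * err k^T
        for (std::size_t j = 0; j < d_k; ++j)
            S[i*d_k + j] += beta * err[i] * k[j];
    for (std::size_t i = 0; i < d_v; ++i) {         // o = S q
        float acc = 0.0f;
        for (std::size_t j = 0; j < d_k; ++j) acc += S[i*d_k + j] * q[j];
        o[i] = acc;
    }
}
```

In the KDA mode the scalar gate roughly becomes a per-channel vector; the point of fusing is that S stays resident in registers/threadgroup memory across timesteps instead of being round-tripped through global memory.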
- cont : unslop + optimize
- cont : clean-up
Co-authored-by: Paul Flynn paul@arkavo.com
Co-authored-by: Claude Opus 4.6 noreply@anthropic.com
- CUDA: AR gated delta net improvements (#20391)
- Add FastDiv to gated_delta_net_cuda
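The idea behind FastDiv is to replace per-element integer division by a loop-invariant divisor with a precomputed multiply-shift, since hardware integer division is slow in GPU inner loops. A host-side sketch of the libdivide-style round-up scheme follows; the struct and its fields are illustrative, not ggml's actual helper:

```cpp
#include <cstdint>

// Divide 32-bit n by a runtime-constant divisor d via one wide multiply.
// magic = floor(2^64 / d) + 1; then (n * magic) >> 64 == n / d exactly
// for every 32-bit n when d > 1 (d == 1 would overflow the reciprocal,
// so it is passed through).
struct FastDiv {
    uint64_t magic;
    uint32_t d;
    explicit FastDiv(uint32_t divisor) : d(divisor) {
        magic = divisor > 1 ? ~uint64_t(0) / divisor + 1 : 0;
    }
    uint32_t div(uint32_t n) const {
        if (d == 1) return n;
        return (uint32_t)(((unsigned __int128)n * magic) >> 64);
    }
    uint32_t mod(uint32_t n) const { return n - div(n) * d; }
};
```

On device the same trick is usually done with `__umulhi`-style intrinsics; the precomputation happens once on the host and the struct is passed to the kernel by value.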
- Shard columns across warps
This reduces register pressure (avoids spill for S_v = 128) and gives
the warp-scheduler more CTAs to schedule (thus hiding data-access
latencies).
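The partitioning itself can be sketched as follows: each warp of a CTA owns a contiguous shard of the state-matrix columns, so per-thread register footprint shrinks by roughly a factor of the warp count (avoiding the spill at S_v = 128) and more CTAs fit per SM. The helper below only illustrates the index arithmetic; names and the exact split in the kernel may differ:

```cpp
// Contiguous column shard owned by one warp.
struct Shard { int begin, end; };

// Split n_cols across n_warps as evenly as possible, giving the first
// (n_cols % n_warps) warps one extra column each.
Shard shard_columns(int n_cols, int n_warps, int warp_id) {
    int base  = n_cols / n_warps;
    int rem   = n_cols % n_warps;
    int begin = warp_id * base + (warp_id < rem ? warp_id : rem);
    int end   = begin + base + (warp_id < rem ? 1 : 0);
    return {begin, end};
}
```

Any value that needs all columns (e.g. a dot product with the full state row) is then formed from per-warp partial sums combined through a shared-memory or shuffle reduction, which is what the partial-warp reduction test exercises.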
- Remove unneeded include in gated_delta_net.cu
- Improve comments
- Apply code formatting
- Make sharding HIP-compatible
- Use ggml_cuda_get_physical_warp_size() to determine warp size flexibly
- Add test with partial warp to test sum reduction on CUDA
- Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t
- Rename variables
- Enable GDN also for prefill, move TODO for chunked_GDN
- Actually remove the TODO from 2068908
- Get warp size at runtime
warp_size is not known at compile time in HIP host code.
- Don't expose ggml_cuda_get_physical_warp_size on host
Co-authored-by: uvos devnull@uvos.xyz
- llama : refactor llm_build_delta_net_base API
Co-authored-by: Aman Gupta amangupta052@gmail.com
Co-authored-by: Paul Flynn paul@arkavo.com
Co-authored-by: Claude Opus 4.6 noreply@anthropic.com
Co-authored-by: Oliver Simons osimons@nvidia.com
Co-authored-by: uvos devnull@uvos.xyz
macOS/iOS:
Linux:
Windows:
- Windows x64 (CPU)
- Windows arm64 (CPU)
- Windows x64 (CUDA 12) - CUDA 12.4 DLLs
- Windows x64 (CUDA 13) - CUDA 13.1 DLLs
- Windows x64 (Vulkan)
- Windows x64 (SYCL)
- Windows x64 (HIP)
openEuler: