ggml-org/llama.cpp b7815


ggml-cpu: aarch64: q5_K repack gemm and gemv (and generic) implementations (i8mm) (#18860)

  • Boilerplate for q5_Kx8 REPACK on ARM and fallback

Signed-off-by: Alberto Cabrera alberto.cabrera@liquid.ai

  • Implements make_block_q5_Kx8 by extending make_block_q4_Kx8


  • q5_K repack gemm and gemv generics

  • Gemm and gemv ARM implementations (i8mm)

  • Improved qh manipulation, following the non-repack vec_dot implementation

  • Full unroll

  • Apply Q5_K gemv vand/vshl optimizations to gemm. Improve comments.


  • Fix wrong fallback definitions of Q5_K


  • Fixed comments; reverted unnecessary formatting


  • Fixed typo in generic definitions

  • Replace AND + shift with shift-insert. Better operation interleaving.

  • Vectorize + unroll the block scales

  • Apply gemm optimizations to gemv

  • Improve bias calculation

