ggml-org/llama.cpp release b8329


ggml-cpu: add RVV vec dot kernels for quantization types (#18859)

  • ggml-cpu: add rvv quantize_row_q8_K kernel

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

  • ggml-cpu: add rvv vec_dot for iq4_nl, mxfp4, iq2_xxs

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

  • ggml-cpu: add rvv vec_dot for iq4_xs, refactor

  • ggml-cpu: remove ifunc for rvv vec dot

  • ggml-cpu: add vec_dot for iq2_xs, iq3_xxs

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

  • ggml-cpu: refactor quants.c

Co-authored-by: taimur-10x <taimur.ahmad@10xengineers.ai>
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
Co-authored-by: Rehan Qasim <rehanbhatti0317@gmail.com>

Prebuilt binaries for this release are available for macOS/iOS, Linux, Windows, and openEuler.
