ggml-org/llama.cpp — release b8268


ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121)

  • ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

  • ggml-cpu: add rvv repacking for iq4_nl

  • ggml-cpu: add generic impl for iq4_nl gemm/gemv

  • ggml-cpu: add rvv repacking for q8_0

  • ggml-cpu: refactor; add rvv repacking for q4_0, q4_K

  • ggml-cpu: refactor; add rvv repacking for q2_K

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

  • ggml-cpu: refactor rvv repack

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
