github ggml-org/llama.cpp b7983


CANN: implement quantized MUL_MAT_ID for MoE models (#19228)

Implement the ggml_cann_mul_mat_id_quant function to support quantized matrix
multiplication for Mixture of Experts (MoE) architectures on the CANN backend.

Key features:

  • Support Q4_0 and Q8_0 quantized weight formats
  • Use IndexSelect to dynamically route expert-specific weights based on indices
  • Leverage WeightQuantBatchMatmulV2 for efficient quantized computation
  • Handle automatic F16 type conversion for hardware compatibility
  • Support both per-expert and broadcast input modes
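The routing described above can be illustrated with a minimal, backend-agnostic sketch: an index tensor selects one expert weight matrix per token (the role IndexSelect plays on CANN), and only that expert's matrix multiplies the token's input. This is an illustrative model of MUL_MAT_ID semantics in plain C++, not the actual CANN code; quantization and the WeightQuantBatchMatmulV2 call are omitted here.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of MUL_MAT_ID semantics (hypothetical helper, not the ggml API):
// weights[n_expert][n_out][n_in] (row-major), inputs[n_token][n_in],
// ids[n_token] picks the expert for each token -> outputs[n_token][n_out].
std::vector<std::vector<float>> mul_mat_id(
        const std::vector<std::vector<std::vector<float>>> & weights,
        const std::vector<std::vector<float>> & inputs,
        const std::vector<int> & ids) {
    std::vector<std::vector<float>> out(inputs.size());
    for (size_t t = 0; t < inputs.size(); ++t) {
        // "IndexSelect": gather the weight matrix of the routed expert.
        const auto & w = weights[(size_t) ids[t]];
        out[t].assign(w.size(), 0.0f);
        // Plain matrix-vector product with the selected expert's weights.
        for (size_t i = 0; i < w.size(); ++i) {
            for (size_t j = 0; j < inputs[t].size(); ++j) {
                out[t][i] += w[i][j] * inputs[t][j];
            }
        }
    }
    return out;
}
```

In the real backend the gathered weights stay in their quantized form and the per-token products are dispatched to the quantized matmul kernel instead of this naive loop.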

Implementation details:

  • Extract expert weights and scales using CANN IndexSelect operation
  • Process each batch and expert combination independently
  • Create proper tensor views with correct stride for matmul operations
  • Automatic input/output type casting to/from F16 as needed
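Since the details above hinge on extracting per-block scales alongside the quantized weights, a short sketch of the Q8_0 layout helps: in ggml, each block of 32 weights stores one scale plus 32 int8 values, and dequantization is a per-block multiply. This sketch keeps the scale as a float for simplicity, whereas ggml stores it as F16; it illustrates the format, not the CANN kernel itself.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Q8_0-style block: one scale per 32 weights, x ~= d * qs[i].
constexpr int QK8_0 = 32;

struct BlockQ8_0 {
    float  d;           // per-block scale (F16 in ggml; float here)
    int8_t qs[QK8_0];   // quantized values
};

BlockQ8_0 quantize_q8_0(const float * x) {
    float amax = 0.0f;
    for (int i = 0; i < QK8_0; ++i) {
        amax = std::max(amax, std::fabs(x[i]));
    }
    BlockQ8_0 b;
    b.d = amax / 127.0f;  // map the largest magnitude onto the int8 range
    const float id = b.d != 0.0f ? 1.0f / b.d : 0.0f;
    for (int i = 0; i < QK8_0; ++i) {
        b.qs[i] = (int8_t) std::lroundf(x[i] * id);
    }
    return b;
}

void dequantize_q8_0(const BlockQ8_0 & b, float * y) {
    for (int i = 0; i < QK8_0; ++i) {
        y[i] = b.d * b.qs[i];  // reconstruct from scale and int8 value
    }
}
```

On the CANN path, the scales and quantized values gathered per expert are what WeightQuantBatchMatmulV2 consumes, so the extraction step must keep the two arrays aligned per block.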

Testing: All test cases passed for supported types (F32, F16, Q4_0, Q8_0).
