ggml-org/llama.cpp b7645

mmq.cu: tune mmq/rocblas switching for RDNA (#18537)

  • Patches a performance regression in the MMQ kernels on ROCm

Recovers the performance regression introduced by #17917.

  • Add an n_experts branch mirroring the CDNA path

  • mmq.cu: tune MMQ/WMMA switching for RDNA (see the sketch below)

  • mmq.cu: move the AMD MMQ/WMMA switching behind IS_RDNA3

  • Update ggml/src/ggml-cuda/mmq.cu

Co-authored-by: Jiacheng (Jason) Chen <76919340+jiachengjason@users.noreply.github.com>
Co-authored-by: jiachengjason <jasonchen.jiacheng@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
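
The MMQ/WMMA switching tuned here is essentially a dispatch heuristic: the quantized MMQ kernels win at small activation batch sizes, while the WMMA/rocBLAS path pays off once the batch is large enough to keep the matrix cores busy, and MoE models need the per-expert batch taken into account. Below is a minimal sketch of such a heuristic, assuming a hypothetical `amd_arch` tag in place of ggml's real compute-capability checks and illustrative thresholds rather than the values actually tuned in this PR:

```cpp
#include <cstdint>

// Hypothetical architecture tag standing in for ggml's compute-capability
// macros (the changelog only confirms an IS_RDNA3 gate and a CDNA path).
enum class amd_arch { RDNA1, RDNA2, RDNA3, CDNA };

// Sketch of an MMQ vs. WMMA/rocBLAS dispatch. ne11 is the activation batch
// size (columns of the src1 matrix); n_experts is the MoE expert count.
// The thresholds are illustrative, not the values tuned in #18537.
static bool should_use_mmq(amd_arch arch, int64_t ne11, int64_t n_experts) {
    if (arch == amd_arch::CDNA || arch == amd_arch::RDNA3) {
        // MoE workloads split the batch across experts, so the effective
        // per-expert batch stays small and MMQ tends to win; this mirrors
        // the "n_experts branch like the CDNA path" bullet above.
        if (n_experts > 1) {
            return true;
        }
        // Dense matmuls: hand off to the WMMA/rocBLAS path once the batch
        // is large enough for the matrix cores to outrun MMQ.
        return ne11 < 128;
    }
    // Pre-RDNA3 consumer GPUs have no usable WMMA path here, so the
    // quantized MMQ kernels are always used.
    return true;
}

int main() {
    // Single-token decode (ne11 == 1) on RDNA3 stays on MMQ.
    return should_use_mmq(amd_arch::RDNA3, /*ne11=*/1, /*n_experts=*/1) ? 0 : 1;
}
```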

Downloads: macOS/iOS, Linux, Windows, openEuler
