github ggml-org/llama.cpp b7925


CUDA: use mmvq for mul-mat-id for small batch sizes (#18958)

  • CUDA: use mmvq for mul-mat-id for small batch sizes

  • add mmvq too

  • Fix perf issue on Ampere; use the mmvf mul-mat-id path only on non-NVIDIA GPUs

  • templatize multi_token_path

Prebuilt binaries: macOS/iOS, Linux, Windows, openEuler
