ggml-org/llama.cpp release b7649

ggml : optimize cuda ssm_scan using warp-level reduction (#18505)

  • ggml : optimize cuda ssm_scan using warp-level reduction

  • ggml : apply code review suggestions (style, const, constexpr)

  • ggml : add TODO regarding stride consistency
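
The commits above optimize the CUDA ssm_scan kernel with a warp-level reduction, which generally means combining partial results across the 32 threads of a warp using shuffle intrinsics. As a rough sketch of the general technique only (this is not the actual ggml kernel from #18505, and all names below are hypothetical), a warp-level sum reduction in CUDA looks like this:

```cuda
// Minimal sketch of a warp-level sum reduction with shuffle intrinsics.
// Illustrative only: this is not the ggml ssm_scan kernel from #18505.
#include <cstdio>
#include <cuda_runtime.h>

// Sum `val` across the 32 lanes of a warp using an XOR (butterfly)
// shuffle pattern; every lane ends up holding the full warp sum, with
// no shared memory or __syncthreads() required.
__device__ float warp_reduce_sum(float val) {
    for (int offset = 16; offset > 0; offset >>= 1) {
        val += __shfl_xor_sync(0xFFFFFFFF, val, offset);
    }
    return val;
}

// Each warp sums its own 32 input elements; lane 0 writes the result.
__global__ void warp_sum_kernel(const float * x, float * out, int n) {
    const int idx  = blockIdx.x * blockDim.x + threadIdx.x;
    const int lane = threadIdx.x % 32;
    const int warp = idx / 32;

    const float v   = (idx < n) ? x[idx] : 0.0f;
    const float sum = warp_reduce_sum(v);

    if (lane == 0) {
        out[warp] = sum;
    }
}

int main() {
    const int n = 64; // two warps' worth of data
    float hx[n], hout[2];
    for (int i = 0; i < n; ++i) hx[i] = 1.0f;

    float *dx, *dout;
    cudaMalloc(&dx,   n * sizeof(float));
    cudaMalloc(&dout, 2 * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);

    warp_sum_kernel<<<1, 64>>>(dx, dout, n);
    cudaMemcpy(hout, dout, 2 * sizeof(float), cudaMemcpyDeviceToHost);

    printf("warp sums: %.1f %.1f\n", hout[0], hout[1]); // expect 32.0 32.0

    cudaFree(dx);
    cudaFree(dout);
    return 0;
}
```

A reduction structured this way avoids shared-memory traffic and block-wide synchronization for sums that fit within a single warp, which is the usual motivation for this kind of optimization.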

Builds: macOS/iOS, Linux, Windows, openEuler.
