ggml-org/llama.cpp — release b7256


Warning

Release Format Update: Linux releases will soon ship as .tar.gz archives instead of .zip. Update your deployment scripts accordingly.

CUDA: generalized (mma) FA, add Volta support (#17505)

  • CUDA: generalized (mma) FA, add Volta support

  • use struct for MMA FA kernel config


Co-authored-by: Aman Gupta
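The second bullet describes consolidating the MMA FlashAttention kernel's launch parameters into a single struct rather than passing them individually. A minimal sketch of that pattern follows; all names (`fattn_mma_config`, `choose_fattn_mma_config`, the fields) are hypothetical illustrations, not llama.cpp's actual API:

```cpp
#include <cassert>

// Hypothetical sketch: grouping the FlashAttention MMA kernel's
// configuration into one struct instead of loose parameters.
// Names and values are illustrative, not taken from llama.cpp.
struct fattn_mma_config {
    int  cols_per_block; // query columns processed per thread block
    int  nwarps;         // warps per thread block
    int  kq_stride;      // K/Q tile stride along the sequence dimension
    bool use_cp_async;   // whether to use asynchronous copies
};

// Pick a config from the CUDA compute capability (cc, e.g. 700 = Volta).
// Volta lacks cp.async, which requires Ampere (cc >= 800), so the
// generalized kernel must fall back to synchronous copies there.
inline fattn_mma_config choose_fattn_mma_config(int cc) {
    fattn_mma_config cfg{};
    cfg.cols_per_block = 32;
    cfg.nwarps         = 4;
    cfg.kq_stride      = 64;
    cfg.use_cp_async   = cc >= 800; // cp.async needs Ampere or newer
    return cfg;
}
```

Centralizing the knobs in one struct makes it straightforward to add an architecture like Volta: only the selection function changes, not every kernel launch site.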

Prebuilt release binaries are provided for macOS/iOS, Linux, and Windows (asset lists omitted here).
