ggml-org/llama.cpp b7832

kv-cache : support V-less cache (#19067)

  • kv-cache : support V-less cache

  • cuda : better check for V_is_K_view

  • cuda : improve V_is_K_view check

  • graph : add comments

  • hparams : refactor
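
The headline change allows the KV cache to skip allocating and storing V cells when the V tensor can be obtained as a view of K, which is what the V_is_K_view checks in the CUDA commits above guard. Below is a minimal, self-contained C++ sketch of that kind of aliasing check; the struct and function names are illustrative assumptions, not llama.cpp's actual types or API.

```cpp
// Hypothetical sketch (not llama.cpp's real data structures): deciding
// whether a separate V cache is needed when V may alias K's storage.
#include <cstddef>
#include <cstdio>

struct tensor {
    float *        data;     // backing buffer
    const tensor * view_src; // non-null if this tensor is a view of another
    size_t         offset;   // byte offset into the source buffer
};

// Returns true when v ultimately aliases k's storage, so no V cells
// need to be stored in the cache.
static bool v_is_k_view(const tensor * k, const tensor * v) {
    // walk v's view chain back to its root tensor
    const tensor * root = v;
    while (root->view_src != nullptr) {
        root = root->view_src;
    }
    return root == k || root->data == k->data;
}

int main() {
    float buf[64] = {0};
    tensor k = { buf, nullptr, 0 };
    tensor v = { buf, &k,      0 };  // V is a view into K's buffer

    printf("V-less cache possible: %s\n", v_is_k_view(&k, &v) ? "yes" : "no");
    return 0;
}
```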

Downloads are provided for macOS/iOS, Linux, Windows, and openEuler.
