ggml-org/llama.cpp release b7832

kv-cache : support V-less cache (#19067)

  • kv-cache : support V-less cache

  • cuda : better check for V_is_K_view

  • cuda : improve V_is_K_view check

  • graph : add comments

  • hparams : refactor
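
For context on the headline change: a "V-less" cache stores only the K tensor and exposes V as a view into it, which is possible for architectures where V can be derived from the cached K data (e.g. MLA-style attention); the CUDA commits above tighten the check for that aliasing (V_is_K_view) so the kernel can pick the right path. The sketch below is a minimal, hypothetical illustration of the idea, not the actual llama.cpp implementation; all names (KVCacheVless, v_is_k_view, etc.) are invented for this example.

```cpp
// Hypothetical sketch of a V-less KV cache: one backing buffer holds K,
// and V is handed out as a view into that same buffer, so no separate
// V allocation is needed. Not the llama.cpp implementation.
#include <cassert>
#include <cstddef>
#include <vector>

struct KVCacheVless {
    std::vector<float> k;  // single backing buffer for the whole cache
    size_t n_embd_k;       // K row size (per cached token)
    size_t n_embd_v;       // V row size; must fit inside a K row

    KVCacheVless(size_t n_ctx, size_t n_embd_k_, size_t n_embd_v_)
        : k(n_ctx * n_embd_k_), n_embd_k(n_embd_k_), n_embd_v(n_embd_v_) {
        assert(n_embd_v <= n_embd_k);  // V must be derivable from K storage
    }

    float * k_row(size_t i) { return k.data() + i * n_embd_k; }

    // V is not stored separately: it aliases the K buffer. A consumer
    // (e.g. an attention kernel) can detect this and avoid treating
    // V as an independent tensor.
    float * v_row(size_t i) { return k_row(i); }

    // Mirrors the kind of "V is a view of K" check the CUDA commits
    // above refer to; here it is trivially true by construction.
    bool v_is_k_view() const { return true; }
};
```

The practical upside of this layout is memory: since V shares K's storage, the cache footprint for such models is roughly halved compared to keeping separate K and V buffers.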

Prebuilt binaries are available for macOS/iOS, Linux, Windows, and openEuler.