github ggml-org/llama.cpp b7543


server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

  • server : fix crash when seq_rm fails for hybrid/recurrent models

  • server : add allow_processing param to clear_slot
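The fix above follows a common defensive pattern: recurrent and hybrid memory backends cannot remove an arbitrary token range from a sequence the way a standard KV cache can, so a failed removal must be detected and handled (by falling back to dropping the whole sequence and re-processing the prompt) rather than left to crash the server. The sketch below illustrates that pattern only; the class and function names are invented for the example and are not llama.cpp's actual API (in llama.cpp the removal call is the boolean-returning `llama_memory_seq_rm`).

```python
# Illustrative sketch of the fallback pattern (NOT llama.cpp's API).

class RecurrentMemory:
    """Toy stand-in: a recurrent cache can only drop a sequence entirely,
    never a partial token range."""
    def __init__(self):
        self.seqs = {0: list(range(10))}

    def seq_rm(self, seq_id, p0, p1):
        # Partial removal is unsupported for recurrent state: report
        # failure instead of corrupting the cache.
        if p0 != 0 or p1 != -1:
            return False
        self.seqs.pop(seq_id, None)
        return True

def clear_slot(mem, seq_id):
    # The defensive pattern: if removing a sub-range fails, wipe the
    # whole sequence instead of crashing; the caller then knows it must
    # re-process the prompt from scratch.
    if not mem.seq_rm(seq_id, p0=5, p1=-1):   # try to keep a prefix
        mem.seq_rm(seq_id, p0=0, p1=-1)       # fall back to full removal
        return False                          # prefix was not kept
    return True

mem = RecurrentMemory()
kept_prefix = clear_slot(mem, 0)
print(kept_prefix, 0 in mem.seqs)  # False False: fell back to a full clear
```

The key design point is that the removal call reports failure through its return value instead of raising, so the server can degrade gracefully for backends that don't support partial eviction.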

Prebuilt binaries for this release target macOS/iOS, Linux, Windows, and openEuler.
