github ggml-org/llama.cpp b7543


server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

  • server : fix crash when seq_rm fails for hybrid/recurrent models

  • server : add allow_processing param to clear_slot

Prebuilt binaries are available for macOS/iOS, Linux, Windows, and openEuler.
