github ggml-org/llama.cpp b7543


server : fix crash when seq_rm fails for hybrid/recurrent models (#18391)

  • server : fix crash when seq_rm fails for hybrid/recurrent models

  • server : add allow_processing param to clear_slot
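The crash fix comes down to checking whether the partial cache removal actually succeeded: recurrent and hybrid models cannot drop an arbitrary middle or tail range of a sequence, so the call can fail, and the server must fall back to clearing the whole sequence rather than continuing on an inconsistent cache. A minimal self-contained sketch of that pattern, with stand-in types (`seq_cache` and `shrink_slot` are illustrative names, not the server's actual code; the real API is `llama_memory_seq_rm`, which returns `false` on an unsupported removal):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for a per-sequence token cache.
struct seq_cache {
    bool recurrent;           // recurrent caches cannot drop partial ranges
    std::vector<int> tokens;  // cached tokens for one sequence

    // Remove positions [p0, p1); returns false if unsupported,
    // mirroring llama_memory_seq_rm's failure mode.
    bool seq_rm(int p0, int p1) {
        if (recurrent && !(p0 <= 0 && p1 >= (int) tokens.size())) {
            return false;     // partial removal not supported
        }
        tokens.erase(tokens.begin() + std::max(0, p0),
                     tokens.begin() + std::min<int>(p1, tokens.size()));
        return true;
    }
};

// Sketch of the fix: instead of assuming the removal succeeded (and
// crashing later), clear the whole sequence on failure and tell the
// caller the slot must reprocess its prompt from scratch.
bool shrink_slot(seq_cache & cache, int keep_upto) {
    if (cache.seq_rm(keep_upto, (int) cache.tokens.size())) {
        return true;          // tail removed, prefix kept
    }
    cache.tokens.clear();     // fallback: drop the entire sequence
    return false;             // caller must reprocess
}
```

For an attention-based cache the tail removal succeeds and the prefix is reused; for a recurrent cache the same request now degrades gracefully to a full clear instead of a crash.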
