github ggml-org/llama.cpp b7413


Warning

Release Format Update: Linux releases will soon use .tar.gz archives instead of .zip. Please make the necessary changes to your deployment scripts.
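For deployment scripts, the change amounts to swapping the `unzip` step for a `tar` extraction. A minimal sketch, where the `extract_release` helper and the archive names are hypothetical:

```shell
# Sketch: updating a deployment script from .zip to .tar.gz extraction.
# The function name and asset names are hypothetical — substitute the
# real release file from the assets list below.

extract_release() {
    archive=$1   # e.g. llama-bXXXX-bin-ubuntu-x64.tar.gz
    dest=$2
    mkdir -p "$dest"
    # was: unzip "$archive" -d "$dest"
    tar -xzf "$archive" -C "$dest"
}
```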

Details

kv-cache: Fix state restore fragmented cache (#17982)

  • kv-cache : fix state restore with fragmented cache (#17527)

Change find_slot to allow non-contiguous allocation during state restore. Fixes the 'failed to find available cells in kv cache' error when restoring state into a fragmented cache.

  • tests : update logic

  • cleanup: tightened the state_read_meta signature, added an is_contiguous case

  • fix: loose ends from the state_read_meta argument reorder


Co-authored-by: Georgi Gerganov ggerganov@gmail.com

Downloads are available for macOS/iOS, Linux, Windows, and openEuler.
