ggml-org/llama.cpp b8842

server : speculative checkpointing (#19493)

  • server : speculative decoding using checkpoints

  • server : fix draft check with checkpoints

  • server : rename spec vars

  • server : log levels

  • server : refactored spec logic to speculative.cpp

  • server : renamed spec checkpoints option

  • server : fix spec checkpoints, logging

  • speculative : checkpoints with draft model, logging

  • server : n_tokens_cur and create_checkpoint in draft

  • server : fix server_speculative_callback (slot.id)

  • spec : fix ngram-map/begin idx_last_check

  • spec : init ckpt (begin() wasn't called)

  • chore: update webui build output

  • server : restore sampler in spec checkpoint and clear mem

  • cont : avoid --spec-use-checkpoints argument

  • cont : remove server_prompt_checkpoint_with_size

  • spec : rename (leave_draft_state)

  • cont : clean-up

  • cont : do not ignore partial drafts even if they are short

  • cont : spec callback owned by session

  • cont : simplify

  • cont : avoid empty speculative session

  • cont : simplify

  • cont : simplify

  • cont : enable mtmd speculative decoding

  • cont : keep the spec sampler alive

  • cont : simplify

  • cont : fix nullptr deref + draft checkpoints

  • cont : remove common_speculative_accept_response

  • cont : remove callback

  • cont : simplify

  • cont : minor

  • cont : simplify

  • cont : fix accepted number

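The commits above all serve one mechanism: the server's speculative-decoding session keeps a checkpoint of the last position verified by the target model (together with sampler state), so that when a draft is rejected, fully or partially, generation resumes from the checkpoint instead of re-evaluating the context. Below is a minimal, self-contained C++ sketch of that loop; the toy models, the `Checkpoint` struct, and every other name in it are illustrative assumptions, not the server's actual code or the llama.cpp API.

```cpp
// Illustrative sketch of speculative decoding with checkpoints.
// All names here (toy_draft, toy_target, Checkpoint, ...) are hypothetical.
#include <cstddef>
#include <cstdio>
#include <vector>

using Token = int;

// Toy stand-ins for the two models: each predicts the next token from the
// current context. The draft is "cheap" but disagrees with the target on
// every token whose value is a multiple of 5.
static Token toy_target(const std::vector<Token> & ctx) {
    return ctx.back() + 1;            // target: strictly increasing tokens
}
static Token toy_draft(const std::vector<Token> & ctx) {
    Token t = ctx.back() + 1;
    return (t % 5 == 0) ? t + 7 : t;  // draft: periodically wrong
}

// A checkpoint records how much of the sequence has been verified, so a
// rejected draft resumes from here instead of re-evaluating the prompt.
struct Checkpoint {
    size_t n_verified; // number of tokens confirmed by the target model
};

int main() {
    const int    K        = 4;  // draft length per speculative step
    const size_t N_TOTAL  = 24; // tokens to generate in total

    std::vector<Token> seq = { 0 };   // prompt
    Checkpoint ckpt = { seq.size() }; // everything so far is verified

    while (seq.size() < N_TOTAL) {
        // 1) Draft K tokens speculatively from the verified context.
        std::vector<Token> draft = seq;
        for (int i = 0; i < K; ++i) {
            draft.push_back(toy_draft(draft));
        }

        // 2) Verify the draft with the target model; keep the longest
        //    agreeing prefix. A short partial draft is still useful:
        //    every accepted token is a target step saved.
        size_t n_accept = 0;
        std::vector<Token> verify = seq;
        for (size_t i = ckpt.n_verified; i < draft.size(); ++i) {
            const Token want = toy_target(verify);
            if (draft[i] != want) {
                verify.push_back(want); // take the target's correction
                break;
            }
            verify.push_back(want);
            ++n_accept;
        }

        // 3) Advance the checkpoint past everything the target confirmed,
        //    so the next draft starts from verified state.
        seq = verify;
        ckpt.n_verified = seq.size();

        std::printf("accepted %zu/%d draft tokens, total %zu\n",
                    n_accept, K, seq.size());
    }
    return 0;
}
```

Note that, per the commit messages, the real server also restores the sampler when rolling back to a checkpoint ("server : restore sampler in spec checkpoint and clear mem") and keeps short partial drafts ("cont : do not ignore partial drafts even if they are short"); this sketch reduces checkpoint state to a single verified-token count.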

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

Prebuilt binaries are attached for macOS/iOS, Linux, Android, Windows, and openEuler.
