github ggml-org/llama.cpp b8144


server : support max_completion_tokens request property (#19831)

"max_tokens" is deprectated in favor of "max_completion_tokens" which
sets the upper bound for reasoning+output token.

Closes: #13700
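A minimal sketch of how a client request might use the new property, assuming a locally running `llama-server` exposing the OpenAI-compatible `/v1/chat/completions` endpoint on port 8080 (the URL, port, and model name are illustrative, not taken from this release):

```python
import json

# OpenAI-compatible chat request body for llama.cpp's server.
payload = {
    "model": "llama",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Explain the KV cache in one sentence."}
    ],
    # New in this release: caps reasoning + output tokens combined.
    "max_completion_tokens": 256,
    # Deprecated equivalent (do not send both):
    # "max_tokens": 256,
}

body = json.dumps(payload).encode("utf-8")
# To send (assumed endpoint):
# urllib.request.urlopen(urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"}))
print(len(body) > 0)
```

Servers that still honor "max_tokens" should continue to work, but new clients are expected to send "max_completion_tokens" instead.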

Binaries: macOS/iOS, Linux, Windows, openEuler
