github ggml-org/llama.cpp b7716

3 months ago

server : add arg for disabling prompt caching (#18776)

  • server : add arg for disabling prompt caching

Disabling prompt caching is useful for clients that are restricted to
sending only OpenAI-compatible requests and want deterministic
responses.

  • address review comments

  • address review comments
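Since the change adds a server-side argument, operators can turn off prompt caching globally instead of relying on a per-request field that OpenAI-compatible clients cannot send. A minimal usage sketch follows; the exact flag name shown is an assumption (the release note does not state it), so check `llama-server --help` in build b7716 or later for the real argument:

```shell
# Hypothetical flag name: the note only says an arg was added (#18776),
# not what it is called. Verify against `llama-server --help`.
./llama-server -m model.gguf --port 8080 --no-prompt-cache

# OpenAI-compatible clients then use the standard endpoint unchanged,
# with no llama.cpp-specific request fields, and still avoid the cache:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "default", "messages": [{"role": "user", "content": "Hello"}]}'
```

The server-side switch matters because llama-server's per-request cache control is not part of the OpenAI request schema, so strictly OpenAI-compatible clients have no way to opt out on their own.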
