github ggml-org/llama.cpp b8147


server: fix query params lost when proxying requests in multi-model router mode (#19854)

  • server: fix query params lost when proxying requests in multi-model router mode

  • server: re-encode query params using httplib::encode_query_component in proxy

Prebuilt binaries: macOS/iOS, Linux, Windows, openEuler
