github ggml-org/llama.cpp b8676


server : handle unsuccessful sink.write in chunked stream provider (#21478)

Check the return value of sink.write() in the chunked content provider
and return false when the write fails, matching cpp-httplib's own
streaming contract. This prevents logging chunks as sent when the sink
rejected them and properly aborts the stream on connection failure.
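The fix boils down to checking the boolean returned by `sink.write()` inside the chunked content provider and returning `false` to cancel the stream when it fails. A minimal sketch of that pattern, using a simplified `Sink` stand-in rather than cpp-httplib's actual `DataSink` type (the stand-in and the `send_chunk` helper are illustrative, not the server's real code):

```cpp
#include <cstdio>
#include <functional>

// Simplified stand-in for cpp-httplib's DataSink (assumption: the real
// type differs; what matters is that write() returns false when the
// client connection has dropped).
struct Sink {
    std::function<bool(const char *, size_t)> write;
};

// Hypothetical helper following the fixed pattern: only count a chunk as
// sent after the write succeeded, and propagate failure so the provider
// can return false and abort the stream.
static bool send_chunk(Sink &sink, const char *data, size_t len, size_t &sent) {
    if (!sink.write(data, len)) {
        // sink rejected the chunk: abort, matching the streaming
        // contract (returning false cancels the chunked response)
        return false;
    }
    sent += len;  // chunk confirmed sent, safe to log/account it now
    return true;
}
```

Before the fix, the equivalent code ignored the return value, so a chunk could be counted as sent even though the sink had already rejected it.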

