ggml-org/llama.cpp b8201

[WebGPU] Fix wait logic for inflight jobs (#20096)

  • Enable tmate debugging for investigating a thread-safety issue

  • Refactor wait and submit to operate on a vector<wgpu::FutureWaitInfo>, and fix wait to delete only the future that is completed (see the sketch after this list).

  • Cleanup

  • Remove clear change and run clang-format

  • Cleanup
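
The gist of the fix can be sketched as follows. This is a minimal illustration, not the actual llama.cpp code: it assumes Dawn's C++ WebGPU bindings (wgpu::Instance::WaitAny and the completed flag on wgpu::FutureWaitInfo), and the helper name wait_on_inflight and its timeout parameter are illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

#include <webgpu/webgpu_cpp.h>

// Wait on all in-flight futures, then erase only the entries the API marked
// as completed; anything still pending stays in the vector for the next call.
// Hypothetical helper, not the actual llama.cpp function.
static void wait_on_inflight(wgpu::Instance & instance,
                             std::vector<wgpu::FutureWaitInfo> & inflight,
                             uint64_t timeout_ns) {
    if (inflight.empty()) {
        return;
    }
    // WaitAny updates the `completed` flag of each wgpu::FutureWaitInfo entry.
    instance.WaitAny(inflight.size(), inflight.data(), timeout_ns);
    inflight.erase(std::remove_if(inflight.begin(), inflight.end(),
                                  [](const wgpu::FutureWaitInfo & info) {
                                      return static_cast<bool>(info.completed);
                                  }),
                   inflight.end());
}
```

Erasing only the completed entries is the point of the fix: per the commit message, the previous wait logic deleted more than the future that had actually completed, which could drop futures for work still in flight.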

Prebuilt binaries are attached to the release for macOS/iOS, Linux, Windows, and openEuler.
