ggml-org/llama.cpp b7903


Remove pipeline cache mutexes (#19195)

  • Remove the mutexes guarding the pipeline caches, since the caches are now per-thread (see the sketch after this list).

  • Add comment

  • Run clang-format

  • Cleanup

  • Run CI again

  • Run CI once more

  • Run clang-format
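To illustrate the idea behind the change: once each thread owns its own pipeline cache, no two threads can ever touch the same map, so the lock that previously serialized lookups can be dropped. The following is a minimal C++ sketch of that pattern, not the actual llama.cpp code; the `pipeline` struct, `pipeline_cache` variable, and `get_pipeline` function are hypothetical names for illustration only.

    #include <string>
    #include <unordered_map>

    struct pipeline { /* compiled pipeline state (placeholder) */ };

    // Hypothetical per-thread cache: thread_local gives every thread its
    // own map, so lookups and inserts need no mutex. Before a change like
    // this, a single shared map would need e.g. a std::lock_guard around
    // every access.
    static thread_local std::unordered_map<std::string, pipeline> pipeline_cache;

    pipeline & get_pipeline(const std::string & key) {
        // No locking needed: pipeline_cache is thread_local, so this
        // thread is the only one that can ever read or write it.
        return pipeline_cache[key];
    }

The trade-off of a per-thread cache is duplicated entries across threads in exchange for lock-free access; that is acceptable when threads are long-lived and cache entries are cheap relative to the cost of contention.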

Prebuilt binaries are published for macOS/iOS, Linux, Windows, and openEuler.
