github ggml-org/llama.cpp b7411


Warning

Release Format Update: Linux releases will soon use .tar.gz archives instead of .zip. Please make the necessary changes to your deployment scripts.

metal: use shared buffers on eGPU (#17866)

  • metal: use shared buffers on eGPU

With #15906, I noticed an important regression when using the Metal backend on an eGPU.
This commit restores the previous behavior and adds an option to force its activation.
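The fix turns on the decision of whether the Metal backend allocates shared (host-visible) or private buffers. A minimal sketch of such a decision, assuming a hypothetical helper name and environment variable (the actual llama.cpp identifiers and option name are defined in the PR, not here):

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// Illustrative sketch only: the function and environment-variable names are
// hypothetical, not the real llama.cpp symbols.
static bool metal_use_shared_buffers(bool has_unified_memory) {
    // Hypothetical override: force shared (host-visible) buffers even when
    // the device does not report unified memory, as on a typical eGPU.
    const char *force = getenv("EXAMPLE_METAL_FORCE_SHARED");
    if (force != NULL && strcmp(force, "1") == 0) {
        return true;
    }
    // Otherwise follow what the device reports: Apple Silicon advertises
    // unified memory, while an eGPU generally does not.
    return has_unified_memory;
}
```

The environment-variable override mirrors the "option to force its activation" mentioned above: it lets a user restore shared buffers on hardware where the automatic choice regresses.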

Downloads are provided for macOS/iOS, Linux, Windows, and openEuler.
