ggml-org/llama.cpp b7945


vulkan: fix GPU deduplication logic. (#19222)

  • vulkan: fix GPU deduplication logic.

As reported in #19221, the
(same UUID, same driver) logic is problematic for Windows + Intel iGPU.

Let's just avoid the filtering for MoltenVK, which is Apple-specific, and
keep the logic the same as before 88d23ad: just dedup based on UUID.

Verified that macOS + 4x Vega still reports 4 GPUs with this version.

  • vulkan: only skip dedup when both drivers are MoltenVK (see the sketch below)
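
The sketch below illustrates the dedup rule described in the commit message, not the actual ggml-vulkan implementation: devices are deduplicated purely by UUID, except that deduplication is skipped when both candidates report the MoltenVK driver, since MoltenVK can expose identical UUIDs for distinct GPUs. The `gpu_info`, `is_duplicate`, and `dedup_gpus` names are hypothetical; only the Vulkan types and the `VK_DRIVER_ID_MOLTENVK` constant come from the Vulkan API.

```cpp
// Hedged sketch of the dedup rule, assuming device info was already queried
// via VkPhysicalDeviceIDProperties (deviceUUID) and
// VkPhysicalDeviceDriverProperties (driverID).
#include <vulkan/vulkan.h>
#include <array>
#include <vector>

struct gpu_info {                               // hypothetical helper struct
    std::array<uint8_t, VK_UUID_SIZE> uuid;     // deviceUUID
    VkDriverId driver_id;                       // driverID
};

// Two entries are duplicates only when their UUIDs match and they are *not*
// both exposed by MoltenVK (which may report the same UUID for distinct GPUs,
// e.g. macOS + 4x Vega).
static bool is_duplicate(const gpu_info & a, const gpu_info & b) {
    const bool both_moltenvk =
        a.driver_id == VK_DRIVER_ID_MOLTENVK &&
        b.driver_id == VK_DRIVER_ID_MOLTENVK;
    if (both_moltenvk) {
        return false;          // skip dedup: keep all MoltenVK devices
    }
    return a.uuid == b.uuid;   // otherwise dedup based on UUID only
}

// Keep the first occurrence of each device, dropping later duplicates.
static std::vector<gpu_info> dedup_gpus(const std::vector<gpu_info> & devices) {
    std::vector<gpu_info> kept;
    for (const auto & dev : devices) {
        bool dup = false;
        for (const auto & k : kept) {
            if (is_duplicate(dev, k)) { dup = true; break; }
        }
        if (!dup) {
            kept.push_back(dev);
        }
    }
    return kept;
}
```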

Prebuilt binaries are attached for macOS/iOS, Linux, Windows, and openEuler.
