github ggml-org/llama.cpp b8071


Adjust workaround for ROCWMMA_FATTN/GFX9 to apply only on newer ROCm versions (#19591)

Avoids issues with ROCm 6.4.4.

Closes: #19580
Fixes: 6845f7f ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)")

Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>

