github ggml-org/llama.cpp b8459

latest releases: b8461, b8460
5 hours ago

ggml-cpu: add always_inline to tinyBLAS_PPC accumulator saves (#20791)

Explicitly mark save_acc and add_save_Acc with always_inline
in tinyBLAS_PPC. This ensures the compiler keeps MMA accumulator
disassembly within the kernel's register context, preventing
unnecessary stack spills.

Signed-off-by: Shalini Salomi Bodapati &lt;Shalini.Salomi.Bodapati@ibm.com&gt;

macOS/iOS:

Linux:

Windows:

openEuler:
