github ggml-org/llama.cpp b8221


ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130)

This patch addresses an Internal Compiler Error (segmentation fault)
observed with GCC 15 by replacing the fused intrinsic-plus-cast expression
with a cast on the data first, followed by the intrinsic call. This bypasses
the buggy compiler path while preserving identical instruction selection.

Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that the original
code and this fix both generate the same Power10 prefixed load instruction:
plxv 40, 2(14)

This ensures zero performance regression while unblocking builds on
newer toolchains.

Reproduced on:

  • Alpine Linux + GCC 15.2.0-r2
  • RHEL 9 + GCC 15.1.1 (gcc-toolset-15)

Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>

