GitHub release: ggml-org/llama.cpp b7588


Work around broken IntelSYCLConfig.cmake in Intel oneAPI 2025.x (#18345)

  • cmake: work around broken IntelSYCLConfig.cmake in oneAPI 2025.x

  • [AI] sycl: auto-detect and skip incompatible IntelSYCL package

Automatically detect compiler versions with incompatible IntelSYCL
CMake configuration files and fall back to manual SYCL flags instead
of requiring users to set options manually.

Fixes build failures with oneAPI 2025.x where IntelSYCLConfig.cmake
has SYCL_FEATURE_TEST_EXTRACT invocation errors.

  • refactor: improve SYCL provider handling and error messages in CMake configuration

  • refactor: enhance SYCL provider validation and error handling in CMake configuration

  • ggml-sycl: wrap find_package(IntelSYCL) to prevent build crashes
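The commits above describe wrapping `find_package(IntelSYCL)` so a broken `IntelSYCLConfig.cmake` cannot abort the configure step. A minimal sketch of that approach is below; it is not the actual `ggml-sycl` CMake code, and the version cutoff and `SYCL_FLAGS` variable are assumptions for illustration:

```cmake
# Hedged sketch: skip the IntelSYCL package for compiler versions whose
# shipped IntelSYCLConfig.cmake is known to fail (oneAPI 2025.x, where
# the SYCL_FEATURE_TEST_EXTRACT invocation errors out), and fall back
# to passing the SYCL flags manually.
if (CMAKE_CXX_COMPILER_ID STREQUAL "IntelLLVM"
    AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "2025.0")
    message(STATUS "IntelSYCL package known broken for this compiler; using manual SYCL flags")
    set(SYCL_FLAGS -fsycl)          # hypothetical fallback variable
else()
    # QUIET keeps a missing/broken package from printing an error;
    # check the result instead of letting find_package hard-fail.
    find_package(IntelSYCL QUIET)
    if (NOT IntelSYCL_FOUND)
        set(SYCL_FLAGS -fsycl)
    endif()
endif()
```

The key design point, per the commit messages, is that the fallback is automatic: users no longer have to set CMake options by hand when their oneAPI installation ships the broken config file.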

Prebuilt binary assets are attached for macOS/iOS, Linux, Windows, and openEuler.