ggml-org/llama.cpp release b7557


cmake: Added more x86_64 CPU backends when building with GGML_CPU_ALL_VARIANTS=On (#18186)

  • minor: Consolidated #include <immintrin.h> under ggml-cpu-impl.h

  • cmake: Added more x86-64 CPU backends when building with GGML_CPU_ALL_VARIANTS=On

  • ivybridge
  • piledriver
  • cannonlake
  • cascadelake
  • cooperlake
  • zen4

Resolves: #17966
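As a sketch of how the multi-variant build is enabled: `GGML_CPU_ALL_VARIANTS` is documented to be used together with `GGML_BACKEND_DL`, so that each microarchitecture variant (ivybridge, zen4, ...) is built as a dynamically loadable CPU backend and the best match for the host CPU is selected at run time. A minimal configure invocation, assuming a checkout of llama.cpp and a CMake toolchain, might look like:

```shell
# Build every supported x86-64 CPU variant as a loadable backend.
# GGML_BACKEND_DL=ON is required for GGML_CPU_ALL_VARIANTS=ON,
# since each variant is shipped as a separate shared library.
cmake -B build \
    -DGGML_BACKEND_DL=ON \
    -DGGML_CPU_ALL_VARIANTS=ON
cmake --build build --config Release
```

At startup, llama.cpp probes the host CPU's features and loads the most capable variant it can run, so a single binary distribution covers CPUs from Ivy Bridge up through Zen 4 and Cooper Lake.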

Release binaries are available for macOS/iOS, Linux, Windows, and openEuler.
