github ggml-org/llama.cpp b7553

llama: fix magic number of 999 for GPU layers (#18266)

  • llama: fix magic number of 999 for GPU layers

  • use strings for -ngl, -ngld

  • encapsulate n_gpu_layers, split_mode (illustrated by the sketch below)

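The theme of the change is replacing a bare "999 layers" convention (passing an arbitrarily large -ngl value to mean "offload everything") with a named sentinel, string-friendly parsing of -ngl/-ngld, and accessors around the raw fields. The following is a minimal, self-contained sketch of that general pattern, not llama.cpp's actual implementation; the struct and function names here are hypothetical.

```cpp
// Sketch only: named sentinel + string parsing + encapsulation of the GPU-layer
// count, instead of comparing against a magic literal like 999 everywhere.
#include <climits>
#include <cstdio>
#include <cstring>
#include <string>

struct gpu_params {                       // hypothetical, not a llama.cpp type
    static constexpr int ALL_LAYERS = INT_MAX;  // sentinel meaning "offload all layers"

    int n_gpu_layers = 0;

    // Accepts either a plain number or a string alias such as "all".
    void set_n_gpu_layers(const std::string & arg) {
        if (arg == "all" || arg == "max") {
            n_gpu_layers = ALL_LAYERS;    // no magic number at the call site
        } else {
            n_gpu_layers = std::stoi(arg);
        }
    }

    bool offload_all() const { return n_gpu_layers == ALL_LAYERS; }
};

int main(int argc, char ** argv) {
    gpu_params params;
    // Toy argument loop: look for "-ngl <value>".
    for (int i = 1; i + 1 < argc; i++) {
        if (std::strcmp(argv[i], "-ngl") == 0) {
            params.set_n_gpu_layers(argv[i + 1]);
        }
    }
    std::printf("n_gpu_layers = %d (offload all: %s)\n",
                params.n_gpu_layers, params.offload_all() ? "yes" : "no");
    return 0;
}
```

The benefit of the encapsulation is that callers ask offload_all() rather than re-implementing the comparison, so the sentinel can change (or become a proper enum/option) without touching every use site.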
Prebuilt binaries are attached to the release for macOS/iOS, Linux, Windows, and openEuler.