github ggml-org/llama.cpp b7730

one month ago

mmap: add Haiku support by skipping RLIMIT_MEMLOCK check (#18819)

Haiku OS does not support RLIMIT_MEMLOCK, similar to visionOS/tvOS.
The resource limit check is now skipped on Haiku so that mlock support
compiles and works there.

Tested on Haiku with NVIDIA RTX 3080 Ti using Vulkan backend.
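The pattern described above can be sketched as follows. This is an illustrative example, not the actual llama.cpp code: the function name `try_mlock` and the error messages are hypothetical, and only the Haiku guard is shown (the same idea would apply to visionOS/tvOS). The point is that the `getrlimit(RLIMIT_MEMLOCK, ...)` diagnostic is compiled out on platforms where the constant does not exist, instead of breaking the build.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#ifndef __HAIKU__
#include <sys/resource.h>   // RLIMIT_MEMLOCK is unavailable on Haiku
#endif

// Hedged sketch: lock a buffer into RAM with mlock(), and on failure
// report the RLIMIT_MEMLOCK soft limit only where that limit exists.
static int try_mlock(const void * addr, size_t size) {
    if (mlock(addr, size) == 0) {
        return 0;   // pages locked into RAM
    }
#ifdef __HAIKU__
    // No RLIMIT_MEMLOCK on Haiku: report only the raw mlock error.
    fprintf(stderr, "mlock failed: %s\n", strerror(errno));
#else
    // Elsewhere, include the soft lock limit in the diagnostic, since
    // raising it (e.g. via ulimit -l) is the usual fix.
    struct rlimit lim;
    if (getrlimit(RLIMIT_MEMLOCK, &lim) == 0) {
        fprintf(stderr, "mlock failed: %s (RLIMIT_MEMLOCK soft limit: %lld bytes)\n",
                strerror(errno), (long long) lim.rlim_cur);
    } else {
        fprintf(stderr, "mlock failed: %s\n", strerror(errno));
    }
#endif
    return -1;
}
```

Guarding both the `#include <sys/resource.h>` and the `getrlimit()` call keeps the common mlock path identical on all platforms while removing the compile error on systems that never defined the limit.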

Release artifacts: macOS/iOS, Linux, Windows, openEuler.
