github ggml-org/llama.cpp b7679


ggml-webgpu: Fix GGML_MEM_ALIGN to 8 for emscripten. (#18628)

  • Fix GGML_MEM_ALIGN to 8 for emscripten.

  • Add a comment explaining why GGML_MEM_ALIGN must be 8 in 64-bit wasm builds with emscripten

Builds are available for macOS/iOS, Linux, Windows, and openEuler.
