github ggml-org/llama.cpp b7679


ggml-webgpu: Fix GGML_MEM_ALIGN to 8 for emscripten. (#18628)

  • Fix GGML_MEM_ALIGN to 8 for emscripten.

  • Add a comment explaining the need for GGML_MEM_ALIGN == 8 in 64-bit wasm with emscripten.

Prebuilt binaries are available for macOS/iOS, Linux, Windows, and openEuler.
