github ggml-org/llama.cpp b8188


ggml-webgpu: Support non-contiguous src0 and overlapping src0/src1 in binary ops (#19850)

  • ggml-webgpu: Add binary op support for overlapping and non-contiguous sources.

  • Add newline to binary.wgsl

  • Add a binary-op test for overlapping src tensors to test_bin_bcast.

  • Remove unnecessary newline.

