github ggml-org/llama.cpp b8787


ggml-webgpu: Update register tiling matmul to use f32 accumulation (#21644)

  • Update register tiling matmul to use f32 accumulation

  • Fix profiling code

  • Fix register tiling matmul for Chrome; I'm blaming Dawn

  • Update batch tuning value for iOS

  • Compile fix

  • Fix use of the new load function

Binaries are provided for macOS/iOS, Linux, Windows, and openEuler.
