ggml-org/llama.cpp b8660

ggml-webgpu: move from parameter buffer pool to single buffer with offsets (#21278)

  • Work towards removing bitcast

  • Move rest of existing types over

  • Add timeout back to wait and remove synchronous set_tensor/memset_tensor

  • Move to unpackf16 for wider compatibility

  • Cleanup

  • Remove deadlock condition in free_bufs

  • Start work on removing parameter buffer pools

  • Simplify and optimize further

  • Simplify profile futures

  • Fix stride

  • Try using a single command buffer per batch

  • Formatting
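For context on the headline change: WebGPU lets a bind group reference one large uniform buffer and select a 256-byte-aligned slice per dispatch through dynamic offsets, which is the mechanism that replaces a pool of small per-dispatch parameter buffers. The sketch below shows that pattern combined with encoding a whole batch into a single command buffer. It is not the actual ggml-webgpu code: the names (`ParamSlab`, `slab_push`, `op_t`, `submit_batch`) are hypothetical, and only the `wgpu*` calls are the standard `webgpu.h` C API.

```c
#include <stddef.h>
#include <stdint.h>
#include <webgpu/webgpu.h>

/* 256 is the WebGPU default limit for minUniformBufferOffsetAlignment,
 * so offsets aligned to it are always valid dynamic offsets. */
#define PARAM_ALIGN 256u

/* Hypothetical slab: one GPU buffer (Uniform | CopyDst usage) holding
 * the shader parameters for every dispatch in a batch. */
typedef struct {
    WGPUBuffer buffer;
    uint32_t   used;   /* bump pointer, reset once per batch */
} ParamSlab;

/* Reserve an aligned slice and upload one dispatch's parameters into it.
 * Returns the byte offset later passed as a dynamic bind-group offset.
 * Note: wgpuQueueWriteBuffer requires size to be a multiple of 4. */
static uint32_t slab_push(ParamSlab *slab, WGPUQueue queue,
                          const void *params, uint32_t size) {
    uint32_t off = (slab->used + PARAM_ALIGN - 1) & ~(PARAM_ALIGN - 1);
    wgpuQueueWriteBuffer(queue, slab->buffer, off, params, size);
    slab->used = off + size;
    return off;
}

/* Hypothetical per-op record. The bind group's layout marks the
 * parameter binding with hasDynamicOffset = true, pointing at the slab
 * buffer with a fixed binding size. */
typedef struct {
    WGPUComputePipeline pipeline;
    WGPUBindGroup       bind_group;
    uint32_t            param_offset;      /* returned by slab_push */
    uint32_t            wg_x, wg_y, wg_z;
} op_t;

/* Encode an entire batch into one command buffer, matching the
 * "single command buffer per batch" commit above. */
static void submit_batch(WGPUDevice device, WGPUQueue queue,
                         const op_t *ops, size_t n_ops) {
    WGPUCommandEncoder     enc  = wgpuDeviceCreateCommandEncoder(device, NULL);
    WGPUComputePassEncoder pass = wgpuCommandEncoderBeginComputePass(enc, NULL);
    for (size_t i = 0; i < n_ops; i++) {
        wgpuComputePassEncoderSetPipeline(pass, ops[i].pipeline);
        /* The dynamic offset selects this op's parameter slice; no
         * per-dispatch buffer allocation or pool bookkeeping needed. */
        wgpuComputePassEncoderSetBindGroup(pass, 0, ops[i].bind_group,
                                           1, &ops[i].param_offset);
        wgpuComputePassEncoderDispatchWorkgroups(pass, ops[i].wg_x,
                                                 ops[i].wg_y, ops[i].wg_z);
    }
    wgpuComputePassEncoderEnd(pass);
    WGPUCommandBuffer cmd = wgpuCommandEncoderFinish(enc, NULL);
    wgpuQueueSubmit(queue, 1, &cmd);
    wgpuCommandBufferRelease(cmd);
    wgpuComputePassEncoderRelease(pass);
    wgpuCommandEncoderRelease(enc);
}
```

Compared with a buffer pool, there is nothing to recycle or check back in per dispatch, and the whole batch reaches the GPU in a single wgpuQueueSubmit.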

Prebuilt binaries are provided for macOS/iOS, Linux, Windows, and openEuler.
