github ggml-org/llama.cpp b8578


hexagon: dma optimizations (mostly fixing regressions) (#21137)

  • hex-fa: add simple dma cache for Mask

I noticed that we were refetching the same mask rows over and over.
This simple cache avoids those redundant transfers.
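The idea behind the mask cache can be sketched as follows. This is a hypothetical illustration, not the actual hexagon backend code: `mask_row_cache`, `get_row`, and the `memcpy` standing in for the real DMA transfer are all assumptions made for the sketch.

```cpp
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: keep a local copy of each mask row after its
// first fetch, so repeated requests for the same row skip the DMA.
// The memcpy below stands in for the real Hexagon DMA transfer.
struct mask_row_cache {
    size_t row_size;                                    // bytes per mask row
    std::unordered_map<int64_t, std::vector<uint8_t>> rows;
    size_t fetches = 0;                                 // transfers actually issued

    explicit mask_row_cache(size_t rs) : row_size(rs) {}

    const uint8_t * get_row(const uint8_t * mask, int64_t ir) {
        auto it = rows.find(ir);
        if (it != rows.end()) {
            return it->second.data();                   // cache hit: no DMA
        }
        std::vector<uint8_t> buf(row_size);
        std::memcpy(buf.data(), mask + ir * row_size, row_size); // "DMA" fetch
        fetches++;
        return rows.emplace(ir, std::move(buf)).first->second.data();
    }
};
```

With this structure, asking for the same row N times issues only one fetch; the win comes from flash-attention tiles revisiting the same mask rows across query blocks.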

  • hex-dma: unset in-order desc bit which caused significant perf regression

We don't rely on true in-order processing of the DMA descriptors anywhere.
It turns out this mode caused a significant regression of around 3-4 TPS during token generation.
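The fix amounts to building descriptors with the ordering bit cleared. The sketch below is purely illustrative: `dma_desc`, `DESC_ORDER_BIT`, and its bit position are invented names, not the real Hexagon user-DMA descriptor layout.

```cpp
#include <cstdint>

// Hypothetical descriptor flag layout (assumed for illustration only).
// When the order bit is set, the engine must retire descriptors in
// submission order; clearing it lets completions land in any order.
constexpr uint32_t DESC_ORDER_BIT = 1u << 0;   // assumed bit position

struct dma_desc {
    uint32_t flags;
};

inline void desc_set_unordered(dma_desc & d) {
    d.flags &= ~DESC_ORDER_BIT;   // completions may now arrive out of order
}
```

Since nothing in the backend depends on completion order, dropping the constraint gives the DMA engine freedom to reorder, which is where the 3-4 TPS comes back.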

  • hex-rope: update comment to clarify that we don't need in-order DMA completions
