ggml-org/llama.cpp b7898

ggml-hexagon: flash-attention and reduce-sum optimizations (#19141)

  • wip

  • ggml-hexagon: add vectorized dot product function for FP32 and FP16 accumulation (see the first sketch after this list)

  • ggml-hexagon: optimize dot product functions for FP16 and FP32 with new vectorized implementations

  • wip

  • ggml-hexagon: optimize hvx_vec_dump_f32_n and hvx_vec_reduce_sum_qf32x2 functions for improved performance

  • ggml-hexagon: refactor dot product functions to use a common loading function for improved readability

  • optimize vector dot product functions to use unified reduction for improved performance

  • hexagon: optimize reduce-sum for v75+ (see the second sketch after this list)

  • hexagon: always keep row_sums in sf/fp32

  • ggml-hexagon: enhance directory checks for HEXAGON_SDK_ROOT and HEXAGON_TOOLS_ROOT

  • fix compile error after rebase
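
The commits above circle around two ideas. The first is the dot-product change: inputs may be FP16, but the running sum is widened to FP32 ("always keep row_sums in sf/fp32"). Below is a minimal scalar sketch of that accumulation pattern, assuming a Clang-style `__fp16` type as provided by the Hexagon toolchain; `dot_f16_acc_f32` is an illustrative name, not an actual ggml-hexagon symbol:

```c
#include <stddef.h>

// Scalar sketch: multiply fp16 inputs but accumulate in fp32 so long rows
// don't lose precision. The real kernels do this across 128-byte HVX
// vectors; only the accumulation idea is shown here.
static float dot_f16_acc_f32(const __fp16 *x, const __fp16 *y, size_t n) {
    float sum = 0.0f;  // fp32 accumulator ("keep row_sums in sf/fp32")
    for (size_t i = 0; i < n; i++) {
        sum += (float) x[i] * (float) y[i];  // widen each product before adding
    }
    return sum;
}
```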
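
The second idea is the reduce-sum optimization: instead of collapsing a buffer with a linear chain of dependent adds, fold it pairwise in log2(n) passes, which is the same shape an HVX reduction takes with full-vector shuffles. A minimal scalar sketch of that fold (a hypothetical `reduce_sum_pairwise`, not the shipped `hvx_vec_reduce_sum_*` routines):

```c
#include <stddef.h>

// Pairwise (tree) reduction: repeatedly add the upper half of the buffer
// onto the lower half until one element remains. Takes log2(n) passes
// instead of n-1 dependent adds. Mutates v; n must be a power of two.
static float reduce_sum_pairwise(float *v, size_t n) {
    for (size_t half = n / 2; half > 0; half /= 2) {
        for (size_t i = 0; i < half; i++) {
            v[i] += v[i + half];  // fold upper half into lower half
        }
    }
    return v[0];
}
```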


Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>

