ggml-org/llama.cpp release b8212


hexagon: add fp16 support for binary ops: add,sub,mul,div (#20139)

  • hexagon: add fp16 support for binary ops: add,sub,mul,div

  • hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)

  • hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad

  • snapdragon: fix readme link


Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>

Downloads available for: macOS/iOS, Linux, Windows, openEuler.
