github ggml-org/llama.cpp b8121


Improve CUDA graph capture (#19754)

  • Improve CUDA graph capture

Currently, CUDA graphs are eagerly enabled on the first call to ggml_backend_cuda_graph_compute. If the graph properties keep changing (4+ consecutive updates), the graph is permanently disabled. This is suboptimal because:

  • The first call always incurs CUDA graph capture overhead even if the graph is unstable
  • Once permanently disabled, CUDA graphs never re-enable even after the graph stabilizes (e.g., switching from prompt processing to decode)

The new approach delays CUDA graph activation until warmup completes: the same cgraph must be called at least twice with matching properties before CUDA graph capture begins. This avoids wasted capture overhead on volatile graphs and allows graphs to become eligible once they stabilize.
This also fixes issues such as #19708.

  • Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler johannesg@5d6.de

  • Remove EM dashes

  • Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta amangupta052@gmail.com


Co-authored-by: Johannes Gäßler johannesg@5d6.de
Co-authored-by: Aman Gupta amangupta052@gmail.com

