GitHub release: ggml-org/llama.cpp b7774


ggml : add ggml_build_forward_select (#18550)

  • ggml : add ggml_build_forward_select

  • cuda : adapt CUDA graph compat to new feature

  • vulkan : update logic to handle command buffer closing

  • ggml : check compute for fusion

  • ggml : add comment

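For context on what this release touches: ggml builds its forward compute graphs by expanding from output tensors, and the new `ggml_build_forward_select` presumably adds a way to select which outputs or branches get built (its exact signature is not given in these notes). The sketch below is not from the release itself; it only shows the long-standing `ggml_build_forward_expand` flow that the new entry point builds on, using API calls known to exist in ggml.

```c
/*
 * Minimal sketch (not from the release notes): building and running a
 * forward graph with the long-standing ggml API. The new
 * ggml_build_forward_select is NOT called here because its signature is
 * not shown above; this only illustrates the existing expand-based flow.
 */
#include <stdio.h>
#include "ggml.h"
#include "ggml-cpu.h"   // recent ggml versions declare ggml_graph_compute_with_ctx here

int main(void) {
    // small context with an in-context data arena (no_alloc = false)
    struct ggml_init_params params = {
        /* .mem_size   = */ 16u * 1024 * 1024,
        /* .mem_buffer = */ NULL,
        /* .no_alloc   = */ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // two 4-element f32 inputs
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    for (int i = 0; i < 4; ++i) {
        ((float *) a->data)[i] = (float) i;   // a = {0, 1, 2, 3}
        ((float *) b->data)[i] = 1.0f;        // b = {1, 1, 1, 1}
    }

    // declare the op, then expand the forward graph from its output tensor
    struct ggml_tensor * c  = ggml_add(ctx, a, b);
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // run the graph on the CPU backend
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/ 1);

    for (int i = 0; i < 4; ++i) {
        printf("c[%d] = %.1f\n", i, ((float *) c->data)[i]);
    }

    ggml_free(ctx);
    return 0;
}
```

Backends consume the node list produced by this graph-building step, which is presumably why the CUDA graph compatibility and Vulkan command buffer changes listed above accompany the new graph-building entry point.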
Prebuilt binary packages for this release cover macOS/iOS, Linux, Windows, and openEuler (download links omitted here).
