github ggml-org/llama.cpp b7735

3 months ago

vulkan: Check maxStorageBufferRange in supports_op (#18709)

  • vulkan: Check maxStorageBufferRange in supports_op

  • skip maxStorageBufferRange check when shader64BitIndexing is enabled

Release binaries are provided for macOS/iOS, Linux, Windows, and openEuler.
