github ggml-org/llama.cpp b7609


cuda : fix copy of large tensors (ggml_nbytes <= INT_MAX assertion) (#18433)

  • ggml-cuda: fixed assertion in ggml_cuda_cpy (#18140)

  • ggml-cuda: changed index/size data types to int64_t

  • ggml-cuda: added asserts on CUDA block counts

  • ggml-cuda: changed the grid-size condition for the y and z dimensions

