github ggml-org/llama.cpp b8163


ggml-virtgpu: improve the reliability of the code (#19846)

  • ggml-virtgpu-backend: validate the consistency of the received objects

This patch adds consistency checks in the
ggml-virtgpu-backend (running on the host side) to ensure that the
data received from the guest is consistent (valid pointers, valid
sizes and offsets).
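Such consistency checks usually come down to overflow-safe bounds validation on guest-supplied values. A minimal sketch, with hypothetical names (this is not the actual ggml-virtgpu-backend API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: check that a guest-supplied (offset, size) pair
 * stays inside a host-side buffer of buf_size bytes. Written so that
 * offset + size is never computed directly, avoiding integer overflow. */
static bool apir_range_is_valid(size_t offset, size_t size, size_t buf_size) {
    if (offset > buf_size) {
        return false; /* offset already points past the end of the buffer */
    }
    if (size > buf_size - offset) {
        return false; /* size spills past the end (overflow-safe form) */
    }
    return true;
}
```

The overflow-safe comparison matters because a malicious or buggy guest could send `offset` and `size` values whose sum wraps around.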

  • ggml-virtgpu-backend: add fallback/skips for optional ggml backend methods
  1. bck->iface.synchronize(bck)
  2. buft->iface.get_alloc_size(buft, op)
  3. buft->iface.get_max_size(buft)

These three methods are optional in the GGML interface. get_max_size
was already properly defaulted, but backend synchronize and buft
get_alloc_size would have segfaulted the backend if not implemented.
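Since optional GGML backend methods are function pointers that a backend may leave NULL, the guard amounts to checking the pointer before dispatching. A hedged sketch with a simplified, mock struct layout (not the real ggml-backend types):

```c
#include <stddef.h>

/* Mock of a backend whose optional methods are nullable function pointers. */
typedef struct mock_backend {
    void (*synchronize)(struct mock_backend * bck); /* optional: may be NULL */
    int sync_calls; /* counter for illustration only */
} mock_backend;

static void mock_sync(struct mock_backend * bck) {
    bck->sync_calls++;
}

/* Safe wrapper: forward the call only when the backend implements it;
 * an unimplemented synchronize is treated as "nothing pending", not a crash. */
static void backend_synchronize_safe(mock_backend * bck) {
    if (bck->synchronize != NULL) {
        bck->synchronize(bck);
    }
}
```

Calling the wrapper on a backend with `synchronize == NULL` is then a harmless no-op instead of a segfault.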

  • ggml-virtgpu-backend: fix log format missing argument

  • ggml-virtgpu-backend: improve the abort message

  • ggml-virtgpu-backend: more safety checks

  • ggml-virtgpu-backend: new error code

  • ggml-virtgpu-backend: initialize all the error codes

  • ggml-virtgpu: add a missing comment generated by the code generator

  • ggml-virtgpu: add the '[virtgpu]' prefix to the device/buffer names

  • ggml-virtgpu: apir_device_buffer_from_ptr: improve the error message

  • ggml-virtgpu: shared: make it match the latest api_remoting.h of the Virglrenderer APIR (still unmerged)

  • ggml-virtgpu: update the code generator to have dispatch_command_name in a host/guest shared file

  • ggml-virtgpu: REMOTE_CALL: fail if the backend returns an error

  • docs/backend/VirtGPU.md: indicate that the RAM+VRAM size is limited to 64 GB with libkrun

  • ggml-virtgpu: turn off clang-format header ordering for some of the files

Compilation breaks when ordered alphabetically.
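The usual way to exempt an include block from clang-format's alphabetical reordering is a pair of guard comments. A sketch with hypothetical header names:

```c
/* clang-format off -- keep this include order; reordering breaks the build */
#include "remote-backend.h"
#include "shared/apir_backend.h"
/* clang-format on */
```

Everything between the two comments is left untouched by clang-format while the rest of the file is still formatted normally.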

  • ggml-virtgpu: clang-format

  • ggml-virtgpu/backend/shared/api_remoting: better comments for the APIR return codes

Release binaries are provided for macOS/iOS, Linux, Windows, and openEuler.
