## Changes
- Reduce the size of all Linux/macOS portable builds by excluding llama.cpp symlinks (previously dereferenced into full file copies due to Python wheel limitations) and recreating them on first launch (see the sketch below).
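As a rough illustration of how such recreation could work (this is not the project's actual code; the `symlinks.json` manifest name and the launcher hook are assumptions), a build script could record each symlink's target at packaging time, and the launcher could then restore any missing links on first run:

```python
import json
import os
from pathlib import Path

# Hypothetical manifest written at build time: maps symlink path -> target.
# The file name "symlinks.json" is an assumption for this sketch.
MANIFEST = Path(__file__).parent / "symlinks.json"

def restore_symlinks() -> None:
    """Recreate llama.cpp symlinks that were dereferenced during packaging."""
    if not MANIFEST.exists():
        return  # Nothing to restore in this build.
    links = json.loads(MANIFEST.read_text())
    for link, target in links.items():
        link_path = Path(__file__).parent / link
        if link_path.is_symlink():
            continue  # Already restored on a previous launch.
        if link_path.exists():
            link_path.unlink()  # Drop a leftover dereferenced copy.
        os.symlink(target, link_path)

if __name__ == "__main__":
    restore_symlinks()
```

Recreating the links lazily like this keeps the archive small while leaving the on-disk layout equivalent to a regular install after the first run.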
## Backend updates
- Update llama.cpp to https://github.com/ggml-org/llama.cpp/tree/5c8a717128cc98aa9e5b1c44652f5cf458fd426e
- Update ExLlamaV3 to 0.0.18
- Update safetensors to 0.7
- Update triton-windows to 3.5.1.post22
## Portable builds
Below you can find self-contained packages that work with GGUF models (llama.cpp) and require no installation! Just download the right version for your system, unzip, and run.
**Which version to download** (a scripted version of this choice is sketched after the list):
- Windows/Linux:
  - NVIDIA GPU: Use `cuda12.4` builds.
  - AMD/Intel GPU: Use `vulkan` builds.
  - CPU only: Use `cpu` builds.
- Mac:
  - Apple Silicon: Use `macos-arm64`.
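If you script your downloads, the decision above can be automated along these lines. The `pick_build` helper is a hypothetical sketch, and the exact release asset names may differ:

```python
import platform

def pick_build(gpu: str = "none") -> str:
    """Return a portable build flavor for this machine.

    gpu: "nvidia", "amd", "intel", or "none" (CPU only).
    Illustrative only; check the actual release assets for exact names.
    """
    if platform.system() == "Darwin":
        # Apple Silicon Macs report "arm64".
        if platform.machine() == "arm64":
            return "macos-arm64"
        raise ValueError("Intel Macs are not covered in this sketch.")
    # Windows and Linux share the same flavor names.
    if gpu == "nvidia":
        return "cuda12.4"
    if gpu in ("amd", "intel"):
        return "vulkan"
    return "cpu"

print(pick_build(gpu="nvidia"))  # -> "cuda12.4"
```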
**Updating a portable install:**
- Download and unzip the latest version.
- Replace the `user_data` folder in the new version with the one from your existing install. All your settings and models will be carried over (a scripted equivalent is sketched below).
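The second step amounts to a folder swap. A minimal sketch, assuming `old_install` and `new_install` point at the two unzipped versions (placeholder paths, adjust to your setup):

```python
import shutil
from pathlib import Path

# Placeholder paths: point these at your existing install and the
# freshly unzipped new version.
old_install = Path("text-generation-webui-old")
new_install = Path("text-generation-webui-new")

# Remove the default user_data shipped with the new version, then copy
# your existing one over so settings and models carry over.
new_user_data = new_install / "user_data"
if new_user_data.exists():
    shutil.rmtree(new_user_data)
shutil.copytree(old_install / "user_data", new_user_data)
```

Once the copy finishes and the new version launches correctly, the old install folder can be deleted.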