oobabooga/text-generation-webui v3.15


Changes

  • Log an error when a llama-server request exceeds the context size (#7263). Thanks, @mamei16.
  • Make --trust-remote-code immutable from the UI/API for better security.

Bug fixes

  • Fix metadata leaking into branched chats.
  • Fix "continue" missing an initial space in chat-instruct/chat modes.
  • Fix resuming incomplete downloads after HF moved to Xet.
  • Revert exllamav3_hf changes in v3.14 that made it output gibberish.

Backend updates


Portable builds

Below you can find self-contained packages that work with GGUF models (llama.cpp) and require no installation! Just download the right version for your system, unzip, and run.

Which version to download:

  • Windows/Linux:

    • NVIDIA GPU: Use cuda12.4 for newer GPUs or cuda11.7 for older GPUs and systems with older drivers.
    • AMD/Intel GPU: Use vulkan builds.
    • CPU only: Use cpu builds.
  • Mac:

    • Apple Silicon: Use macos-arm64.
    • Intel CPU: Use macos-x86_64.

Updating a portable install:

  1. Download and unzip the latest version.
  2. Replace the user_data folder in the new version with the one from your existing install; all your settings and models will be preserved (see the sketch below for a scripted version of this step).
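If you prefer to script step 2, here is a minimal Python sketch that copies user_data from an old portable install into a freshly unzipped one. The folder names are hypothetical assumptions; adjust them to your actual paths.

```python
# Minimal sketch of step 2: carry user_data from an old portable install into a new one.
# The install folder names below are assumptions; replace them with your real paths.
import shutil
from pathlib import Path

old_install = Path("text-generation-webui-old")  # your existing install (hypothetical name)
new_install = Path("text-generation-webui-new")  # the freshly unzipped release (hypothetical name)

target = new_install / "user_data"
if target.exists():
    shutil.rmtree(target)  # remove the default user_data folder shipped in the zip
shutil.copytree(old_install / "user_data", target)  # copy settings and models over
print(f"Copied user_data into {target}")
```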
