oobabooga/text-generation-webui v3.11

12 days ago

Changes

  • Add the Tensor Parallelism option to the ExLlamav3/ExLlamav3_HF loaders through the --enable-tp and --tp-backend options.
  • Set multimodal status during model loading instead of checking on every generation (#7199). Thanks, @altoiddealer.
  • Improve the multimodal API examples slightly.
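The new tensor-parallelism flags are passed at launch. A minimal sketch, assuming a model already placed in your models folder; the model name and the `--tp-backend` value shown here are placeholders, so check `python server.py --help` on your install for the accepted backend choices:

```shell
# Launch the ExLlamav3_HF loader with tensor parallelism enabled across
# the visible GPUs. "my-model" and "native" are illustrative values only.
python server.py --loader ExLlamav3_HF --model my-model --enable-tp --tp-backend native
```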

Bug fixes

  • Make web search functional again.
  • mtmd: Fix a bug when "include past attachments" is unchecked.
  • Fix code blocks having an extra empty line in the UI.

Backend updates


Portable builds

Below you can find self-contained packages that work with GGUF models (llama.cpp) and require no installation! Just download the right version for your system, unzip, and run.

Which version to download:

  • Windows/Linux:

    • NVIDIA GPU: Use cuda12.4 for newer GPUs or cuda11.7 for older GPUs and systems with older drivers.
    • AMD/Intel GPU: Use vulkan builds.
    • CPU only: Use cpu builds.
  • Mac:

    • Apple Silicon: Use macos-arm64.
    • Intel CPU: Use macos-x86_64.

Updating a portable install:

  1. Download and unzip the latest version.
  2. Replace the new version's user_data folder with the one from your existing install. That folder holds all your settings and models, so they carry over.
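The two steps above can be sketched from a shell. This is a self-contained demo, so it first fabricates the two folders; `old_install` and `new_install` are placeholder names for your existing folder and the freshly unzipped one:

```shell
# Sketch of the portable-install update flow. Directory names are placeholders.
set -e
OLD=old_install
NEW=new_install

# (Demo only) fabricate an existing install and a freshly unzipped one,
# so the sketch runs end to end.
mkdir -p "$OLD/user_data" "$NEW/user_data"
echo "my settings" > "$OLD/user_data/settings.yaml"

# Step 2: replace the new install's user_data with your existing one.
rm -rf "$NEW/user_data"
cp -r "$OLD/user_data" "$NEW/user_data"
```

After this, `new_install` contains your previous settings and models and the old folder can be deleted.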
