github LostRuins/koboldcpp v1.78
koboldcpp-1.78

  • NEW: Added support for Flux and Stable Diffusion 3.5 models: Image generation has been updated with new architecture support (thanks to stable-diffusion.cpp) plus additional enhancements. You can use fp16 or fp8 safetensors models, or GGUF models. Supports all-in-one models (with bundled T5XXL, Clip-L/G, and VAE) or loading the components individually.
  • Debug mode prints penalties for XTC
  • Added a new flag, --nofastforward, which forces full prompt reprocessing on every request. This can give more repeatable/reliable/consistent results in some cases.
  • CLBlast support is still retained, but has been further downgraded to "compatibility mode" and is no longer recommended (use Vulkan instead). CLBlast GPU offload must now maintain a duplicate copy of the layers in RAM as well, as it now piggybacks off the CPU backend.
  • Added the common identity provider endpoint /.well-known/serviceinfo (Haidra-Org/AI-Horde#466, PygmalionAI/aphrodite-engine#807, theroyallab/tabbyAPI#232)
  • Reverted some changes that reduced speed in HIPBLAS.
  • Fixed a bug where malformed logprobs JSON was output when logits were -Infinity
  • Updated Kobold Lite, multiple fixes and improvements
    • Added support for custom CSS styles
    • Added support for generating larger images (select BigSquare in image gen settings)
    • Fixed some streaming issues when connecting to Tabby backend
    • Better world info length limiting (capped at 50% of max context before appending to memory)
    • Added support for Clip Skip for local image generation.
  • Merged fixes and improvements from upstream
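As an illustration, the launch options above can be assembled programmatically before starting the server. This is a minimal sketch, not part of the release: the executable path, model filenames, and the --sdmodel flag name are assumptions to be verified against --help; --nofastforward and --port are real flags.

```python
# Sketch: building a koboldcpp launch command (hypothetical paths;
# confirm flag names with ./koboldcpp --help before relying on them).
import subprocess

def build_launch_args(exe="./koboldcpp", model="model.gguf",
                      sd_model=None, no_fastforward=False, port=5001):
    """Assemble the argv list for launching koboldcpp."""
    args = [exe, "--model", model, "--port", str(port)]
    if sd_model:
        # Assumed flag for loading an image-generation model
        args += ["--sdmodel", sd_model]
    if no_fastforward:
        # Forces full prompt reprocessing on every request (new in 1.78)
        args.append("--nofastforward")
    return args

if __name__ == "__main__":
    # Launches the server; blocks until it exits.
    subprocess.run(build_launch_args(sd_model="flux.safetensors",
                                     no_fastforward=True))
```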

To use, download and run koboldcpp.exe, which is a one-file PyInstaller build.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller.
If you have an Nvidia GPU but an old CPU on which koboldcpp.exe does not work, try koboldcpp_oldcpu.exe.
If you have a newer Nvidia GPU, you can use the CUDA 12 version, koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary instead (not an exe).
If you're on a modern Apple silicon Mac (M1, M2, M3), you can try the koboldcpp-mac-arm64 macOS binary.
If you're using AMD, you can try koboldcpp_rocm from YellowRoseCx's fork here

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect at http://localhost:5001 (or use the full KoboldAI client).
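Besides the browser UI, the server exposes a KoboldAI-compatible HTTP API. The sketch below assumes a server already running on the default port and the standard /api/v1/generate endpoint; the payload fields shown are the basic ones, and anything further should be checked against the server's API documentation.

```python
# Minimal client sketch for a running koboldcpp instance
# (assumes default port 5001 and the KoboldAI /api/v1/generate endpoint).
import json
import urllib.request

def build_payload(prompt, max_length=80):
    """JSON body for POST /api/v1/generate."""
    return {"prompt": prompt, "max_length": max_length}

def generate(prompt, base_url="http://localhost:5001"):
    """Send a completion request and return the generated text."""
    req = urllib.request.Request(
        base_url + "/api/v1/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

if __name__ == "__main__":
    print(generate("Once upon a time"))
```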

For more information, be sure to run the program from command line with the --help flag.
