koboldcpp-1.104

calm before the storm edition

  • NEW: Added --smartcache, adapted from @Pento95: a 2-in-1 dynamic caching solution that intelligently creates KV state snapshots automatically (see the example launch command after this list). Read more here
    • This will greatly speed up performance when different contexts are swapped back to back (e.g. hosting on AI Horde or shared instances).
    • Also allows snapshotting when used with an RNN or hybrid model (e.g. Qwen3Next, RWKV), which avoids having to reprocess everything.
    • Reuses the KV save/load states from admin mode. Max number of KV states increased to 6.
  • NEW: Added --autofit flag which utilizes upstream's "automatic GPU fitting (-fit)" behavior from ggml-org#16653. Note that this flag overwrites all your manual layer configs and tensor overrides and is not guaranteed to work. However, it can provide a better automatic fit in some cases. It will not be accurate if you load multiple models (e.g. image gen). See the example launch command after this list.
  • Pipeline parallelism is no longer the default; instead it's now a flag you can enable with --pipelineparallel. This only affects multi-GPU setups, giving faster speed at the cost of memory usage.
  • Key Improvement - Vision Bugfix: A bug in mrope position handling has been fixed, which improves vision models like Qwen3-VL. You should now see much better visual accuracy in some multimodal models compared to earlier koboldcpp versions. If you previously had issues with hallucinated text or numbers, it should be much better now.
  • Increased default gen amount from 768 to 896.
  • Deprecated the obsolete --forceversion flag.
  • Fixed safetensors loading for Z-Image.
  • Fixed the image importer in SDUI.
  • Capped cfg_scale to a max of 3.0 for Z-Image to avoid blurry gens. If you want to override this, set remove_limits to 1 in your payload or inside --sdgendefaults (see the payload sketch after this list).
  • Removed cc7.0 as a CUDA build target; Volta (V100) will fall back to PTX from cc6.1.
  • Tweaked branding in the llama.cpp UI to make it clear it's not llama.cpp.
  • Added indentation to .kcpps configs.
  • Updated Kobold Lite with multiple fixes and improvements.
  • Merged fixes and improvements from upstream.
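
As a rough illustration of the two new flags, a launch might look like the sketch below. This is hypothetical: the model filename is a placeholder, and whether --autofit produces a good fit depends on your hardware.

```
# Placeholder model name. --smartcache enables the dynamic KV snapshot cache;
# --autofit lets the loader pick the GPU allocation automatically (note that
# this overwrites any manual layer configs and tensor overrides, as described above).
./koboldcpp-linux-x64 --model MyModel-Q4_K_M.gguf --smartcache --autofit
```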
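
And a sketch of overriding the Z-Image cfg_scale cap, assuming the A1111-compatible image generation endpoint koboldcpp exposes; values other than remove_limits are illustrative:

```
# remove_limits: 1 lifts the cfg_scale cap of 3.0 described above.
curl -s http://localhost:5001/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at dusk", "cfg_scale": 4.5, "remove_limits": 1}'
```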

Important Notice: The CLBlast backend may be removed soon, as it is very outdated and no longer receives any updates, fixes, or improvements. It can be considered superseded by the Vulkan backend. If you have concerns, please join the discussion here.

Download and run the koboldcpp.exe (Windows) or koboldcpp-linux-x64 (Linux), which is a one-file pyinstaller for NVIDIA GPU users.
If you have an older CPU or older NVIDIA GPU and koboldcpp does not work, try the oldpc version instead (CUDA 11 + AVX1).
If you don't have an NVIDIA GPU, or do not need CUDA, you can use the nocuda version which is smaller.
If you're using AMD, we recommend trying the Vulkan option in the nocuda build for best support.
If you're on a modern macOS device (M-series), you can use the koboldcpp-mac-arm64 macOS binary.
Click here for .gguf conversion and quantization tools

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001
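
For a quick smoke test once the server is up, you can also hit the KoboldAI-compatible text API directly. A minimal sketch, assuming the standard /api/v1/generate endpoint with arbitrary prompt and length values:

```
# Minimal generation request against the local koboldcpp server.
curl -s http://localhost:5001/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Once upon a time", "max_length": 64}'
```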

For more information, be sure to run the program from command line with the --help flag. You can also refer to the readme and the wiki.
