koboldcpp-1.84.1

This is mostly a bugfix release, as 1.83.1 had some issues, but there were too many changes to ship it as another patch release.

  • Added support for using aria2c and wget for model downloading if detected on the system (credits @henk717).
  • It's also now possible to specify multiple URLs when loading multipart models online with --model [url1] [url2]... (CLI only), so KoboldCpp can download every part of a split model (see the sketch after this list).
  • Added automatic recovery in admin mode: if switching to a faulty config fails, it will attempt to roll back to the original known-good config.
  • Fixed MoE experts override not working for Deepseek
  • Fixed multiple loader bugs when using the AutoGuess adapter.
  • Fixed images failing to generate when using the AutoGuess adapter.
  • Removed TTS caching as it was not very good.
  • Updated Kobold Lite with multiple fixes and improvements:
    • Fixed websearch button visibility
    • Improved instruct formatting in classic UI
    • Fixed some LaTeX and markdown edge cases
    • Upped the max length slider to 1024 if the detected context is larger than 4096.
  • Merged fixes and improvements from upstream
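
As an illustration of the multi-URL download mentioned above, here is a minimal sketch of launching KoboldCpp from a Python script with several model-part URLs; the binary name and URLs below are placeholders rather than real files:

```python
import subprocess

# Placeholder URLs for the parts of a split (multipart) GGUF model.
part_urls = [
    "https://example.com/models/mymodel-00001-of-00002.gguf",
    "https://example.com/models/mymodel-00002-of-00002.gguf",
]

# Pass every part URL to --model (CLI only); KoboldCpp downloads each one,
# using aria2c or wget if either is detected on the system.
subprocess.run(["./koboldcpp", "--model", *part_urls], check=True)
```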

Hotfix 1.84.1 - Vulkan IQ1 support and fixed Lite instruct icon display

This build may still have minor issues. If you have problems, please use 1.82.4 for now; I am working on a fix.

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're on a modern Mac with Apple Silicon (M1, M2, M3), you can try the koboldcpp-mac-arm64 MacOS binary.
If you're using AMD, we recommend trying the Vulkan option (available in all releases) first for best support. Alternatively, you can try koboldcpp_rocm from YellowRoseCx's fork.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once loaded, you can connect at the address below (or use the full KoboldAI client):
http://localhost:5001
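
If you'd rather script against the server than use the browser, here is a minimal sketch of a request to the KoboldAI-compatible generation endpoint on the default port; the endpoint path and response shape assume the standard KoboldAI API, so adjust if your setup differs:

```python
import json
import urllib.request

# Minimal sketch: query a running KoboldCpp instance through its
# KoboldAI-compatible text generation endpoint on the default port.
payload = {"prompt": "Once upon a time,", "max_length": 64}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The response carries a "results" list containing the generated text.
print(result["results"][0]["text"])
```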

For more information, be sure to run the program from command line with the --help flag. You can also refer to the readme and the wiki.
