koboldcpp-1.11

  • Now has GPT-NeoX / Pythia / StableLM support!
  • Added upstream LoRA file support for llama; use the --lora parameter.
  • Added limited fast-forwarding capabilities for RWKV; context can be reused if it is completely unmodified.
  • Kobold Lite now supports using an additional custom stopping sequence, edit it in the Memory panel.
  • Updated Kobold Lite, and pulled llama improvements from upstream.
  • Improved OSX and Linux build support - it now automatically builds all libraries with the requested flags, and you can select which one to use at runtime. Example: run make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1 to build both the OpenBLAS and CLBlast libraries on your platform, then select CLBlast with --useclblast at runtime (see the example commands below).
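As a rough sketch of that workflow on Linux or OSX (the model filename and the OpenCL platform/device indices are placeholders; check --help for the exact arguments --useclblast expects):

  # build both the OpenBLAS and CLBlast backends
  make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1
  # pick the CLBlast backend at runtime
  python koboldcpp.py yourmodel.ggml --useclblast 0 0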

To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
Alternatively, drag and drop a compatible ggml model on top of the .exe, or run it and manually select the model in the popup dialog.

Once loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001
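If you prefer to script against it rather than use the browser UI, a minimal sketch of a request to the KoboldAI-compatible API might look like this (the endpoint and field names assume the standard KoboldAI generate API; the prompt and max_length values are just illustrative):

  curl -X POST http://localhost:5001/api/v1/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Once upon a time", "max_length": 50}'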

For more information, be sure to run the program with the --help flag.

Alternative Options:
A non-AVX2 version is now included in the same .exe file; enable it with the --noavx2 flag.
Big context too slow? Try the --smartcontext flag to reduce prompt processing frequency.
Run on your GPU using CLBlast with the --useclblast flag for a speedup.
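For example, these options can be combined on the command line (the model filename and the OpenCL platform/device indices are placeholders):

  # CPU without AVX2, with smartcontext to reduce prompt reprocessing
  koboldcpp.exe yourmodel.ggml --noavx2 --smartcontext
  # GPU speedup via CLBlast (platform 0, device 0)
  koboldcpp.exe yourmodel.ggml --useclblast 0 0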

Disclaimer: This version has Cloudflare Insights in the Kobold Lite UI, which was subsequently removed in v1.17
