github LostRuins/koboldcpp v1.0.8beta
koboldcpp-1.0.8beta

  • Rebranded to koboldcpp (formerly llamacpp-for-kobold). Library file names and references have changed too; please let me know if anything is broken!
  • Added support for the original GPT4ALL.CPP format!
  • Added support for GPT-J formats, including the original 16-bit legacy format as well as the 4-bit version from Pygmalion.cpp.
  • Switched the compiler flag from -O3 to -Ofast. This should increase generation speed even more, but I'm not sure whether anything will break, so please let me know if it does.
  • Changed the default thread count to scale with the physical core count instead of os.cpu_count(). This will generally result in fewer threads being used, but it should be a better default for slower systems. You can override it manually with the --threads parameter (see the sketch after this list).
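
For illustration, here is a minimal Python sketch of how a launcher might derive that default; it assumes the optional psutil package for physical-core detection, and koboldcpp's actual implementation may differ:

```python
# Minimal sketch of the default-threads idea (illustrative only;
# koboldcpp's real detection code may differ). psutil is an assumed
# dependency here, used only to count physical cores.
import os

try:
    import psutil
    physical_cores = psutil.cpu_count(logical=False)  # physical cores only
except ImportError:
    physical_cores = None  # psutil not installed

# Fall back to the logical core count if physical detection is unavailable.
default_threads = physical_cores or os.cpu_count() or 1
print(f"Defaulting to {default_threads} threads (override with --threads)")
```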

To use, download and run koboldcpp.exe.
Alternatively, drag and drop a compatible quantized llamacpp model onto the .exe, or run it and select the model manually in the popup dialog.

Once a model is loaded, you can connect here (or use the full KoboldAI client):
http://localhost:5001
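
If you'd rather query the server programmatically, here is a hedged example using only the Python standard library; it assumes the server exposes the KoboldAI-compatible /api/v1/generate endpoint, and the payload fields shown are illustrative:

```python
# Hedged example: one generation request against the local server,
# assuming a KoboldAI-compatible /api/v1/generate endpoint is available.
import json
import urllib.request

payload = {"prompt": "Once upon a time,", "max_length": 50}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # e.g. {"results": [{"text": "..."}]}
```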
