LostRuins/koboldcpp v1.0.5
llamacpp-for-kobold-1.0.5

  • Merged the upstream fixes for 65B models
  • Clamped the maximum thread count to 4; this actually gives better results, since generation is memory-bottlenecked (see the sketch after this list)
  • Added support for selecting the KV data type, now defaulting to f32 instead of f16
  • Added more default build flags
  • Added a softprompts endpoint
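
To illustrate the thread-clamping change: the sketch below is a hypothetical Python fragment (the names are illustrative, not the project's actual code) showing the kind of cap applied, since llama.cpp-style token generation is memory-bandwidth bound and threads beyond ~4 mostly contend for the same bandwidth.

    import multiprocessing

    # Hypothetical sketch: cap the inference thread count at 4,
    # because generation is memory-bottlenecked and extra threads
    # tend to hurt rather than help.
    n_threads = min(4, multiprocessing.cpu_count())
    print(f"using {n_threads} threads")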

To use, download and run llamacpp_for_kobold.exe.
Alternatively, drag and drop a compatible quantized model for llamacpp onto the .exe, or run it and select the model manually in the popup dialog.

Once the model is loaded, you can connect in your browser (or use the full KoboldAI client) at:
http://localhost:5001
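
If you would rather script against the server, here is a minimal sketch assuming it emulates the KoboldAI-style /api/v1/generate endpoint; the exact route, payload fields, and response shape are assumptions for this early release.

    import json
    import urllib.request

    # Hypothetical request against the assumed KoboldAI-compatible API.
    payload = json.dumps({"prompt": "Once upon a time,", "max_length": 50}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:5001/api/v1/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
        # KoboldAI-style responses nest generations under "results" (assumed here).
        print(result["results"][0]["text"])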
