koboldcpp-1.10
- Now has RWKV support without needing PyTorch, tokenizers, or other external libraries!
- Try RWKV-v4-169m here: https://huggingface.co/concedo/rwkv-v4-169m-ggml/tree/main
- Now allows directly launching the browser with the `--launch` parameter. You can also combine it with other flags, e.g. `--stream --launch`.
- Updated Kobold Lite, and pulled llama improvements from upstream.
- API now reports the KoboldCpp version number via a new endpoint, `/api/extra/version` (see the example below).
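
For instance, you can query it against a running instance like so (a quick sketch; this assumes the default port, and the exact shape of the JSON response may vary by version):

```
curl http://localhost:5001/api/extra/version
```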
To use, download and run koboldcpp.exe, which is a one-file pyinstaller build.
Alternatively, drag and drop a compatible ggml model on top of the .exe, or run it and manually select the model in the popup dialog.
Once the model has loaded, you can connect like this (or use the full KoboldAI client): http://localhost:5001
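
If you prefer to test from the command line instead of the browser, something like this should work. This is a rough sketch only: it assumes the KoboldAI-compatible `/api/v1/generate` endpoint, which these notes don't cover, and whose request fields may differ across versions:

```
# Hypothetical prompt and parameters; adjust to taste.
curl -X POST http://localhost:5001/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Once upon a time,", "max_length": 50}'
```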
For more information, be sure to run the program with the `--help` flag.
Alternative Options:
- Non-AVX2 version is now included in the same .exe file; enable it with the `--noavx2` flag
- Big context too slow? Try the `--smartcontext` flag to reduce prompt processing frequency
- Run on your GPU using CLBlast with the `--useclblast` flag for a speedup
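
Putting a few of these together, a typical invocation might look like the following. This is a sketch only: `model.bin` is a placeholder filename, and whether your build accepts the model as a positional argument is best confirmed with `--help`:

```
# Placeholder model name; use any compatible ggml model file.
# --smartcontext reduces prompt reprocessing frequency,
# --stream enables streamed output, and --launch opens the browser.
koboldcpp.exe model.bin --smartcontext --stream --launch
```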