llamacpp-for-kobold-1.0.2
- Added an embedded version of Kobold Lite inside (AGPL Licensed)
- Updated to the new ggml model format, but still maintains support for the old format and the old tokenizer.
- Changed license to AGPL v3. The original GGML library and llama.cpp are still under MIT license in their original repos.
Weights not included.
To use, download, extract and run it (default port is 5001):
llama_for_kobold.py [ggml_quant_model.bin] [port]
and then you can connect like this (or use the full KoboldAI client):
http://localhost:5001
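Besides the browser UI, you can also talk to the server programmatically. The sketch below is a minimal example, assuming the server emulates the KoboldAI text-generation endpoint (`POST /api/v1/generate` with a JSON body) that Kobold Lite uses; the exact endpoint path and response fields may differ in your version.

```python
# Hedged sketch: querying a local KoboldAI-compatible API.
# Assumes POST /api/v1/generate accepts {"prompt": ..., "max_length": ...}
# and the server is running on the default port 5001.
import json
import urllib.request

def build_request(prompt, max_length=80, port=5001):
    """Build an HTTP POST request for the local generation endpoint."""
    payload = {"prompt": prompt, "max_length": max_length}
    return urllib.request.Request(
        f"http://localhost:{port}/api/v1/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Once upon a time,")
# With the server running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["results"][0]["text"])
```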