github LostRuins/koboldcpp v1.32.3
koboldcpp-1.32.3



sadge

  • Ported the optimized K-Quant CUDA kernels to OpenCL! This speeds up K-Quants generation speed by about 15% with CL (Special thanks: @0cc4m)
  • Implemented basic GPU offloading for MPT, GPT-2, GPT-J and GPT-NeoX via OpenCL! It still keeps a copy of the weights in RAM, but generation speed for these models should now be much faster! (50% speedup for GPT-J, and even WizardCoder is now 30% faster for me.)
  • Implemented scratch buffers for the latest versions of all non-llama architectures except RWKV (MPT, GPT-2, NeoX, GPT-J). BLAS memory usage should be much lower on average, and larger BLAS batch sizes will be usable on these models.
  • Merged GPT-Tokenizer improvements for non-llama models, and added support for Starcoder's special tokens. Coherence for non-llama models should be improved.
  • Updated Lite, pulled updates from upstream, and fixed various minor bugs.

1.32.1 Hotfix:

  • A number of bugs were fixed. These include memory allocation errors with OpenBLAS, and errors recognizing the new MPT-30B model correctly.

1.32.2 Hotfix:

  • Solves an issue with the MPT-30B vocab having missing words due to a problem with wide-string tokenization.
  • Solves an issue with LLAMA WizardLM-30B running out of memory near 2048 context at larger k-quants.

1.32.3 Hotfix:

  • Reverted the wstring changes, as they negatively affected model coherency.

To use, download and run koboldcpp.exe, which is a one-file pyinstaller build.
Alternatively, drag and drop a compatible ggml model onto the .exe, or run it and manually select the model in the popup dialog.

Then, once the model is loaded, you can connect like this (or use the full KoboldAI client):
http://localhost:5001
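Besides the browser UI, a running instance can also be queried programmatically. Below is a minimal sketch using only the Python standard library, assuming the default port 5001 and the KoboldAI-compatible `/api/v1/generate` endpoint; the `max_length` value and the helper names are illustrative, not from this release note.

```python
import json
from urllib import request

# Default koboldcpp address; adjust host/port if you launched it differently.
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt, max_length=80):
    """Build a KoboldAI-style generation request body as UTF-8 JSON bytes."""
    return json.dumps({"prompt": prompt, "max_length": max_length}).encode("utf-8")

def generate(prompt, max_length=80):
    """POST the prompt to a running koboldcpp server and return the generated text."""
    req = request.Request(
        API_URL,
        data=build_payload(prompt, max_length),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # The API returns generations under results[0].text.
    return body["results"][0]["text"]

# Usage (requires a koboldcpp server already running with a model loaded):
#   print(generate("Once upon a time,"))
```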

For more information, be sure to run the program with the --help flag.
