KoboldCPP-v1.59.1.yr1-ROCm


Upstream Changelog:

  • Added --nocertify mode, which allows you to disable SSL certificate checking on your embedded Horde worker. This can help bypass some SSL certificate errors (see the example invocation after this list).
  • Fixed pre-GGUF models loading with incorrect thread counts. This issue affected the past two versions.
  • Added build target for Old CPU (NoAVX2) Vulkan support.
  • Fixed Cloudflare remotetunnel URLs not displaying on RunPod.
  • Reverted CLBlast back to 1.6.0, pending CNugteren/CLBlast#533 and other correctness fixes.
  • The Smartcontext toggle is now hidden when the ContextShift toggle is on.
  • Various improvements and bugfixes merged from upstream, including Google Gemma support.
  • Bugfixes and updates for Kobold Lite.
  • Changed Makefile build flags, fixed tooltips, and merged IQ3_S support.
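
As an example of the new --nocertify flag, a minimal sketch simply appends it to your usual launch command (yourmodel.gguf is a placeholder, and any Horde worker flags you normally pass would go alongside it):

    python koboldcpp.py --model yourmodel.gguf --nocertify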

To use on Windows, download and run koboldcpp_rocm.exe, which is a one-file PyInstaller build, OR download koboldcpp_rocm_files.zip and run python koboldcpp.py (additional Python pip modules may need to be installed, such as customtkinter and tk or python-tk).
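
If koboldcpp.py stops with a missing-module error, installing the modules listed above with pip usually resolves it (the exact pip command may differ on your system, e.g. pip3):

    pip install customtkinter
    # tkinter itself ships with Python on Windows; on Linux it typically comes
    # from the distro package manager (e.g. the python-tk / python3-tk package)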
To use on Linux, clone the repo and build with make LLAMA_HIPBLAS=1 -j4 (-j4 can be adjusted to match your number of CPU threads for faster build times).
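
Put together, a basic Linux build might look like this (repo URL taken from this project's GitHub page; adjust -j4 as noted above):

    git clone https://github.com/YellowRoseCx/koboldcpp-rocm
    cd koboldcpp-rocm
    make LLAMA_HIPBLAS=1 -j4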

For a full Linux build, make sure you have the OpenBLAS and CLBlast packages installed:
For Arch Linux: install cblas, openblas, and clblast.
For Debian: install libclblast-dev and libopenblas-dev.
Then run make LLAMA_HIPBLAS=1 LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1 -j4
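
As a sketch, that full build on the two distros above could look like this (package names come from the lists above; the sudo/pacman/apt invocations assume a standard setup):

    # Arch Linux
    sudo pacman -S cblas openblas clblast
    # Debian/Ubuntu
    sudo apt install libclblast-dev libopenblas-dev
    # then, from the repo directory, build with all backends enabled
    make LLAMA_HIPBLAS=1 LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1 -j4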

If you're using NVIDIA, you can try koboldcpp.exe from LostRuins' upstream repo.
If you don't need CUDA, you can use koboldcpp_nocuda.exe, which is much smaller, also from LostRuins' repo.

Run it from the command line with the desired launch parameters (see --help), or manually select the model in the GUI.
Once the model is loaded, you can connect with your browser (or use the full KoboldAI client) at:
http://localhost:5001
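
As a quick connectivity check, you can also hit the KoboldAI-compatible HTTP API directly. This curl sketch assumes the default port above and the /api/v1/generate endpoint; the prompt and max_length values are arbitrary examples:

    curl -s http://localhost:5001/api/v1/generate \
      -H "Content-Type: application/json" \
      -d '{"prompt": "Hello, world.", "max_length": 50}'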

For more information, be sure to run the program from command line with the --help flag.
