koboldcpp-1.104
calm before the storm edition
- NEW: Added `--smartcache`, adapted from @Pento95: This is a 2-in-1 dynamic caching solution that intelligently creates KV state snapshots automatically. Read more here.
- This will greatly speed up performance when different contexts are swapped back to back (e.g. hosting on AI Horde or shared instances).
- Also allows snapshotting when used with an RNN or Hybrid model (e.g. Qwen3Next, RWKV), which avoids having to reprocess everything.
- Reuses the KV save/load states from admin mode. Max number of KV states increased to 6.
- NEW: Added `--autofit` flag, which utilizes upstream's "automatic GPU fitting (-fit)" behavior from ggml-org#16653. Note that this flag overwrites all your manual layer configs and tensor overrides and is not guaranteed to work. However, it can provide a better automatic fit in some cases. It will not be accurate if you load multiple models, e.g. image gen.
- Pipeline parallelism is no longer the default; instead, it's now a flag you can enable with `--pipelineparallel`. Only affects multi-GPU setups: faster speed at the cost of memory usage.
- Key Improvement - Vision Bugfix: A bug in mrope position handling has been fixed, which improves vision models like Qwen3-VL. You should now see much better visual accuracy in some multimodal models compared to earlier koboldcpp versions. If you previously had issues with hallucinated text or numbers, it should be much better now.
- Increased default gen amount from 768 to 896.
- Deprecated the obsolete `--forceversion` flag.
- Fixed safetensors loading for Z-Image
- Fixed image importer in SDUI
- Capped cfg_scale to a max of 3.0 for Z-Image to avoid blurry gens. If you want to override this, set `remove_limits` to `1` in your payload or inside `--sdgendefaults` (see the sketch below).
- Removed cc7.0 as a CUDA build target; Volta (V100) will fall back to PTX from cc6.1
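A minimal sketch of the `remove_limits` override mentioned above, assuming a Z-Image model is loaded and KoboldCpp is serving its A1111-compatible txt2img route at the default port; aside from `cfg_scale` and `remove_limits`, the payload fields are the usual txt2img ones:

```python
import requests

# Hedged illustration: lift the Z-Image cfg_scale cap for a single request.
payload = {
    "prompt": "a lighthouse at dusk, photorealistic",
    "cfg_scale": 4.5,     # above the 3.0 cap enforced for Z-Image by default
    "remove_limits": 1,   # per the notes: lifts the cap for this request
}
r = requests.post("http://localhost:5001/sdapi/v1/txt2img", json=payload)
print(r.json()["images"][0][:64], "...")  # base64-encoded image data
```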
- Tweaked branding in llama.cpp UI to make it clear it's not llama.cpp
- Added indentation to .kcpps configs
- Updated Kobold Lite, multiple fixes and improvements
- Merged fixes and improvements from upstream
- GLM4.6V and GLM4.6V Flash are now supported. You can get the model and the mmproj here.
- If you want to test out GLM ASR Nano, I've made quants here; it works best with short audio clips. For longer audio, please stick to Whisper.
koboldcpp-1.103
- NEW: Added support for Flux2 and Z-Image Turbo! Another big thanks to @leejet for the sd.cpp implementation and @wbruna for the assistance with testing and merging.
- To obtain models for Z-Image (Most recommended, lightweight):
- Get the Z-Image Image model here
- Get the Z-Image VAE here, which is the same VAE as Flux.1.
- Get the Z-Image text encoder here (load this as Clip 1)
- Alternative: Load this template to download all 3 automatically
- To obtain models for Flux2 (not recommended: this model is huge, so I will link the q2k. Remember to enable CPU offload; running anything larger requires a very powerful GPU):
- NEW: Mistral and Ministral 3 model support has been merged from upstream.
- Improved "Assistant Continue" in llama.cpp UI mode, now can be used to continue partial turns.
- Added prefill support to chat completions if you have /lcpp in your URL (/lcpp/v1/chat/completions). The regular chat completions endpoint is meant to mimic OpenAI and does not do this. Point your frontend to the URL that best fits your use case; we'd like feedback on which one you prefer and whether the /lcpp behavior would break an existing use case.
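A hedged sketch of the prefill behavior described above: a trailing assistant message sent to the `/lcpp`-prefixed route is treated as a partial turn for the model to continue (any payload shape beyond the standard chat completions fields is an assumption):

```python
import requests

# Sketch: assistant prefill via the /lcpp-prefixed chat completions route.
payload = {
    "messages": [
        {"role": "user", "content": "Write a haiku about rain."},
        # The trailing assistant message acts as a prefill to continue;
        # the plain /v1/chat/completions route (OpenAI-style) does not
        # continue partial turns like this.
        {"role": "assistant", "content": "Soft drops on the roof,"},
    ],
    "max_tokens": 60,
}
r = requests.post("http://localhost:5001/lcpp/v1/chat/completions", json=payload)
print(r.json()["choices"][0]["message"]["content"])
```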
- Minor tool calling fix to avoid passing base64 media strings into the tool call.
- Tweaked resizing behavior of the launcher UI.
- Added a secondary terminal UI to view the console logging (only for Linux), can be used even when not launched from CLI. Launch this auxiliary terminal from the Extras tab.
- AutoGuess Template fixes for GPT-OSS and Kimi
- Fixed a bug with `--showgui` mode being saved into some configs
- Updated Kobold Lite, multiple fixes and improvements
- Merged fixes and improvements from upstream
koboldcpp-1.102
- New: Now bundles the llama.cpp UI into KoboldCpp, as an extra option for those who prefer it. Access it at http://localhost:5001/lcpp
- The llama.cpp UI is designed strongly for assistant use-cases and provides a ChatGPT like interface, with support for importing documents like .pdf files. It can be accessed in parallel to the usual KoboldAI Lite UI (which is recommended for roleplay/story writing) and does not take up any additional resources while not in use.
- New: Massive universal tool calling improvement from @Rose22, with the new format KoboldCpp is now even better at calling tools and using multiple tools in sequence correctly. Works automatically with all tool calling capable frontends (OpenWebUI / SillyTavern etc) in chat completions mode and may work on models that normally do not support tool calling (in the correct format).
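As a rough illustration of what a frontend sends in chat completions mode, here is a standard OpenAI-style tools payload; the `get_weather` tool is hypothetical and only for illustration:

```python
import requests

# Sketch: OpenAI-style tool calling against KoboldCpp's chat completions API.
payload = {
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
r = requests.post("http://localhost:5001/v1/chat/completions", json=payload)
# If the model decides to call the tool, the response carries tool_calls.
print(r.json()["choices"][0]["message"].get("tool_calls"))
```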
- New: Added support for jinja2 templates via `/v1/chat/completions`, for those who have been asking for it. There are 3 modes:
- Current Default: Uses KoboldCpp ChatAdapter templates and the KoboldCpp universal toolcalling module (current behavior, most recommended).
- Using `--jinja`: Uses the jinja2 template from the GGUF in chat completions mode for normal messages, and uses the KoboldCpp universal toolcalling module. Use this only if you love jinja. There are GGUF models on Huggingface which will explicitly mention that `--jinja` must be used to get normal results; this does not apply to KoboldCpp, as our regular modes cover these cases.
- Using `--jinja_tools`: Uses the jinja2 template from the GGUF in chat completions mode for all messages and tools. Not recommended in general. In this mode, the model and frontend are responsible for compatibility.
- Synced and updated Image Generation to latest stable-diffusion.cpp, big thanks to @wbruna. Please report any issues you encounter.
- Updated the Google Colab notebook with easier default selectable presets, thanks @henk717
- Allow GUI launcher window to be resized slightly larger horizontally, in case some text gets cut off.
- Fixed a divide by zero error with audio projectors
- Added Vulkan support for Whisper.
- Case-insensitive filename search when selecting chat completion adapters
- Fixed an old bug that caused mirostat to swap parameters. To get the same result as before, swap your values for `tau` and `eta` (see the sketch below).
- Added a debug command `--testmemory` to check what values auto GPU detection retrieves (not needed for most)
- Now serves KoboldAI Lite UI gzipped to browsers that can support it, for faster UI loading.
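For the mirostat fix above, a minimal sketch against the generate API; the field names follow KoboldCpp's usual payload, and the values shown are just common defaults:

```python
import requests

# Sketch: mirostat sampling with tau/eta mapped correctly after the fix.
payload = {
    "prompt": "Once upon a time",
    "max_length": 120,
    "mirostat": 2,        # mirostat v2
    "mirostat_tau": 5.0,  # target surprise; was applied as eta before the fix
    "mirostat_eta": 0.1,  # learning rate; was applied as tau before the fix
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```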
- Added sampler support for smoothing curve
- Updated Kobold Lite, multiple fixes and improvements
- Web Link-sharing now defaults to dpaste.com as dpaste.org is shut down
- Added option to save and load custom scenarios in a Scenario Library (like stories, but they do not contain most settings)
- Allow single-turn deletion and editing in classic theme instruct mode (click on the icon)
- Better turn chunking and repacking after editing a message
- Merged new model support, fixes and improvements from upstream
Hotfix 1.102.2 - Try to fix some issues with flash attention, fixed media attachments in jinja mode
Hotfix 1.102.3 - Merged Qwen3Next support. Note that you need to use batch size 512 or less.
koboldcpp-1.101
- Support for Qwen3-VL is merged. For a quick test, get the Qwen3-VL-2B-Instruct model here and the mmproj here. Larger versions exist, but this will work well enough for simple tasks.
- Added Qwen Image and Qwen Image Edit. Support is now officially available for Qwen Image generation models. These have much better prompt adherence than SDXL or even Flux. Here's how to set up Qwen Image Edit:
- Get the Qwen Image Edit 2509 model here and load it as the image gen model
- Get the Qwen Image VAE and load it as VAE
- Get Qwen2.5-VL-7B-Instruct and load it as Clip-1
- Get Qwen2.5-VL-7B mmproj and load it as Clip-2
- That's basically it! You can now generate images normally in Kobold or any connected frontend.
- You can do image editing using the SDUI (http://localhost:5001/sdui) by uploading a source Reference Image and asking the AI to make changes. Alternatively, providing no reference image allows normal txt2img generation. (See the API sketch after this list.)
- To use the non-edit version of Qwen Image, you can use these models instead
- For a quick setup, you can use this .kcppt launcher template by @henk717
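For the reference-image editing flow mentioned above, a hedged sketch using the A1111-style img2img route; whether this exact route and field set applies to Qwen Image Edit is an assumption, and `source.png` is a placeholder file:

```python
import base64
import requests

# Sketch: API equivalent of SDUI's reference-image editing.
with open("source.png", "rb") as f:  # placeholder reference image
    source_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "replace the sky with a dramatic sunset",
    "init_images": [source_b64],   # the reference image to edit
    "denoising_strength": 0.7,     # how strongly to deviate from the source
}
r = requests.post("http://localhost:5001/sdapi/v1/img2img", json=payload)
with open("edited.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```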
- Added aliases for the OpenAI compatible endpoints without the /v1/ prefix.
- Supports using multiple `--overridekv` values, split by commas.
- Renamed `--blasbatchsize` to just `--batchsize` (old name will still work)
- Made the GPU layer count preview in the GUI more accurate, no more +2 extra layers.
- Added experimental support for fractional scaling in the GUI launcher for Wayland on GNOME. You're still recommended to use KDE or disable fractional scaling for better results.
- Image generation precision fixes and fallbacks. SDUI also now supports copy with right click on the image preview.
- Added selection for image generation scheduler
- Added support for logprobs streaming in the OpenAI chat completions API (sent at the end of the stream); see the sketch below
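A sketch of consuming the streamed logprobs described above; since they are sent at the end, the client just watches the tail of the SSE stream (the exact chunk shape is assumed to follow the OpenAI streaming format):

```python
import json
import requests

# Sketch: stream a chat completion and pick up the logprobs at the end.
payload = {
    "messages": [{"role": "user", "content": "Say hi."}],
    "logprobs": True,
    "stream": True,
}
with requests.post("http://localhost:5001/v1/chat/completions",
                   json=payload, stream=True) as r:
    for line in r.iter_lines():
        if not line.startswith(b"data: ") or line == b"data: [DONE]":
            continue
        chunk = json.loads(line[len(b"data: "):])
        for choice in chunk.get("choices", []):
            if choice.get("logprobs"):  # arrives with the final chunk(s)
                print(choice["logprobs"])
```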
- Added a VITS API server compatibility endpoint
- PyInstaller upgraded from 5.11 to 5.12 to fix a crashing bug
- Added Horde worker Job stats by @xzuyn
- Updated Kobold Lite, multiple fixes and improvements
- New: Added branching support! You can now create ST style "branches" in the same story, allowing you to explore multiple alternate possibilities without requiring multiple save files. You can create and delete branches at any point in your story and swap between them at will.
- Better inline markdown and code rendering
- Better turn history segmenting after leaving edit mode, also improved AutoRole turn packing
- Improve trim sentences behavior, improve autoscroll behavior, improve mobile detection
- Added ccv3 tavern card support
- Aborted gens will now request logprobs if enabled
- Merged new model support, fixes and improvements from upstream, including some Vulkan speedups from occam
- NOTE: Qwen3Next support is NOT merged yet. It is still undergoing development upstream, follow it here: ggml-org#16095
koboldcpp-1.100
- NEW: WAN Video Generation has been added to KoboldCpp! You can now generate short videos in KoboldCpp using the WAN model. Special thanks to @leejet for the sd.cpp implementation, and @wbruna for help merging and QoL fixes.
- Note: WAN requires a LOT of VRAM to run. If you run out of memory, try generating fewer frames and using a lower resolution. Especially on Vulkan, the VAE buffer size may be too large; use `--sdvaecpu` to run the VAE on CPU instead. For comparison, 30 frames (2 seconds) of a 384x576 video will still require about 16GB VRAM even with VAE on CPU and CPU offloading enabled. You can also generate a single frame, in which case it will behave like a normal image generation model.
- Obtain the WAN2.2 14B rapid mega AIO model here. This is the most versatile option and can do both T2V and I2V. I do not recommend using the 1.3B WAN2.1 or the 5B WAN2.2; they both produce rather poor results. If you really don't care about quality, you can use the small 1.3B from here.
- Next, you will need the correct VAE and UMT5-XXL; note that some WAN models use different ones, so if you're bringing your own, do check it. Reference links are here.
- Load them all via the GUI launcher or by using `--sdvae`, `--sdmodel` and `--sdt5xxl`
- Launch KoboldCpp and open SDUI at http://localhost:5001/sdui. I recommend starting with something small, like 15 frames of a 384x384 video with 20 steps. Be prepared to wait a few minutes. The video will be rendered and saved to SDUI when done!
- It's recommended to use `--sdoffloadcpu` and `--sdvaecpu` if you don't have enough VRAM. The VAE buffer can really be huge.
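Putting the flags above together, a minimal launch sketch; the executable name and model file names are placeholders for your own paths:

```python
import subprocess

# Sketch: launch KoboldCpp for WAN video generation with low-VRAM flags.
subprocess.run([
    "./koboldcpp",                                         # placeholder path
    "--sdmodel", "wan2.2-14b-rapid-mega-aio.safetensors",  # placeholder file
    "--sdvae", "wan-vae.safetensors",                      # placeholder file
    "--sdt5xxl", "umt5-xxl.safetensors",                   # placeholder file
    "--sdvaecpu",      # decode the VAE on CPU to shrink the huge VAE buffer
    "--sdoffloadcpu",  # offload idle image-gen weights to RAM
])
```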
- Added additional toggle flags for image generation:
- `--sdoffloadcpu` - Allows image generation weights to be dynamically loaded/unloaded to RAM when not in use, e.g. during VAE decoding.
- `--sdvaecpu` - Performs VAE decoding on CPU using RAM instead.
- `--sdclipgpu` - Performs CLIP/T5 decoding on GPU instead (new default is CPU)
- Updated StableUI to support animations/videos. If you want to perform I2V (Image-To-Video), you can do so in the txt2img panel.
- Renamed `--sdclipl` to `--sdclip1`, and `--sdclipg` to `--sdclip2`. These flags are now used whenever there is a vision encoder to be used (e.g. WAN's clip_vision if applicable).
- Disable TAESD if not applicable.
- Moved all `.embd` resource files into a separate directory for improved organization. Also extracted image generation vocabs into their own files.
- Moved the `lowvram` CUDA option into a new flag `--lowvram` (same as -nkvo), which can be used in both CUDA and Vulkan to avoid offloading the KV. Note: This is slow and not generally recommended.
- Fixed Kimi template, added Granite 4 template.
- Enabled building for CUDA13 in the CMake; however, it's untested and no binaries will be provided. Also fixed Vulkan noext compiles.
- Fixed q4_0 repacking incoherence on CPU only, which started in v1.98.
- Fixed FastForwarding issues due to misidentified hybrid/rnn models, which should not happen anymore.
- Added `--sdgendefaults` to allow setting some default image generation parameters.
- On admin config reload, reset nonexistent fields in the config to default values instead of keeping the old value.
- Updated Kobold Lite, multiple fixes and improvements
- Set default filenames based on slot's name when downloading from saved slot.
- Added `dry_penalty_last_n` from @joybod, which decouples the DRY range from the rep pen range (see the sketch after this list).
- LaTeX rendering fixes, autoscroll fixes, various small tweaks
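As a sketch of the decoupling above: with `dry_penalty_last_n` you can now size the DRY window independently of the repetition penalty window. The surrounding sampler field names are assumptions based on the usual generate payload:

```python
import requests

# Sketch: DRY range set independently from the repetition penalty range.
payload = {
    "prompt": "The quick brown fox",
    "max_length": 120,
    "rep_pen": 1.1,
    "rep_pen_range": 1024,       # repetition penalty window (tokens)
    "dry_multiplier": 0.8,       # enables DRY
    "dry_penalty_last_n": 2048,  # DRY window, no longer tied to rep_pen_range
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```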
- Merged new model support including GLM4.6 and Granite 4, fixes and improvements from upstream
Hotfix 1.100.1 - Fixed a regression with flash attention on oldcpu builds, fixed kokoro regression.
koboldcpp-1.99
- NEW: The bundled KoboldAI Lite UI has received a substantial design overhaul in an effort to make it look more modern and polished. The default color scheme has been changed, however the old color scheme is still available (set the 'Nostalgia' color scheme in advanced settings). A few extra custom color schemes have also been added (thanks Lakius, TwistedShadows, toastypigeon, @PeterPeet). Please report any UI bugs you encounter.
- QOL Change: Added aliases for llama.cpp command-line flags. To reduce the learning curve for llama.cpp users, the following llama.cpp compatibility flags have been added: `-m`, `-t`, `--ctx-size`, `-c`, `--gpu-layers`, `--n-gpu-layers`, `-ngl`, `--tensor-split`, `-ts`, `--main-gpu`, `-mg`, `--batch-size`, `-b`, `--threads-batch`, `--no-context-shift`, `--mlock`, `-p`, `--no-mmproj-offload`, `--model-draft`, `-md`, `--draft-max`, `--draft-n`, `--gpu-layers-draft`, `--n-gpu-layers-draft`, `-ngld`, `--flash-attn`, `-fa`, `--n-cpu-moe`, `-ncmoe`, `--override-kv`, `--override-tensor`, `-ot`, `--no-mmap`. They should behave as you'd expect from llama.cpp.
- Renamed `--promptlimit` to `--genlimit`; it now applies to API requests as well and can be set in the UI launcher.
- Added a new parameter `--ratelimit` that will apply per-IP rate limiting (to help prevent abuse of public instances).
- Fixed automatic VRAM detection for ROCm and Vulkan backends on AMD systems (thanks @lone-cloud)
- Hide API info display if running in CLI mode.
- Flash attention is now checked by default when using GUI launcher. (Reverted in 1.99.1 by popular demand)
- Try to fix some embedding models using too much memory.
- Standardize model file download locations to the koboldcpp executable's directory. This should help solve issues about non-writable system paths when launching from a different working directory. If you prefer the old behavior, please send some feedback, but I think standardizing it is better than adding special exceptions for some directory paths. (Reverted in 1.99.2, with some exceptions)
- Add psutil to conda environment. Please report if this breaks any setups.
- Added `/v1/audio/voices` endpoint; fixed wrong voice mapping for Dia
- Updated Kobold Lite, multiple fixes and improvements
- UI design rework, as mentioned above
- Fixes for markdown renderer
- Added a popup to allow enabling TTS or image generation if it's disabled but available.
- Added new scenario "Aletheia"
- Increased default context size and amount generated
- Fix for GPT-OSS instruct format.
- Smarter automatic detection for "Enter Sends" default based on platform. Toggle moved into advanced settings.
- Fix for Palemoon browser compatibility
- Reworked the best practices recommendation for think tags - now provides Think/NoThink instruct tags for each instruct sequence. You are now recommended to explicitly select the correct Think/NoThink instruct tags instead of using the `<think>` forced/prevented prefill. This will provide better results for preventing reasoning than simply injecting a blank `<think></think>`, since some models require specialized reasoning trace formats.
- For example, to prevent thinking in GLM-Air, you're simply recommended to set the instruct tag to `GLM-4.5 Non-Thinking` and leave "Insert Thinking" as "Normal" instead of manually messing with the tag injections. This ensures the correct postfix tags for each format are used.
- By default, the KoboldCpp Automatic template permits thinking in models that use it.
- Merged new model support, fixes and improvements from upstream
Hotfix 1.99.1 - Fix for chroma, revert FA default off, revert ggml-org#16056, fixed rocm compile issues.
Hotfix 1.99.2 - Reverted the download file path changes on request from @henk717 for most cases. Fixed rocm VRAM detection.
Hotfix 1.99.3 and Hotfix 1.99.4 - Fixed aria2 downloading and attempted to fix kokoro