Fixes for Flux Single File loading, LoRA loading for 4bit BnB Flux, Hunyuan Video
This patch release:
- Fixes a regression in loading Comfy UI format single file checkpoints for Flux
- Fixes a regression in loading LoRAs with bitsandbytes 4bit quantized Flux models
- Adds `unload_lora_weights` for Flux Control
- Fixes a bug that prevents Hunyuan Video from running with batch size > 1
- Allows Hunyuan Video to load LoRAs created from the original repository code
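The two Flux LoRA items above restore a common workflow. The sketch below, assuming a CUDA GPU and access to the gated `black-forest-labs/FLUX.1-dev` checkpoint, shows loading a LoRA into a bitsandbytes 4-bit quantized Flux transformer and then unloading it; the LoRA repository name is a placeholder, not a real repo.

```python
# Sketch: load a LoRA into a 4-bit bitsandbytes-quantized Flux model, then
# unload it. Wrapped in a function because it downloads several GB of weights
# and needs a CUDA GPU; call main() to actually run it.
def main():
    import torch
    from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

    # Quantize the Flux transformer to 4-bit NF4 with bitsandbytes.
    quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        subfolder="transformer",
        quantization_config=quant_config,
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Loading a LoRA into the 4-bit quantized model is what this release fixes.
    # "your-lora/repo" is a placeholder for an actual Flux LoRA repository.
    pipe.load_lora_weights("your-lora/repo")
    image = pipe("a photo of a cat", num_inference_steps=4).images[0]

    # unload_lora_weights() restores the base model; this release also adds
    # support for it with Flux Control pipelines.
    pipe.unload_lora_weights()
    return image
```

Quantizing only the transformer keeps the text encoders and VAE in full precision, which is the usual pattern for memory-constrained Flux inference.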
All commits
- [Single File] Fix loading Flux Dev finetunes with Comfy Prefix by @DN6 in #10545
- [CI] Update HF Token on Fast GPU Model Tests by @DN6 in #10570
- [CI] Update HF Token in Fast GPU Tests by @DN6 in #10568
- Fix batch > 1 in HunyuanVideo by @hlky in #10548
- Fix HunyuanVideo produces NaN on PyTorch<2.5 by @hlky in #10482
- Fix hunyuan video attention mask dim by @a-r-r-o-w in #10454
- [LoRA] Support original format loras for HunyuanVideo by @a-r-r-o-w in #10376
- [LoRA] feat: support loading loras into 4bit quantized Flux models. by @sayakpaul in #10578
- [LoRA] clean up `load_lora_into_text_encoder()` and `fuse_lora()` copied from by @sayakpaul in #10495
- [LoRA] feat: support `unload_lora_weights()` for Flux Control. by @sayakpaul in #10206
- Fix Flux multiple Lora loading bug by @maxs-kan in #10388
- [LoRA] fix: lora unloading when using expanded Flux LoRAs. by @sayakpaul in #10397