Added training support for Qwen-Image LoRA on consumer-grade GPUs (<24 GiB VRAM)
This release introduces a lightweight training pipeline and configuration optimized for Qwen-Image LoRA fine-tuning on GPUs with less than 24 GiB of VRAM:
- Compatible with consumer-grade GPUs
- Optimized memory usage without sacrificing training quality
- Ready-to-use configuration for easy setup
- Produces ComfyUI-compatible LoRA outputs
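The kind of memory-saving settings such a configuration typically bundles can be sketched as follows. This is an illustrative example only: the keys and values below are hypothetical and are not the actual file shipped with this release.

```yaml
# Illustrative low-VRAM LoRA training config (hypothetical keys,
# not the configuration shipped in this release).
model:
  name: qwen-image
  dtype: bf16                     # half-precision weights to reduce memory
lora:
  rank: 16                        # small adapter rank keeps trainable state tiny
  alpha: 16
training:
  batch_size: 1
  gradient_accumulation: 8        # larger effective batch without extra VRAM
  gradient_checkpointing: true    # trade recomputation for activation memory
  optimizer: adamw_8bit           # quantized optimizer states
output:
  format: comfyui                 # LoRA weights loadable in ComfyUI
```

Techniques like gradient checkpointing, gradient accumulation, and 8-bit optimizer states are the usual levers for fitting fine-tuning under a 24 GiB budget; the exact options exposed by this pipeline may differ.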