## What's Changed
- Revert "Revert "feat: Add CacheProvider API for external distributed caching"" by @deepme987 in #12915
- fix(api-nodes): Tencent TextToModel and ImageToModel nodes by @bigcat88 in #12680
- Bump comfyui-frontend-package to 1.41.19 by @comfy-pr-bot in #12923
- bump manager version to 4.1b4 by @ltdrdata in #12930
- comfy-aimdo 0.2.11 + improved RAM pressure release strategies (Windows speedups) by @rattus128 in #12925
- Update README. by @comfyanonymous in #12931
- comfy-aimdo 0.2.12 by @rattus128 in #12941
- fix: use no-store cache headers to prevent stale frontend chunks by @christian-byrne in #12911
- feat: Support mxfp8 by @kijai in #12907
- Add --fp16-intermediates to use fp16 for intermediate values between … by @comfyanonymous in #12953
- Update comfyui-frontend-package to version 1.41.20 by @comfyanonymous in #12954
- LTXV: Accumulate VAE decode results on intermediate_device by @kijai in #12955
- bump manager version to 4.1b5 by @ltdrdata in #12957
- ops: opt out of deferred weight init if subclassed by @rattus128 in #12967
- Make EmptyLatentImage follow intermediate dtype. by @comfyanonymous in #12974
- Enable Pytorch Attention for AMD gfx1150 (Strix Point) by @lostdisc in #12973
- feat: add essentials_category to nodes and blueprints for Essentials tab by @christian-byrne in #12573
- feat(assets): align local API with cloud spec by @luke-mino-altherr in #12863
- ci: add check to block AI agent Co-authored-by trailers in PRs by @christian-byrne in #12799
- Skip running model finalizers at exit by @blepping in #12994
- Add --enable-dynamic-vram option to force-enable dynamic VRAM by @comfyanonymous in #13002
- [Trainer] FP4/8/16 training via native dtype support and a quantized linear autograd function by @KohakuBlueleaf in #12681
- Fix potential issue. by @comfyanonymous in #13009
- fix: atomic writes for userdata to prevent data loss on crash by @christian-byrne in #12987
- Disable SageAttention for Hunyuan3D v2.1 DiT by @paulomuggler in #12772
- Update workflow templates to v0.9.26 by @comfyui-wiki in #13012
- Mark weight_dtype as advanced input in Load Diffusion Model node by @christian-byrne in #12769
- Reduce LTX VAE VRAM usage and save use cases from OOMs/Tiler by @rattus128 in #13013
- Reduce WAN VAE VRAM, save use cases from OOM/Tiler by @rattus128 in #13014
- bump manager version to 4.1b6 by @ltdrdata in #13022
- Inplace VAE output processing to reduce peak RAM consumption. by @kijai in #13028
- Fix case where pixel space VAE could cause issues. by @comfyanonymous in #13030
- cascade: remove dead weight init code by @rattus128 in #13026
- fix: run text encoders on MPS GPU instead of CPU for Apple Silicon by @k06a in #12809
- fix(api-nodes): add support for "thought_image" in Nano Banana 2 by @bigcat88 in #13038
- Update comfyui-frontend-package version to 1.41.21 by @DrJKL in #13035
- Make more intermediate values follow the intermediate dtype. by @comfyanonymous in #13051
- Further Reduce LTX VAE decode peak RAM usage by @kijai in #13052
- Fix regression. by @comfyanonymous in #13053
- fp16 intermediates don't work for some text encoder models by @comfyanonymous in #13056
- ltx: vae: implement chunked encoder + CPU IO chunking (Big VRAM reductions) by @rattus128 in #13062
- memory: Add more exclusion criteria to pinned read (fixes corrupt outputs - rare cases) by @rattus128 in #13067
- Reduce tiled decode peak memory by @kijai in #13050
- Revert "fix: run text encoders on MPS GPU instead of CPU for Apple Silicon" by @comfyanonymous in #13070
- Fix VRAM leak in tiler fallback in video VAEs by @rattus128 in #13073
- ltx: vae: Fix missing init variable by @rattus128 in #13074
- [API Nodes] mark seedream-3-0-t2i and seedance-1-0-lite models as deprecated by @bigcat88 in #13060
- Add slice_cond and per-model context window cond resizing by @drozbay in #12645
- feat(api-nodes): add Quiver SVG nodes by @bigcat88 in #13047
- Make EmptyImage node follow intermediate device/dtype. by @comfyanonymous in #13079
- Move inline comfy.context_windows imports to top-level in model_base.py by @Kosinkadink in #13083
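For extension authors, the atomic userdata write fix (#12987) relies on a standard crash-safety pattern: write to a temporary file in the same directory, then rename it over the target, so a crash never leaves a half-written file behind. Below is a minimal, self-contained sketch of that pattern — the function name and structure are illustrative only, not ComfyUI's actual implementation:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data to path so readers never observe a partial file."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the same directory as the target, so the
    # final rename stays on one filesystem (rename is only atomic there).
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push bytes to disk before the rename
        os.replace(tmp, path)  # atomic replace on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```

Either the old file or the complete new file is visible at every point in time; a plain `open(path, "wb")` instead would truncate the target before writing, which is exactly the data-loss window the fix closes.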
## New Contributors
- @lostdisc made their first contribution in #12973
- @paulomuggler made their first contribution in #12772
- @k06a made their first contribution in #12809
**Full Changelog**: v0.17.0...v0.18.0