invoke-ai/InvokeAI v4.2.6

v4.2.6 includes a handful of fixes and improvements, plus three major changes:

  • Gallery updates
  • Tiled upscaling via MultiDiffusion
  • Checkpoint models work without conversion to diffusers

Gallery Updates

We've made some changes to the gallery, adding features, improving the performance of the app and reducing memory usage. The changes also fix a number of bugs relating to stale data - for example, a board not updating as expected after moving an image to it.

Thanks to @chainchompa and @maryhipp for working on this major effort.

Pagination & Selection

Infinite scroll is dead, long live infinite scroll!

The gallery is now paginated. Selection logic has been updated to work with pagination. An indicator shows how many images are selected and allows you to clear the selection entirely. Arrow keys still navigate.

[Video: Gallery.Pagination.and.Selection.mov]

The number of images per page is dynamically calculated as the panel is resized, ensuring the panel is always filled with images.

Boards UI Refresh

The bulky tiled boards grid has been replaced by a scrollable list. The boards list panel is now a resizable, collapsible panel.

[Video: Boards.List.and.Resizable.Panel.mov]

Boards and Image Search

Search for boards by name and for images by metadata. The search term is matched against the image's metadata as a string. After considering a few approaches, we landed on full-text search as a flexible yet simple implementation.
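
As an illustration of the approach - not InvokeAI's actual query - a simple substring match over the serialized metadata looks something like the sketch below. The table and column names are assumptions for the example.

```python
# A minimal sketch, assuming a SQLite database with an `images` table whose
# `metadata` column stores the serialized metadata JSON. Not the real query.
import sqlite3


def search_images(db_path: str, term: str) -> list[str]:
    """Return the names of images whose metadata contains `term` as a substring."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT image_name FROM images WHERE metadata LIKE ? ORDER BY created_at DESC",
            (f"%{term}%",),
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```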

[Video: Boards.and.Images.Search.mov]

Archived Boards

Archive a board to hide it from the main boards list. This is purely an organizational enhancement. You can still interact with archived boards as you would any other board.

[Video: Archived.Boards.mov]

Image Sorting

You can now change the image sort order to show oldest first. A switch lets starred images be placed in the list according to their age, instead of always appearing first.

[Video: Image.Sorting.mov]

Tiled Upscaling via MultiDiffusion

MultiDiffusion is a fairly straightforward technique for tiled denoising. The gist is similar to other tiled upscaling methods - split the input image into tiles, process each independently, and stitch them back together. The main innovation in MultiDiffusion is to do this in latent space, blending the tensors together continuously. This results in excellent consistency across the output image, with no seams.
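
For the curious, here is a minimal, hypothetical sketch of that blending idea in PyTorch - not InvokeAI's implementation. Each denoising step runs independently on overlapping latent tiles, and the overlapping regions are averaged back into a single latent; `denoise_tile` is a stand-in for a real per-tile UNet step.

```python
# Sketch only: assumes latents of shape (B, C, H, W) with H and W at least one
# tile in size. `denoise_tile` is a hypothetical per-tile denoising callable.
import torch


def multidiffusion_step(
    latents: torch.Tensor, denoise_tile, tile: int = 64, overlap: int = 16
) -> torch.Tensor:
    _, _, h, w = latents.shape
    stride = tile - overlap
    out = torch.zeros_like(latents)
    weight = torch.zeros_like(latents)

    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp the last tile so it ends exactly at the latent's border.
            y0, x0 = min(y, h - tile), min(x, w - tile)
            patch = latents[:, :, y0:y0 + tile, x0:x0 + tile]
            out[:, :, y0:y0 + tile, x0:x0 + tile] += denoise_tile(patch)
            weight[:, :, y0:y0 + tile, x0:x0 + tile] += 1.0

    # Averaging the overlapping contributions is what keeps neighbouring tiles
    # consistent, so no seams survive the final decode.
    return out / weight.clamp(min=1.0)
```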

This feature is exposed as a Tiled MultiDiffusion Denoise Latents node, currently classified as a beta version. It works much the same as the OG Denoise Latents node. You can find an example workflow in the workflow library's default workflows.

We are still thinking about how to expose this in the linear UI. Most likely, we'll expose it with very minimal settings. If you want to tweak it, use the workflow.

Thanks to @RyanJDick for designing and implementing MultiDiffusion.

How to use it

This technique is fundamentally the same as normal img2img. Appropriate use of conditioning and control will greatly improve the output. The one hard requirement is to use the Tile ControlNet model.
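
To make the Tile ControlNet's role concrete, here is a hedged sketch of a tile-conditioned img2img pass written against plain diffusers rather than an InvokeAI workflow. The model IDs, resolution, and parameters are illustrative assumptions; inside Invoke, use the default workflow mentioned above instead.

```python
# Sketch only: a tile-conditioned img2img pass with diffusers. Model IDs and
# parameters are placeholders, not recommendations.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = load_image("low_res.png").resize((1024, 1024))  # pre-upscaled input

result = pipe(
    prompt="a detailed photograph, sharp focus",
    image=image,          # img2img source
    control_image=image,  # the Tile ControlNet conditions on the same image
    strength=0.4,         # a lower denoise strength preserves the original structure
    num_inference_steps=30,
).images[0]
result.save("upscaled.png")
```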

Besides that, here are some tips from our initial testing:

  • Use detail-adding or style LoRAs.
  • Use a base model best suited for the desired output style.
  • Prompts make a difference.
  • The initial upscaling method makes a difference.
  • Scheduler makes a difference. Some produce softer outputs.

VRAM Usage

This technique can upscale images to very large sizes without substantially increasing VRAM usage beyond what you'd see for a "normal" sized generation. The VRAM bottlenecks then become the first VAE encode (Image to Latents) and final VAE decode (Latents to Image) steps.

You may run into OOM errors during these steps. The solution is to enable tiling using the toggle on the Image to Latents and Latents to Image nodes. This allows the VAE operations to be done piecewise, similar to the tiled denoising process, without using gobs of VRAM.
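
For reference, diffusers exposes the same piecewise behaviour on its VAE class. Below is a minimal sketch, assuming a standard SD 1.5 VAE, purely to show what the toggle does conceptually.

```python
# Sketch only: tiled VAE decode with diffusers. The random latents just exercise
# the decode path; a 256x256 latent corresponds to a ~2048px image.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")
vae.enable_tiling()  # decode in tiles instead of one enormous tensor

latents = torch.randn(1, 4, 256, 256, dtype=torch.float16, device="cuda")
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample  # (1, 3, 2048, 2048)
```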

There's one caveat - VAE tiling often introduces inconsistency across tiles. Textures and colors may differ from tile to tile. This is a function of diffusers' handling of VAE tiling, not the new tiled denoising process. We are investigating ways to improve this.

Takeaway: If your GPU can handle non-tiled VAE encode and decode for a given output size, use that for best results.

Checkpoint models work without conversion to diffusers

The required conversion of checkpoint format models to diffusers format has long been a pain point. The diffusers library now supports loading single-file (checkpoint) models directly, and we have removed the mandatory checkpoint-to-diffusers conversion step.

The main user-facing change is that there is no longer a conversion cache directory.
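
Under the hood, this relies on diffusers' single-file loading support. Here is a hedged sketch of that capability, with a placeholder path, just to illustrate that no converted copy is written to disk.

```python
# Sketch only: load a checkpoint-format (single-file) model directly.
# The path is a placeholder.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/some_sdxl_checkpoint.safetensors",
    torch_dtype=torch.float16,
)
```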

Major thanks to @lstein for getting this working.

📈 Patch Notes for v4.2.6

Enhancements

  • When downloading image metadata, graphs or workflows, the JSON file name includes the image name and the type of data. Thanks @jstnlowe!
  • Add clear_queue_on_startup config setting to clear problematic queues. This is useful for a rare edge case where your queue is full of items that somehow crash the app. Set this to true, and the queue will clear before it has time to attempt to execute the problematic item. Thanks @steffy-lo!
  • Performance and memory efficiency improvements for LoRA patching and model offloading.
  • Addition of simplified model installation methods to the Invocation API: download_and_cache_model, load_local_model and load_remote_model. These methods allow models to be used without needing to add them to the model manager - for example, we now use them to load ESRGAN models (see the sketch after this list).
  • Support for probing and loading SDXL VAE checkpoint models.
  • Updated gallery UI.
  • Checkpoint models work without conversion to diffusers.
  • When using a VAE in tiled mode, you may now select the tile size.
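
Below is a hedged sketch of how the new model installation helpers might be called from a custom node. The context.models namespace and the exact signatures are assumptions inferred from the method names; check the Invocation API documentation for the real interface.

```python
# Sketch only: the namespace and signatures below are assumptions, and the URL
# is a placeholder. `context` is the InvocationContext passed to invoke().
ESRGAN_URL = "https://example.com/RealESRGAN_x4plus.pth"


def run(context) -> None:
    # Download the weights to the cache (or reuse a cached copy) without
    # registering them with the model manager...
    weights_path = context.models.download_and_cache_model(ESRGAN_URL)
    # ...then load them into memory. load_remote_model is the one-step
    # equivalent for a URL source; load_local_model takes a local path.
    with context.models.load_local_model(weights_path) as model:
        ...  # run the model, e.g. an ESRGAN upscale
```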

Fixes

  • Fixed handling of 0-step denoising processes.
  • If a control image's processed version is missing when the app loads, it is now re-processed.
  • Fixed an issue where a model's size could be misreported as 0, possibly causing memory issues.
  • Fixed an issue where images - especially large images - may fail to delete.

Performance improvements

  • Improved LoRA patching.
  • Improved RAM <-> VRAM model transfer performance.

Internal changes

  • The DenoiseLatentsInvocation has had its internal methods split up to support tiled upscaling via MultiDiffusion. This included some amount of file shuffling and renaming. The invokeai package's exported classes should still be the same. Please let us know if this has broken an import for you.
  • Internal cleanup intended to eliminate circular import issues. There's a lot left to do here, but we are making progress.

💾 Installation and Updating

To install or update to v4.2.6, download the installer and follow the installation instructions.

To update, select the same installation location. Your user data (images, models, etc) will be retained.

Missing models after updating from v3 to v4

See this FAQ.

Error during installation ModuleNotFoundError: No module named 'controlnet_aux'

See this FAQ.

What's Changed

New Contributors

Full Changelog: v4.2.4...v4.2.6
