invoke-ai/InvokeAI v5.12.0rc1

Pre-release

This release includes support for Nvidia 50xx GPUs, a way to relate models (e.g. LoRAs with a specific main model), and new IP Adapter methods.

Changes

  • Bumped the PyTorch dependency to v2.7.0, which means Invoke now supports Nvidia 50xx GPUs (see the environment check sketched after this list).
  • New model relationship feature. In the model manager tab, you may "link" two models. At this time, the primary intended use case is to link LoRAs to main models. When you have the main model selected, the linked LoRAs will be at the top of the LoRA list. Thanks @xiaden!
  • New IP Adapter methods: Style (Strong) and Style (Precise). The previous style method is renamed to Style (Simple). Thanks @cubiq!
  • Fixed GGUF quantization on MPS. Thanks @Vargol!
  • Internal invocation model changes, which aim to reduce occurrences of ValidationError.
  • Addressed a pydantic deprecation warning.
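
If you want to confirm that your environment picked up the new PyTorch build and that a 50xx (Blackwell) card is recognized, a quick check along these lines can help. This is an illustrative sketch using standard PyTorch APIs, not part of Invoke itself:

```python
# Sanity check (illustrative): verify PyTorch 2.7.0+ is installed and that
# CUDA sees the GPU, including the sm_120 architecture used by Nvidia 50xx cards.
import torch

print(f"PyTorch version: {torch.__version__}")        # expect 2.7.0 or newer
print(f"CUDA available:  {torch.cuda.is_available()}")

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)  # e.g. (12, 0) on a 50xx card
    print(f"Device:              {torch.cuda.get_device_name(0)}")
    print(f"Compute capability:  sm_{major}{minor}")
    print(f"Arch list in build:  {torch.cuda.get_arch_list()}")
```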

Installing and Updating

The new Invoke Launcher is the recommended way to install, update, and run Invoke. It takes care of details for you, like installing the right version of Python, and runs Invoke as a desktop application.

Follow the Quick Start guide to get started with the launcher.

If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.

What's Changed

New Contributors

Full Changelog: v5.11.0...v5.12.0rc1
