This patch removes the hard requirement for `transformers>=4.25.1`, since some external libraries were forcibly downgrading transformers on startup in a way users could not control.
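If your own code still depends on features from newer transformers releases, you can check the installed version yourself at runtime instead of relying on diffusers to enforce it. A minimal sketch using only the standard library (the helper names `parse_version` and `meets_minimum` are illustrative, not part of diffusers):

```python
# Minimum transformers version that diffusers no longer enforces at install time.
MIN_TRANSFORMERS = (4, 25, 1)


def parse_version(v: str) -> tuple:
    """Parse a plain dotted version string like '4.25.1' into an int tuple.

    Note: this sketch does not handle pre-release suffixes such as '4.26.0.dev0';
    use the `packaging` library for full PEP 440 version handling.
    """
    return tuple(int(part) for part in v.split("."))


def meets_minimum(installed: str) -> bool:
    """Return True if the installed version satisfies MIN_TRANSFORMERS."""
    return parse_version(installed) >= MIN_TRANSFORMERS
```

In an application you would pass `transformers.__version__` to `meets_minimum` and warn (rather than crash) when the minimum is not met, mirroring the softer behavior this patch introduces.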
- do not automatically enable xformers by @patrickvonplaten in #1640
- Adapt to forced transformers version in some dependent libraries by @anton-l in #1638
- Re-add xformers enable to UNet2DCondition by @patrickvonplaten in #1627
🚨🚨🚨 Note that xformers is not automatically enabled anymore 🚨🚨🚨
The reasons for this are given here: #1640 (comment):
We should not automatically enable xformers, for three reasons:
- It's not a PyTorch-like API: PyTorch does not enable all the fastest options available by default.
- We allocate GPU memory before the user even calls `.to("cuda")`.
- This behavior is not consistent with the case where xformers is not installed.
=> This means: if you previously relied on xformers being enabled automatically, please make sure to add the following now:
```python
import logging

from diffusers.utils.import_utils import is_xformers_available

logger = logging.getLogger(__name__)

unet = ...  # load unet

if is_xformers_available():
    try:
        unet.enable_xformers_memory_efficient_attention(True)
    except Exception as e:
        logger.warning(
            "Could not enable memory efficient attention. Make sure xformers is installed"
            f" correctly and a GPU is available: {e}"
        )
```
for the UNet (e.g. in DreamBooth), or for the pipeline:
```python
import logging

from diffusers.utils.import_utils import is_xformers_available

logger = logging.getLogger(__name__)

pipe = ...  # load pipeline

if is_xformers_available():
    try:
        pipe.enable_xformers_memory_efficient_attention(True)
    except Exception as e:
        logger.warning(
            "Could not enable memory efficient attention. Make sure xformers is installed"
            f" correctly and a GPU is available: {e}"
        )
```