Highlights
New methods for merging LoRA weights together
With PR #1364, we added new methods for merging LoRA weights together. This is not about merging LoRA weights into the base model. Instead, this is about merging the weights from different LoRA adapters into a single adapter by calling `add_weighted_adapter`. This allows you to combine the strengths of multiple LoRA adapters in a single adapter, while being faster than activating each of these adapters individually.
Although this feature has already existed in PEFT for some time, we have added new merging methods that promise much better results. The first is based on TIES, the second on DARE, and a new one inspired by both is called Magnitude Prune. If you haven't tried these new methods, or haven't touched the LoRA weight merging feature at all, you can find more information in our docs.
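To illustrate, here is a minimal sketch of merging two already-loaded adapters with the TIES method. The model and adapter identifiers are placeholders, and the weights and density are arbitrary example values:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder identifiers -- substitute your own base model and adapters.
base = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="adapter_a")
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")

# Combine both adapters into a new adapter called "merged" using TIES.
# Other combination_type values include "dare_ties", "dare_linear" and
# "magnitude_prune"; `density` is the fraction of values kept after pruning.
model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[0.7, 0.3],
    adapter_name="merged",
    combination_type="ties",
    density=0.5,
)
model.set_adapter("merged")
```

After this, only the single "merged" adapter needs to be active at inference time.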
AWQ and AQLM support for LoRA
Via #1394, we now support AutoAWQ in PEFT. This is a new method for 4-bit quantization of model weights.
Similarly, we now support AQLM via #1476. This method allows quantizing weights to as few as 2 bits. Both methods support quantizing `nn.Linear` layers. To find out more about all the quantization options that work with PEFT, check out our docs.
Note that these integrations do not support `merge_and_unload()` yet, meaning that for inference, you always need to keep the adapter weights attached to the base model.
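As a rough sketch, attaching a LoRA adapter to a quantized base model looks the same as with any other model. The checkpoint name and target modules below are placeholders, assuming a causal LM that was already quantized with AutoAWQ:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder: a checkpoint that was already quantized with AutoAWQ.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/awq-quantized-model",
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # adjust to your architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Remember: merge_and_unload() is not supported for these quantization
# backends yet, so keep the adapter attached for inference.
```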
DoRA support
We now support Weight-Decomposed Low-Rank Adaptation, aka DoRA, via #1474. This new method builds on top of LoRA and has shown very promising results. Especially at lower ranks (e.g. `r=8`), it should perform much better than LoRA. Right now, only non-quantized `nn.Linear` layers are supported. If you'd like to give it a try, just pass `use_dora=True` to your `LoraConfig` and you're good to go.
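For example, a minimal sketch (the base model id and target modules are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder id

# DoRA is switched on through the regular LoraConfig.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adjust to your architecture
    use_dora=True,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```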
Documentation
Thanks to @stevhliu and many other contributors, there have been big improvements to the documentation. You should find it more organized and more up-to-date. Our DeepSpeed and FSDP guides have also been much improved.
Check out our improved docs if you haven't already!
Development
If you're implementing custom adapter layers, for instance a custom `LoraLayer`, note that all subclasses should now implement `update_layer` -- unless they want to use the default method provided by the parent class. In particular, this means you should no longer use different method names in the subclass, like `update_layer_embedding`. Also, we generally don't permit ranks (`r`) of 0 anymore. For more, see this PR.
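As a purely illustrative, hypothetical sketch of this pattern (the real `update_layer` signature varies by layer type, so the arguments below are simplified):

```python
from peft.tuners.lora import LoraLayer


class MyCustomLoraLayer(LoraLayer):
    # Hypothetical subclass: override `update_layer` directly rather than
    # defining a differently named method like `update_layer_embedding`.
    def update_layer(self, adapter_name, r, *args, **kwargs):
        if r <= 0:
            raise ValueError(f"`r` must be a positive integer, got {r}")
        # Fall back to the parent implementation for the actual bookkeeping.
        super().update_layer(adapter_name, r, *args, **kwargs)
```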
Developers should have an easier time now since we fully embrace ruff. If you're the type of person who forgets to call `make style` before pushing to a PR, consider adding a pre-commit hook. Tests are now a bit less verbose by using plain asserts and generally embracing pytest features more fully. All of this comes thanks to @akx.
What's Changed
On top of these highlights, we have made many smaller changes since the last release; check out the full list below. As always, we had a lot of support from many contributors. You're awesome!
- Release patch version 0.8.2 by @pacman100 in #1428
- [docs] Polytropon API by @stevhliu in #1422
- Fix `MatMul8bitLtBackward` view issue by @younesbelkada in #1425
- Fix typos by @szepeviktor in #1435
- Fixed saving for models that don't have _name_or_path in config by @kovalexal in #1440
- [docs] README update by @stevhliu in #1411
- [docs] Doc maintenance by @stevhliu in #1394
- [`core` / `TPLinear`] Fix breaking change by @younesbelkada in #1439
- Renovate quality tools by @akx in #1421
- [Docs] call `set_adapters()` after `add_weighted_adapter` by @sayakpaul in #1444
- MNT: Check only selected directories with ruff by @BenjaminBossan in #1446
- TST: Improve test coverage by skipping fewer tests by @BenjaminBossan in #1445
- Update Dockerfile to reflect how to compile bnb from source by @younesbelkada in #1437
- [docs] Lora-like guides by @stevhliu in #1371
- [docs] IA3 by @stevhliu in #1373
- Add docstrings for set_adapter and keep frozen by @EricLBuehler in #1447
- Add new merging methods by @pacman100 in #1364
- FIX Loading with AutoPeftModel.from_pretrained by @BenjaminBossan in #1449
- Support `modules_to_save` config option when using DeepSpeed ZeRO-3 with ZeRO init enabled by @pacman100 in #1450
- FIX Honor HF_HUB_OFFLINE mode if set by user by @BenjaminBossan in #1454
- [docs] Remove iframe by @stevhliu in #1456
- [docs] Docstring typo by @stevhliu in #1455
- [`core` / `get_peft_state_dict`] Ignore all exceptions to avoid unexpected errors by @younesbelkada in #1458
- [`Adaptation Prompt`] Fix llama rotary embedding issue with transformers main by @younesbelkada in #1459
- [`CI`] Add CI tests on transformers main to catch early bugs by @younesbelkada in #1461
- Use plain asserts in tests by @akx in #1448
- Add default IA3 target modules for Mixtral by @arnavgarg1 in #1376
- add `magnitude_prune` merging method by @pacman100 in #1466
- [docs] Model merging by @stevhliu in #1423
- Adds an example notebook for showing multi-adapter weighted inference by @sayakpaul in #1471
- Make tests succeed more on MPS by @akx in #1463
- [`CI`] Fix adaptation prompt CI on transformers main by @younesbelkada in #1465
- Update docstring at peft_types.py by @eduardozamudio in #1475
- FEAT: add awq support in PEFT by @younesbelkada in #1399
- Add pre-commit configuration by @akx in #1467
- ENH [`CI`] Run tests only when relevant files are modified by @younesbelkada in #1482
- FIX [`CI` / `bnb`] Fix failing bnb workflow by @younesbelkada in #1480
- FIX [`PromptTuning`] Simple fix for transformers >= 4.38 by @younesbelkada in #1484
- FIX: Multitask prompt tuning with other tuning init by @BenjaminBossan in #1144
- previous_dtype is now inferred from F.linear's result output type. by @MFajcik in #1010
- ENH: [`CI` / `Docker`] Create a workflow to temporarily build docker images in case dockerfiles are modified by @younesbelkada in #1481
- Fix issue with unloading double wrapped modules by @BenjaminBossan in #1490
- FIX: [`CI` / `Adaptation Prompt`] Fix CI on transformers main by @younesbelkada in #1493
- Update peft_bnb_whisper_large_v2_training.ipynb: Fix a typo by @martin0258 in #1494
- convert SVDLinear dtype by @PHOSPHENES8 in #1495
- Raise error on wrong type for modules_to_save by @BenjaminBossan in #1496
- AQLM support for LoRA by @BlackSamorez in #1476
- Allow trust_remote_code for tokenizers when loading AutoPeftModels by @OfficialDelta in #1477
- Add default LoRA and IA3 target modules for Gemma by @arnavgarg1 in #1499
- FIX Bug in prompt learning after disabling adapter by @BenjaminBossan in #1502
- add example and update deepspeed/FSDP docs by @pacman100 in #1489
- FIX Safe merging with LoHa and LoKr by @BenjaminBossan in #1505
- ENH: [`Docker`] Notify us when docker build pass or fail by @younesbelkada in #1503
- Implement DoRA by @BenjaminBossan in #1474
New Contributors
- @szepeviktor made their first contribution in #1435
- @akx made their first contribution in #1421
- @EricLBuehler made their first contribution in #1447
- @eduardozamudio made their first contribution in #1475
- @MFajcik made their first contribution in #1010
- @martin0258 made their first contribution in #1494
- @PHOSPHENES8 made their first contribution in #1495
- @BlackSamorez made their first contribution in #1476
- @OfficialDelta made their first contribution in #1477
Full Changelog: v0.8.2...v0.9.0