peft v0.4.0
QLoRA, IA3 PEFT method, support for QA and Feature Extraction tasks, AutoPeftModelForxxx for a simplified UX, and LoRA for custom models with new utilities


QLoRA Support

QLoRA uses 4-bit quantization to compress a pretrained language model. The LM parameters are then frozen and a relatively small number of trainable parameters are added to the model in the form of Low-Rank Adapters. During finetuning, QLoRA backpropagates gradients through the frozen 4-bit quantized pretrained language model into the Low-Rank Adapters. The LoRA layers are the only parameters being updated during training. For more details, read the blog post Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA.
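A minimal sketch of this recipe, assuming transformers and bitsandbytes are installed; the model id and LoRA hyperparameters below are illustrative, not prescriptive:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# load the base model in 4-bit NF4 with bfloat16 compute, the QLoRA settings
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=bnb_config)

# freeze the quantized weights and prepare the model for k-bit training
model = prepare_model_for_kbit_training(model)

# add trainable Low-Rank Adapters on top of the frozen 4-bit base model
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA layers are trainable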

New PEFT method: IA3 from the T-Few paper

To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected into the attention and feedforward modules in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA) keeps the number of trainable parameters much smaller. For more details, read the paper Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
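A minimal sketch of configuring IA3 through PEFT; the target module names are model-specific, and the ones below are only an example for an OPT-style model:

from transformers import AutoModelForCausalLM
from peft import IA3Config, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# learned vectors rescale the attention key/value projections and the feedforward output
ia3_config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "fc2"],
    feedforward_modules=["fc2"],
)
model = get_peft_model(model, ia3_config)
model.print_trainable_parameters()  # only the (IA)^3 vectors are trainable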

Support for new tasks: QA and Feature Extraction

Addition of the PeftModelForQuestionAnswering and PeftModelForFeatureExtraction classes to support QA and Feature Extraction tasks, respectively. This enables exciting new use cases with PEFT, e.g., LoRA for semantic similarity tasks; a usage sketch follows the PR list below.

  • feat: Add PeftModelForQuestionAnswering by @sjrl in #473
  • add support for Feature Extraction using PEFT by @pacman100 in #647
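As a sketch, the new task types plug into the usual config route; the model id and target modules below are illustrative:

from transformers import AutoModelForQuestionAnswering
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

# TaskType.QUESTION_ANS wraps the model in PeftModelForQuestionAnswering;
# TaskType.FEATURE_EXTRACTION does the same for PeftModelForFeatureExtraction
config = LoraConfig(task_type=TaskType.QUESTION_ANS, r=8, lora_alpha=16, target_modules=["query", "value"])
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()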

AutoPeftModelForxxx for a better and simplified UX

This release introduces a new paradigm, AutoPeftModelForxxx, intended for users who want to quickly load and run PEFT models.

from peft import AutoPeftModelForCausalLM

# a single call resolves the base model from the adapter's config and attaches the LoRA weights
peft_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
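Continuing the snippet above, the returned model behaves like a regular transformers model; the tokenizer name below assumes the adapter's base model is facebook/opt-350m, as its repo name suggests:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # assumed base model
inputs = tokenizer("Parameter-efficient fine-tuning is", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))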

LoRA for custom models

Not a transformers model? No problem, we have got you covered. PEFT now enables the use of LoRA with custom models.
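A minimal sketch with a plain torch.nn.Module; the target module names are simply the attribute names of the layers you want to adapt in your own network:

import torch.nn as nn
from peft import LoraConfig, get_peft_model

# any custom model works, as long as target_modules matches its submodule names
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(64, 256)
        self.act = nn.ReLU()
        self.lin2 = nn.Linear(256, 2)

    def forward(self, x):
        return self.lin2(self.act(self.lin1(x)))

# no task_type is needed for non-transformers models
config = LoraConfig(r=8, target_modules=["lin1", "lin2"])
peft_model = get_peft_model(MLP(), config)
peft_model.print_trainable_parameters()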

New LoRA utilities

The add_weighted_adapter method now supports SVD for combining multiple LoRAs into a new one. New utilities such as unload and delete_adapter give users much finer control over how they manage adapters; a sketch follows the PR link below.

  • [Core] Enhancements and refactoring of LoRA method by @pacman100 in #695
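A hedged sketch of these utilities; it assumes a PeftModel with two LoRA adapters already loaded, and the adapter names are illustrative:

# assumes peft_model already has adapters "adapter_a" and "adapter_b" loaded,
# e.g. via peft_model.load_adapter(<path>, adapter_name="adapter_b")
peft_model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[0.7, 0.3],
    adapter_name="combined",
    combination_type="svd",  # combine via SVD instead of a plain linear mix
)
peft_model.set_adapter("combined")

# remove an adapter that is no longer needed
peft_model.delete_adapter("adapter_a")

# unload() strips all adapter layers and returns the unmodified base model
base_model = peft_model.unload()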

PEFT and Stable Diffusion

PEFT is very extensible and easy to use for performing DreamBooth fine-tuning of Stable Diffusion. The community has added conversion scripts to use PEFT models with the Civitai/webui format and vice versa.

  • LoRA for Conv2d layer, script to convert kohya_ss LoRA to PEFT by @kovalexal in #461
  • Added Civitai LoRAs conversion to PEFT, PEFT LoRAs conversion to webui by @kovalexal in #596
  • [Bugfix] Fixed LoRA conv2d merge by @kovalexal in #637
  • Fixed LoraConfig alpha modification on add_weighted_adapter by @kovalexal in #654

What's Changed

Full Changelog: v0.3.0...v0.4.0

Significant community contributions

The following contributors have made significant changes to the library over the last release:

@TimDettmers

  • 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by @TimDettmers in #476

@SumanthRH

@kovalexal

  • LoRA for Conv2d layer, script to convert kohya_ss LoRA to PEFT by @kovalexal in #461
  • Added Civitai LoRAs conversion to PEFT, PEFT LoRAs conversion to webui by @kovalexal in #596
  • [Bugfix] Fixed LoRA conv2d merge by @kovalexal in #637
  • Fixed LoraConfig alpha modification on add_weighted_adapter by @kovalexal in #654

@sywangyi

  • do not use self.device. In FSDP cpu offload mode. self.device is "CPU… by @sywangyi in #352
  • add accelerate example for DDP and FSDP in sequence classification fo… by @sywangyi in #358
  • enable lora for mpt by @sywangyi in #576
  • fix Prefix-tuning error in clm Float16 evaluation by @sywangyi in #520
  • fix ptun and prompt tuning generation issue by @sywangyi in #543
  • when from_pretrained is called in finetune case of lora with flag "… by @sywangyi in #591

@aarnphm

  • feat(model): Allow from_pretrained to accept PeftConfig class by @aarnphm in #612
  • style: tentatively add hints for some public function by @aarnphm in #614
  • chore(type): annotate that peft does contains type hints by @aarnphm in #678

@martin-liu

@thomas-schillaci
