peft 0.5.0: GPTQ Quantization, Low-level API


GPTQ Integration

You can now fine-tune GPTQ-quantized models using PEFT. For examples, see the Colab notebook and the fine-tuning script.
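To give a feel for what "GPTQ-quantized" means, here is a minimal sketch of group-wise low-bit weight quantization, the kind of representation GPTQ produces: weights are stored as small integers with one shared scale per group. This is an illustration only, not the PEFT or AutoGPTQ API; the names `quantize_group` and `dequantize_group` are hypothetical, and real GPTQ additionally compensates quantization error using second-order information.

```python
# Conceptual sketch: quantize a group of weights to signed 4-bit
# integers with a shared per-group scale, then dequantize them back.
# Hypothetical helpers for illustration, not a real library API.

def quantize_group(weights, bits=4):
    """Quantize a group of float weights to signed ints with one scale."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for signed 4-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    """Recover approximate float weights from ints and the group scale."""
    return [v * scale for v in q]

weights = [0.12, -0.48, 0.33, 0.07]
q, scale = quantize_group(weights)
restored = dequantize_group(q, scale)
# Round-to-nearest keeps the per-weight error within half a step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Fine-tuning such a model with PEFT leaves these quantized base weights frozen and trains only the small adapter weights added on top.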

Low-level API

Enables users and developers to use PEFT as a utility library, at least for injectable adapters (LoRA, IA3, AdaLoRA). It exposes an API that modifies the model in place to inject the adapter layers.
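The core idea can be sketched in plain Python: wrap an existing layer so a low-rank delta is added to its output, mutating the model in place. The classes and the `inject_lora` helper below are illustrative stand-ins under assumed names, not the actual PEFT low-level API.

```python
# Minimal illustration of in-place adapter injection: replace a model's
# attribute with a wrapper that adds a low-rank (LoRA-style) update
# B @ (A @ x) to the base layer's output. Hypothetical names throughout.

class Linear:
    def __init__(self, weight):              # weight: rows x cols, nested lists
        self.weight = weight

    def __call__(self, x):
        return [sum(w * v for w, v in zip(row, x)) for row in self.weight]

class LoraLinear:
    def __init__(self, base, A, B, scaling=1.0):
        self.base, self.A, self.B, self.scaling = base, A, B, scaling

    def __call__(self, x):
        # y = base(x) + scaling * B @ (A @ x)
        ax = [sum(a * v for a, v in zip(row, x)) for row in self.A]
        bax = [sum(b * v for b, v in zip(row, ax)) for row in self.B]
        return [y + self.scaling * d for y, d in zip(self.base(x), bax)]

def inject_lora(model, name, A, B, scaling=1.0):
    """Mutate `model` in place, wrapping the named layer with an adapter."""
    setattr(model, name, LoraLinear(getattr(model, name), A, B, scaling))

class TinyModel:
    def __init__(self):
        self.proj = Linear([[1.0, 0.0], [0.0, 1.0]])   # 2x2 identity

    def __call__(self, x):
        return self.proj(x)

model = TinyModel()
inject_lora(model, "proj", A=[[1.0, 1.0]], B=[[0.5], [0.0]])
# B @ A = [[0.5, 0.5], [0.0, 0.0]], so input [2, 4] yields [5.0, 4.0].
assert model([2.0, 4.0]) == [5.0, 4.0]
```

The key property is that injection mutates the model in place: callers keep their reference to the original model object, and only the targeted submodule is swapped for its adapter-wrapped version.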

Support for XPU and NPU devices

PEFT adapters can now be loaded and fine-tuned on XPU and NPU devices.

Mix-and-match LoRAs

Stable support and new ways of merging multiple LoRAs. Three merging methods are currently supported: linear, svd, and cat.

  • Added additional parameters for mixing multiple LoRAs through SVD, and added the ability to mix LoRAs through concatenation by @kovalexal in #817
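The math behind two of the merge strategies can be sketched with tiny nested-list matrices: `linear` takes a weighted sum of the adapters' weight deltas, while `cat` concatenates the low-rank factors, reproducing the same summed delta at a higher rank (the `svd` method, omitted here, decomposes that summed delta back down to a chosen rank). The helper names `merge_linear` and `merge_cat` are illustrative, not the PEFT API.

```python
# Sketch of LoRA merging. Each adapter is a pair (A, B) of nested-list
# matrices whose weight delta is B @ A. Hypothetical helper names.

def matmul(P, Q):
    return [[sum(p * q for p, q in zip(row, col)) for col in zip(*Q)]
            for row in P]

def scale(P, s):
    return [[s * v for v in row] for row in P]

def add(P, Q):
    return [[p + q for p, q in zip(r1, r2)] for r1, r2 in zip(P, Q)]

def merge_linear(loras, weights):
    """linear: weighted sum of each adapter's delta B @ A."""
    out = None
    for (A, B), w in zip(loras, weights):
        d = scale(matmul(B, A), w)
        out = d if out is None else add(out, d)
    return out

def merge_cat(loras, weights):
    """cat: stack A rows and (weighted) B columns, then multiply once."""
    A_cat = [row for A, _ in loras for row in A]
    B_cat = [
        [w * v for (_, B), w in zip(loras, weights) for v in B[i]]
        for i in range(len(loras[0][1]))
    ]
    return matmul(B_cat, A_cat)

# Two rank-1 adapters on a 2x2 weight; cat matches linear exactly.
loras = [([[1.0, 0.0]], [[1.0], [0.0]]),
         ([[0.0, 1.0]], [[0.0], [2.0]])]
weights = [0.5, 1.0]
assert merge_linear(loras, weights) == merge_cat(loras, weights)
```

Because `cat` keeps all the original factors, it is exact but doubles the effective rank per merged adapter, whereas `linear` and `svd` keep the rank fixed.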

What's Changed

New Contributors

Full Changelog: v0.4.0...v0.5.0
