pypi peft 0.3.0
Docs, Testing Suite, Multi Adapter Support, New methods and examples

Brand new Docs

With task guides, conceptual guides, integration guides, and code references all available at your fingertips, 🤗 PEFT's docs (found at https://huggingface.co/docs/peft) provide an insightful and easy-to-follow resource for anyone looking to learn how to use 🤗 PEFT. Whether you're a seasoned pro or just starting out, PEFT's documentation will help you get the most out of it.

Comprehensive Testing Suite

The new testing suite comprises both unit and integration tests, rigorously exercising core features, examples, and various models on different setups, including single and multiple GPUs. This commitment to testing helps ensure that PEFT maintains the highest levels of correctness, usability, and performance, while continuously improving in all areas.

Multi Adapter Support

PEFT just got even more versatile with its new Multi Adapter Support! Now you can train and infer with multiple adapters, or even merge multiple LoRA adapters into a weighted combination. This is especially handy for RLHF training, where you can save memory by using a single base model with multiple adapters for the actor, critic, reward, and reference models. And the icing on the cake? Check out the LoRA Dreambooth inference example notebook to see this feature in action.
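For illustration, here is a minimal sketch of attaching several LoRA adapters to one base model; the checkpoint paths and adapter names are placeholders, and the exact signature of `add_weighted_adapter` may vary between PEFT versions.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Load a first LoRA adapter and give it a name.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter-a", adapter_name="adapter_a")

# Attach a second adapter to the same base model.
model.load_adapter("path/to/lora-adapter-b", adapter_name="adapter_b")

# Switch which adapter is active for inference.
model.set_adapter("adapter_b")

# Combine both LoRA adapters into a new, weighted adapter and activate it.
model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[0.7, 0.3],
    adapter_name="combined",
)
model.set_adapter("combined")
```

Because all adapters share the frozen base model's weights, each additional adapter costs only its own (small) set of LoRA parameters.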

New PEFT methods: AdaLoRA and Adaption Prompt

PEFT just got even better, thanks to the contributions of the community! The AdaLoRA method is one of the exciting new additions. It takes the highly regarded LoRA method and improves it by adaptively allocating trainable parameters across the model to maximize performance within a given parameter budget. Another standout is the Adaption Prompt method, which enhances the already popular Prefix Tuning by introducing zero-initialized attention.
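As a rough sketch, AdaLoRA can be configured much like LoRA via `AdaLoraConfig`; the model choice and hyperparameter values below are illustrative placeholders, not recommended settings.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import AdaLoraConfig, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

config = AdaLoraConfig(
    init_r=12,                  # initial rank given to each adapted weight matrix
    target_r=4,                 # average rank targeted by the parameter budget
    tinit=200,                  # warmup steps before rank pruning begins
    tfinal=500,                 # final fine-tuning steps after the budget is reached
    total_step=3000,            # total training steps (placeholder value)
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q", "v"],  # attention projections in T5
    task_type="SEQ_2_SEQ_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```

During training, AdaLoRA prunes the singular values of less important adapter matrices, so rank is shifted toward the weights where it helps most.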

New LoRA utilities

Good news for LoRA users! PEFT now allows you to merge LoRA parameters into the base model's parameters, giving you the freedom to remove the PEFT wrapper, apply downstream inference and deployment optimizations, and use any feature that is compatible with the base model.
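A minimal sketch of that workflow, assuming a LoRA adapter saved at a placeholder path:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper,
# returning a plain transformers model ready for deployment.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```

After merging, inference incurs no adapter overhead, since the LoRA updates are baked directly into the original weight matrices.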

What's Changed

New Contributors

Significant community contributions

The following contributors have made significant changes to the library over the last release:

@QingruZhang

  • The Implementation of AdaLoRA (ICLR 2023) in #233

@yeoedward

  • Implement adaption prompt from Llama-Adapter paper in #268

@Splo2t

  • Add nn.Embedding Support to Lora in #337
