peft v0.2.0

Whisper large tuning using PEFT LoRA+INT-8 on T4 GPU in Colab notebooks

We tested PEFT on @OpenAI's Whisper Large model and got:
i) 5x larger batch sizes
ii) Less than 8GB GPU VRAM
iii) Best part? Almost no degradation in WER 🤯

Without PEFT:

  • OOM on a T4 GPU ❌
  • 6GB checkpoint ❌
  • 13.64 WER ✅

With PEFT:

  • Train on a T4 GPU ✅
  • 60MB checkpoint ✅
  • 14.01 WER ✅

adding whisper large peft+int8 training example by @pacman100 in #95
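
A minimal sketch of the recipe from the linked notebook: load Whisper large with 8-bit weights, preprocess it for INT8 training, and attach LoRA adapters. The checkpoint name and LoRA hyperparameters (r, alpha, dropout, target modules) below are illustrative values taken from that notebook and may differ in your setup:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# Load the base model with 8-bit weights (bitsandbytes) so it fits on a 16GB T4.
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)

# Freeze the INT8 base weights and keep the parts that need full precision in fp32.
model = prepare_model_for_int8_training(model)

# Attach small LoRA adapters to the attention projections; only these are trained.
config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction (~1%) of parameters are trainable
```

The LoRA adapter weights are what get saved at the end, which is why the checkpoint shrinks from ~6GB to ~60MB.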

prepare_model_for_int8_training utility

This utility preprocesses the base model so that it is ready for INT8 training.
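
A minimal usage sketch, assuming a bitsandbytes 8-bit load of an arbitrary base checkpoint (facebook/opt-6.7b here is only an illustrative choice):

```python
from transformers import AutoModelForCausalLM
from peft import prepare_model_for_int8_training

# Load a supported base model with 8-bit weights.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", load_in_8bit=True, device_map="auto"
)

# Preprocess it for INT8 training: the base weights are frozen and the layers
# that need full precision (e.g. layer norms) are kept in fp32 for stability.
model = prepare_model_for_int8_training(model)
```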

disable_adapter() context manager

Allows disabling the adapter layers to get outputs from the frozen base model.
An exciting application of this feature: in RLHF, a single model copy can serve both the policy model and the reference model generations.
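
A minimal sketch of the RLHF use case, assuming a LoRA-wrapped causal LM (the facebook/opt-350m checkpoint here is only illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Wrap the frozen base model with trainable LoRA adapters.
model = get_peft_model(base_model, LoraConfig(task_type=TaskType.CAUSAL_LM))

inputs = tokenizer("The quick brown fox", return_tensors="pt")

# Policy generations: adapter layers are active.
policy_out = model.generate(**inputs, max_new_tokens=20)

# Reference generations: adapter layers are temporarily disabled, so the
# outputs come from the frozen base model, with no second model copy needed.
with model.disable_adapter():
    reference_out = model.generate(**inputs, max_new_tokens=20)
```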

What's Changed

New Contributors

Significant community contributions

The following contributors have made significant changes to the library over the last release:

Full Changelog: v0.1.0...v0.2.0
