huggingface/trl v0.4.1


Large model training, Naive Pipeline Parallelism, peft Data Parallelism support, and distributed training bug fixes

This release includes a set of features and bug fixes to scale your RLHF experiments up to much larger models by leveraging peft and bitsandbytes.

Naive Pipeline Parallelism support

We introduce a new paradigm in trl, termed Naive Pipeline Parallelism, to fit large-scale models into your training setup and apply RLHF to them. This feature uses peft to train adapters and bitsandbytes to reduce the memory footprint of the active model.
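A minimal sketch of what this setup typically looks like, assuming the `peft_config` keyword on trl's value-head model classes and using illustrative LoRA hyperparameters and a hypothetical checkpoint name:

```python
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# Illustrative LoRA hyperparameters -- tune them for your own model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# load_in_8bit quantizes the frozen base weights with bitsandbytes, and
# device_map="auto" spreads the layers across the available GPUs (the
# "naive pipeline parallelism" part); only the LoRA adapters are trained.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "your-base-model",  # hypothetical checkpoint name
    load_in_8bit=True,
    device_map="auto",
    peft_config=lora_config,
)
```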


peft Data Parallelism support

There were some bugs in the peft integration with data parallelism. This release includes bug fixes to enable multi-GPU training using accelerate + DDP (Distributed Data Parallel).
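A sketch of the usual pattern for combining 8-bit peft models with accelerate's DDP, assuming a hypothetical checkpoint name; the key difference from the pipeline-parallel setup above is pinning the whole model to the current process's GPU instead of using `device_map="auto"`:

```python
from accelerate import Accelerator
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# Each DDP process holds one full copy of the model, so place the
# quantized weights on that process's GPU rather than sharding layers.
current_device = Accelerator().local_process_index

lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "your-base-model",               # hypothetical checkpoint name
    load_in_8bit=True,
    device_map={"": current_device},
    peft_config=lora_config,
)
```

Such a script is then started on all GPUs with `accelerate launch your_script.py` (after configuring accelerate for multi-GPU).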

Memory optimization

Your training runs can now be much more memory efficient thanks to a few tricks and bug fixes:
PPOConfig now also supports the optimize_cuda_cache flag (set to False by default) to avoid growing CUDA memory issues.
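A minimal sketch of enabling this flag, with the other PPOConfig arguments (model name, batch size) purely illustrative:

```python
from trl import PPOConfig

# optimize_cuda_cache defaults to False; enabling it is meant to free
# cached CUDA memory during PPO steps so memory usage does not keep growing.
ppo_config = PPOConfig(
    model_name="your-base-model",  # hypothetical checkpoint name
    batch_size=16,
    optimize_cuda_cache=True,
)
```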

PyTorch 2.0 fixes

This release also includes minor fixes related to the PyTorch 2.0 release.

What's Changed

New Contributors

  • @TeamDman made their first contribution in #212
  • @k-for-code made their first contribution in #213

Full Changelog: v0.4.0...v0.4.1
