github hiyouga/LLaMA-Factory v0.6.0
v0.6.0: Paper Release, GaLore and FSDP+QLoRA

We have released our paper on arXiv! Thanks to all co-authors, and to AK for the recommendation.

New features

  • Support the GaLore algorithm, enabling full-parameter training of a 7B model with less than 24 GB of VRAM
  • Support FSDP+QLoRA, which enables QLoRA fine-tuning of a 70B model on 2x 24 GB GPUs
  • Support the LoRA+ algorithm for better LoRA fine-tuning by @qibaoyuan in #2830
  • LLaMA Factory 🤝 vLLM: enjoy roughly 2.7x faster inference with --infer_backend vllm
  • Add a Colab notebook for getting started easily
  • Support pushing fine-tuned models to the Hugging Face Hub from the web UI
  • Support apply_chat_template by adding a chat template to the tokenizer after fine-tuning
  • Add Docker support by @S3Studio in #2743 #2849
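
The vLLM backend listed above is enabled with a single flag. As a rough sketch (the script name, model path, and template value below are illustrative assumptions, not taken from this release; consult the repository's README for the exact invocation in your version):

```shell
# Sketch only: serve a fine-tuned model through the vLLM backend.
# --infer_backend vllm switches inference from the default Hugging Face
# backend to vLLM; the other flags are placeholders for your own setup.
CUDA_VISIBLE_DEVICES=0 python src/cli_demo.py \
    --model_name_or_path path/to/your/model \
    --template default \
    --infer_backend vllm
```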

New models

  • Base models
    • OLMo (1B/7B)
    • StarCoder2 (3B/7B/15B)
    • Yi-9B
  • Instruct/Chat models
    • OLMo-7B-Instruct

New datasets

  • Supervised fine-tuning datasets
    • Cosmopedia (en)
  • Preference datasets
    • Orca DPO (en)

Bug fixes