github hpcaitech/ColossalAI v0.1.7
Version v0.1.7 Released Today


Highlights

  • Began integrating torch.fx for auto-parallel training
  • Updated the ZeRO mechanism with ColoTensor
  • Fixed various bugs

What's Changed

Zero

  • [zero] avoid zero hook spam by changing log to debug level (#1137) by Frank Lee
  • [zero] added error message to handle on-the-fly import of torch Module class (#1135) by Frank Lee
  • [zero] fixed api consistency (#1098) by Frank Lee
  • [zero] zero optim copy chunk rather than copy tensor (#1070) by ver217
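The hook-spam fix (#1137) follows the standard pattern of demoting a per-step message from INFO to DEBUG so it disappears at the default log level. A minimal stdlib sketch of that pattern (logger name and messages are hypothetical, not taken from the ColossalAI source):

```python
import logging

# Hypothetical logger standing in for the zero-hook logger.
logger = logging.getLogger("zero_hook_demo")
logger.setLevel(logging.INFO)  # default: INFO shows, DEBUG is suppressed

records = []

class ListHandler(logging.Handler):
    """Collects emitted messages so we can inspect what got through."""
    def emit(self, record):
        records.append(record.getMessage())

logger.addHandler(ListHandler())

# Before the fix: logged at INFO, so it fires on every hook invocation.
logger.info("hook registered")
# After the fix: demoted to DEBUG, silent unless the user opts in.
logger.debug("hook registered")

print(len(records))  # prints 1 -- only the INFO line was captured
```

Users who still want the message can re-enable it with `logger.setLevel(logging.DEBUG)`.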

Ddp

  • [ddp] add save/load state dict for ColoDDP (#1127) by ver217
  • [ddp] add set_params_to_ignore for ColoDDP (#1122) by ver217
  • [ddp] supported customized torch ddp configuration (#1123) by Frank Lee
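The `set_params_to_ignore` change (#1122) lets users exclude chosen parameters from DDP gradient synchronization. The sketch below is illustrative only, not the real ColoDDP API: it mimics the common DDP technique of tagging parameters with a flag attribute that the wrapper checks before syncing (all names here are hypothetical):

```python
class Param:
    """Toy stand-in for a torch parameter."""
    def __init__(self, name, value):
        self.name = name
        self.value = value
        self.ddp_ignore = False  # hypothetical flag attribute

def set_params_to_ignore(params_to_ignore):
    """Tag parameters so a DDP-style wrapper leaves them out of sync."""
    for p in params_to_ignore:
        p.ddp_ignore = True

def synced_params(all_params):
    """Return only the parameters that would be all-reduced."""
    return [p for p in all_params if not p.ddp_ignore]

params = [Param("weight", 1.0), Param("frozen_embed", 2.0)]
set_params_to_ignore([params[1]])
print([p.name for p in synced_params(params)])  # prints ['weight']
```

Flagging via an attribute (rather than removing the parameter from the module) keeps the model structure intact while the communication layer simply skips the tagged entries.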

Gemini

  • [gemini] gemini mgr supports "cpu" placement policy (#1118) by ver217
  • [gemini] zero supports gemini (#1093) by ver217
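A placement policy decides where parameter chunks live during training. The real GeminiManager is far more involved, but a hypothetical sketch of the decision it makes (policy names "cpu", "cuda", and "auto" assumed; function and parameter names are invented for illustration):

```python
def place_chunk(policy: str, cuda_mem_free: int, chunk_size: int) -> str:
    """Hypothetical chunk-placement decision for a given policy."""
    if policy == "cpu":
        return "cpu"    # always keep chunks in host memory
    if policy == "cuda":
        return "cuda"   # always keep chunks on the GPU
    if policy == "auto":
        # Spill to CPU only when the chunk does not fit in free GPU memory.
        return "cuda" if cuda_mem_free >= chunk_size else "cpu"
    raise ValueError(f"unknown placement policy: {policy}")

print(place_chunk("cpu", cuda_mem_free=1 << 30, chunk_size=1 << 20))  # prints cpu
```

The "cpu" policy added here trades GPU memory for PCIe traffic: chunks stay in host RAM and are moved to the device only when needed.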

Amp

  • [amp] included dict for type casting of model output (#1102) by Frank Lee
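The spirit of this fix is a recursive output-cast helper that previously handled only tensors, lists, and tuples gaining a dict branch, so models returning dict outputs (e.g. `{"loss": ..., "logits": ...}`) cast correctly under AMP. A hedged sketch, with plain floats standing in for tensors and rounding standing in for the fp32-to-fp16 cast:

```python
def cast_output(out, cast=lambda x: round(x, 3)):
    """Recursively apply `cast` to every leaf of a nested output."""
    if isinstance(out, dict):                  # the newly added branch
        return {k: cast_output(v, cast) for k, v in out.items()}
    if isinstance(out, (list, tuple)):         # previously supported containers
        return type(out)(cast_output(v, cast) for v in out)
    return cast(out)                           # leaf value

print(cast_output({"logits": [0.123456, 0.9], "loss": 1.234567}))
# prints {'logits': [0.123, 0.9], 'loss': 1.235}
```

Without the dict branch, a dict output would fall through to the leaf case and the cast would fail or silently skip the nested values.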

Doc

  • [doc] added documentation to chunk and chunk manager (#1094) by Frank Lee

Cudnn

  • [cudnn] set False to cudnn benchmark by default (#1063) by Frank Lee
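In PyTorch terms, the change amounts to the setting below. Benchmark mode autotunes convolution algorithms per input shape, which is non-deterministic and can be slow when input shapes vary, so off is the safer default; users with fixed-shape workloads can still opt back in:

```python
import torch

torch.backends.cudnn.benchmark = False   # the new default
# torch.backends.cudnn.benchmark = True  # opt in for fixed-shape workloads
```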

Full Changelog: v0.1.6...v0.1.7
