Lightning-AI/pytorch-lightning 1.2.8
Standard weekly patch release

[1.2.8] - 2021-04-14

Added

  • Added TPUSpawn + IterableDataset error message (#6875)
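
    For context on the new error message, here is a minimal sketch of the combination it guards against: an IterableDataset (which defines no length) driven by a TPU-spawned Trainer. The class names below are hypothetical and the exact exception text is per the linked PR; the TPU lines are commented out since they require TPU hardware.

    ```python
    import torch
    from torch.utils.data import DataLoader, IterableDataset

    import pytorch_lightning as pl


    class RandomStream(IterableDataset):
        """Hypothetical streaming dataset: defines no __len__, so its size is unknown."""

        def __iter__(self):
            for _ in range(100):
                yield torch.randn(32)


    class TinyModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            return self.layer(batch).sum()

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    # On a TPU host, this pairing now fails fast with a clear error
    # (per #6875) instead of crashing deep inside the spawn plugin:
    # trainer = pl.Trainer(tpu_cores=8, max_epochs=1)
    # trainer.fit(TinyModel(), DataLoader(RandomStream()))
    ```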

Fixed

  • Fixed process rank not being available right away after Trainer instantiation (#6941)
  • Fixed sync_dist for TPUs (#6950)
  • Fixed AttributeError for require_backward_grad_sync when running manual optimization with the sharded plugin (#6915)
  • Fixed --gpus default for parser returned by Trainer.add_argparse_args (#6898); a sketch follows this list
  • Fixed TPU Spawn all gather (#6896)
  • Fixed EarlyStopping logic when min_epochs or min_steps requirement is not met (#6705)
  • Fixed csv extension check (#6436)
  • Fixed checkpoint issue when using Horovod distributed backend (#6958)
  • Fixed tensorboard exception raising (#6901)
  • Fixed setting the eval/train flag correctly on accelerator model (#6983)
  • Fixed DDP_SPAWN compatibility with bug_report_model.py (#6892)
  • Fixed bug where BaseFinetuning.flatten_modules() was duplicating leaf node parameters (#6879)
  • Set better defaults for rank_zero_only.rank when training is launched with SLURM and torchelastic:
    • Support SLURM and torchelastic global rank environment variables (#5715)
    • Remove hardcoding of local rank in accelerator connector (#6878)
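
    As an illustration of the argparse fix above (#6898), the sketch below walks through the Trainer.add_argparse_args round trip. The precise pre-fix failure mode is described in the linked PR; this only shows the expected behavior after the fix, assuming pytorch-lightning 1.2.8 is installed.

    ```python
    from argparse import ArgumentParser

    from pytorch_lightning import Trainer

    # Mirror the Trainer's constructor arguments as CLI flags.
    parser = ArgumentParser()
    parser = Trainer.add_argparse_args(parser)

    # Parse with no flags given: --gpus should fall back to the
    # Trainer's own default (None) rather than a stale parser default.
    args = parser.parse_args([])
    print(args.gpus)  # expected: None

    # The parsed namespace can be handed straight back to the Trainer.
    trainer = Trainer.from_argparse_args(args)
    ```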

Contributors

@ananthsub @awaelchli @ethanwharris @justusschock @kandluis @kaushikb11 @liob @SeanNaren @skmatz

If we missed anyone because their commit email doesn't match their GitHub account, let us know :]
