Lightning-AI/pytorch-lightning 1.6.3
Standard patch release

[1.6.3] - 2022-05-03

Fixed

  • Used only a single instance of rich.console.Console throughout the codebase (#12886)
  • Fixed an issue to ensure all checkpoint states are saved to a common filepath with DeepSpeedStrategy (#12887)
  • Fixed trainer.logger deprecation message (#12671)
  • Fixed an issue where a sharded grad scaler was incorrectly passed in when using BF16 with the ShardedStrategy (#12915)
  • Fixed an issue with recursive invocation of the DDP configuration in the HPU parallel plugin (#12912)
  • Fixed printing of ragged dictionaries in Trainer.validate and Trainer.test (#12857)
  • Fixed threading support for legacy loading of checkpoints (#12814)
  • Fixed pickling of KFoldLoop (#12441)
  • Stopped optimizer_zero_grad from being called after IPU execution (#12913)
  • Fixed fuse_modules to be QAT-aware for torch>=1.11 (#12891)
  • Enforced eval shuffle warning only for default samplers in DataLoader (#12653)
  • Enabled mixed precision in DDPFullyShardedStrategy when precision=16 (#12965); see the sketch after this list
  • Fixed TQDMProgressBar reset and update to show correct time estimation (#12889)
  • Fixed fit loop restart logic to enable resume using the checkpoint (#12821)
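
For readers upgrading for the mixed-precision fix (#12965), below is a minimal sketch of the user-facing configuration it affects. It assumes the 1.6.x Trainer API, that fairscale is installed, and that "ddp_fully_sharded" is the registry alias for DDPFullyShardedStrategy in this release series; the LitModel and dataset are placeholders and are not part of these release notes.

```python
# Minimal sketch (PyTorch Lightning 1.6.x): per the entry above, requesting
# precision=16 together with the fairscale-based fully sharded strategy is
# now honored as mixed precision. Model and data below are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = DataLoader(
        TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
        batch_size=8,
    )
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy="ddp_fully_sharded",  # assumed alias for DDPFullyShardedStrategy (fairscale)
        precision=16,                  # 16-bit mixed precision, now applied by the sharded strategy
        max_epochs=1,
    )
    trainer.fit(LitModel(), data)
```

The changelog entry implies that before 1.6.3 this combination did not actually enable mixed precision for the sharded strategy; no code change is needed on the user side beyond upgrading.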

Contributors

@akihironitta @carmocca @HMellor @jerome-habana @kaushikb11 @krshrimali @mauvilsa @niberger @ORippler @otaj @rohitgr7 @SeanNaren
