pypi pytorch-lightning 1.5.5
Standard weekly patch release


[1.5.5] - 2021-12-07

Fixed

  • Disabled batch_size extraction for torchmetrics instances because they accumulate the metrics internally (#10815); see the first sketch after this list
  • Fixed an issue with SignalConnector not restoring the default signal handlers on teardown when running on SLURM or with fault-tolerant training enabled (#10611)
  • Fixed SignalConnector._has_already_handler check for callable type (#10483)
  • Fixed an issue so that results are returned for each dataloader separately instead of being duplicated across dataloaders (#10810)
  • Improved the exception message shown when the installed rich version is less than 10.2.2 (#10839)
  • Fixed uploading best model checkpoint in NeptuneLogger (#10369)
  • Fixed early schedule reset logic in the PyTorch profiler that was causing a data leak (#10837)
  • Fixed a bug that caused incorrect batch indices to be passed to the BasePredictionWriter hooks when using a dataloader with num_workers > 0 (#10870); see the second sketch after this list
  • Fixed an issue with item assignment on the logger on rank > 0 for loggers that support it (#10917)
  • Fixed importing torch_xla.debug for torch-xla<1.8 (#10836)
  • Fixed an issue with DDPSpawnPlugin and related plugins leaving a temporary checkpoint behind (#10934)
  • Fixed a TypeError occurring in the SignalConnector.teardown() method (#10961)
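
The following is a minimal sketch of the batch_size behaviour touched by #10815, using a hypothetical LitClassifier module: metric objects such as torchmetrics.Accuracy accumulate their state internally, so Lightning no longer tries to extract a batch size for them, while plain tensor values can still be logged with an explicit batch_size.

```python
import torch
import torchmetrics
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Hypothetical module illustrating metric vs. plain-tensor logging."""

    def __init__(self):
        super().__init__()
        self.model = torch.nn.Linear(32, 10)
        self.accuracy = torchmetrics.Accuracy()

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.model(x)
        loss = torch.nn.functional.cross_entropy(logits, y)
        # The metric object accumulates internally, so no batch_size
        # extraction is needed (or attempted) when logging it.
        self.accuracy(logits, y)
        self.log("train_acc", self.accuracy, on_step=True, on_epoch=True)
        # Plain tensors can still be given an explicit batch_size so the
        # epoch-level average is weighted correctly.
        self.log("train_loss", loss, batch_size=x.size(0))
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```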
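
A second sketch, relating to #10870 (the writer class and output path below are hypothetical), shows the BasePredictionWriter hook that receives the batch indices now reported correctly when the predict dataloader uses num_workers > 0.

```python
import os

import torch
from pytorch_lightning.callbacks import BasePredictionWriter


class EpochEndWriter(BasePredictionWriter):
    """Hypothetical writer that saves all predictions at the end of predict."""

    def __init__(self, output_dir):
        super().__init__(write_interval="epoch")
        self.output_dir = output_dir

    def write_on_epoch_end(self, trainer, pl_module, predictions, batch_indices):
        # batch_indices follows the dataloader order, including when the
        # dataloader runs with num_workers > 0.
        os.makedirs(self.output_dir, exist_ok=True)
        torch.save(predictions, os.path.join(self.output_dir, "predictions.pt"))
        torch.save(batch_indices, os.path.join(self.output_dir, "batch_indices.pt"))


# Example usage (model and dataset are placeholders):
#   loader = torch.utils.data.DataLoader(dataset, batch_size=8, num_workers=2)
#   trainer = pl.Trainer(callbacks=[EpochEndWriter("predictions")])
#   trainer.predict(model, dataloaders=loader)
```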

Contributors

@awaelchli @carmocca @four4fish @kaushikb11 @lucmos @mauvilsa @Raalsky @rohitgr7

If we forgot someone because their commit email did not match their GitHub account, let us know :]
