## [1.7.6] - 2022-09-13

### Changed

- Improved the error messaging when passing `Trainer.method(model, x_dataloader=None)` with no module-method implementations available (#14614)

### Fixed

- Reset the dataloaders on OOM failure in batch size finder to use the last successful batch size (#14372)
- Fixed an issue to keep downscaling the batch size in case there hasn't been even a single successful optimal batch size with `mode="power"` (#14372)
- Fixed an issue where `self.log`-ing a tensor would create a user warning from PyTorch about cloning tensors (#14599)
- Fixed compatibility when `torch.distributed` is not available (#14454)
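The two batch size finder fixes above concern the `mode="power"` search: double the batch size until an out-of-memory failure, fall back to the last successful size, and keep downscaling if even the initial size never succeeds. A minimal sketch of that search strategy, where `scale_batch_size_power` and the `fits` callback are hypothetical names standing in for the finder's internals rather than the Lightning API:

```python
def scale_batch_size_power(fits, init_bs=2, max_trials=25):
    """Sketch of a "power"-mode batch size search.

    `fits(bs)` is a hypothetical callback that returns True if a batch of
    size `bs` runs without OOM, False otherwise.
    """
    bs = init_bs
    last_ok = None
    for _ in range(max_trials):
        if fits(bs):
            last_ok = bs
            bs *= 2                # batch fits: try doubling
        elif last_ok is not None:
            break                  # OOM after a success: keep the last working size
        else:
            bs = max(1, bs // 2)   # no success yet: keep downscaling (the fix above)
    return last_ok if last_ok is not None else bs
```

For example, if memory only fits batches up to 16, the search doubles 2 → 4 → 8 → 16, fails at 32, and returns 16; if it starts at 32 and only 4 fits, it halves 32 → 16 → 8 → 4 and settles on 4 instead of giving up after the first failure.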

### Contributors

@akihironitta @awaelchli @Borda @carmocca @dependabot @krshrimali @mauvilsa @pierocor @rohitgr7 @wangraying
If we forgot someone due to not matching commit email with GitHub account, let us know :)