[1.5.7] - 2021-12-21
Fixed
- Fixed `NeptuneLogger` when using DDP (#11030)
- Fixed a bug so that hyperparameter logging is disabled in the logger if there are no hparams (#11105)
- Avoid the deprecated `onnx.export(example_outputs=...)` in torch 1.10 (#11116)
- Fixed an issue when torch-scripting a `LightningModule` after training with `Trainer(sync_batchnorm=True)` (#11078)
- Fixed an `AttributeError` occurring when using a `CombinedLoader` (multiple dataloaders) for prediction (#11111)
- Fixed a bug where `Trainer(track_grad_norm=..., logger=False)` would fail (#11114)
- Fixed an incorrect warning produced by the model summary when using `bf16` precision on CPU (#11161)
Changed
- DeepSpeed no longer requires `LightningModule` ZeRO 3 partitioning (#10655)
- The `ModelCheckpoint` callback now saves and restores the attributes `best_k_models`, `kth_best_model_path`, `kth_value`, and `last_model_path` (#10995)
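The effect of the `ModelCheckpoint` change can be sketched with a minimal stand-in callback in plain Python. This is not the real `ModelCheckpoint` implementation; the class, its attribute values, and the checkpoint paths below are hypothetical, and `state_dict`/`load_state_dict` only mirror the general callback state protocol to show which attributes now round-trip through a checkpoint:

```python
# Toy stand-in showing which attributes a checkpoint callback now
# persists across save/restore (NOT the actual ModelCheckpoint class).
class ToyModelCheckpoint:
    def __init__(self):
        self.best_k_models = {}        # checkpoint path -> monitored metric value
        self.kth_best_model_path = ""
        self.kth_value = None
        self.last_model_path = ""

    def state_dict(self):
        # These four attributes are the ones the change adds to the saved state.
        return {
            "best_k_models": self.best_k_models,
            "kth_best_model_path": self.kth_best_model_path,
            "kth_value": self.kth_value,
            "last_model_path": self.last_model_path,
        }

    def load_state_dict(self, state):
        self.best_k_models = state["best_k_models"]
        self.kth_best_model_path = state["kth_best_model_path"]
        self.kth_value = state["kth_value"]
        self.last_model_path = state["last_model_path"]


# Round-trip: state saved from one instance is restored into a fresh one,
# as would happen when resuming training from a checkpoint.
src = ToyModelCheckpoint()
src.best_k_models = {"epoch=3.ckpt": 0.21, "epoch=7.ckpt": 0.18}
src.kth_best_model_path = "epoch=3.ckpt"
src.kth_value = 0.21
src.last_model_path = "last.ckpt"

dst = ToyModelCheckpoint()
dst.load_state_dict(src.state_dict())
```

Before this release, attributes like `best_k_models` were lost on restore, so top-k bookkeeping restarted from scratch when resuming.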
Contributors
@awaelchli @borchero @carmocca @guyang3532 @kaushikb11 @ORippler @Raalsky @rohitgr7 @SeanNaren
If we forgot someone due to not matching commit email with GitHub account, let us know :]