# Notable changes in this release
## PyTorch Lightning

### Changed
- Added `save_on_exception` option to the `ModelCheckpoint` callback (#20916) (example below)
- Allow `dataloader_idx_` in log names when `add_dataloader_idx=False` (#20987)
- Allow returning `ONNXProgram` when calling `to_onnx(dynamo=True)` (#20811) (example below)
- Extended support for general mappings being returned from `training_step` when using manual optimization (#21011) (example below)
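A minimal sketch of the new `save_on_exception` option; the directory, model, and datamodule names are placeholders, not part of this release:

```python
import lightning.pytorch as pl
from lightning.pytorch.callbacks import ModelCheckpoint

# With save_on_exception=True, a checkpoint is also written when an
# exception interrupts training, so progress is not lost on a crash.
checkpoint = ModelCheckpoint(
    dirpath="checkpoints/",       # placeholder path
    save_on_exception=True,       # new option introduced by #20916
)

trainer = pl.Trainer(max_epochs=10, callbacks=[checkpoint])
# trainer.fit(MyLitModel(), datamodule=my_datamodule)  # placeholders
```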
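Likewise, a hedged sketch of `to_onnx(dynamo=True)` returning the `ONNXProgram`, assuming a toy module with an `example_input_array`; exact exporter behavior depends on your `torch` version:

```python
import torch
import lightning.pytorch as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)
        self.example_input_array = torch.randn(1, 32)  # used for export

    def forward(self, x):
        return self.layer(x)


model = TinyModel()
# With dynamo=True, export goes through the dynamo-based exporter and the
# resulting torch.onnx.ONNXProgram is now returned for further inspection.
program = model.to_onnx("model.onnx", dynamo=True)
```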
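And a sketch, with an assumed toy module, of returning a general mapping (here an `OrderedDict` rather than a plain `dict`) from `training_step` under manual optimization:

```python
from collections import OrderedDict

import torch
import lightning.pytorch as pl


class ManualOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # opt into manual optimization
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        loss = self.layer(batch).mean()
        self.manual_backward(loss)
        opt.step()
        # General mappings, not only plain dicts, may now be returned here.
        return OrderedDict(loss=loss.detach())

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```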
### Fixed
- Fixed `Trainer` to accept a `CUDAAccelerator` instance as `accelerator` with the FSDP strategy (#20964)
- Fixed progress bar console clearing for Rich 14.1+ (#21016)
- Fixed `AdvancedProfiler` to handle nested profiling actions for Python 3.12+ (#20809)
- Fixed `rich` progress bar error when resuming training (#21000)
- Fixed double iteration bug when resuming from a checkpoint (#20775)
- Fixed support for more dtypes in `ModelSummary` (#21034)
- Fixed metrics in `RichProgressBar` being updated according to the user-provided `refresh_rate` (#21032)
- Fixed `save_last` behavior in the absence of validation (#20960)
- Fixed integration between `LearningRateFinder` and `EarlyStopping` (#21056)
- Fixed gradient calculation in `lr_finder` for `mode="exponential"` (#21055)
- Fixed `save_hyperparameters` crashing with `dataclasses` using `init=False` fields (#21051) (example below)
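As an illustration of the `save_hyperparameters` fix, a minimal sketch with an assumed dataclass config; the class and field names are illustrative:

```python
import dataclasses

import lightning.pytorch as pl


@dataclasses.dataclass
class Config:
    lr: float = 1e-3
    # init=False fields are set after __init__; these previously made
    # save_hyperparameters crash (#21051).
    num_steps: int = dataclasses.field(init=False, default=1000)


class LitModel(pl.LightningModule):
    def __init__(self, config: Config):
        super().__init__()
        self.save_hyperparameters()  # now handles init=False dataclass fields
```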
## Lightning Fabric

### Changed

### Removed
Full commit list: 2.5.2 -> 2.5.3
## Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone. Nice job!
In particular, we would like to thank the authors of the pull requests above, in no particular order:
@baskrahmer, @bhimrazy, @deependujha, @fnhirwa, @GdoongMathew, @jonathanking, @relativityhd, @rittik9, @SkafteNicki, @sudiptob2, @vsey, @YgLK
Thank you ❤️ and we hope you'll keep them coming!