Lightning v2.6.0

Changes in 2.6.0

PyTorch Lightning

Added
  • Added WeightAveraging callback that wraps the PyTorch AveragedModel class (#20545); a usage sketch follows this list
  • Added Torch-TensorRT integration with LightningModule (#20808); a hedged export sketch follows this list
  • Added time-based validation support through val_check_interval (#21071); see the example after this list
  • Added attributes to access the stopping reason in the EarlyStopping callback (#21188); see the sketch after this list
  • Added support for variable batch size in ThroughputMonitor (#20236); see the example after this list
  • Added EMAWeightAveraging callback that wraps Lightning's WeightAveraging class (#21260); covered in the weight-averaging sketch after this list
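A minimal usage sketch for the new weight-averaging callbacks. WeightAveraging wraps torch.optim.swa_utils.AveragedModel, so the `device` and `avg_fn` arguments shown here are assumptions based on that wrapper relationship; check the callback's API reference before relying on them.

```python
from torch.optim.swa_utils import get_ema_avg_fn

from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EMAWeightAveraging, WeightAveraging

# Keep an averaged copy of the weights, here an exponential moving average;
# avg_fn is assumed to be forwarded to AveragedModel.
averaging = WeightAveraging(device="cpu", avg_fn=get_ema_avg_fn(decay=0.999))
trainer = Trainer(max_epochs=10, callbacks=[averaging])

# EMAWeightAveraging (#21260) is assumed to be a preconfigured EMA variant,
# so the setup above can likely be shortened to:
trainer = Trainer(max_epochs=10, callbacks=[EMAWeightAveraging()])
```
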
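A hedged sketch of the Torch-TensorRT export path. The changelog only says the integration targets LightningModule; the `to_tensorrt` method name and its `inputs` argument are assumptions modeled on `torch_tensorrt.compile`, not a confirmed API.

```python
import torch
from lightning.pytorch import LightningModule

class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

# Requires the torch_tensorrt package and a CUDA device.
model = LitModel().cuda()
# to_tensorrt() and its inputs argument are assumed names, mirroring
# torch_tensorrt.compile(model, inputs=[...]).
trt_model = model.to_tensorrt(inputs=[torch.randn(1, 32, device="cuda")])
```
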
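A sketch of time-based validation. The changelog confirms that val_check_interval now supports time-based values; passing a datetime.timedelta as shown is an assumption about the accepted format, so consult the Trainer docs for the exact options.

```python
from datetime import timedelta

from lightning.pytorch import Trainer

# Run the validation loop roughly every 30 minutes of wall-clock training
# time instead of after a fixed number of batches (format assumed).
trainer = Trainer(val_check_interval=timedelta(minutes=30))
```
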
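A sketch of inspecting why a run stopped early. The changelog confirms new attributes on EarlyStopping but not their names; `stopping_reason` and `stopping_reason_message` below are hypothetical placeholders.

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor="val_loss", patience=3)
trainer = Trainer(max_epochs=100, callbacks=[early_stopping])
# ... after trainer.fit(model) finishes ...
# Both attribute names below are assumptions; check the API reference.
print(early_stopping.stopping_reason)          # e.g. patience exhausted
print(early_stopping.stopping_reason_message)  # human-readable summary
```
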
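A sketch of monitoring throughput with batches whose size varies. ThroughputMonitor and its batch_size_fn argument are the existing API; what #20236 adds is that the function may now return a different size for every batch. The dict-style batch with an `input_ids` tensor is just an illustration.

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ThroughputMonitor

# batch_size_fn is called on every batch, so it can report a different
# size each time, e.g. for bucketed or packed batches.
monitor = ThroughputMonitor(batch_size_fn=lambda batch: batch["input_ids"].size(0))
trainer = Trainer(callbacks=[monitor])
```
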
Changed
  • Expose weights_only argument for Trainer.{fit,validate,test,predict} and let torch handle the default value (#21072); see the sketch after this list
  • Default to RichProgressBar and RichModelSummary if the rich package is available; fall back to TQDMProgressBar and ModelSummary otherwise (#20896)
  • Add MPS accelerator support for mixed precision (#21209); see the example after this list
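A sketch of the exposed weights_only flag. It is forwarded to torch.load when restoring from ckpt_path; leaving it unset lets torch pick its own default. weights_only=False allows loading arbitrary pickled objects, so use it only with trusted checkpoints. The model and checkpoint path below are placeholders.

```python
from lightning.pytorch import Trainer

trainer = Trainer()
# model: a LightningModule defined elsewhere; the path is a placeholder.
trainer.validate(model, ckpt_path="path/to/checkpoint.ckpt", weights_only=True)
```
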
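With #21209, mixed precision can be combined with the MPS accelerator on Apple Silicon; a minimal sketch:

```python
from lightning.pytorch import Trainer

# Mixed precision on Apple Silicon GPUs (requires a machine with MPS).
trainer = Trainer(accelerator="mps", precision="16-mixed")
```
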
Fixed
  • Fixed edge case when max_trials is reached in Tuner.scale_batch_size (#21187)
  • Fixed case where LightningCLI could not be initialized with trainer_defaults containing callbacks (#21192)
  • Fixed missing reset when ModelPruning is applied with lottery ticket hypothesis (#21191)
  • Fixed preventing recursive symlink creation when save_last='link' and save_top_k=-1 (#21186)
  • Fixed last.ckpt being created and not linked to another checkpoint (#21244)
  • Fixed bug that prevented BackboneFinetuning from being used together with LearningRateFinder (#21224)
  • Fixed ModelPruning sparsity logging bug that caused incorrect sparsity percentages (#21223)
  • Fixed LightningCLI loading of hyperparameters from ckpt_path failing for subclass model mode (#21246)
  • Fixed init args to be checked only when the given frames are in the __init__ method (#21227)
  • Fixed how ThroughputMonitor calculated training time (#21291)
  • Fixed synchronization of gradients in manual optimization with DDPStrategy(static_graph=True) (#21251)
  • Fixed FSDP mixed precision semantics and added user warning (#21361)

Lightning Fabric

Changed
  • Expose weights_only argument for Trainer.{fit,validate,test,predict} and let torch handle the default value (#21072)
  • Set _DeviceDtypeModuleMixin._device from torch's default device function (#21164)
  • Added kwargs-filtering for Fabric.call to support different callback method signatures (#21258); see the example after this list
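A sketch of the kwargs filtering in Fabric.call: each registered callback now receives only the keyword arguments its method signature declares, so callbacks with different signatures can coexist. The callback classes here are illustrative.

```python
from lightning.fabric import Fabric

class LossLogger:
    def on_train_batch_end(self, loss):
        print(f"loss={loss:.4f}")

class StepCounter:
    def on_train_batch_end(self, batch_idx):
        print(f"finished step {batch_idx}")

fabric = Fabric(callbacks=[LossLogger(), StepCounter()])
# LossLogger receives only `loss`, StepCounter only `batch_idx`.
fabric.call("on_train_batch_end", loss=0.1234, batch_idx=7)
```
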
Fixed
  • Fixed issue in detecting MPIEnvironment with partial mpi4py installation (#21353)
  • Fixed the learning rate scheduler so it is stepped at the end of the epoch when on_train_batch_start returns -1 (#21296)
  • Fixed FSDP mixed precision semantics and added user warning (#21361)

Full commit list: 2.5.5 -> 2.6.0

Contributors

We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!

In particular, we would like to thank the authors of the pull requests above.
