Lightning v2.4


Lightning AI ⚡ is excited to announce the release of Lightning 2.4. This is mainly a compatibility upgrade for PyTorch 2.4 and Python 3.12, with a sprinkle of a few features and bug fixes.

Did you know? The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you Lightning Studio. Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup.

Changes

PyTorch Lightning

Added
  • Made saving non-distributed checkpoints fully atomic (#20011)
  • Added a dump_stats flag to AdvancedProfiler (#19703)
  • Added a verbose flag to the seed_everything() function (#20108); a usage sketch of these new options follows this list
  • Added support for PyTorch 2.4 (#20010)
  • Added support for Python 3.12 (#20078)
  • The TQDMProgressBar now provides an option to retain prior training epoch bars (#19578)
  • Added the count of modules in train and eval mode to the printed ModelSummary table (#20159)
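
The new options above slot into existing entry points. Here is a minimal usage sketch: the verbose and dump_stats keyword names come from the entries above, while the TQDMProgressBar argument for retaining epoch bars is assumed here to be leave (see #19578 for the exact name).

```python
from lightning.pytorch import Trainer, seed_everything
from lightning.pytorch.callbacks import TQDMProgressBar
from lightning.pytorch.profilers import AdvancedProfiler

# New in 2.4: silence the "Seed set to ..." printout (#20108)
seed_everything(42, workers=True, verbose=False)

# New in 2.4: also dump the profiler's raw stats to disk (#19703)
profiler = AdvancedProfiler(dirpath=".", filename="perf_logs", dump_stats=True)

# New in 2.4: keep each finished training epoch's bar on screen
# ("leave" is an assumed argument name here; see #19578)
progress_bar = TQDMProgressBar(leave=True)

trainer = Trainer(profiler=profiler, callbacks=[progress_bar], max_epochs=3)
```
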
Changed
  • Triggering KeyboardInterrupt (Ctrl+C) during .fit(), .validate(), .test() or .predict() now terminates all processes launched by the Trainer and exits the program (#19976)
  • Changed the implementation of how seeds are chosen for dataloader workers when using seed_everything(..., workers=True) (#20055)
  • NumPy is no longer a required dependency (#20090)
Removed
  • Removed support for PyTorch 2.1 (#20009)
  • Removed support for Python 3.8 (#20071)
Fixed
  • Avoided LightningCLI saving hyperparameters with class_path and init_args, since this would be a breaking change (#20068)
  • Fixed an issue that would cause too many printouts of the seed info when using seed_everything() (#20108)
  • Fixed _LoggerConnector's _ResultMetric to move all registered keys to the device of the logged value if needed (#19814)
  • Fixed _optimizer_to_device logic for the special 'step' key in optimizer state that caused a performance regression (#20019)
  • Fixed parameter counts in ModelSummary when model has distributed parameters (DTensor) (#20163)

Lightning Fabric

Added
  • Made saving non-distributed checkpoints fully atomic (#20011); a short Fabric sketch follows this list
  • Added a verbose flag to the seed_everything() function (#20108)
  • Added support for PyTorch 2.4 (#20028)
  • Added support for Python 3.12 (#20078)
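
On the Fabric side, the same seeding flag applies, and checkpoints written through fabric.save() now take the atomic path automatically. A minimal sketch (the model and file name are illustrative):

```python
import torch
from lightning.fabric import Fabric, seed_everything

# verbose=False silences the seed announcement (#20108)
seed_everything(42, verbose=False)

fabric = Fabric(accelerator="cpu", devices=1)
fabric.launch()

model = fabric.setup(torch.nn.Linear(4, 2))

# Non-distributed checkpoints are now written atomically (#20011):
# the file appears either fully written or not at all.
fabric.save("checkpoint.pt", {"model": model})
```
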
Changed
  • Changed the implementation of how seeds are chosen for dataloader workers when using seed_everything(..., workers=True) (#20055)
  • NumPy is no longer a required dependency (#20090)
Removed
  • Removed support for PyTorch 2.1 (#20009)
  • Removed support for Python 3.8 (#20071)
Fixed
  • Fixed an attribute error when loading a checkpoint into a quantized model using the _lazy_load() function (#20121)
  • Fixed _optimizer_to_device logic for the special 'step' key in optimizer state that caused a performance regression (#20019)

Full commit list: 2.3.0 -> 2.4.0

Contributors

We thank all our contributors who submitted pull requests for features, bug fixes and documentation updates.

Did you know?

Chuck Norris can solve NP-hard problems in polynomial time. In fact, any problem is easy when Chuck Norris solves it.
