Lightning-AI/pytorch-lightning 0.6.0
Simplifications & new docs


This release focused on a ton of bug fixes and small optimizations to training, but most importantly, clean new docs!

Major changes

  • We have released new documentation; please bear with us as we fix broken links and patch in missing pieces.
  • The project has moved to the new PyTorchLightning organization, so the repository no longer lives under WilliamFalcon/PyTorchLightning.
  • We have added our own custom TensorBoard logger as the default logger.
  • We have upgraded Continuous Integration to speed up automatic testing.
  • We have fixed GAN training, which now supports multiple optimizers; see the sketch below.
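
The multi-optimizer flow behind the GAN fix looks roughly like the sketch below: configure_optimizers returns one optimizer per sub-network, and training_step receives an index telling it which optimizer is active. Argument names and the generator_step/discriminator_step helpers are assumptions for illustration, not the exact API of this release:

```python
import torch
from torch import nn

import pytorch_lightning as pl


class GAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Toy sub-networks; data hooks and losses are elided.
        self.generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
        self.discriminator = nn.Sequential(nn.Linear(8, 1))

    def configure_optimizers(self):
        # One optimizer per sub-network; Lightning alternates between them.
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        return [opt_g, opt_d]

    def training_step(self, batch, batch_idx, optimizer_idx):
        # optimizer_idx says which optimizer this call belongs to.
        if optimizer_idx == 0:
            return {'loss': self.generator_step(batch)}  # hypothetical helper
        return {'loss': self.discriminator_step(batch)}  # hypothetical helper
```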

Complete changelog

Added

  • Added support for resuming from a specific checkpoint via resume_from_checkpoint argument (#516); see the first sketch after this list
  • Added support for the ReduceLROnPlateau scheduler (#320); see the second sketch after this list
  • Added support for Apex mode O2 in conjunction with Data Parallel (#493)
  • Added option (save_top_k) to save the top k models in the ModelCheckpoint class (#128)
  • Added on_train_start and on_train_end hooks to ModelHooks (#598)
  • Added TensorBoardLogger (#607)
  • Added support for weight summary of model with multiple inputs (#543)
  • Added map_location argument to load_from_metrics and load_from_checkpoint (#625)
  • Added option to disable validation by setting val_percent_check=0 (#649)
  • Added NeptuneLogger class (#648)
  • Added WandbLogger class (#627)
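
Several of the additions above compose naturally at the Trainer level. A minimal sketch, assuming a hypothetical MyLightningModule and checkpoint paths; the keyword names follow the PRs referenced above:

```python
import torch

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

model = MyLightningModule()  # hypothetical LightningModule subclass

# Keep the 3 best checkpoints instead of only the latest one (#128).
checkpoint_callback = ModelCheckpoint(filepath='checkpoints/', save_top_k=3)

trainer = pl.Trainer(
    checkpoint_callback=checkpoint_callback,
    resume_from_checkpoint='checkpoints/last.ckpt',  # hypothetical path (#516)
    val_percent_check=0,  # disable validation entirely (#649)
)
trainer.fit(model)

# Load GPU-trained weights on a CPU-only machine (#625).
model = MyLightningModule.load_from_checkpoint(
    'checkpoints/best.ckpt',  # hypothetical path
    map_location=torch.device('cpu'),
)
```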
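
For the ReduceLROnPlateau support (#320), a LightningModule hands the scheduler back from configure_optimizers; a sketch, assuming the optimizer-list/scheduler-list return format of this era:

```python
import torch

import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    # training_step, dataloaders, etc. elided for brevity

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        # Lightning now recognizes ReduceLROnPlateau among the returned
        # schedulers and steps it with the monitored validation metric.
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3)
        return [optimizer], [scheduler]
```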

Changed

  • Changed the default progress bar to print to stdout instead of stderr (#531)
  • Renamed step_idx to step, epoch_idx to epoch, max_num_epochs to max_epochs, and min_num_epochs to min_epochs (#589)
  • Renamed several Trainer attributes (#567):
    • total_batch_nb to total_batches,
    • nb_val_batches to num_val_batches,
    • nb_training_batches to num_training_batches,
    • max_nb_epochs to max_epochs,
    • min_nb_epochs to min_epochs,
    • and nb_test_batches to num_test_batches
  • Changed gradient logging to use parameter names instead of indexes (#660)
  • Changed the default logger to TensorBoardLogger (#609); see the sketch after this list
  • Changed the directory for tensorboard logging to be the same as model checkpointing (#706)
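
With the new default, Trainer() with no logger argument now writes TensorBoard logs, placed alongside the model checkpoints (#706). A sketch of opting into one of the newly added loggers instead, assuming the pytorch_lightning.logging module path used at the time (later releases renamed it) and with constructor arguments as illustrative assumptions:

```python
import pytorch_lightning as pl
from pytorch_lightning.logging import TensorBoardLogger, WandbLogger

# Default: TensorBoard logging, no logger argument needed (#609).
trainer = pl.Trainer()

# Explicit equivalent of the default:
trainer = pl.Trainer(logger=TensorBoardLogger(save_dir='lightning_logs'))

# Or swap in one of the newly added loggers (#627, #648):
trainer = pl.Trainer(logger=WandbLogger(project='my-project'))  # hypothetical project name
```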

Deprecated

  • Deprecated max_nb_epochs and min_nb_epochs (#567)
  • Deprecated the on_sanity_check_start hook in ModelHooks (#598)

Removed

  • Removed the save_best_only argument from ModelCheckpoint; use save_top_k=1 instead (#128), as shown below
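
The migration is mechanical; a hedged before-and-after sketch:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# Before this release (argument now removed):
# checkpoint = ModelCheckpoint(filepath='checkpoints/', save_best_only=True)

# After: save_top_k=1 keeps only the single best checkpoint (#128).
checkpoint = ModelCheckpoint(filepath='checkpoints/', save_top_k=1)
```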

Fixed

  • Fixed a bug which occurred when using Adagrad with CUDA (#554)
  • Fixed a bug where training would be on the GPU despite setting gpus=0 or gpus=[] (#561)
  • Fixed an error with print_nan_gradients when some parameters do not require gradient (#579)
  • Fixed a bug where the progress bar would show an incorrect number of total steps during the validation sanity check when using multiple validation data loaders (#597)
  • Fixed support for PyTorch 1.1.0 (#552)
  • Fixed an issue with early stopping when using a val_check_interval < 1.0 in Trainer (#492)
  • Fixed bugs relating to the CometLogger object that would cause it to not work properly (#481)
  • Fixed a bug that would occur when returning -1 from on_batch_start following an early exit or when the batch was None (#509)
  • Fixed a potential race condition with several processes trying to create checkpoint directories (#530)
  • Fixed a bug where batch 'segments' would remain on the GPU when using truncated_bptt > 1 (#532)
  • Fixed a bug when using IterableDataset (#547)
  • Fixed a bug where .item() was called on non-tensor objects (#602)
  • Fixed a bug where Trainer.train would crash on an uninitialized variable if the trainer was run after resuming from a checkpoint that was already at max_epochs (#608)
  • Fixed a bug where early stopping would begin two epochs early (#617)
  • Fixed a bug where num_training_batches and num_test_batches would sometimes be rounded down to zero (#649)
  • Fixed a bug where an additional batch would be processed when manually setting num_training_batches (#653)
  • Fixed a bug when batches did not have a .copy method (#701)
  • Fixed a bug when using log_gpu_memory=True in Python 3.6 (#715)
  • Fixed a bug where checkpoint writing could exit before completion, giving incomplete checkpoints (#689)
  • Fixed a bug where on_train_end was not called when early stopping (#723)

Contributors

@akhti, @alumae, @awaelchli, @Borda, @borisdayma, @ctlaltdefeat, @dreamgonfly, @elliotwaite, @fdiehl, @goodok, @haossr, @HarshSharma12, @Ir1d, @jakubczakon, @jeffling, @kuynzereb, @MartinPernus, @matthew-z, @MikeScarp, @mpariente, @neggert, @rwesterman, @ryanwongsa, @schwobr, @tullie, @vikmary, @VSJMilewski, @williamFalcon, @YehCF

If we forgot someone due to not matching commit email with GitHub account, let us know :]
