Lightning-AI/pytorch-lightning 1.2.0
Pruning & Quantization & SWA


[1.2.0] - 2021-02-18

Added

  • Added DataType, AverageMethod and MDMCAverageMethod enum in metrics (#5657)
  • Added support for summarized model total params size in megabytes (#5590)
  • Added support for multiple train loaders (#1959)
  • Added top_k parameter to the Accuracy metric, generalizing it to Top-k accuracy for (multi-dimensional) multi-class inputs (#4838)
  • Added subset_accuracy parameter to the Accuracy metric, enabling subset accuracy for multi-label or multi-dimensional multi-class inputs (#4838)
  • Added HammingDistance metric to compute the hamming distance (loss) (#4838)
  • Added max_fpr parameter to auroc metric for computing partial auroc metric (#3790)
  • Added StatScores metric to compute the number of true positives, false positives, true negatives and false negatives (#4839)
  • Added R2Score metric (#5241)
  • Added LambdaCallback (#5347)
  • Added BackboneLambdaFinetuningCallback (#5377)
  • Accelerator all_gather now supports collections (#5221)
  • Added image_gradients functional metric to compute the image gradients of a given input image. (#5056)
  • Added MetricCollection (#4318)
  • Added .clone() method to metrics (#4318)
  • Added IoU class interface (#4704)
  • Added support for tying weights after moving the model to TPU via the on_post_move_to_device hook
  • Added missing val/test hooks in LightningModule (#5467)
  • The Recall and Precision metrics (and their functional counterparts recall and precision) can now be generalized to Recall@K and Precision@K with the use of top_k parameter (#4842)
  • Added ModelPruning Callback (#5618, #5825, #6045)
  • Added PyTorchProfiler (#5560)
  • Added compositional metrics (#5464)
  • Added Trainer method predict(...) for high-performance predictions (#5579)
  • Added on_before_batch_transfer and on_after_batch_transfer data hooks (#3671)
  • Added AUC/AUROC class interface (#5479)
  • Added PredictLoop object (#5752)
  • Added QuantizationAwareTraining callback (#5706, #6040)
  • Added LightningModule.configure_callbacks to enable the definition of model-specific callbacks (#5621)
  • Added dim to PSNR metric for mean-squared-error reduction (#5957)
  • Added proximal policy optimization template to pl_examples (#5394)
  • Added log_graph to CometLogger (#5295)
  • Added possibility for nested loaders (#5404)
  • Added sync_step to Wandb logger (#5351)
  • Added StochasticWeightAveraging callback (#5640)
  • Added LightningDataModule.from_datasets(...) (#5133)
  • Added PL_TORCH_DISTRIBUTED_BACKEND env variable to select backend (#5981)
  • Added Trainer flag to activate Stochastic Weight Averaging (SWA): Trainer(stochastic_weight_avg=True) (#6038); pruning, quantization-aware training and SWA are shown together in the sketch after this list
  • Added DeepSpeed integration (#5954, #6042)
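
The three headline features of this release, pruning, quantization-aware training and SWA, are all exposed as callbacks. Below is a minimal sketch of how they might be wired into a Trainer; the arguments and max_epochs value are illustrative assumptions, not the full signatures, and combining all three in one run is not something these notes promise.

```python
# Minimal sketch (not from the release notes): enabling the new ModelPruning,
# QuantizationAwareTraining and StochasticWeightAveraging callbacks.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import (
    ModelPruning,
    QuantizationAwareTraining,
    StochasticWeightAveraging,
)

trainer = pl.Trainer(
    max_epochs=10,  # illustrative value
    callbacks=[
        ModelPruning("l1_unstructured", amount=0.5),  # prune 50% of weights (L1, unstructured)
        QuantizationAwareTraining(),                  # fake-quantize during training with default qconfig
        StochasticWeightAveraging(),                  # average weights towards the end of training
    ],
)

# SWA alone can also be switched on via the new Trainer flag:
# trainer = pl.Trainer(stochastic_weight_avg=True)
```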

Changed

  • The stat_scores metric now calculates stat scores over all classes and gains new parameters, in line with the new StatScores metric (#4839)
  • Changed the computer_vision_fine_tuning example to use BackboneLambdaFinetuningCallback (#5377)
  • Changed automatic casting for LoggerConnector metrics (#5218)
  • Changed iou [func] to allow float input (#4704)
  • Metric compute() method will no longer automatically call reset() (#5409); see the sketch after this list
  • Set PyTorch 1.4 as the minimum requirement; testing and examples also require torchvision>=0.5 and torchtext>=0.5 (#5418)
  • Changed the callbacks argument in Trainer to also accept a single Callback as input (#5446)
  • Changed the default of find_unused_parameters to False in DDP (#5185)
  • Changed ModelCheckpoint version suffixes to start at 1 (#5008)
  • Progress bar metrics tensors are now converted to float (#5692)
  • Changed the default value for the progress_bar_refresh_rate Trainer argument in Google COLAB notebooks to 20 (#5516)
  • Extended support for purely iteration-based training (#5726)
  • Made LightningModule.global_rank, LightningModule.local_rank and LightningModule.logger read-only properties (#5730)
  • Forced ModelCheckpoint callbacks to run after all others to guarantee all states are saved to the checkpoint (#5731)
  • Refactored Accelerators and Plugins (#5743)
    • Added base classes for plugins (#5715)
    • Added parallel plugins for DP, DDP, DDPSpawn, DDP2 and Horovod (#5714)
    • Precision Plugins (#5718)
    • Added new Accelerators for CPU, GPU and TPU (#5719)
    • Added Plugins for TPU training (#5719)
    • Added RPC and Sharded plugins (#5732)
    • Added missing LightningModule-wrapper logic to new plugins and accelerator (#5734)
    • Moved device-specific teardown logic from training loop to accelerator (#5973)
    • Moved accelerator_connector.py to the connectors subfolder (#6033)
    • Trainer only references accelerator (#6039)
    • Made parallel devices optional across all plugins (#6051)
    • Cleaning (#5948, #5949, #5950)
  • Enabled self.log in callbacks (#5094)
  • Renamed xxx_AVAILABLE as protected (#5082)
  • Unified module names in Utils (#5199)
  • Separated utils: imports & enums (#5256, #5874)
  • Refactor: clean trainer device & distributed getters (#5300)
  • Simplified training phase as LightningEnum (#5419)
  • Updated metrics to use LightningEnum (#5689)
  • Changed the call order of the on_train_batch_end, on_batch_end and of the on_train_epoch_end, on_epoch_end hooks (#5688)
  • Refactored setup_training and removed test_mode (#5388)
  • Disabled training when num_training_batches is zero because limit_train_batches is too small (#5703)
  • Refactored EpochResultStore (#5522)
  • Updated lr_finder to check for the attribute if not running fast_dev_run (#5990)
  • Made LightningOptimizer more flexible for manual optimization and exposed toggle_model (#5771)
  • MLFlowLogger now limits parameter value length to 250 characters (#5893)
  • Re-introduced fix for Hydra directory sync with multiple processes (#5993)
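
One behavioural change above is worth spelling out in code: a metric's compute() no longer clears its accumulated state. A hedged sketch follows, assuming the 1.2-era pytorch_lightning.metrics.Accuracy class and made-up example tensors.

```python
# Sketch of the changed Metric behaviour: compute() keeps the accumulated
# state, so reset() must now be called explicitly between accumulation windows.
import torch
from pytorch_lightning.metrics import Accuracy  # metrics still live inside PL in 1.2

accuracy = Accuracy()
accuracy.update(torch.tensor([0, 1, 1, 0]), torch.tensor([0, 1, 0, 0]))
print(accuracy.compute())  # tensor(0.7500); the state is NOT cleared here anymore
accuracy.reset()           # clear the state explicitly before the next epoch
```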

Deprecated

  • Function stat_scores_multiple_classes is deprecated in favor of stat_scores (#4839)
  • Moved accelerators and plugins to their legacy package (#5645)
  • Deprecated LightningDistributedDataParallel in favor of new wrapper module LightningDistributedModule (#5185)
  • Deprecated LightningDataParallel in favor of new wrapper module LightningParallelModule (#5670)
  • Renamed utils modules (#5199)
    • argparse_utils >> argparse
    • model_utils >> model_helpers
    • warning_utils >> warnings
    • xla_device_utils >> xla_device
  • Deprecated using 'val_loss' to set the ModelCheckpoint monitor (#6012)
  • Deprecated .get_model() in favor of the explicit .lightning_module property (#6035)
  • Deprecated the Trainer attribute accelerator_backend in favor of accelerator (#6034); see the migration sketch after this list
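
As a quick migration note for the two Trainer deprecations above, here is a hedged sketch of the new accessor names; the old names still work in 1.2 but emit deprecation warnings, and the helper function below is purely illustrative.

```python
# Sketch of the renamed Trainer accessors. Assumes a Trainer that has already
# been fitted, so the attributes below are populated.
import pytorch_lightning as pl

def inspect_trainer(trainer: pl.Trainer) -> None:
    model = trainer.lightning_module   # previously: trainer.get_model()
    backend = trainer.accelerator      # previously: trainer.accelerator_backend
    print(type(model).__name__, type(backend).__name__)
```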

Removed

  • Removed deprecated checkpoint argument filepath (#5321)
  • Removed deprecated Fbeta, f1_score and fbeta_score metrics (#5322)
  • Removed deprecated TrainResult (#5323)
  • Removed deprecated EvalResult (#5633)
  • Removed LoggerStages (#5673)

Fixed

  • Fixed distributed setting and ddp_cpu only with num_processes>1 (#5297)
  • Fixed the saved filename in ModelCheckpoint when it already exists (#4861)
  • Fixed DDPHPCAccelerator hangs in DDP construction by calling init_device (#5157)
  • Fixed num_workers for Windows example (#5375)
  • Fixed loading yaml (#5619)
  • Fixed support for custom DataLoaders with DDP, provided they can be re-instantiated (#5745)
  • Fixed repeated .fit() calls ignoring the max_steps iteration bound (#5936)
  • Fixed throwing MisconfigurationError on unknown mode (#5255)
  • Resolved a bug with finetuning (#5744)
  • Fixed ModelCheckpoint race condition in file existence check (#5155)
  • Fixed some compatibility with PyTorch 1.8 (#5864)
  • Fixed forward cache (#5895)
  • Fixed recursive detach of tensors to CPU (#6007)
  • Fixed passing an invalid string for the scheduler interval not raising an error (#5923)
  • Fixed wrong requires_grad state after returning None with multiple optimizers (#5738)
  • Fixed the on_epoch_end hook not being called at the end of the validation and test epochs (#5986)
  • Fixed missing process_dataloader call for TPUSpawn when in distributed mode (#6015)
  • Fixed progress bar flickering by appending 0 to floats/strings (#6009)
  • Fixed synchronization issues with TPU training (#6027)
  • Fixed hparams.yaml saved twice when using TensorBoardLogger (#5953)
  • Fixed basic examples (#5912, #5985)
  • Fixed fairscale compatibility with PyTorch 1.8 (#5996)
  • Ensured process_dataloader is called when tpu_cores > 1 to use Parallel DataLoader (#6015)
  • Attempted SLURM auto resume call when non-shell call fails (#6002)
  • Fixed wrapping optimizers upon assignment (#6006)
  • Allowed hashing of metrics with lists in their state (#5939)

Contributors

@alanhdu, @ananthsub, @awaelchli, @Borda, @borisdayma, @carmocca, @ddrevicky, @deng-cy, @ducthienbui97, @justusschock, @kartik4949, @kaushikb11, @manipopopo, @marload, @neighthan, @peblair, @prampey, @pranjaldatta, @rohitgr7, @SeanNaren, @sid-sundrani, @SkafteNicki, @tadejsv, @tchaton, @teddykoker, @titu1994, @yuntai

If we forgot someone due to not matching commit email with GitHub account, let us know :]
