NNI v2.9 Release


Neural Architecture Search

  • New tutorial on the model space hub and one-shot strategies; a rough usage sketch follows this list. (tutorial)
  • Add pretrained checkpoints to AutoFormer. (doc)
  • Support loading the checkpoint of a trained supernet into a subnet. (doc)
  • Support viewing and resuming NAS experiments. (doc)
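
For orientation, here is a minimal sketch of combining a hub model space with a one-shot strategy. The import paths, class names, and the execution-engine setting are assumptions based on the linked tutorial, not a definitive recipe; check the tutorial for the exact API.

```python
# Minimal sketch only: search a hub model space with a one-shot strategy.
# Import paths, class names, and config fields are assumptions; see the linked tutorial.
from nni.nas.hub.pytorch import MobileNetV3Space       # pre-built model space from the hub
from nni.nas.evaluator.pytorch import Classification    # Lightning-based evaluator
from nni.nas.strategy import DARTS                       # one-shot search strategy
from nni.nas.experiment import RetiariiExperiment, RetiariiExeConfig

model_space = MobileNetV3Space()
evaluator = Classification(max_epochs=1)    # supply real train/val dataloaders in practice
strategy = DARTS()

experiment = RetiariiExperiment(model_space, evaluator, strategy=strategy)
config = RetiariiExeConfig()
config.execution_engine = 'oneshot'          # run the one-shot strategy in-process
experiment.run(config)

print(experiment.export_top_models())        # best architecture(s) found by the search
```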

Enhancements

  • Support fit_kwargs in the Lightning evaluator; a rough sketch follows this list. (doc)
  • Support drop_path and auxiliary_loss in NASNet. (doc)
  • Support gradient clipping in DARTS. (doc)
  • Add export_probs to monitor the architecture weights.
  • Rewrite configure_optimizers, the functions that step optimizers and schedulers, and other hooks, for simplicity and compatibility with the latest Lightning (v1.7).
  • Align the implementation of DifferentiableCell with the official DARTS repository.
  • Re-implement ProxylessNAS.
  • Move the nni.retiarii code base to nni.nas.
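
A rough illustration of the new evaluator and strategy knobs (fit_kwargs forwarding and DARTS gradient clipping). The argument names follow the listed docs, but the exact signatures and the checkpoint path are assumptions:

```python
# Sketch only: new knobs on the Lightning evaluator and the DARTS strategy.
# Exact signatures are assumptions; consult the linked docs.
from nni.nas.evaluator.pytorch import Classification
from nni.nas.strategy import DARTS

# fit_kwargs is assumed to be forwarded to the underlying Lightning trainer's fit() call,
# e.g. to resume supernet training from a (hypothetical) checkpoint file.
evaluator = Classification(max_epochs=10, fit_kwargs={'ckpt_path': 'supernet_last.ckpt'})

# Gradient clipping during the architecture-search phase of DARTS.
strategy = DARTS(gradient_clip_val=5.0)
```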

Bug fixes

  • Fix a performance issue caused by tensor formatting in weighted_sum.
  • Fix a misuse of lambda expression in NAS-Bench-201 search space.
  • Fix the Gumbel temperature schedule in Gumbel DARTS.
  • Fix the architecture weight sharing when sharing labels in differentiable strategies.
  • Fix memo reuse when exporting a differentiable cell.

Compression

  • New tutorial on pruning a transformer model. (tutorial)
  • Add TorchEvaluator, LightningEvaluator, and TransformersEvaluator to ease the expression of training logic in pruners; a rough sketch follows below. (doc, API)
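
As a rough sketch of how training logic might be handed to a pruner through the new TorchEvaluator: the import paths, the training_func signature, and the choice of TaylorFOWeightPruner below are assumptions based on the linked docs, not a definitive example.

```python
# Sketch only: wrapping plain PyTorch training logic in a TorchEvaluator for a pruner.
# Import paths and argument names are assumptions; verify against the linked API docs.
import nni
import torch
import torch.nn.functional as F
from nni.compression.pytorch import TorchEvaluator
from nni.compression.pytorch.pruning import TaylorFOWeightPruner

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))

def training_func(model, optimizers, criterion, lr_schedulers=None, max_steps=None, max_epochs=None):
    # The evaluator calls back into an ordinary PyTorch training loop.
    model.train()
    for _ in range(max_steps or 100):
        x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))  # stand-in for a real dataloader
        optimizers.zero_grad()
        criterion(model(x), y).backward()
        optimizers.step()

# The optimizer is created through nni.trace so the evaluator can re-create it for the pruned model.
optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
evaluator = TorchEvaluator(training_func, optimizers=optimizer, criterion=F.cross_entropy)

pruner = TaylorFOWeightPruner(model, [{'op_types': ['Linear'], 'sparsity': 0.5}],
                              evaluator, training_steps=100)
_, masks = pruner.compress()
```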

Enhancements

  • Promote all pruner APIs to use Evaluator; the old API is deprecated and will be removed in v3.0. (doc)
  • Greatly enlarge the set of supported operators in pruning speedup via automatic operator conversion.
  • Support lr_scheduler in pruning by using Evaluator.
  • Support pruning NLP tasks in ActivationAPoZRankPruner and ActivationMeanRankPruner.
  • Add training_steps, regular_scale, movement_mode, and sparse_granularity for MovementPruner; a rough sketch follows this list. (doc)
  • Add GroupNorm replacement in pruning speedup. Thanks to external contributor @cin-xing.
  • Optimize balance mode performance in LevelPruner.
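
The new MovementPruner options might be configured along these lines. The values are illustrative only, the constructor signature is an assumption based on the linked doc, and model, config_list, and evaluator are assumed to be defined as in the evaluator sketch above.

```python
# Sketch of the new MovementPruner knobs; values are illustrative and the exact
# constructor signature should be checked against the linked doc.
from nni.compression.pytorch.pruning import MovementPruner

pruner = MovementPruner(
    model, config_list, evaluator,   # as in the evaluator sketch above
    training_steps=6000,             # total steps driving the movement-score schedule
    regular_scale=10,                # strength of the sparsity regularization
    movement_mode='soft',            # 'soft' movement variant
    sparse_granularity='auto',       # e.g. per-head granularity for attention layers
)
_, masks = pruner.compress()
```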

Bug fixes

  • Fix the invalid dependency_aware mode in scheduled pruners.
  • Fix a bug where the bias mask could not be generated.
  • Fix a bug where max_sparsity_per_layer had no effect.
  • Fix Linear and LayerNorm speedup replacement in NLP tasks.
  • Fix a failure when tracing LightningModule with pytorch_lightning >= 1.7.0.

Hyper-parameter optimization

  • Fix a bug where weights were not defined correctly in adaptive_parzen_normal of TPE.

Training service

  • Fix a trialConcurrency bug in the K8S training service: use ${envId}_run.sh instead of run.sh.
  • Fix an upload directory bug in the K8S training service: use a separate working directory for each experiment. Thanks to external contributor @amznero.

Web portal

  • Support dict keys in the Default metric chart on the detail page.
  • Show experiment error messages in small popup windows at the bottom right of the page.
  • Upgrade React Router to v6 to fix an index route issue.
  • Fix the detail page crashing when choices contain None.
  • Fix the missing dict intermediate dropdown in the trial comparison dialog.

Known issues

  • Activation-based pruners cannot support the [batch, seq, hidden] layout.
  • Failed trials are NOT auto-submitted when an experiment is resumed (#4931 was reverted due to its pitfalls).
