NNI v3.0 Preview Release (v3.0rc1)

Web Portal

  • New look and feel

Neural Architecture Search

  • Breaking change: nni.retiarii is no longer maintained and tested. Please migrate to nni.nas.
    • Inherit nni.nas.nn.pytorch.ModelSpace, rather than use @model_wrapper.
    • Use nni.choice, rather than nni.nas.nn.pytorch.ValueChoice.
    • Use nni.nas.experiment.NasExperiment and NasExperimentConfig, rather than RetiariiExperiment.
    • Use nni.nas.model_context, rather than nni.nas.fixed_arch.
    • Please refer to quickstart for more changes.
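For readers migrating from nni.retiarii, the following is a minimal sketch of what new-style code might look like. The toy space MyModelSpace, the trivial evaluate_model function, and the use of the MutableDropout helper are illustrative assumptions, not prescriptions from this release.

```python
import nni
import torch.nn as nn
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableDropout
from nni.nas.evaluator import FunctionalEvaluator
from nni.nas.experiment import NasExperiment
from nni.nas.strategy import Random

class MyModelSpace(ModelSpace):          # previously: @model_wrapper on a plain nn.Module
    def __init__(self):
        super().__init__()
        self.conv = LayerChoice([
            nn.Conv2d(3, 16, 3, padding=1),
            nn.Conv2d(3, 16, 5, padding=2),
        ], label='conv')
        # nni.choice replaces nni.nas.nn.pytorch.ValueChoice
        self.dropout = MutableDropout(nni.choice('dropout', [0.25, 0.5, 0.75]))

    def forward(self, x):
        return self.dropout(self.conv(x))

def evaluate_model(model):
    # train and validate the candidate model here, then report its metric
    nni.report_final_result(0.0)

# NasExperiment replaces RetiariiExperiment; a default NasExperimentConfig is created implicitly
experiment = NasExperiment(MyModelSpace(), FunctionalEvaluator(evaluate_model), Random())
experiment.run(port=8081)
```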
  • A refreshed experience for constructing model spaces.
    • Enhanced debuggability via freeze() and simplify() APIs.
    • Enhanced expressiveness with nni.choice, nni.uniform, nni.normal, etc.
    • Enhanced customization experience with MutableModule, ModelSpace and ParametrizedModule.
    • Search space with constraints is now supported.
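As an illustration of the debugging helpers above, the sketch space defined earlier might be inspected and frozen roughly like this; the sample keys and values depend on the labels and candidates you declared, so treat them as placeholders.

```python
space = MyModelSpace()     # the illustrative space from the migration sketch above

# simplify() returns a flattened view of every mutable in the space, keyed by label
print(space.simplify())

# freeze() materializes one concrete PyTorch model from a sample of the space
model = space.freeze({'conv': 0, 'dropout': 0.5})
```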
  • Improved robustness and stability of strategies.
    • Supported search space types are now enriched for PolicyBaseRL, ENAS and Proxyless.
    • Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
    • Most multi-trial strategies now support specifying a seed for reproducibility.
    • Performance of strategies has been verified on a set of benchmarks.
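As a hedged illustration of the reproducibility point, a multi-trial strategy might be seeded as below; whether a particular strategy exposes a seed argument, and under what name, should be checked in its documentation, as the keyword here is an assumption based on the note above.

```python
from nni.nas.strategy import RegularizedEvolution

# assumption: the seed mentioned in the release note is accepted as a constructor keyword
strategy = RegularizedEvolution(seed=42)
```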
  • Strategy/engine middleware.
    • Filtering, replicating, deduplicating or retrying models submitted by any strategy.
    • Merging or transforming models before executing (e.g., CGO).
    • Arbitrarily long chains of middlewares are supported.
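A hedged sketch of assembling such a chain follows; the import path and the Chain, Filter and Deduplication names are assumptions inferred from the description above, so consult the strategy middleware documentation for the exact API.

```python
from nni.nas.strategy import RegularizedEvolution
# assumed module path and class names; see the middleware documentation for the real ones
from nni.nas.strategy.middleware import Chain, Deduplication, Filter

strategy = Chain(
    RegularizedEvolution(),         # the underlying search strategy
    Filter(lambda model: True),     # keep only models passing a user-defined predicate
    Deduplication(),                # avoid re-evaluating identical architectures
)
```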
  • New execution engine.
    • Improved debuggability via SequentialExecutionEngine: trials can run in a single process and breakpoints are effective.
    • The old execution engine is now decomposed into execution engine and model format.
    • Enhanced extensibility of execution engines.
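A minimal sketch of switching an experiment onto the sequential engine for debugging follows; exactly how the engine is selected on NasExperimentConfig may differ, so the string assignment below is an assumption.

```python
from nni.nas.evaluator import FunctionalEvaluator
from nni.nas.experiment import NasExperiment
from nni.nas.strategy import Random

# MyModelSpace and evaluate_model are the illustrative definitions from the migration sketch
experiment = NasExperiment(MyModelSpace(), FunctionalEvaluator(evaluate_model), Random())
# assumption: the engine is selected by name on the experiment config; with the sequential
# engine, trials run in the current process, so breakpoints take effect
experiment.config.execution_engine = 'sequential'
experiment.run()
```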
  • NAS profiler and hardware-aware NAS.
    • New profilers profile a model space and quickly compute profiling results for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
    • Profilers can be assembled with arbitrary strategies, including both multi-trial and one-shot.
    • Profilers are extensible; strategies can be assembled with arbitrary customized profilers.
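A hedged sketch of the profiler workflow follows; only the FlopsProfiler name comes from the note above, while the import path and method calls are assumptions to verify against the profiler documentation.

```python
import torch
# assumed import path for the FLOPs profiler
from nni.nas.profiler.pytorch.flops import FlopsProfiler

space = MyModelSpace()    # the illustrative space from the migration sketch above
# profile the whole space once, using an example input
profiler = FlopsProfiler(space, torch.randn(1, 3, 32, 32))
# assumption: a profiled result can then be computed cheaply for any sampled architecture
print(profiler.profile({'conv': 0, 'dropout': 0.5}))
```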

Compression

  • The compression framework has been refactored; the new import path is nni.contrib.compression.
    • Configuration keys have been refactored to support more detailed compression settings. view doc
    • Support fusing multiple compression methods. view doc
    • Support distillation as a basic compression component. view doc
    • Support more compression targets, like input, output and any registered parameters. view doc
    • Support compressing any module type by customizing module settings. view doc
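A hedged sketch of a config in the new framework follows, assuming the key names sparse_ratio, target_names and granularity from the linked docs; the toy model is illustrative.

```python
import torch.nn as nn
from nni.contrib.compression.pruning import L1NormPruner

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# new-style config: per-target settings with explicit granularity (key names assumed from the docs)
config_list = [{
    'op_types': ['Linear'],
    'sparse_ratio': 0.5,
    'target_names': ['weight'],      # could also cover inputs, outputs or other registered parameters
    'granularity': 'out_channel',    # channel-level rather than fine-grained pruning
}]

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()         # generate masks according to the config
```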
  • Pruning
    • Pruner interfaces have been fine-tuned for ease of use. view doc
    • Support configuring granularity in pruners. view doc
    • Support different masking modes: multiplying by zero or adding a large negative value.
    • Support manually setting dependency groups and global groups. view doc
    • A new, more powerful pruning speedup has been released, with greatly improved applicability and robustness. view doc
    • The end-to-end transformer compression tutorial has been updated, achieving more aggressive compression. view doc
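Continuing the pruning sketch above, the generated masks can be handed to the new speedup; the v2 module path below is an assumption for this release.

```python
import torch
from nni.compression.pytorch.speedup.v2 import ModelSpeedup   # assumed path of the new speedup

pruner.unwrap_model()            # detach the pruner's wrappers before rewriting the graph
# physically shrink the pruned layers, using a dummy input matching the toy model above
ModelSpeedup(model, torch.rand(8, 64), masks).speedup_model()
print(model)                     # the Linear layers now carry fewer channels
```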
  • Quantization
    • Support using an Evaluator to handle training and inference.
    • Support more module fusion combinations. view doc
    • Support configuring granularity in quantizers. view doc
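A hedged sketch of driving a quantizer with an Evaluator follows; the TorchEvaluator import location, its argument names, the config keys and the compress signature are assumptions to check against the linked docs.

```python
import nni
import torch
import torch.nn as nn
from nni.contrib.compression import TorchEvaluator             # assumed re-export location
from nni.contrib.compression.quantization import QATQuantizer

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

def training_step(batch, model):
    x, y = batch
    return nn.functional.cross_entropy(model(x), y)

def training_func(model, optimizer, training_step, *args, **kwargs):
    # the user's ordinary training loop: iterate a DataLoader and step the optimizer
    pass

# optimizers handed to an Evaluator are expected to be created through nni.trace
optimizer = nni.trace(torch.optim.SGD)(model.parameters(), lr=1e-2)
evaluator = TorchEvaluator(training_func, optimizer, training_step)

config_list = [{'op_types': ['Linear'], 'quant_dtype': 'int8'}]   # key names assumed from the docs
quantizer = QATQuantizer(model, config_list, evaluator)
quantized_model, calibration_config = quantizer.compress(max_steps=100, max_epochs=None)
```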
  • Distillation
  • Compression documentation has been updated for the new framework; for the old version, please see the v2.10 docs.
  • New compression examples are available under nni/examples/compression.

Training Services

  • Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x
  • Local training service:
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
  • Remote training service:
    • reuse_mode now defaults to False; setting it to True falls back to the v2.x remote training service
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
    • Supported viewing trial logs on the web portal
    • Supported automatic recovery after temporary server failures (network fluctuations, out of memory, etc.)
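A hedged sketch of opting back into the v2.x-style remote service from the Python API follows; the RemoteMachineConfig fields are placeholders, and only reuse_mode itself comes from the note above.

```python
from nni.experiment import Experiment
from nni.experiment.config import RemoteMachineConfig

experiment = Experiment('remote')
experiment.config.training_service.machine_list = [
    RemoteMachineConfig(host='192.0.2.10', user='nni-user', ssh_key_file='~/.ssh/id_rsa'),
]
# reuse_mode now defaults to False (the new v3.0 remote service);
# set it to True to fall back to the v2.x remote training service
experiment.config.training_service.reuse_mode = True
```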
