huggingface/optimum-intel v1.3.0: Knowledge distillation and one-shot optimization support


Knowledge distillation

Knowledge distillation was introduced (#8). To perform distillation, an IncDistiller must be instantiated with the appropriate configuration.
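
As a rough illustration, the snippet below sketches how such a distiller could be set up. Only the IncDistiller name comes from these notes; the import path, the IncDistillationConfig class, the configuration file name and the constructor arguments are illustrative assumptions, not the exact v1.3.0 API.

```python
# Illustrative sketch only: apart from IncDistiller, the import path, the
# IncDistillationConfig class and all argument names below are assumptions.
from transformers import AutoModelForSequenceClassification

from optimum.intel.neural_compressor import IncDistillationConfig, IncDistiller  # assumed path

# Teacher and student models for the distillation step.
teacher = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
student = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# The distillation behaviour (loss weights, temperature, ...) is assumed to be
# described in a configuration file consumed by a dedicated config class.
distillation_config = IncDistillationConfig.from_pretrained(".", config_file_name="distillation.yml")

# Instantiate the distiller with the appropriate configuration, the student to
# train and the teacher producing the soft targets (argument names are assumptions).
distiller = IncDistiller(model=student, teacher_model=teacher, config=distillation_config)
```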

One-shot optimization

Support for combining compression techniques such as pruning, knowledge distillation and quantization aware training in one shot during training was introduced (#7). One-shot optimization is enabled by default, but can be disabled by setting the one_shot_optimization parameter to False when instantiating the IncOptimizer, as sketched below.
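
Apart from the one_shot_optimization parameter and the IncOptimizer class named above, the import path and the remaining arguments in this sketch are assumptions.

```python
# Sketch only: apart from IncOptimizer and one_shot_optimization, the import
# path and the other constructor arguments are assumptions.
from optimum.intel.neural_compressor import IncOptimizer  # assumed path

# `model` is the transformers model to compress; `quantizer`, `pruner` and
# `distiller` stand for the corresponding compression components (see the
# distillation sketch above) and are assumed constructor arguments.
optimizer = IncOptimizer(
    model,
    quantizer=quantizer,
    pruner=pruner,
    distiller=distiller,
    one_shot_optimization=False,  # disable the default one-shot combination of the techniques
)
```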

Seq2Seq models support

Both quantization and pruning can now be applied to Seq2Seq models (#14).
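
For illustration, the lines below sketch loading a Seq2Seq model and preparing a quantizer for it. Only the Seq2Seq support itself comes from these notes; the IncQuantizer and IncQuantizationConfig names, the import path, the configuration file name and the arguments are assumptions.

```python
# Sketch under assumed names: IncQuantizationConfig / IncQuantizer and their
# arguments are illustrative; what v1.3.0 adds is that Seq2Seq models are now
# supported by quantization and pruning.
from transformers import AutoModelForSeq2SeqLM

from optimum.intel.neural_compressor import IncQuantizationConfig, IncQuantizer  # assumed path

# A standard Seq2Seq (encoder-decoder) model from transformers.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Quantization settings are assumed to come from a configuration file.
quantization_config = IncQuantizationConfig.from_pretrained(".", config_file_name="quantization.yml")
quantizer = IncQuantizer(config=quantization_config)  # assumed arguments

# The quantizer would then be passed, together with the model, to the optimizer.
```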
