XiaoMi/mace v1.0.0


Release Note

The following are the highlights in this release:

Support Quantization For MACE Micro

At the beginning of this year, we released MACE Micro to fully support ultra-low-power inference scenarios on mobile phones and IoT devices. In this version, we add quantization support to MACE Micro and integrate CMSIS 5 to better support Cortex-M chips.
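Quantization in MACE is normally switched on per model in the deployment file used by the converter. Below is a minimal sketch, assuming the usual deployment-file keys (platform, subgraphs, runtime, quantize, quantize_range_file); the model path, tensor names, and shapes are placeholders, and the Micro-specific conversion options should be taken from the MACE Micro documentation.

    # Hedged sketch of a MACE deployment file with quantization enabled.
    # Paths, tensor names, and shapes are placeholders.
    library_name: mnist_micro
    target_abis: [arm64-v8a]
    model_graph_format: file
    model_data_format: file
    models:
      mnist_quant:
        platform: tensorflow                 # framework the model was trained with
        model_file_path: path/to/mnist.pb
        subgraphs:
          - input_tensors: [input]
            input_shapes: [1,28,28,1]
            output_tensors: [output]
            output_shapes: [1,10]
        runtime: cpu
        quantize: 1                          # enable quantized (8-bit) inference
        quantize_range_file: path/to/ranges  # tensor ranges for post-training quantization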

Support More Model Formats

We see more and more R&D engineers using the PyTorch framework to train their models. In previous versions, MACE converted PyTorch models by using the ONNX format as a bridge. To serve PyTorch developers better, this version supports direct conversion of PyTorch models, which improves model inference performance.
At the same time, we cooperated with MEGVII and added support for its MegEngine model format. If you train your models with the MegEngine framework, you can now use MACE to deploy them on mobile phones or IoT devices.
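In the deployment file, the source framework is selected with the platform field. The fragment below sketches what that looks like for the two new formats; the identifier strings pytorch and megengine, as well as the file paths and extensions, are assumptions to be checked against the converter documentation.

    # Hedged fragment: selecting the source framework per model.
    models:
      my_torch_model:
        platform: pytorch                 # assumed identifier for direct PyTorch conversion
        model_file_path: path/to/model.pt
        # ... subgraphs, runtime, etc. as usual
      my_megengine_model:
        platform: megengine               # assumed identifier for MegEngine models
        model_file_path: path/to/model.mge
        # ... subgraphs, runtime, etc. as usual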

Support More Data Precision

Armv8.2 adds half-precision floating-point data-processing instructions. In this version, we use these Armv8.2 fp16 instructions to support fp16 computation, which increases inference speed by roughly 40% for models such as mobilenet-v1.
bfloat16 (Brain Floating Point) is a floating-point format that occupies 16 bits of memory. We also support bfloat16 precision in this version, which increases inference speed by roughly 40% for models such as mobilenet-v1/v2 on some low-end chips.
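Both precisions are selected per model through the data_type field of the deployment file. The sketch below assumes the value strings follow MACE's <storage>_<compute> naming (fp16_fp16 for Armv8.2 half precision, bf16_fp32 for bfloat16); verify the exact spellings against the documentation.

    # Hedged fragment: choosing reduced-precision CPU computation.
    models:
      mobilenet_v1_fp16:
        runtime: cpu
        data_type: fp16_fp16   # assumed value: fp16 storage and compute, needs an Armv8.2 CPU
        # ...
      mobilenet_v1_bf16:
        runtime: cpu
        data_type: bf16_fp32   # assumed value: bfloat16 storage with fp32 accumulation
        # ...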

Others

In this version, we also add the following features:

  1. Support more operators, such as GroupNorm, ExtractImagePatches, and Elu.
  2. Optimize the performance of the framework and of individual operators, such as the Reduce operator.
  3. Support dynamic filters for conv2d/deconv2d.
  4. Integrate MediaTek APU support on mt6873, mt6885, and mt6853.

Acknowledgement

Thanks to the following contributors, whose code makes MACE better.

@ZhangZhijing1, who contributed the bf16 code, which was then committed on their behalf by someone else.
@yungchienhsu, @Yi-Kai-Chen, @Eric-YK-Chen, @yzchen, @gasgallo, @lq, @huahang, @elswork, @LovelyBuggies, @freewym.

Attachment

libmace-v1.0.0.tar.gz: Prebuilt MACE libraries built with NDK r19c, containing armeabi-v7a, arm64-v8a, arm_linux, and linux-x86-64 libraries.
