Bug Fixes
- Fixed calibration data generation with multiple samples in the ONNX workflow.
New Features
- Added a standalone type inference option (`--use_standalone_type_inference`) to ONNX AutoCast as an experimental alternative to ONNX's `infer_shapes`. This option performs type-only inference without shape inference, which can help when shape inference fails or when you want to avoid its extra overhead (see the usage sketch after this list).
- Added quantization support for the Kimi K2 Thinking model from the original int4 checkpoint.
- Introduced support for params-constraint-based automatic neural architecture search in Minitron pruning (`mcore_minitron`) as an alternative to manual pruning with `export_config`. See examples/pruning/README.md for more details.
- Added an example of Minitron pruning with the Megatron-Bridge framework, including advanced usage of params-constraint-based pruning, plus a new distillation example. See examples/megatron_bridge/README.md.
- Added support for calibration data with multiple samples in `.npz` format in the ONNX AutoCast workflow (see the data-preparation sketch after this list).
- Added the `--opset` option to the ONNX quantization CLI to specify the target opset version for the quantized model (see the CLI sketch after this list).
- Enabled support for context parallelism in Eagle speculative decoding for both Hugging Face and Megatron Core models.
- Added unified Hugging Face export support for diffusers pipelines/components.
- Added support for LTX-2 and Wan2.2 (T2V) in the diffusers quantization workflow.
- Added PTQ support for GLM-4.7, including loading MTP layer weights from a separate `mtp.safetensors` file and exporting them as-is.
- Added support for image-text data calibration in PTQ for Nemotron VL models.
- Enabled advanced weight scale search for NVFP4 quantization and its export pathway.
- Provided PTQ support for Nemotron Parse.
- Added distillation support for LTX-2. See examples/diffusers/distillation/README.md for more details.
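
For the new AutoCast option above, here is a minimal invocation sketch. Only `--use_standalone_type_inference` comes from this release note; the `modelopt.onnx.autocast` entry point and the `--onnx_path` flag are assumptions, so check the AutoCast documentation for the exact CLI.

```python
# Sketch: run ONNX AutoCast with type-only inference instead of ONNX shape
# inference. Only --use_standalone_type_inference is taken from this release
# note; the module path and --onnx_path flag are assumptions.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "modelopt.onnx.autocast",
        "--onnx_path", "model.onnx",
        # Experimental: infer types without running shape inference, e.g.
        # when infer_shapes fails or its overhead is unwanted.
        "--use_standalone_type_inference",
    ],
    check=True,
)
```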
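For multi-sample calibration in the AutoCast workflow, a minimal sketch of building an `.npz` calibration file with NumPy. The layout is an assumption not confirmed by this changelog: arrays keyed by the model's input-tensor names, with samples stacked along the leading axis. The input name and shape below are placeholders.

```python
# Build a multi-sample calibration file in .npz format.
# Assumption: arrays are keyed by the ONNX model's input-tensor names, with
# samples stacked along the leading axis. "input" and its shape are
# placeholders for your model's actual inputs.
import numpy as np

num_samples = 16
samples = {
    "input": np.random.rand(num_samples, 3, 224, 224).astype(np.float32),
}
np.savez("calib_data.npz", **samples)
```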
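And a sketch of passing the new `--opset` option to the ONNX quantization CLI. Only `--opset` comes from this release note; the `modelopt.onnx.quantization` entry point and the other flags are assumptions, so consult the quantization docs for the exact interface.

```python
# Sketch: quantize an ONNX model while pinning the opset of the output model.
# Only --opset is taken from this release note; the entry point and the other
# flags are assumptions.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "modelopt.onnx.quantization",
        "--onnx_path", "model.onnx",
        "--calibration_data", "calib_data.npz",  # e.g. the file built above
        "--opset", "21",  # target opset version for the quantized model
    ],
    check=True,
)
```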