🌟 Summary
TFLite INT8 export is now enabled for end2end models, plus improved quantized NMS accuracy and full IMX500 Classification support—expanding fast, small-footprint deployments across mobile and edge devices. 🚀📱
📊 Key Changes
- TFLite INT8 for end2end models ✅
- Enables exporting quantized TFLite INT8 models even when `model.end2end=True` (Enable TFLite INT8 export for end2end models – PR #22503).
- Works with simple commands like `yolo export model=yolov10n.pt format=tflite int8` and `yolo val task=detect model=yolov10n_saved_model/yolov10n_int8.tflite data=coco128.yaml` (a Python equivalent is sketched after this list).
- More accurate NMS with quantization 🧠
- Improves NMS stability and accuracy for quantized exports, especially with TFLite INT8 (Improve accuracy of NMS export with quantization – PR #22487).
- IMX500: Classification export + inference 🎛️
- Adds full Classification task support for IMX500, with docs, examples, and benchmarks (Support IMX export and inference for classification – PR #21405); a brief export and inference sketch follows this list.
- Documentation and DX polish ✨
- Consistent US English across docs, logs, tests (e.g., grey→gray, labelled→labeled) (Standardize American English – PR #22508).
- Clearer hyperparameter tuning guidance (Clarify tuning limitations – PR #22513).
- Fixed a segmentation docs snippet variable name (Fix variable name – PR #22501).
- Added a Construction-PPE training video to the dataset page (Add PPE video – PR #22485).
- Shorter docs banner copy (Update Banner Text – PR #22486).
- CI and tooling upgrades 🧰
- Expanded Ruff rules in docs CI (Update docs.yml with RUF and FA – PR #22480).
- Cleaner Codespell config to reduce false positives (Update codespell settings – PR #22509).
- GitHub Actions artifact steps bumped to latest (download-artifact v6 – PR #22489, upload-artifact v5 – PR #22490).
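For the end2end TFLite INT8 item above, here is a minimal Python sketch mirroring the CLI commands. The model file, output path, and dataset are the same illustrative ones used in the example, not fixed requirements.

```python
from ultralytics import YOLO

# Export an end2end model (e.g. YOLOv10, which sets model.end2end=True) to quantized TFLite INT8
model = YOLO("yolov10n.pt")
model.export(format="tflite", int8=True)  # calibration data can be overridden via data="..."

# Validate the exported INT8 model (output path follows the CLI example above)
int8_model = YOLO("yolov10n_saved_model/yolov10n_int8.tflite", task="detect")
metrics = int8_model.val(data="coco128.yaml")
print(metrics.box.map)  # mAP50-95 of the quantized model
```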
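For the IMX500 Classification item, a hedged sketch of export plus inference via the Python API follows. The classification checkpoint, calibration dataset name, and exported directory name are illustrative assumptions rather than confirmed defaults.

```python
from ultralytics import YOLO

# Export a classification model for the Sony IMX500 (Raspberry Pi AI Camera)
model = YOLO("yolov8n-cls.pt")                 # classification checkpoint (illustrative)
model.export(format="imx", data="imagenet10")  # calibration dataset name is an assumption

# Run inference with the exported IMX artifact (directory name is an assumption)
imx_model = YOLO("yolov8n-cls_imx_model", task="classify")
results = imx_model("https://ultralytics.com/images/bus.jpg")
print(results[0].probs.top1)  # index of the top-1 predicted class
```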
🎯 Purpose & Impact
- Broader edge deployment options 🌍
- INT8 TFLite for end2end models unlocks smaller, faster models on mobile and embedded devices with minimal friction.
- Better accuracy under quantization 📈
- Improved NMS logic helps retain performance when using INT8 exports—meaning more reliable results on constrained hardware.
- IMX500 across tasks 🧩
- With Classification now supported alongside Detection and Pose, teams can cover more use cases on the Raspberry Pi AI Camera and similar devices.
- Smoother onboarding and contributor experience 🧑‍💻
- Clearer docs, fewer CI false positives, and corrected examples reduce confusion and speed up development.
Happy exporting and deploying! ⚡
What's Changed
- Update docs.yml with `RUF` and `FA` ruff formatting by @glenn-jocher in #22480
- Update Banner Text by @sergiuwaxmann in #22486
- Bump actions/download-artifact from 5 to 6 in /.github/workflows by @dependabot[bot] in #22489
- Bump actions/upload-artifact from 4 to 5 in /.github/workflows by @dependabot[bot] in #22490
- Add https://youtu.be/lFaVnrhMmaE to docs by @RizwanMunawar in #22485
- Improve accuracy of NMS export with quantization by @Y-T-G in #22487
- Fix British english by @glenn-jocher in #22508
- Update pyproject.toml codespell settings by @glenn-jocher in #22509
- Support IMX export and inference for classification by @ambitious-octopus in #21405
- Clarify hyperparameter tuning limitations in guide by @amm1111 in #22513
- Fix variable name from `res` to `results` by @RizwanMunawar in #22501
- Enable TFLite INT8 export for end2end models by @Y-T-G in #22503
Full Changelog: v8.3.221...v8.3.222