🌟 Summary
Ultralytics v8.4.37 is a quality + workflow-focused release: the tag PR itself is a version bump, while the main substance is improved hyperparameter tuning (now NDJSON-based for multi-dataset runs), better handling of class imbalance, stronger training reliability, and clearer docs/UI guidance. 🚀
📊 Key Changes
- **(Priority note)** Current PR #24192 by @glenn-jocher: release tag/version update only (`8.4.36` → `8.4.37`) 📦. No direct runtime/model logic change in this PR alone.
- **Major tuning upgrade** (PR #24179 by @Laughing-q) 🔧
  - Hyperparameter tuning logs moved from CSV to `tune_results.ndjson`.
  - Better support for multi-dataset tuning with per-dataset fitness tracking.
  - Plot/output naming updated (e.g., `tune_fitness.png`), and MongoDB sync now aligns with local NDJSON logs.
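Because NDJSON stores one JSON object per line, each tuning iteration can be parsed independently, so schema changes or truncated writes do not corrupt earlier records the way they can in a single CSV. A minimal sketch of reading such a log (the field names `iteration`, `fitness`, and `lr0` are illustrative assumptions, not the exact `tune_results.ndjson` schema):

```python
import json

# Hypothetical NDJSON lines in the shape of per-iteration tuning records.
lines = [
    '{"iteration": 1, "fitness": 0.512, "lr0": 0.010}',
    '{"iteration": 2, "fitness": 0.547, "lr0": 0.008}',
]

# NDJSON = one JSON object per line, so each record parses on its own.
records = [json.loads(line) for line in lines]

# Pick the iteration with the highest fitness.
best = max(records, key=lambda r: r["fitness"])
print(best["iteration"])  # 2
```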
- **Class imbalance support in training** (PR #23565 by @ahmet-f-gumustas) ⚖️
  - Added a new `cls_pw` hyperparameter to weight underrepresented classes more heavily during detection training.
  - Default is off (`0.0`), so existing behavior stays unchanged unless enabled.
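Conceptually, a positive-class weight like `cls_pw` scales the loss on positive targets so that rare classes contribute more to the gradient. A minimal sketch of that general mechanism using a weighted binary cross-entropy (an illustration of the idea only, not the actual Ultralytics loss wiring):

```python
import math

def bce_with_pos_weight(p, y, pw=1.0):
    """Binary cross-entropy where positive targets are up-weighted by `pw`.

    pw=1.0 reproduces plain BCE; pw>1 penalizes missed positives more,
    which is how a positive-class weight helps rare classes.
    """
    eps = 1e-12  # avoid log(0)
    return -(pw * y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# A missed rare positive (y=1 but low predicted probability) costs
# three times as much with pw=3 as with plain BCE:
plain = bce_with_pos_weight(0.2, 1, pw=1.0)
weighted = bce_with_pos_weight(0.2, 1, pw=3.0)
print(weighted > plain)  # True
```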
- **Training stability + robustness fixes** 🛡️
  - First-epoch checkpoint is now saved even if EMA has invalid values early on (PR #24170 by @Laughing-q).
  - Fixed a local zip dataset regression in HUB/Platform training (`safe_download` path handling) (PR #24185 by @glenn-jocher).
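The first-epoch checkpoint fix can be pictured as a simple guard: rather than refusing to save whenever EMA weights are non-finite, the first epoch is always allowed to save so a run never ends with no checkpoint at all. A hedged sketch of that logic (the function below is hypothetical, not the Ultralytics implementation):

```python
import math

def can_save_checkpoint(ema_values, epoch):
    """Illustrative guard: allow the first-epoch save even when EMA
    weights contain NaN/Inf, so the run always has a saved state."""
    ema_finite = all(math.isfinite(v) for v in ema_values)
    return ema_finite or epoch == 0

print(can_save_checkpoint([float("nan"), 0.5], epoch=0))  # True
print(can_save_checkpoint([float("nan"), 0.5], epoch=3))  # False
```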
- **Evaluation and CI reliability improvements** ✅
  - `compute_ap` precision edge-case fix for more robust AP calculation (PR #24175 by @Laughing-q).
  - CI/Docker benchmark threshold updates to reduce false failures.
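For context, average precision is typically computed by enforcing a monotonically non-increasing precision envelope and integrating it over recall, a step where small floating-point edge cases can creep in. A simplified stand-in showing that computation (not the actual `compute_ap` code):

```python
def average_precision(recall, precision):
    """Illustrative all-point-interpolation AP, not the Ultralytics code.

    Pads the curve, applies the precision envelope (each precision is
    replaced by the max precision at any higher recall), then sums the
    area under the resulting step function.
    """
    r = [0.0] + list(recall) + [1.0]
    p = [1.0] + list(precision) + [0.0]
    # Precision envelope: sweep right-to-left so p is non-increasing.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Integrate the step curve over recall.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

ap = average_precision([0.5, 1.0], [1.0, 0.5])
print(round(ap, 3))  # 0.75
```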
- **Cleaner distributed training logs** (PR #24177 by @Laughing-q) 🧹
  - Reduced duplicate model info prints in DDP/multi-process training.
- **Documentation and platform UX improvements** 📚
  - Corrected task-specific `.load()` weights in segment/OBB docs.
  - OpenVINO links updated to the YOLO26 optimization notebooks.
  - Platform docs updated for the new Settings > API Keys flow.
  - Quickstart diagram made interactive for easier navigation.
🎯 Purpose & Impact
- For ML practitioners tuning across datasets: much better experiment tracking and analysis thanks to NDJSON + per-dataset records. 📈
- For teams with imbalanced data: `cls_pw` can improve rare-class learning without changing your whole pipeline. 🎯
- For production/research training: fewer broken runs and better checkpoint safety in edge cases. 🛡️
- For CI and benchmarking users: fewer flaky failures from minor threshold drift. 🧪
- For new users: clearer docs and safer examples reduce silent misconfigurations and confusion. 📚
What's Changed
- Add class weights support for handling class imbalance in training by @ahmet-f-gumustas in #23565
- Fix AP calculation precision in `compute_ap` by @Laughing-q in #24175
- Fix: Allow checkpoint save for first epoch even when EMA contains NaN/Inf by @Laughing-q in #24170
- Fix duplicate model info print for DDP training by @Laughing-q in #24177
- Fix benchmark verbose for `Dockerfile-nvidia-arm64` by @Laughing-q in #24181
- docs: fix wrong weights in segment and obb .load() examples by @amanharshx in #24180
- Label Docs non-text code fences by @glenn-jocher in #24176
- docs: update OpenVINO notebook links to YOLO26 optimization by @easyrider11 in #24182
- Simplify engine resume tests by @fcakyon in #24183
- Fix HUB training regression for local zip datasets by @glenn-jocher in #24185
- replace journey diagram with interactive workflow graph on platform quickstart by @raimbekovm in #24186
- `ultralytics 8.4.37` NDJSON-based multidataset hyperparameter tuning by @Laughing-q in #24179
- improve trainer callbacks docs with precise descriptions by @raimbekovm in #24122
- `ultralytics 8.4.37` NDJSON-based multidataset hyperparameter tuning by @glenn-jocher in #24192
New Contributors
- @easyrider11 made their first contribution in #24182
Full Changelog: v8.4.36...v8.4.37