🌟 Summary
Distributed hyperparameter tuning lands in ultralytics 8.3.192 with optional MongoDB Atlas coordination, delivering scalable HPO across machines, a steadier progress ETA, faster GPU data transfers, and more reliable CSV parsing — plus a clear docs warning on YOLOv5 compatibility. 🚀
📊 Key Changes
- Distributed Hyperparameter Tuning (primary)
- Optional MongoDB Atlas integration to coordinate tuning across multiple machines.
- New args: `mongodb_uri`, `mongodb_db` (default: `ultralytics`), `mongodb_collection` (default: `tuner_results`).
- Robust connection handling: retries, pooling, and an automatic fitness index.
- Shared results: reads best runs from MongoDB, writes new runs back, and syncs to CSV for plotting/resume.
- Early stopping when the shared collection reaches the target iteration count.
- Improved mutation logic, safer bounds, stronger resume behavior, and refreshed examples/docs.
- Progress ETA (tqdm)
- More stable remaining-time estimates, reducing jitter in progress bars. ⏱️
- Performance: non-blocking GPU transfers
- Applied `.to(device, non_blocking=True)` across train/val for Classification, Detection, Pose, Segmentation, YOLO-World (text embeddings), and YOLOE (text/visual prompts) to reduce data-transfer bottlenecks. ⚡
- CSV parsing reliability
- Polars now infers schema from entire files (`infer_schema_length=None`) across dataset conversion, training results, and plotting, preventing type mix-ups and improving plot/metric accuracy. 📈
- Docs update
- Clear warning: models trained in ultralytics/yolov5 are not compatible with the ultralytics/ultralytics library (YOLOv5u is anchor-free). ⚠️
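The non-blocking transfer change pairs naturally with pinned (page-locked) host memory, which is what lets CUDA overlap host-to-device copies with compute. A minimal PyTorch sketch of the pattern (illustrative only, not the library's internal code; the `to_device` helper is hypothetical):

```python
import torch

def to_device(batch: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Move a batch to the target device with an asynchronous copy when possible."""
    if device.type == "cuda":
        # Pinned host memory is required for truly asynchronous CUDA copies.
        batch = batch.pin_memory()
    # non_blocking=True returns immediately for pinned CUDA copies,
    # letting the transfer overlap with other work on the stream.
    # On CPU it degrades gracefully to a normal copy.
    return batch.to(device, non_blocking=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(8, 3, 64, 64)
y = to_device(x, device)
```

In a training loop, the payoff comes from issuing the copy for the next batch before the current batch's forward/backward pass finishes.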
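The steadier ETA comes from smoothing per-step rate estimates instead of trusting the latest step alone (tqdm exposes this idea via its `smoothing` parameter, an exponential moving average). A pure-Python sketch of the EMA idea, using a hypothetical `smoothed_eta` helper rather than tqdm's actual code:

```python
def smoothed_eta(step_times, remaining_steps, smoothing=0.3):
    """Estimate remaining seconds from an EMA of per-step durations.

    smoothing=1.0 -> trust only the latest step (jittery ETA);
    smoothing=0.0 -> never update from the initial estimate.
    """
    ema = step_times[0]
    for t in step_times[1:]:
        ema = smoothing * t + (1 - smoothing) * ema
    return ema * remaining_steps

# A noisy run: most steps take ~1 s, one stall takes 5 s.
times = [1.0, 1.1, 0.9, 5.0, 1.0]
# The EMA keeps the estimate near the typical ~1 s/step rate instead of
# extrapolating the 5 s stall across all 100 remaining steps.
print(smoothed_eta(times, remaining_steps=100))
```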
🎯 Purpose & Impact
- Scale your HPO effortlessly
- Distribute tuning jobs across machines using MongoDB Atlas — speed up experimentation and find better hyperparameters sooner. 🌐
- Smoother training experience
- More accurate ETA and faster input pipelines translate to improved GPU utilization and potentially shorter epoch times.
- More trustworthy metrics and plots
- Full-file CSV schema inference reduces parsing errors for local runs and Ultralytics HUB exports.
- Reduced confusion for YOLOv5 users
- Clear compatibility guidance helps teams avoid loading errors and plan migrations to YOLOv5u within ultralytics/ultralytics.
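Full-file schema inference matters because a column can look numeric in its first rows and only later contain strings. A stdlib sketch of the failure mode, with a hypothetical `infer_type` helper standing in for a scan-the-first-N-rows strategy; in Polars the fix is passing `infer_schema_length=None` to `read_csv` so the whole file is scanned:

```python
import csv
import io

def infer_type(values):
    """Guess a column type from sample values, trying int, then float, then str."""
    for cast in (int, float):
        try:
            for v in values:
                cast(v)
            return cast
        except ValueError:
            continue
    return str

data = "epoch,loss\n1,0.5\n2,0.4\n3,no_val\n"
rows = list(csv.DictReader(io.StringIO(data)))
loss = [r["loss"] for r in rows]

# Inferring from only the first 2 rows picks float, so row 3 would fail to
# parse; scanning the full column (infer_schema_length=None) correctly picks str.
print(infer_type(loss[:2]))  # <class 'float'>
print(infer_type(loss))      # <class 'str'>
```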
Quick start for distributed tuning:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.tune(
    data="coco8.yaml",
    epochs=10,
    iterations=300,
    mongodb_uri="mongodb+srv://user:pass@cluster.mongodb.net/",
    mongodb_db="ultralytics",
    mongodb_collection="tuner_results",
    plots=False,
    save=False,
    val=False,
)
```
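The shared-collection workflow (read the best run, mutate, train, write the result back, stop at the target iteration count) can be sketched with an in-memory list standing in for the MongoDB collection. All names here (`best_hyperparameters`, `mutate`, `run_iteration`) are hypothetical illustrations, not the Tuner's actual API:

```python
import random

# In-memory stand-in for the shared MongoDB collection: one document per run.
collection = []

def best_hyperparameters(collection, default):
    """Return the hyperparameters of the highest-fitness run so far."""
    if not collection:
        return dict(default)
    return dict(max(collection, key=lambda doc: doc["fitness"])["hyp"])

def mutate(hyp, scale=0.1):
    """Perturb each value within safe bounds, as the mutation step does conceptually."""
    return {k: max(1e-6, v * random.uniform(1 - scale, 1 + scale)) for k, v in hyp.items()}

def run_iteration(collection, default, target_iterations):
    if len(collection) >= target_iterations:  # early stop: shared target reached
        return False
    hyp = mutate(best_hyperparameters(collection, default))
    fitness = -abs(hyp["lr0"] - 0.01)  # stand-in for a real training run's fitness
    collection.append({"hyp": hyp, "fitness": fitness})
    return True

default = {"lr0": 0.005, "momentum": 0.9}
while run_iteration(collection, default, target_iterations=20):
    pass
print(len(collection), best_hyperparameters(collection, default)["lr0"])
```

With a real Atlas collection, many machines run this loop concurrently against the same URI, which is what lets each worker build on every other worker's best result.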
Helpful links:
- Read the Hyperparameter Tuning guide: https://docs.ultralytics.com/guides/hyperparameter-tuning
- See ultralytics/yolov5 vs ultralytics/ultralytics note: https://github.com/ultralytics/ultralytics/tree/main/docs/en/models/yolov5.md
What's Changed
- fix: 🐞 improve CSV reading performance by disabling schema inference by @onuralpszr in #21909
- Add `non_blocking=True` for additional train/val loaders by @glenn-jocher in #21912
- Add warning that legacy YOLOv5 models are not compatible with Ultralytics by @Y-T-G in #21915
- ultralytics 8.3.192 Distributed Hyperparameter Tuning with MongoDB Atlas Integration by @glenn-jocher in #21882
Full Changelog: v8.3.191...v8.3.192