Better performance with older versions of YOLO #995
👋 Hello @Josephts1, thank you for reaching out and sharing your question about YOLO performance with Ultralytics HUB 🚀! Your interest in model version comparisons is exciting, and we're here to help. Here are some resources you may find useful:
If this is a ❓ Question, could you provide additional details? For deeper analysis of the behavior across YOLO versions, it would be helpful to share:
If this is a 🐛 Bug Report about performance discrepancies, could you reproduce the issue with a minimum reproducible example (MRE)? You can find instructions for creating an MRE here: Minimum Reproducible Example. Please include any relevant logs or error messages and ensure all images or illustrations (e.g., validation plots like image 1) are correctly embedded for clarity. We aim to address every issue as promptly as possible. This is an automated response to ensure you receive guidance quickly, but an Ultralytics engineer will follow up soon to assist further. Thank you for your patience! 😊
@Josephts1 Hi there! Thanks for sharing the detailed setup and results of your experiment. It's great to see the effort you've put into systematically testing the models and plotting performance metrics! Let me clarify a few points that may help explain why older YOLO versions might be producing better results on your dataset:
Recommendations:
Ultimately, model performance depends on a combination of factors, including dataset size, quality, and the model's architecture. Older versions of YOLO might seem to perform better in your specific case due to the reasons outlined, but with the right adjustments, newer models could potentially outperform them. Feel free to share further details or results if you refine your approach; we're here to help! 😊 Happy training!
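As a concrete illustration of one such adjustment: the experiment above built each model from scratch, so a natural comparison is to start from a COCO-pretrained checkpoint instead. A minimal sketch using the Ultralytics API (the dataset path and hyperparameters are placeholders, not the author's exact setup):

```python
from ultralytics import YOLO

# Passing a .pt checkpoint loads COCO-pretrained weights; passing a
# .yaml config builds an untrained model (the setup used in the experiment).
model = YOLO("yolo11l.pt")  # pretrained start, often stronger on small datasets

# Placeholder dataset YAML path: substitute your own.
model.train(data="data.yaml", epochs=100, patience=50, batch=16, plots=True)
```

Pretrained backbones generally matter most when the custom dataset is small, since the model can reuse low-level features learned from COCO rather than learning them from a few hundred images.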
Search before asking
Question
Hi everyone.
I'm looking for the best YOLO model for detecting mandarin oranges. I'm using my own dataset of 236 images (196 for training, 30 for validation, and 10 for testing).
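For context on scale, a quick sketch of the split proportions (numbers taken from the description above). With only 30 validation images, per-version mAP50-95 differences can fall within normal run-to-run variance:

```python
# Split sizes for the 236-image dataset described above.
splits = {"train": 196, "val": 30, "test": 10}
total = sum(splits.values())
for name, n in splits.items():
    print(f"{name}: {n} images ({n / total:.0%})")
# train: 196 images (83%)
# val: 30 images (13%)
# test: 10 images (4%)
```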
Without any pre-processing or alteration of the other parameters in the YOLO model:

```python
result = model.train(
    data="/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml",
    epochs=100,
    patience=50,
    batch=16,
    plots=True,
    optimizer="auto",
    lr0=1e-4,
    project="/content/drive/MyDrive/Proyecto_de_grado/runs/YOLO10/l",
)
```
I ran one training for each YOLO version, from YOLOv8n through YOLO11l (each model was built new, without pre-trained weights). I then plotted the validation mAP50-95 against the number of parameters for each version, and this was the result (see image 1).
My question is: Why do older versions of YOLO have better mAP50-95 val than newer versions of YOLO?
I attach the model validation code and an image of my dataset (see image 2):

```python
metrics = model.val(
    data='/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml',
    project='/content/drive/MyDrive/Proyecto_de_grado/runs/YOLO10/l',
)
```
image 1
image 2
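For reference, a minimal sketch of how the plotted mAP values can be read back from the object returned by `model.val()`, assuming the Ultralytics API (checkpoint name and dataset path are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolo11l.pt")             # placeholder checkpoint
metrics = model.val(data="data.yaml")  # placeholder dataset YAML

# The DetMetrics object exposes aggregate box metrics:
print(metrics.box.map)    # mAP50-95 (the metric plotted above)
print(metrics.box.map50)  # mAP50
```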
Thanks for your help
Additional
No response