How to improve the prediction to 90 % for all the classes #985

Open
anunay1 opened this issue Jan 16, 2025 · 3 comments
Labels
detect (Object Detection issues, PRs) · question (Further information is requested)

Comments


anunay1 commented Jan 16, 2025

Search before asking

Question

Hi

I have trained a YOLO model on my custom dataset. For one of the classes, the prediction comes out at 92% with a 0.5 confidence threshold, but for another class it only reaches 71% at the same threshold. How do I raise that class to 92% at the same confidence level?

[two screenshots attached]

Additional

No response

anunay1 added the question label Jan 16, 2025
UltralyticsAssistant added the detect label Jan 16, 2025
UltralyticsAssistant (Member) commented

👋 Hello @anunay1, thank you for your question about improving model predictions using Ultralytics HUB 🚀! Please visit our HUB Docs for comprehensive guides and best practices.

To help us assist you better with this issue, please provide additional details or adjustments you might have made during training, such as:

  1. Dataset specifics (e.g., number of images per class, balance between classes, etc.)
  2. Model configuration or training parameters (e.g., epochs, batch size, learning rate, etc.)
  3. Environment details (HUB settings or local training setup)
  4. Any relevant logs or metrics observed during training (e.g., class-wise precision/recall)
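
If it helps, the class-wise numbers can be pulled from a validation run. A minimal sketch, assuming an Ultralytics YOLOv8 checkpoint and dataset YAML at hypothetical paths:

```python
# Sketch: print per-class mAP50-95 from a validation run.
# Paths are hypothetical; point them at your own weights and dataset YAML.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
metrics = model.val(data="data.yaml")  # val() also prints a per-class P/R table

for idx, name in model.names.items():
    print(f"{name}: mAP50-95 = {metrics.box.maps[idx]:.3f}")
```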

For specific suggestions to improve class-level performance, it's beneficial to consider:

  • Dataset augmentation: Oversample under-represented classes (or undersample dominant ones), or use augmentation to balance the distribution.
  • Review training data quality: Ensure sufficient labeled data with diverse examples per class.
  • Hyperparameter optimization: Adjust learning rate, epochs, or other training parameters.

If possible, please share any additional screenshots, examples, or related artifacts that might help us understand and reproduce your scenario. For detailed troubleshooting, you can refer to our guide on creating a Minimum Reproducible Example.

This is an automated response to help guide you, but an Ultralytics engineer will review this issue soon and provide personalized assistance. Thank you for your patience and for using Ultralytics HUB! 🚀✨

anunay1 (Author) commented Jan 16, 2025

Dataset specifics (e.g., number of images per class, balance between classes, etc.)

[screenshot attached]

Model configuration or training parameters (e.g., epochs, batch size, learning rate, etc.)

[screenshot attached]

Environment details (HUB settings or local training setup)

Local agent with NVIDIA T1000 8GB GPU

Any relevant logs or metrics observed during training (e.g., class-wise precision/recall)

[two screenshots attached]

pderrenger (Member) commented

Thank you for providing detailed information about your dataset, model configuration, and environment. Based on your query and the provided context, here are some recommendations to improve the prediction accuracy for the class currently at 71% while maintaining or improving overall performance:

1. Dataset Quality and Balance

  • Class Imbalance: If the class with 71% accuracy has fewer labeled instances compared to other classes, this could explain its lower performance. Review your dataset's class distribution and ensure a sufficient number of labeled examples for this class. Ideally, each class should have at least 1500 images and 10,000 instances (objects) for optimal results, as described in the Tips for Best Training Results.
  • Label Accuracy: Double-check the annotation quality for this class. Ensure bounding boxes are tight and accurate, as poor labeling can significantly impact model performance.
  • Augmentation: Use data augmentation techniques (e.g., flipping, rotation, scaling) to artificially increase the diversity of samples for this specific class.
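
As a rough illustration of the augmentation point, Ultralytics exposes augmentation strengths as train-time arguments. A minimal sketch, with a hypothetical data.yaml and illustrative values to tune (note these apply dataset-wide, so balancing a single class still calls for oversampling its images):

```python
# Sketch: strengthen augmentation via Ultralytics train arguments.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="data.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
    fliplr=0.5,    # horizontal flip probability
    degrees=10.0,  # random rotation range (+/- degrees)
    scale=0.5,     # random scale gain
    hsv_h=0.015,   # hue jitter
    mosaic=1.0,    # mosaic augmentation probability
)
```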

2. Training Parameters

  • Class Weights: Some training pipelines let you weight the loss toward under-performing classes. If your setup exposes a class-weighting option, raise the weight for the weaker class; otherwise, oversampling that class's images achieves a similar effect.
  • Epochs: Consider increasing the number of training epochs, especially if the model has not fully converged. Monitor the validation loss and metrics to ensure you're not overfitting.
  • Learning Rate: Adjust the learning rate to ensure the model is learning effectively. A cyclic or fine-tuned learning rate schedule may help.
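
For the epochs and learning-rate points, a hedged sketch using standard Ultralytics train arguments (lr0 is the initial learning rate, lrf the final-LR fraction, cos_lr a cosine schedule; the values are illustrative, not recommendations):

```python
# Sketch: longer schedule with a gentler, cosine-decayed learning rate.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="data.yaml",  # hypothetical dataset config
    epochs=200,        # train longer if metrics were still improving
    patience=50,       # early-stop if validation metrics plateau
    lr0=0.005,         # initial learning rate
    lrf=0.01,          # final LR = lr0 * lrf
    cos_lr=True,       # cosine LR schedule
)
```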

3. Model Configuration

  • Anchor Tuning: If you're using anchor-based models, ensure that the anchors are well-fitted to your dataset. Use the auto-anchor option during training to optimize anchor sizes for your data.
  • Check Model Size: If the T1000 GPU limits you to a smaller model (e.g., YOLOv8n), consider experimenting with a larger model (e.g., YOLOv8m) if memory permits. Larger models can capture more complex patterns but require more computational resources.
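
If you want to compare model sizes within the T1000's 8 GB budget, a small sketch (the batch size is a guess; reduce it further if you hit out-of-memory errors):

```python
# Sketch: train two model sizes and compare overall validation mAP.
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolov8m.pt"):
    model = YOLO(weights)
    model.train(data="data.yaml", epochs=100, imgsz=640, batch=8)
    metrics = model.val(data="data.yaml")
    print(weights, "mAP50-95:", metrics.box.map)
```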

4. Evaluation and Metrics

  • Confidence Threshold: While you mentioned using a 0.5 confidence threshold, experiment with lower thresholds (e.g., 0.3 or 0.4) to ensure the model isn't discarding valid detections. This can be particularly useful for underperforming classes.
  • IoU Threshold: Fine-tune the Intersection over Union (IoU) threshold to balance precision and recall. Lowering the IoU threshold might help detect objects more inclusively at the cost of slight localization errors.
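
To see how these thresholds interact for the weaker class, you could sweep them at validation time. A minimal sketch, assuming your trained weights and dataset YAML:

```python
# Sketch: sweep confidence thresholds and inspect per-class mAP.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical path
for conf in (0.3, 0.4, 0.5):
    metrics = model.val(data="data.yaml", conf=conf, iou=0.5)
    print(f"conf={conf}: per-class mAP50-95 = {metrics.box.maps}")
```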

5. Additional Suggestions

  • Image Tiling: If the underperforming class consists of small objects, consider using image tiling. This technique splits large images into smaller tiles to improve small object detection. See more in Insights on Model Evaluation and Fine-Tuning.
  • Background Images: Incorporate background images (images with no objects) to reduce false positives for this class and help the model generalize better.
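
For the tiling idea, the core is just cropping overlapping windows from each image. A minimal, library-agnostic sketch with Pillow (labels would also need remapping into tile coordinates, which is not shown):

```python
# Sketch: split a large image into overlapping tiles for small-object detection.
from PIL import Image

def tiles(image_path, tile_size=640, overlap=64):
    """Yield (box, crop) pairs covering the image with overlapping windows."""
    img = Image.open(image_path)
    w, h = img.size
    step = tile_size - overlap
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            box = (left, top, min(left + tile_size, w), min(top + tile_size, h))
            yield box, img.crop(box)

# Usage: run detection per tile, then shift boxes back into full-image
# coordinates by adding each tile's (left, top) offset before merging/NMS.
```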

6. Using Ultralytics HUB

If you're training and evaluating your model on the Ultralytics HUB:

  • Class-wise Metrics: HUB provides detailed metrics like precision, recall, and mAP per class. Use these to identify specific weaknesses and iteratively improve.
  • Pro Features: If you're using the HUB Pro version, leverage cloud training for faster experimentation with larger models or datasets.
  • Inference API: Test the model's predictions using the HUB's Inference API to validate improvements.

7. Community Resources

If issues persist, feel free to share specific metrics or logs (e.g., confusion matrix, PR curves) to get further insights. Additionally, you can connect with the Ultralytics community through the GitHub Issues or Discord server for collaboration and advice.

Let me know how these suggestions work for you or if you need further clarification. The YOLO and Ultralytics community is here to support you! 🚀
