Replies: 3 comments 1 reply
-
During training, images with no annotations are filtered out and excluded; in validation and testing they are included. The same applies to images that are too small. The filtering takes place here if your dataset is, for example, based on COCO, whose dataset class inherits from the custom dataset class. If your dataset or its superclasses do not use the custom dataset class, you might find something similar in those classes, too. So if you want to use those images, you could create a customized dataset class that inherits from the COCO dataset class and override the `__init__`, simply excluding that part of the code. Code example
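A minimal self-contained sketch of that subclass-and-skip-filtering pattern. `BaseDetDataset` here is a hypothetical stand-in for the custom dataset base class, not the actual MMDetection API; the real class and attribute names will differ:

```python
class BaseDetDataset:
    """Stand-in for a dataset base class that filters empty-GT images."""

    def __init__(self, data_infos, filter_empty_gt=True):
        self.data_infos = data_infos
        if filter_empty_gt:
            # This is the part of __init__ the answer suggests excluding.
            keep = self._filter_imgs()
            self.data_infos = [self.data_infos[i] for i in keep]

    def _filter_imgs(self):
        # Default behaviour: drop images whose annotation list is empty.
        return [i for i, info in enumerate(self.data_infos)
                if info.get('annotations')]


class DatasetWithEmptyGT(BaseDetDataset):
    """Subclass that keeps every image, including pure-background ones."""

    def _filter_imgs(self):
        return list(range(len(self.data_infos)))


infos = [{'annotations': ['box']}, {'annotations': []}]
print(len(BaseDetDataset(infos).data_infos))       # → 1 (empty image dropped)
print(len(DatasetWithEmptyGT(infos).data_infos))   # → 2 (empty image kept)
```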
-
But usually your algorithm should find regions of interest before the classification of those predicted RoIs is run. Each RoI can then be classified either as one of your defined classes or as background, so the model should already be using the background of your 1,300 images nonetheless. You can try this approach, but I don't think it'll fix your problem; I suspect the cause is situated somewhere else.
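To make the two-stage idea above concrete, here is a toy sketch: a proposal stage yields candidate regions, and each candidate is then classified as one of the defined classes or as background. The scorer, threshold, and box format are all hypothetical, purely for illustration:

```python
BACKGROUND = "background"

def classify_rois(rois, score_fn, classes, threshold=0.5):
    """Assign each RoI its best-scoring class, or background if no
    class score clears the threshold."""
    results = []
    for roi in rois:
        scores = {c: score_fn(roi, c) for c in classes}
        best = max(scores, key=scores.get)
        results.append(best if scores[best] >= threshold else BACKGROUND)
    return results

# Usage with a fake scorer that only likes boxes wider than 10 px.
def score_fn(roi, cls):
    x1, y1, x2, y2 = roi
    return 0.9 if (x2 - x1) > 10 else 0.1

print(classify_rois([(0, 0, 20, 20), (0, 0, 5, 5)], score_fn, ["object"]))
# → ['object', 'background']
```

The point of the sketch is the second stage: even on images that contain objects, most proposals fall on background, so the classifier sees background examples regardless of whether empty images are included.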
-
Hi, thanks for taking the time to answer my question. I found another way of including the negative samples, by setting `filter_empty_gt=False` in the config:

```python
train=dict(
    img_prefix=path_train_data,
    classes=classes,
    ann_file=path_train_anno,
    pipeline=train_pipeline,
    filter_empty_gt=False)
```

For my dataset, the appearance of the background texture highly resembles the appearance of the objects, so I think this will help me eliminate the false positives that the previous model was producing.
-
Hi, first of all, thank you for this awesome repo.
I have a highly unbalanced dataset: only 1,300 of my 64,000 images contain objects.
I noticed that during training only those 1,300 images are being used.
Is there a way to also train on the images that contain only background?
I think this would help me eliminate false positives.
Thank you