
Seeking Help with Splitting the Validation Set for Mediaeval2016 #6

Open
Qiaojiao-225 opened this issue May 22, 2023 · 0 comments
Hello, I used the Mediaeval2015 training set as my training set, the Mediaeval2015 test set as my validation set, and the Mediaeval2016 test set as my test set, so that samples from the same event never appear in different splits. However, the results on the test set are noticeably worse than on the training and validation sets. I suspect this is due to a significant difference in data distribution between Mediaeval2015 and Mediaeval2016. Could you please suggest any good methods for splitting off a validation set when using the Mediaeval2016 dataset?
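
To make the question concrete, here is a minimal sketch of one option I have been considering: carving the validation set out of the Mediaeval2016 data itself with an event-grouped split, so that all samples from one event stay in the same split. The `X`, `y`, and `event_ids` arrays below are placeholders for however the features and event labels are actually stored, not the real data loading code.

```python
# Sketch of an event-grouped train/validation split (assumes each sample
# carries an event label, as in the MediaEval verification corpora).
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Placeholder data: features, labels, and one event id per sample.
X = np.random.rand(1000, 32)                      # e.g. tweet/image features
y = np.random.randint(0, 2, size=1000)            # real / fake labels
event_ids = np.random.randint(0, 17, size=1000)   # event of each sample

# Hold out roughly 20% of the *events* (not individual samples) for
# validation, so no event appears in both splits.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(gss.split(X, y, groups=event_ids))

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]

# Sanity check: the two splits share no events.
assert set(event_ids[train_idx]).isdisjoint(event_ids[val_idx])
```

This keeps the event-level separation I was after, but it does not by itself address the distribution shift between the 2015 and 2016 data, which is why I am unsure whether it is the right approach here.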
