Fine-tuning AST model for Music Emotion Classification overfits #120
Comments
Hi, I am facing a similar overfitting issue as yours. Can you tell me how you solved it, if you managed to? I am fine-tuning the model for real and fake audio classification. I got better accuracy by reducing the learning rate, but the validation loss is still too high. These are the metrics I am getting: acc: 0.936561
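For anyone hitting the same wall, here is a minimal sketch (not the actual training code from this thread) of the kind of lower-learning-rate fine-tuning setup mentioned above, assuming the `ASTModel` class from this repo and a hypothetical 2-class real/fake task. The `mlp_head` attribute name follows the repo's model code, and the learning rates and weight decay are illustrative only:

```python
# Sketch: fine-tune AST with a tiny LR on the pretrained backbone and a larger LR
# on the new classification head, plus weight decay, to curb overfitting on a
# small dataset. All hyperparameters below are placeholders, not recommendations.
import torch
from src.models import ASTModel  # import path as used in the YuanGongND/ast repo

# Hypothetical 2-class head (real vs. fake); input_tdim depends on your clip length.
model = ASTModel(label_dim=2, input_fdim=128, input_tdim=1024,
                 imagenet_pretrain=True, audioset_pretrain=True)

head_params = list(model.mlp_head.parameters())       # newly initialized classifier head
head_ids = {id(p) for p in head_params}
backbone_params = [p for p in model.parameters() if id(p) not in head_ids]

optimizer = torch.optim.AdamW([
    {'params': backbone_params, 'lr': 1e-5},   # pretrained weights: very small LR
    {'params': head_params,     'lr': 1e-4},   # fresh head: larger LR
], weight_decay=5e-7)

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)
```

Freezing most of the backbone (setting `requires_grad = False` on all but the last few transformer blocks) is another common option when the fine-tuning set is small.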
Sorry, I haven't solved this overfitting problem yet.
I have 40+ audio samples for deepfake detection classification.
Hello, I'm also fine-tuning the AST model to adapt it to audio tamper detection tasks, and I'm also seeing a high valid_loss. If you solved it, could you tell me how?
Hi, @YuanGongND,
Great work! When I asked New Bing what the SOTA model for music classification is, it told me it was AST!
I'm using your model for music emotion classification, but it always overfits: the training loss goes down normally, while the test loss stays almost flat.
As a sanity check, I "cheated" by putting the validation set into the training set during training; with that, I was able to reach 99% accuracy by the seventh epoch.
The dataset I use is the mtg-jamendo-dataset, which includes 56 mood and theme labels, and I have recalculated the mean and std of the training set.
Whether I use the full 56 labels or combine them into 8 categories, and whether or not I use the AudioSet pre-training, the overfitting described above occurs.
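For reference, here is a rough sketch of how the training-set mean and std can be recomputed over AST-style log-mel filterbank features. The file list, sample rate, and helper name are assumptions, not the poster's actual script; the fbank settings mirror those used by the repo's dataloader:

```python
# Sketch (assumed, not the author's code): recompute dataset mean/std over the
# same kind of log-mel filterbank features AST consumes, then plug the values
# into the dataloader's normalization config.
import torch
import torchaudio

def dataset_mean_std(wav_paths, num_mel_bins=128):
    total_sum, total_sq_sum, total_frames = 0.0, 0.0, 0
    for path in wav_paths:
        waveform, sr = torchaudio.load(path)
        # fbank settings chosen to match the AST dataloader's feature extraction.
        fbank = torchaudio.compliance.kaldi.fbank(
            waveform, htk_compat=True, sample_frequency=sr, use_energy=False,
            window_type='hanning', num_mel_bins=num_mel_bins, dither=0.0,
            frame_shift=10)
        total_sum += fbank.sum().item()
        total_sq_sum += (fbank ** 2).sum().item()
        total_frames += fbank.numel()
    mean = total_sum / total_frames
    std = (total_sq_sum / total_frames - mean ** 2) ** 0.5
    return mean, std

# Usage (hypothetical): mean, std = dataset_mean_std(train_files)
# then set these as the 'mean' and 'std' entries of the training audio config.
```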
This is my run.sh
--Mingyu Xiong