
Issue about accuracy #93

Open
ElectroMaster-level5 opened this issue Oct 31, 2018 · 5 comments

Comments

@ElectroMaster-level5

ElectroMaster-level5 commented Oct 31, 2018

When I set use_pretrained_model = False, every accuracy value I obtain is 0.0000, and the validation accuracy is also 0. I then enabled the pretrained model and downloaded the Sports-1M pretrained model from the Dropbox link, but both the training accuracy and the Validation Data Eval accuracy begin to drop fast. What is wrong? I tried an older version of this repository, but the problem is still there. I noticed from the issue history that others have encountered the same problem. I hope you can help me.

@ElectroMaster-level5
Author

ElectroMaster-level5 commented Jan 16, 2019 via email

@PBRAOS

PBRAOS commented Jan 17, 2019

I think the accuracy problem occurs for everyone with this specific release.
It stems from the way the train and test datasets are created.
If you look carefully at the two list files, you will realise that a category is often present in one file but missing from the other.
E.g. the category "swimming" exists in test but not in train, so the model obviously never learns how to detect swimming... Can you confirm that this happens with your datasets too? A quick label-coverage check like the sketch below will show it.
Does this make sense to you?
-- This refers to the case where we train the model from scratch, i.e. use_pretrained_model = False.
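
A minimal sketch of that check, assuming each non-empty line of a list file ends with the class label (as in this repo's train.list / test.list format); the file paths are placeholders, so adjust them to your layout:

```python
# Compare the label sets covered by the train and test list files.
# Assumes each non-empty line looks like "<clip_path> <label>".
def labels_in(list_file):
    with open(list_file) as f:
        return {line.split()[-1] for line in f if line.strip()}

train_labels = labels_in('list/train.list')   # placeholder path
test_labels = labels_in('list/test.list')     # placeholder path

# Any label printed here belongs to a category the model can never
# learn (test-only) or is never evaluated on (train-only).
print('labels only in test.list :', sorted(test_labels - train_labels))
print('labels only in train.list:', sorted(train_labels - test_labels))
```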

@xieqingxing

Have you solved this problem? I have the same problem; maybe it is a difference in GPU?

@PBRAOS

PBRAOS commented Feb 3, 2019

  1. Check that your dataset is correctly prepared before you train (meaning you are training on apples and testing on apples, not oranges, for instance). That is actually the most important step. I found that the function that prepares the dataset was not working properly for me.
  2. Then I referenced the correct models and switched to Python 2.7 (under Python 3 some range() calls need to be wrapped in list(), since range() returns an iterator there...).
    2b. Check the precision in TensorBoard, not in the output of the code... the printed value is not smoothed, it refers to a single random batch, and it is not representative.
  3. Make sure you train for at least 20K steps. You should arrive at 70-73% precision on the test set.
  4. If you do the SVM step as in the reference paper, you should easily reach 80-83%.
  5. Finally, I tested it and it works quite well on live video feeds using OpenCV. Some categories are particularly prone to false positives, and you may want to "hide" them.
  6. You may want to add a probability filter so that it does not fire on irrelevant things with small probabilities (see the sketch after this list).
  7. There are smart ways to sample 16 frames from arbitrary videos to detect actions, but it's more an art than a science...
  8. There are better models out there; T3D, for instance, is much better.
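
A minimal sketch of items 5-7, assuming OpenCV: grab a contiguous 16-frame clip from a live feed and only report a prediction when the top softmax probability clears a threshold. predict_clip is a hypothetical stand-in for running the trained C3D graph (stubbed here so the sketch runs), and MIN_PROB is an arbitrary threshold to tune:

```python
import cv2
import numpy as np

CLIP_LEN = 16      # C3D consumes 16-frame clips
CROP_SIZE = 112    # C3D input resolution
MIN_PROB = 0.6     # hypothetical confidence threshold; tune per use case

def grab_clip(cap):
    """Read a contiguous 16-frame clip from an OpenCV capture."""
    frames = []
    while len(frames) < CLIP_LEN:
        ok, frame = cap.read()
        if not ok:
            return None
        frames.append(cv2.resize(frame, (CROP_SIZE, CROP_SIZE)))
    return np.array(frames, dtype=np.float32)

def predict_clip(clip):
    """Placeholder for the real model call: run the clip through the
    trained C3D graph and return softmax probabilities per class.
    Stubbed with a uniform distribution so the sketch is runnable."""
    return np.full(101, 1.0 / 101)

cap = cv2.VideoCapture(0)          # live camera feed
clip = grab_clip(cap)
if clip is not None:
    probs = predict_clip(clip)
    top = int(np.argmax(probs))
    if probs[top] >= MIN_PROB:     # probability filter (item 6)
        print('action %d (p=%.2f)' % (top, probs[top]))
    else:
        print('no confident prediction')
cap.release()
```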

good luck!
pb

@JoseponLee

> (quoting @PBRAOS's checklist above)
I met the same problem, and I checked my test.list and train.list; they are OK. So what should I do? Maybe I should not use Python 3?
