Question about result on DAVIS16 #33
That doesn't look right... Can I see some output images?
Specifically, you can compare them with our pre-computed results.
I also think this J&F result is not right, but I cannot figure out what went wrong...
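For reference, the J measure in DAVIS is region similarity: the Jaccard index (intersection-over-union) between the predicted and ground-truth binary masks. A minimal sketch of that computation (this is an illustration, not the evaluation code used by the benchmark):

```python
import numpy as np

def jaccard(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define J = 1 by convention
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return inter / union

gt   = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [0, 0]])
print(jaccard(pred, gt))  # one pixel overlaps out of two in the union -> 0.5
```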
"blackswan" will almost always be good because it is so easy... Can you download our pre-computed results and see if there are any differences? I want to check whether the problem lies in generation or evaluation.
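A quick way to run this check is to compare your masks against the pre-computed ones pixel by pixel, ignoring the 0/1 vs 0/255 storage convention. A minimal sketch, assuming masks have already been loaded as NumPy arrays (the toy arrays below stand in for masks you would read from the two result folders):

```python
import numpy as np

def masks_match(pred, ref):
    """True if two segmentation masks select the same foreground
    pixels, regardless of whether they are stored as 0/1 or 0/255."""
    return np.array_equal(pred.astype(bool), ref.astype(bool))

# Toy example: the same foreground stored under the two conventions.
a = np.array([[0, 1], [1, 0]], dtype=np.uint8)      # 0/1 mask
b = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # 0/255 mask
print(masks_match(a, b))  # True: identical foreground pixels
```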
Ok, I will do it now.
I find that my result is the same as D16_s012, and the result above corresponds to D16_s012_notop.
The DAVIS 2016 evaluation code is not very well maintained. It was not easy for me to get it right back then... I recall there are some discussion threads about a proper implementation. I am going to check those and get back to you.
Here: davisvideochallenge/davis2017-evaluation#4 You can always check the numbers against ours/STM's.
Well yeah, if your evaluation script expects 0/1 outputs... The ground truths in DAVIS 2016 are 0/255, so I'm sticking with that. Glad that it has been fixed.
Hmm, I think I can actually modify the code a bit to make both happy. Gonna do that. Thanks.
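One way to "make both happy" is to binarize masks before scoring, so that 0/1, 0/255, and even soft float masks all evaluate identically. A sketch of that idea (an assumption about the fix, not the actual code in the repo):

```python
import numpy as np

def to_binary(mask, thresh=0.5):
    """Map a mask stored as 0/1, 0/255, or float probabilities in [0, 1]
    to a boolean foreground map."""
    mask = np.asarray(mask, dtype=np.float64)
    if mask.max() > 1:   # assume 0..255 storage and rescale
        mask /= 255.0
    return mask > thresh

print(to_binary(np.array([0, 255])).tolist())  # [False, True]
print(to_binary(np.array([0, 1])).tolist())    # [False, True]
```

With this normalization, the choice of storage convention no longer changes the J&F numbers.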
My previous problem was that the output values of the foreground pixels were not the same. Thanks for your help!
I want to ask why I get this result using your pre-trained model. Thanks!
![image](https://user-images.githubusercontent.com/72915904/120797989-4b484000-c56f-11eb-97da-38f012189039.png)