It's a remarkable work. However, I ran into some problems while reproducing it. The paper reports 0.62M parameters and 9.7G FLOPs at 1024x512 resolution, but the model code provided for testing gives 0.61M parameters and 15.941G FLOPs. Could you check the model file? I am looking forward to your reply!
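For reference, here is a minimal sketch of how such numbers are usually measured, assuming the model is a standard PyTorch `nn.Module` and the `thop` package is available; `Net` below is only a placeholder for the repository's actual model class. Note that `thop` reports multiply-accumulates (MACs), while some papers report FLOPs = 2 x MACs, which by itself can cause large discrepancies.

```python
# Minimal sketch for checking parameter count / FLOPs at 1024x512.
# Assumptions: a standard PyTorch nn.Module, thop installed (pip install thop),
# and "Net" standing in for the repository's real model class.
import torch
from thop import profile

model = Net(num_classes=19)            # placeholder; replace with the repo's model
model.eval()

dummy = torch.randn(1, 3, 512, 1024)   # NCHW: height 512, width 1024
with torch.no_grad():
    macs, params = profile(model, inputs=(dummy,))

print(f"Params: {params / 1e6:.2f}M")
print(f"MACs:   {macs / 1e9:.3f}G")    # multiply-accumulates, not necessarily "FLOPs"
```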
Did you solve the problem? If you really can't find the cause, you could just modify the code along your current approach. By the way, I found that even two 2080 Ti cards can't run it successfully (the paper says a single 2080 Ti). We could add each other as friends and discuss it together (are you Chinese?); I am also a beginner who only recently got into semantic segmentation.
It's a remarkable work. However, I had some problems in the reproduction. When I execute the code, I keep getting an error on this line: `with open(self.data_dir + '/' + fileName, 'r') as textFile:`. Could you tell me what causes this? I am looking forward to your reply. Thank you.
Hi, did you solve this problem? I think the dataset path was wrong.
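If the path is the suspect, a quick standalone check like the sketch below may help; `data_dir` and `fileName` are placeholders for whatever values are actually passed to the dataset class, so adjust them to your own layout.

```python
# Quick check that the dataset root and list file resolve as the loader expects.
# "data_dir" and "fileName" are assumed placeholders; replace with your values.
import os

data_dir = "./datasets/cityscapes"        # assumed dataset root
fileName = "cityscapes_train_list.txt"    # assumed list file name

full_path = os.path.join(data_dir, fileName)
print("Looking for:", os.path.abspath(full_path))
print("Exists:", os.path.isfile(full_path))

if os.path.isfile(full_path):
    with open(full_path, "r") as textFile:
        print("First entry:", textFile.readline().strip())
```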