
Questions about relaxed deep supervision #134

Open

ForawardStar opened this issue Jun 5, 2021 · 6 comments

Comments

ForawardStar commented Jun 5, 2021

Hi, great work! I am very interested in your work and have been following this project for a long time. I would like to discuss an implementation detail. Since I am not familiar with Caffe, I used your PyTorch implementations (https://github.com/meteorshowers/RCF-pytorch and https://github.com/balajiselvaraj1601/RCF_Pytorch_Updated), and successfully reproduced your reported ODS-F score of 0.806 by training with the augmented BSDS and PASCAL VOC datasets. After thoroughly reading the code, I found the following in `data_load.py`:

```python
lb = lb[np.newaxis, :, :]
lb[lb == 0] = 0
lb[np.logical_and(lb > 0, lb < 127.5)] = 2
lb[lb >= 127.5] = 1
```

I guess this code introduces the relaxed deep supervision proposed by [1]. From the code above, we can infer that pixels with values between 0 and 127.5 are simply set to 2, i.e. they are regarded as the so-called relaxed labels. However, paper [1] says that the relaxed labels are the positive labels produced by the Canny operator or SE [2] that are not included in the positive labels of the original manually annotated ground truth. I have read the code thoroughly and have not found any usage of Canny or SE. Is this part of the code meant to achieve relaxed deep supervision? Is this naive manner of setting relaxed labels reasonable? Would the performance improve if Canny or SE were used to generate the relaxed labels, and how would one do that? Thanks. If there is anything I did not make clear, please let me know.

References:
[1] Yu Liu and Michael S. Lew, Learning Relaxed Deep Supervision for Better Edge Detection. In CVPR, 2016.
[2] P. Dollár and C. L. Zitnick, Fast Edge Detection Using Structured Forests. TPAMI, 37(8):1558–1570, 2015.
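For concreteness, here is a minimal sketch of how relaxed labels in the style of [1] might be constructed. This is hypothetical illustration code, not part of the repository: `aux_edges` stands in for a binarized edge map from an auxiliary detector (e.g. Canny or SE), and `make_relaxed_labels` is an invented helper name.

```python
import numpy as np

def make_relaxed_labels(gt, aux_edges, thresh=127.5):
    """Combine a ground-truth edge map (values 0-255) with an auxiliary
    detector's binary edge map to produce 0/1/2 labels.

    0 = negative, 1 = positive (confident annotated GT edge),
    2 = relaxed label: detected by the auxiliary detector (e.g. Canny
        or SE) but not a positive in the manual ground truth.
    """
    lb = np.zeros_like(gt, dtype=np.uint8)
    pos = gt >= thresh                        # confident GT positives
    lb[pos] = 1
    lb[np.logical_and(aux_edges, ~pos)] = 2   # relaxed: aux-only edges
    return lb

# Toy example: a 1x4 "image"
gt = np.array([[0, 200, 50, 0]], dtype=np.float32)
aux = np.array([[False, True, True, True]])
print(make_relaxed_labels(gt, aux))  # [[0 1 2 2]]
```

The difference from the `data_load.py` snippet is that label 2 here comes from an external detector's disagreement with the ground truth, rather than from intermediate annotation values alone.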

yun-liu (Owner) commented Jun 5, 2021

No, this code does not introduce the relaxed deep supervision proposed by [1]. It is the implementation of our proposed loss function. Please see Section 3.2 of our paper for more details.
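For readers following along, the loss in Section 3.2 is, as I understand it, a class-balanced cross-entropy in which the "controversial" pixels (label 2) are simply ignored. The NumPy sketch below reflects my reading of the paper (λ = 1.1, positives weighted by the negative fraction and vice versa); it is not the repository's actual code, and the function name `rcf_loss` is invented here.

```python
import numpy as np

def rcf_loss(pred, lb, lam=1.1):
    """Class-balanced binary cross-entropy that ignores controversial
    pixels (label 2), following my reading of Sec. 3.2 of the RCF paper.

    pred : predicted edge probabilities in (0, 1)
    lb   : labels, 0 = negative, 1 = positive, 2 = ignored
    """
    pos = lb == 1
    neg = lb == 0
    n_pos, n_neg = pos.sum(), neg.sum()
    total = n_pos + n_neg
    beta = n_neg / total           # weight on positive pixels
    alpha = lam * n_pos / total    # weight on negative pixels
    # Pixels with label 2 appear in neither mask, so they contribute nothing.
    return -(beta * np.log(pred[pos]).sum()
             + alpha * np.log(1.0 - pred[neg]).sum())

# Toy example: one ignored pixel (label 2) contributes no loss
lb = np.array([[0, 1, 2, 0]])
pred = np.array([[0.1, 0.9, 0.5, 0.2]])
print(rcf_loss(pred, lb))
```

Because edge pixels are rare, weighting positives by the negative fraction keeps the loss from being dominated by the background.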

BTW, we would highly appreciate it if you could release your PyTorch implementation of our method.


ForawardStar commented Jun 6, 2021

Thanks for your answer. I simply adopted the PyTorch implementation at https://github.com/balajiselvaraj1601/RCF_Pytorch_Updated without any extra modification, and trained the model with the augmented BSDS and PASCAL VOC datasets. When trained solely on the augmented BSDS-HED data, the final ODS-F score is 0.798; once the augmented PASCAL VOC dataset is added, the ODS-F reaches 0.806.


yun-liu commented Jun 6, 2021

Thanks!


ForawardStar commented Jun 7, 2021

By the way, I noticed that you submitted an issue in the released code of [1] (link: pkuCactus/BDCN#31). I am currently having trouble reproducing their reported ODS-F score, which has stalled my research progress, and I have not received a satisfactory reply from the authors. So I wonder whether you were able to reproduce their results, and whether there is anything I need to pay attention to.

[1] Jianzhong He, Shiliang Zhang, Ming Yang, et al., Bi-Directional Cascade Network for Perceptual Edge Detection. In CVPR, 2019.


yun-liu commented Jun 7, 2021

@ForawardStar No, I didn't reproduce their results, so I gave up.


ForawardStar commented Aug 17, 2021

Hello, have you ever tested the F-score of each single scale (5 in total, excluding the final fusion result), especially the 4th and 5th scales? I want to know the impact of the final fusion operation. Thanks.
