About preprocess code #2
Can you share the code of sub-graph sampling or sub-graph extraction? Thanks a lot!
Thanks for your interest in our work! We'll try to get the preprocessing code released after CVPR.
Thanks for your great work!
Please refer to the section "Top-1 Accuracy Evaluation" in the README, that is: "set --only_sent_eval to 1 and add --orcle_num 1000 in test.sh, and rerun the bash file".
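The README step above can be sketched as a small shell snippet. Note that `eval.py` and `--batch_size` are placeholders standing in for whatever command `test.sh` actually runs in the repo; only `--only_sent_eval` and `--orcle_num` come from the README (and `sed -i` as written assumes GNU sed):

```shell
# Stand-in for test.sh; replace the echo with your real evaluation command.
echo 'python eval.py --batch_size 1' > test_top1.sh

# Append the two flags from the README to the evaluation command, then rerun.
sed -i 's/eval\.py.*/& --only_sent_eval 1 --orcle_num 1000/' test_top1.sh
cat test_top1.sh
bash test_top1.sh 2>/dev/null || true   # will only run once eval.py exists
```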
Yes, the upper bound is promising! Can you share the code of sub-graph sampling or sub-graph extraction now? Thanks a lot!
Hi, thanks for sharing. I had some problems generating captions with this code: a Traceback error occurs every time the caption for image No. 100432 is generated.
Hi, can we get the preprocessing code for sub-graph sampling? Also, I'd like some insight into getting custom images captioned with this code. I know there is an option for custom image captioning in the evaluation script, but it does not have the accompanying code to produce it. Any help would be appreciated.
Hi @YiwuZhong,
Hi, is the code to produce scene graphs from a set of images available somewhere, in case we want to use your model on datasets other than COCO or Flickr? If not yours, could you explain how you produced them? Thank you very much in advance!
Hi @AleDella, thanks for your interest in our work. As mentioned in the Implementation Details of the paper, we first used the Bottom-up object detector to detect objects in images and extract region features. Using these region features as inputs, Motif-Net was trained to generate scene graphs from images. This is the Motif-Net model checkpoint I trained and used to generate scene graphs. As a reference for saving scene graphs into local files, you might be able to use my script to replace the original file in the Motif-Net codebase, with some adaptation as needed. PS: There is another codebase for the Bottom-up object detector to extract region features (bottom-up-attention.pytorch). This is my work on scene graph generation with image captions as the only supervision (SGG_from_NLS).
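For the "saving scene graphs into local files" step, a minimal sketch of what such a dump could look like is below. This is not the authors' actual script: the function name, the JSON field names, and the per-image record layout (boxes, object labels, and subject/predicate/object relation triplets) are all illustrative assumptions, not the Motif-Net output format.

```python
import json

def save_scene_graph(image_id, boxes, obj_labels, rel_triplets, out_path):
    """Dump one image's predicted scene graph to a JSON file.

    rel_triplets: list of (subject_idx, predicate_label, object_idx),
    where the indices refer to positions in `boxes` / `obj_labels`.
    All field names here are illustrative, not Motif-Net's format.
    """
    record = {
        "image_id": image_id,
        "boxes": boxes,                          # [[x1, y1, x2, y2], ...]
        "objects": obj_labels,                   # ["man", "horse", ...]
        "relations": [list(t) for t in rel_triplets],
    }
    with open(out_path, "w") as f:
        json.dump(record, f)
    return record

# Toy usage with made-up detections for a single image.
save_scene_graph(
    image_id=100,
    boxes=[[0, 0, 50, 80], [40, 10, 120, 90]],
    obj_labels=["man", "horse"],
    rel_triplets=[(0, "riding", 1)],
    out_path="sg_100.json",
)
```

A captioning model can then load these per-image files as its graph inputs, in place of reading predictions directly from the scene-graph model's in-memory outputs.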
Is there any preprocessing code (like sub-graph sampling or GT sub-graph extraction) that you can share with us?
Thanks a lot!!