
What is the ground truth we used for training the semantic global policy? #22

Open
rginpan opened this issue Apr 28, 2023 · 1 comment
rginpan commented Apr 28, 2023

Can anybody give me an explanation of the following:

  • Am I right that the semantic mapping module is not a learnable module?
  • What ground truth do we need to train the semantic global policy? Is it the top-view 2D semantic map?

Thanks!!

@FUIGUIMURONG

I am reading this paper and want to reproduce the experiments.
For the first question: as the paper says, the semantic mapping module is trained, but only in part. The Mask R-CNN weights are frozen, while the denoising network is trained (see the sketch below).
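
Here is a minimal PyTorch sketch of that training setup, under my own assumptions: the `DenoisingNet` architecture, its channel count, and the optimizer settings are all hypothetical and are not the repo's actual code.

```python
import torch
import torch.nn as nn
import torchvision

class DenoisingNet(nn.Module):
    """Hypothetical small conv net that denoises projected semantic map channels."""
    def __init__(self, num_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, num_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Pretrained Mask R-CNN: weights frozen, used only for inference.
mask_rcnn = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
mask_rcnn.eval()
for p in mask_rcnn.parameters():
    p.requires_grad = False

# Only the denoising network receives gradient updates.
denoiser = DenoisingNet()
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
```

The point is just that the optimizer only sees `denoiser.parameters()`, so Mask R-CNN stays fixed while the denoising network learns.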
For the second question: as the paper says, the Goal-Oriented Semantic Policy decides a long-term goal based on the current semantic map in order to reach the given object goal. It takes the semantic map, the agent's current and past locations, and the object goal as input, and predicts a long-term goal in the top-down map space (see the second sketch below).
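
As a rough illustration of those inputs and outputs, here is a minimal sketch. The layer sizes, the 20-channel map layout, the map resolution, and the idea of predicting the goal as normalized (x, y) coordinates are assumptions on my part; nothing here is the authors' code or training setup.

```python
import torch
import torch.nn as nn

class GoalOrientedPolicy(nn.Module):
    """Hypothetical policy: semantic map + object goal -> long-term goal (x, y)."""
    def __init__(self, map_channels=20, num_goal_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
        )
        self.goal_emb = nn.Embedding(num_goal_classes, 32)
        # Output: (x, y) in [0, 1], i.e. a fraction of the top-down map extent.
        self.head = nn.Sequential(
            nn.Linear(64 * 8 * 8 + 32, 256), nn.ReLU(),
            nn.Linear(256, 2), nn.Sigmoid(),
        )

    def forward(self, semantic_map, goal_idx):
        # semantic_map: (B, C, H, W); channels hold obstacles, explored area,
        # current and past agent locations, and per-category object masks.
        feat = self.encoder(semantic_map)
        goal = self.goal_emb(goal_idx)
        return self.head(torch.cat([feat, goal], dim=1))

policy = GoalOrientedPolicy()
goal_xy = policy(torch.zeros(1, 20, 240, 240), torch.tensor([3]))
print(goal_xy.shape)  # torch.Size([1, 2])
```

So the policy's output is a point in map space, not another map, which is why it does not obviously need a per-pixel ground-truth label in the way a segmentation network would.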
