
Parameters for training HyperNeRF dataset #28

Open
yunjinli opened this issue Apr 15, 2024 · 7 comments

@yunjinli

Hi,

First of all, thanks for sharing your amazing work :) I noticed that you didn't present results on the HyperNeRF dataset in the paper, so I am curious whether you ran experiments on it as well. If so, do you happen to have the arguments used to train the HyperNeRF scenes? I look forward to your reply, many thanks :)

@yihua7
Owner

yihua7 commented Apr 15, 2024

Hi,
Thank you for your interest!

  • We have shown both novel view synthesis and editing results on HyperNeRF scenes on our homepage.

  • The arguments can be, for example: `--gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask --deform_type node --node_num 512 --hyper_dim 2 --eval --local_frame --W 800 --H 800 --white_background --gui` (a full invocation is sketched at the end of this list).

  • You can follow the instructions here to prepare the data (estimate camera poses and obtain dynamic masks) and run the code (the same as for Self-Captured Videos).

  • I strongly recommend using COLMAP to re-estimate the camera poses, since the original cameras are quite inaccurate. As noted in the original HyperNeRF paper, a higher PSNR on this dataset may not mean higher quality because of the inaccurate cameras, which is why we do not report numerical experiments on it.
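
For concreteness, here is a minimal sketch of a full invocation. The script name `train_gui.py`, the scene path, and the `--source_path`/`--model_path` arguments are assumptions for illustration (check the repo README for the actual entry point and data flags); the remaining flags are exactly the ones listed above:

```bash
# Sketch only: the entry-point name and paths are assumptions, not confirmed by the thread.
python train_gui.py \
    --source_path ./data/hypernerf/broom \
    --model_path ./outputs/hypernerf/broom \
    --gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask \
    --deform_type node --node_num 512 --hyper_dim 2 \
    --eval --local_frame --W 800 --H 800 --white_background --gui
```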

@yunjinli
Author

Hi,
Thanks for the quick reply :)

@yunjinli
Author

Hi,

So to reproduce the rendering results of HyperNeRF shown here, I'll have to run MiVOS to re-estimate camera poses and obtain dynamic masks, right? I hope I understand it correctly :)

@yihua7
Owner

yihua7 commented Apr 16, 2024

Hi,
There is no need to re-estimate camera poses for the scenes shown on our homepage because their cameras are roughly correct. You can optionally run MiVOS to mask the dynamic parts. If you use dynamic masks, just remember to pass `--gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask` instead of `--gt_alpha_mask_as_scene_mask`. For other HyperNeRF scenes, a COLMAP re-estimation is recommended.
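
To make the flag swap concrete, the two configurations differ only in how the alpha mask is interpreted (a sketch, reusing the same assumed entry point and paths as in the earlier example):

```bash
# With MiVOS dynamic masks: the alpha mask marks the dynamic parts.
python train_gui.py --source_path ./data/hypernerf/broom --model_path ./outputs/broom_dyn \
    --gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask \
    --deform_type node --node_num 512 --hyper_dim 2 --eval --local_frame \
    --W 800 --H 800 --white_background --gui

# Without dynamic masks: the alpha mask is used as a scene mask instead.
python train_gui.py --source_path ./data/hypernerf/broom --model_path ./outputs/broom_scene \
    --gt_alpha_mask_as_scene_mask \
    --deform_type node --node_num 512 --hyper_dim 2 --eval --local_frame \
    --W 800 --H 800 --white_background --gui
```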

@yunjinli
Author

Hi,

Thanks for your reply 😀
As for the dynamic mask, what is the intuition of using it? Would it change the result significantly? Maybe I miss this part in the paper but I didn't find the explanation about it though. I look forward to your reply, thanks!

@yihua7
Owner

yihua7 commented Apr 16, 2024

  • The dynamic masks make the representation more efficient. Since the dynamic regions are provided as a prior, the control points and the MLP only need to model the dynamics of the unmasked regions (with `--gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask`).

  • Otherwise, if you do not provide dynamic masks and remove `--gs_with_motion_mask`, the whole scene is treated as dynamic, which is wasteful.

  • But you can also add `--gs_with_motion_mask` and train the model without dynamic masks; in this way the model will learn which regions are dynamic and which are static. In that case, a binary loss forcing the mask values closer to either 0 or 1 is recommended, to sharpen each Gaussian's motion mask and avoid editing artifacts. (Without it, the velocity of motion may become entangled with the motion-mask values of the Gaussians.) One possible form of such a loss is sketched below.
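
The thread does not give the exact form of the binary loss, so the following is only one common choice of such a regularizer (an assumption, not necessarily what this repo implements). It is zero exactly when every mask value is 0 or 1, and largest at $m_i = 0.5$:

$$\mathcal{L}_{\text{bin}} = \frac{1}{N}\sum_{i=1}^{N} \min\left(m_i,\; 1 - m_i\right),$$

where $m_i \in [0, 1]$ is the motion-mask value of the $i$-th Gaussian. Adding $\lambda\,\mathcal{L}_{\text{bin}}$ to the training objective with a small weight $\lambda$ pushes the masks toward binary values without dictating which side they fall on.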

@diamond0910

> There is no need to re-estimate camera poses for the scenes shown on our homepage because their cameras are roughly correct. You can optionally run MiVOS to mask the dynamic parts. If you use dynamic masks, just remember to pass `--gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask` instead of `--gt_alpha_mask_as_scene_mask`. For other HyperNeRF scenes, a COLMAP re-estimation is recommended.

Hi, if only the dynamic objects are reconstructed, how is the static part reconstructed? The HyperNeRF results shown on your homepage cover the complete scene; how should one run the code to reproduce them?
