
Questions About the Implementation of the Enhancement Module #201

Open
Jackiemin233 opened this issue Jan 15, 2025 · 0 comments

Comments

@Jackiemin233

Thanks for your great work!
I am trying to train the enhancement module on my own dataset, but the implementation of this module is confusing to me.
In my understanding, during the training stage the input to the ControlNet is the low-quality images, the scheduler adds noise to the GT images, and the diffusion UNet predicts that noise. During the inference stage, the output of the MV Diffusion model provides the low-quality images as the ControlNet input, and the UNet recovers the normal maps and RGB images from Gaussian noise (matching the pipeline figure of Wonder3D++).

However, in your released code, the images generated by the MV Diffusion model are never used in the enhancement module. Instead, the input to the ControlNet is the RGB images and normal maps rendered from the textured coarse mesh, and the output of the MV Diffusion model appears to be unused, which differs from the paper. Is there anything wrong?
[screenshot of the relevant code attached]
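To make my reading concrete, here is a minimal sketch of the training step I describe above, written against the diffusers API with toy shapes so it runs standalone. The tensor names (`gt_latents`, `lq_condition`, `encoder_hidden_states`) are hypothetical placeholders for illustration, not names from the Wonder3D++ code:

```python
import torch
import torch.nn.functional as F
from diffusers import ControlNetModel, DDPMScheduler, UNet2DConditionModel

# Tiny toy UNet so the sketch runs without downloading pretrained weights.
unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=32,
)
controlnet = ControlNetModel.from_unet(unet)
scheduler = DDPMScheduler(num_train_timesteps=1000)

# Hypothetical batch: GT latents to be noised, and the low-quality
# images as the ControlNet conditioning signal (my assumption).
gt_latents = torch.randn(1, 4, 32, 32)          # VAE latents of the GT views
lq_condition = torch.randn(1, 3, 256, 256)      # low-quality conditioning images
encoder_hidden_states = torch.randn(1, 77, 32)  # text/image embeddings

# Standard diffusion training step: noise the GT latents ...
noise = torch.randn_like(gt_latents)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
noisy_latents = scheduler.add_noise(gt_latents, noise, timesteps)

# ... condition the ControlNet on the low-quality images ...
down_res, mid_res = controlnet(
    noisy_latents,
    timesteps,
    encoder_hidden_states=encoder_hidden_states,
    controlnet_cond=lq_condition,
    return_dict=False,
)

# ... and have the UNet predict the added noise given the ControlNet residuals.
noise_pred = unet(
    noisy_latents,
    timesteps,
    encoder_hidden_states=encoder_hidden_states,
    down_block_additional_residuals=down_res,
    mid_block_additional_residual=mid_res,
).sample
loss = F.mse_loss(noise_pred, noise)
```

At inference time I would expect the same conditioning path, except that `noisy_latents` starts from pure Gaussian noise and `lq_condition` comes from the MV Diffusion outputs; in the released code, however, the conditioning instead seems to come from the coarse-mesh renders.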
Looking forward to your reply, and thank you for your time!
