Questions about fine-tuning ControlNet-Tile to achieve super-resolution #73
We use multiview high-resolution and low-resolution pairs. The multiview images come from Blender's rendering results for the Objaverse dataset.
Thank you for your reply! Do you mean rendering the Objaverse 3D dataset at two different resolutions (one relatively high and the other relatively low) to construct the data pairs?
Yes, we use a (256, 512) resolution pair for the first stage of super-resolution training. The 256-resolution portion is augmented by downsampling to a random resolution and then upsampling back to 256, along with some random noise, to obtain a 256-resolution image with artifacts. This allows the super-resolution model at this stage to correct some minor errors in generation.
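The degradation described above (random-resolution downsample, upsample back to 256, plus random noise) can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline; the function name, the minimum resolution, and the noise level are assumptions chosen for clarity.

```python
# Hypothetical sketch of the degradation step described above:
# downsample a 256x256 image to a random lower resolution, upsample
# back to 256, then add Gaussian noise to introduce artifacts.
# Names and hyperparameters (min_res, noise_std) are illustrative.
import numpy as np
from PIL import Image


def degrade_to_256(img_256, min_res=64, noise_std=5.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Pick a random intermediate resolution below 256.
    low = int(rng.integers(min_res, 256))
    # Downsample, then upsample back to 256 (bicubic both ways).
    degraded = img_256.resize((low, low), Image.BICUBIC)
    degraded = degraded.resize((256, 256), Image.BICUBIC)
    # Add mild Gaussian noise and clip back to valid pixel range.
    arr = np.asarray(degraded).astype(np.float32)
    arr += rng.normal(0.0, noise_std, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

The degraded output then serves as the ControlNet-Tile conditioning input, paired with the clean 512-resolution render as the target.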
@wukailu Hi, I want to ask about some details of training ControlNet-Tile. I ran into the color change problem, similar to the one mentioned in lllyasviel/ControlNet-v1-1-nightly#125 (comment). Could you please give some suggestions for solving it? Thank you.
Hello, thank you for your great work on high-resolution image-to-3D generation!
I noticed that you utilized a ControlNet-Tile based on SD1.5 for the first stage of super-resolution. I am curious which data you used for fine-tuning. Fine-tuning a ControlNet usually requires data pairs (e.g. image-normal pairs, image-depth pairs, LR-HR pairs).
Thank you :)