Can I ask how you trained the IP2P model? #128
I was wrong.
If you are new to ControlNet training, I suggest starting out with a simpler one, like an edge model, to understand how parameters and image preparation affect training; see lllyasviel/ControlNet#318 (comment) and https://civitai.com/articles/2078. In my experience 1-2 epochs are usually fine (assuming ~200k samples at batch size 32). I don't quite understand what the point of using the same prompt for all samples is; you might be better off fine-tuning a base model.
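For illustration, preparing an edge-model dataset in the tutorial's source/target/prompt.json layout can look roughly like this. This is only a sketch: the directory names, Canny thresholds, and caption text are placeholders, not anything prescribed by the repo.

```python
# Sketch: build edge-map conditioning images and a prompt.json manifest.
# Directory names, thresholds, and captions are illustrative placeholders.
import cv2
import json
import os

os.makedirs('training/source', exist_ok=True)

records = []
for name in sorted(os.listdir('training/target')):
    img = cv2.imread(os.path.join('training/target', name))
    edges = cv2.Canny(img, 100, 200)                 # edge map used as the control image
    edges = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)  # back to 3 channels
    cv2.imwrite(os.path.join('training/source', name), edges)
    records.append({'source': 'source/' + name,
                    'target': 'target/' + name,
                    'prompt': 'an individual caption for this image'})  # per-image, not one shared prompt

# one JSON object per line, as in the ControlNet fill50k tutorial
with open('training/prompt.json', 'wt') as f:
    f.write('\n'.join(json.dumps(r) for r in records))
```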
I have read this article and used the basic tutorial, but I am still having trouble. I want to train IP2P, but the images in Sample_002 and Reconstruction_002 have absolutely no relationship, when their relationship should be fixed. I have tried many different prompts, but the results are all the same. I sincerely hope to receive your help. I would like to know how you trained IP2P; could you explain your training process?
See https://www.timothybrooks.com/instruct-pix2pix. By the way, your images are not visible.
I'm here again.
I recently tried training an IP2P model using v1-5-pruned.ckpt as the base model. I wanted to turn real-life images into cartoon images, using the source, target, and prompt.json data format, with the prompt "Turn it into a corresponding cartoon portrait". In addition, after reading another comment, I changed the training file to:

```python
model = create_model('PATH/TO/control_v11e_sd15_ip2p.yaml')
model.load_state_dict(load_state_dict('PATH/TO/v1-5-pruned.ckpt'), strict=False)
model.load_state_dict(load_state_dict('PATH/TO/control_v11e_sd15_ip2p.pth'), strict=False)
```

I haven't changed anything else, but the training results are not satisfactory. The generated images are completely different from the images in the target folder; they are random and chaotic. I tried training for 50 epochs, but it was still as bad as before, and it seems to have plateaued by the second epoch.
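For reference, the source/target/prompt.json layout described above is read by a dataset class in the style of the ControlNet repo's tutorial_dataset.py, roughly like the sketch below; the ./training/ path is a placeholder.

```python
# Sketch of a tutorial_dataset.py-style loader for the source/target/prompt.json layout.
import json
import cv2
import numpy as np
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self):
        self.data = []
        with open('./training/prompt.json', 'rt') as f:
            for line in f:  # one JSON object per line
                self.data.append(json.loads(line))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        source = cv2.imread('./training/' + item['source'])
        target = cv2.imread('./training/' + item['target'])
        source = cv2.cvtColor(source, cv2.COLOR_BGR2RGB)
        target = cv2.cvtColor(target, cv2.COLOR_BGR2RGB)
        source = source.astype(np.float32) / 255.0            # conditioning image in [0, 1]
        target = (target.astype(np.float32) / 127.5) - 1.0    # diffusion target in [-1, 1]
        return dict(jpg=target, txt=item['prompt'], hint=source)
```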
I don't know if there is something wrong with my training method. I have seen comments about using it in conjunction with gradio_ip2p.py, but I don't quite understand what that means. Is it just the model-setup part, or does the image preprocessing code also need to be migrated into the training code?
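For context, the gradio demo scripts in the ControlNet repo feed the input image to the model as a control tensor scaled to [0, 1], so "matching gradio_ip2p.py" usually means making sure training normalizes the source image the same way the inference script does. A rough sketch of that inference-side preparation follows; it is written from memory and is an assumption, not the exact repo code.

```python
# Sketch: how the gradio_*.py demos prepare the control tensor at inference.
# The key point is the /255.0 scaling to [0, 1], which training must match.
import torch
import einops

def make_control(img_uint8_hwc, num_samples=1):
    # img_uint8_hwc: HxWxC uint8 numpy array, the raw input image
    control = torch.from_numpy(img_uint8_hwc.copy()).float() / 255.0  # [0, 1]
    control = torch.stack([control] * num_samples, dim=0)             # b h w c
    return einops.rearrange(control, 'b h w c -> b c h w')            # b c h w
```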
Sorry, I have little knowledge of code and am still a newbie, so I would like to ask you some detailed questions about training IP2P. I would like to know how to make the generated images correspond to the images in the target folder.