How multi-gpu latent caching works? #1271
At the moment, caching of latents is only done on the main process, i.e., on one GPU only. I do not know if there are any issues that would prevent using accelerate to split the load, but I do not understand enough of the caching logic to say for sure.
Try to split your data instead :)
As DKnight54 wrote, currently the caching of latents is done on the main process. It doesn't seem to be easy to distribute the images across multiple GPUs with accelerate. So the easy way is splitting the data into multiple datasets, as tristanwqy wrote. And then we can run caching for each dataset separately.
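The split-your-data workaround could be sketched as follows. This is a hypothetical helper, not part of sd-scripts: it round-robins an image list into disjoint shards so that each process (one per GPU) caches only its own subset.

```python
# Hypothetical sketch (not sd-scripts' actual API): split the image list so
# each accelerate process caches latents for a disjoint shard.

def shard_for_rank(paths, rank, world_size):
    """Round-robin split: process `rank` gets every `world_size`-th path.

    Shards are disjoint and together cover all paths, so no latent is
    computed twice across processes.
    """
    return paths[rank::world_size]

# Example with 2 GPUs and 5 images:
images = ["img0.png", "img1.png", "img2.png", "img3.png", "img4.png"]
shard0 = shard_for_rank(images, rank=0, world_size=2)  # img0, img2, img4
shard1 = shard_for_rank(images, rank=1, world_size=2)  # img1, img3
```

In practice the rank could come from accelerate's `Accelerator.process_index`, and each process would then write its shard's latents to disk before training starts.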
I think that would be nice.
Currently I am training on Kaggle with 15 train images and 100 repeats, and thus 1500 reg images.
Ideally, I would like each GPU to get a different set of reg images, but I doubt the system is designed that way.
So currently I see that both GPUs are caching the same images?
I think at the very least each GPU could cache a portion of the reg image pool.
How does it work exactly right now, @kohya-ss?
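One way each GPU could avoid redundant work, assuming latents are cached to disk as `.npz` files next to each image (as with the `--cache_latents_to_disk` option), is to skip images whose cache file already exists. This is a hedged sketch, not sd-scripts' actual logic; the helper names are illustrative.

```python
# Hedged sketch: a process only computes latents for images that have no
# cached .npz file yet, so two processes scanning the same folder will not
# redo work that is already on disk. Names here are hypothetical.
import os


def needs_caching(image_path):
    """Return True if no cached latent file exists for this image yet."""
    cache_path = os.path.splitext(image_path)[0] + ".npz"
    return not os.path.exists(cache_path)


def pending_images(image_paths):
    """Filter the image list down to those still missing a latent cache."""
    return [p for p in image_paths if needs_caching(p)]
```

Note that this alone does not prevent two processes from racing on the same uncached image; combining it with a per-rank shard of the reg image pool would keep the work disjoint.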