How does multi-GPU latent caching work? #1271

Open
FurkanGozukara opened this issue Apr 17, 2024 · 4 comments

Comments

@FurkanGozukara

Currently I am training on Kaggle with 15 train images and 100 repeats, and thus 1500 reg images.

Ideally I would like each GPU to get a different set of reg images, but I doubt the system is designed that way.

So currently I see that both GPUs are caching the same images?

I think at the very least each GPU could cache a portion of the reg image pool.

How does it work exactly right now, @kohya-ss?


@DKnight54
Contributor

At the moment, caching of latents is only done on the main process, i.e. on one GPU only. I do not know whether there are any issues that would prevent using accelerate to split the load, but I do not understand enough of the caching logic to say for sure.
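For context, a minimal sketch of the "main process only" pattern with Hugging Face accelerate, just to illustrate why only one GPU looks busy during caching. This is not the actual sd-scripts code; `cache_latents_for` and `my_dataset` are hypothetical placeholders.

```python
# Illustrative sketch only -- not the actual sd-scripts implementation.
# It shows the common accelerate pattern where one process does the work
# and the other ranks wait, which is why only one GPU appears busy.
from accelerate import Accelerator

accelerator = Accelerator()

my_dataset = ...  # placeholder for your dataset object


def cache_latents_for(dataset):
    # Hypothetical helper: encode every image in `dataset` with the VAE
    # and write the latents to disk (e.g. next to the images).
    ...


if accelerator.is_main_process:
    # Only rank 0 encodes and writes the latent cache.
    cache_latents_for(my_dataset)

# All other ranks block here until the main process has finished caching,
# then every rank reads the cached latents from disk during training.
accelerator.wait_for_everyone()
```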

@tristanwqy

Try splitting your data instead :)

@kohya-ss
Owner

As DKnight54 wrote, currently the caching of latents is done on the main process. It doesn't seem to be easy to distribute the images across multiple GPUs with accelerate.

So the easy way is to split the data into multiple datasets, as tristanwqy wrote, and then run tools/cache_latents.py for each dataset and GPU.
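As a rough sketch of that workaround (assuming a flat folder of reg images; the folder names are placeholders, and the exact tools/cache_latents.py arguments depend on the sd-scripts version so they are not shown here), one could split the reg-image pool into one subset per GPU like this:

```python
# Minimal sketch: split a reg-image pool into one subset per GPU so that
# tools/cache_latents.py can be run separately for each subset/GPU.
# Paths are placeholders; adapt them to your own dataset layout.
from pathlib import Path
import shutil

src = Path("reg_images")  # original pool of regularization images
num_gpus = 2

images = sorted(
    p for p in src.iterdir()
    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}
)

for rank in range(num_gpus):
    subset_dir = Path(f"reg_images_gpu{rank}")
    subset_dir.mkdir(exist_ok=True)
    # Round-robin assignment: image i goes to GPU (i % num_gpus).
    for img in images[rank::num_gpus]:
        shutil.copy2(img, subset_dir / img.name)

# Each subset can then be referenced by its own dataset config, and
# tools/cache_latents.py run once per subset, with CUDA_VISIBLE_DEVICES
# pointing at the matching GPU.
```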

@FurkanGozukara
Author

> As DKnight54 wrote, currently the caching of latents is done on the main process. It doesn't seem to be easy to distribute the images across multiple GPUs with accelerate.
>
> So the easy way is to split the data into multiple datasets, as tristanwqy wrote, and then run tools/cache_latents.py for each dataset and GPU.

I think that would be nice.
