'Convert Model' shelf tool broken in H20/MLOPs 3.0 #142
Hi from France here! I have spent some time trying to debug the Convert Model node, and I'm now able to convert my SDXL and even Turbo models! So here's what I did; I hope it works for you too:

Finally, with my old 8 GB VRAM GPU I was getting the famous "out of memory" message and couldn't cook an image even at 512x512 with the SDXL pipeline, so I looked for somewhere to pass a --lowvram argument like in ComfyUI, but I didn't find one and my Python programming skills are too limited. Instead (though I'm not entirely sure this is what fixed it), I simply reinstalled some dependencies (accelerate, transformers, and xformers) with `pip install -U accelerate transformers xformers` (I'm on Manjaro Linux), and after that my GPU memory behaved like ComfyUI in --lowvram mode. It just works, even at 1024x1024. Magical. I'm sure there is a lot more that could be done with the MLOPs data and the dependency file to make it work better; maybe we just have to wait for a new update from the MLOPs team. Cheers!
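For reference, diffusers exposes similar low-VRAM switches programmatically. This is only a sketch of how they could be applied, assuming the SDXL pipeline is a plain diffusers one; the model path and prompt are placeholders, not MLOPs defaults:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder path: a diffusers-format model directory.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "/path/to/converted/sdxl-model",
    torch_dtype=torch.float16,  # half precision halves weight memory
)

# Roughly what ComfyUI's --lowvram does: keep submodules on the CPU and
# move them to the GPU one at a time during inference (needs accelerate).
pipe.enable_model_cpu_offload()

# Compute attention in slices instead of one big matmul; slower but lighter.
pipe.enable_attention_slicing()

# Memory-efficient attention kernels, only if xformers is installed.
try:
    pipe.enable_xformers_memory_efficient_attention()
except Exception:
    pass  # xformers unavailable; attention slicing still applies

image = pipe("a test prompt", width=1024, height=1024).images[0]
image.save("test.png")
```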
The Convert Model shelf tool appears broken in the newer version of MLOPs. Inputting a checkpoint directory (absolute or relative path, with or without config file) causes the same error as inputting no checkpoint file at all:
The same traceback appears consistently whenever the 'Convert' button is clicked.
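As a stopgap while the shelf tool is broken, the conversion can be done directly with the diffusers library, which is what the tool wraps. This is a hedged sketch rather than the MLOPs implementation, and both paths are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a single-file .safetensors checkpoint (placeholder path).
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
)

# Write out the diffusers folder layout (unet/, vae/, text_encoder/, ...)
# that the MLOPs nodes expect to load.
pipe.save_pretrained("/path/to/output/sdxl-converted")
```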