'Convert Model' shelf tool broken in H20/MLOPs 3.0 #142

Open
spelafort opened this issue Feb 16, 2024 · 2 comments
@spelafort

The Convert Model shelf tool appears broken in the newer version of MLOPs. Inputting a checkpoint directory (absolute or relative path, with or without config file) causes the same error as inputting no checkpoint file at all:

Traceback (most recent call last):
  File "C:\Users/USER/Documents/GitHub/MLOPs/scripts/python\mlops_utils.py", line 557, in on_accept
    convert_model.convert(
  File "C:\Users/USER/Documents/GitHub/MLOPs/scripts/python\sdpipeline\convert_model.py", line 12, in convert
    pipe = download_from_original_stable_diffusion_ckpt(
TypeError: download_from_original_stable_diffusion_ckpt() got an unexpected keyword argument 'checkpoint_path'

The traceback appears consistently whenever the 'Convert' button is clicked.
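The failing call can be reproduced outside Houdini with just the bundled diffusers, which points at the keyword itself rather than the checkpoint input. A minimal sketch (the checkpoint path is hypothetical):

    from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
        download_from_original_stable_diffusion_ckpt,
    )

    # Same keyword convert_model.py passes; on a diffusers version that no
    # longer accepts it, this raises the TypeError above regardless of the path.
    pipe = download_from_original_stable_diffusion_ckpt(
        checkpoint_path="model.ckpt",  # hypothetical checkpoint path
    )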

@Tr1dae

Tr1dae commented Mar 3, 2024

Same for me: MLOPs 3, Houdini 20.0.625.

@olop2

olop2 commented Mar 7, 2024

Hi from France! I've spent some time debugging the Convert Model node, and I'm now able to convert my SDXL and even Turbo models!

Here's what I did; I hope it works for you:

  1. in "MLOPS/scripts/python/sdpipeline/convert_model.py" change the text line 13 : "checkpoint_path=checkpoint_file," by "checkpoint_path_or_dict=checkpoint_file,"

  2. go to "/MLOPS/data/dependencies/python/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py" at line 1212 there is "StableDiffusionPipeline," ,add a new line with "StableDiffusionXLPipeline,"

  3. and just after in line 1220 replace "pipeline_class = StableDiffusionPipeline" by "pipeline_class = StableDiffusionXLPipeline"

  4. After that convert model will normally works but if you choose to work with an XL model , don't forget in your pipeline node parameters , in solver tab , to choose " StableDiffusionXL - Autodetect " or select it manually.
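For reference, here is roughly what the patched call amounts to. Steps 2 and 3 hard-code the XL pipeline class into diffusers itself; if the bundled diffusers build is recent enough, the same function also accepts a pipeline_class keyword, which achieves the same thing without editing the library in place. A minimal sketch with a hypothetical checkpoint path, not MLOPs' actual source:

    from diffusers import StableDiffusionXLPipeline
    from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
        download_from_original_stable_diffusion_ckpt,
    )

    # Step 1: the keyword was renamed from checkpoint_path in newer diffusers.
    # Steps 2-3: passing pipeline_class forces the SDXL pipeline without
    # patching convert_from_ckpt.py by hand.
    pipe = download_from_original_stable_diffusion_ckpt(
        checkpoint_path_or_dict="sd_xl_base_1.0.safetensors",  # hypothetical path
        from_safetensors=True,
        pipeline_class=StableDiffusionXLPipeline,
    )
    pipe.save_pretrained("converted_sdxl")  # hypothetical output directory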

Finally, with my old 8 GB VRAM GPU I got the famous "out of memory" message and couldn't cook an image even at 512x512 in the XL pipeline, so I looked for somewhere to put a --lowvram argument like in ComfyUI, but I didn't find one and my Python skills are too limited. Instead, though I'm not really sure this is what did it, I just reinstalled some dependencies (accelerate, transformers, and xformers) with "pip install <dependency> -U" (I'm on Manjaro Linux), and after that my GPU memory behaves like ComfyUI in --lowvram mode. It just works, even at 1024x1024. Magical.
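For anyone else hunting for a --lowvram equivalent: diffusers has built-in memory savers that could presumably be called on the pipeline object. A minimal sketch, assuming the converted model loads as a standard diffusers pipeline and using hypothetical paths:

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "converted_sdxl",            # hypothetical output of Convert Model
        torch_dtype=torch.float16,   # halves weight memory on CUDA GPUs
    )
    # Needs accelerate (which may be why reinstalling it helped): only the
    # submodule currently running is kept on the GPU.
    pipe.enable_model_cpu_offload()
    # Computes attention in slices, trading speed for lower peak VRAM.
    pipe.enable_attention_slicing()
    # Optional, if xformers is installed:
    # pipe.enable_xformers_memory_efficient_attention()

    image = pipe("a test prompt", height=1024, width=1024).images[0]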
For now I haven't tested other workflows, so I don't know what is working and what isn't.

I'm sure there are still a lot of things to do with the MLOPs data and the dependency files to make this work better; maybe we just have to wait for a new update from the MLOPs team.

Cheers!
