
Trying To Convert Paligemma model in npz to hf model format #35632

Open
Shaka42 opened this issue Jan 12, 2025 · 1 comment

Shaka42 commented Jan 12, 2025

https://github.com/huggingface/transformers/blob/main/src/transformers/models/paligemma/convert_paligemma_weights_to_hf.py
I use the linked script but keep getting this error. Can someone help me, please?

Error:
2025-01-12 00:37:01.755018: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-01-12 00:37:01.786891: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-01-12 00:37:01.795171: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-12 00:37:03.684351: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
loading configuration file preprocessor_config.json from cache at /root/.cache/huggingface/hub/models--google--siglip-so400m-patch14-384/snapshots/9fdffc58afc957d1a03a25b10dba0329ab15c2a3/preprocessor_config.json
Image processor SiglipImageProcessor {
"do_convert_rgb": null,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.5,
0.5,
0.5
],
"image_processor_type": "SiglipImageProcessor",
"image_std": [
0.5,
0.5,
0.5
],
"processor_class": "SiglipProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 384,
"width": 384
}
}

Traceback (most recent call last):
File "/content/convert_paligemma_weights_to_hf.py", line 340, in <module>
convert_paligemma_checkpoint(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/content/convert_paligemma_weights_to_hf.py", line 255, in convert_paligemma_checkpoint
state_dict_transformers = slice_state_dict(state_dict, config)
File "/content/convert_paligemma_weights_to_hf.py", line 171, in slice_state_dict
q_proj_weight_reshaped = llm_attention_q_einsum[i].transpose(0, 2, 1).reshape(config.text_config.num_attention_heads * config.text_config.head_dim, config.text_config.hidden_size)
ValueError: cannot reshape array of size 4718592 into shape (2048,2048)
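A quick way to debug this kind of mismatch is to print the shape of every array stored in the `.npz` checkpoint before running the conversion script, and compare them against what the HF config expects. This is a minimal sketch: the key name and shape below are illustrative stand-ins, not the real PaliGemma checkpoint keys.

```python
import numpy as np

# Stand-in checkpoint: the key name and shape here are illustrative only;
# a real PaliGemma .npz uses different keys.
np.savez("/tmp/ckpt.npz", llm_attention_q_einsum=np.zeros((2048, 2304)))

# List every array's shape so it can be compared against the config.
with np.load("/tmp/ckpt.npz") as ckpt:
    shapes = {key: ckpt[key].shape for key in ckpt.files}
    for key, shape in shapes.items():
        print(key, shape)
```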

@Shaka42 Shaka42 changed the title Training To Convert Paligemma model in npz to pytorch format Trying To Convert Paligemma model in npz to hf model format Jan 12, 2025

Rocketknight1 commented Jan 13, 2025

Hi @Shaka42, this seems like a problem with the checkpoint you're trying to convert. The specific issue is that the Transformers model expects the q_proj weight to have shape (2048, 2048), but the stored array actually contains 4,718,592 elements, which is 2048 * 2304. So either the config or the weights in your original model don't match what Transformers expects.
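The arithmetic behind the error can be reproduced directly. A sketch, assuming the Gemma-2B text config values implied by the traceback (num_attention_heads=8, head_dim=256, hidden_size=2048); the 2304 is simply 4,718,592 / 2048, not necessarily a meaningful model dimension:

```python
import numpy as np

# Config values implied by the error message (assumed Gemma-2B text config):
num_heads, head_dim, hidden_size = 8, 256, 2048

arr = np.zeros(4_718_592)  # element count reported for the checkpoint's q weight

# The conversion script attempts this reshape, which fails because
# num_heads * head_dim * hidden_size = 4,194,304 != 4,718,592:
try:
    arr.reshape(num_heads * head_dim, hidden_size)
except ValueError as err:
    print(err)  # cannot reshape array of size 4718592 into shape (2048,2048)

# The element count factors as 2048 * 2304, so one axis of the stored
# weight is 2304 rather than the 2048 the config predicts:
print(4_718_592 // hidden_size)  # 2304
```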
