How to evaluate your LLaVA-UHD model? #26

Open
Gaffey opened this issue Jun 7, 2024 · 1 comment
Comments


Gaffey commented Jun 7, 2024

The README says that you use the same evaluation scripts as LLaVA. However, all of the UHD-specific code lives in llava/train/llava_uhd, and the original scripts cannot use it, since llava/model/builder.py does not even import your UHD components. Could you please provide the corresponding evaluation code? Directly importing these modules does not work, because the data preprocessor is then incorrect.

@guozonghao96
Collaborator

Our repository has been substantially refactored, and almost all known bugs have been fixed. For details, please refer to the main branch and the LLaVA-UHD v1 branch.

You can evaluate the model with the scripts provided for LLaVA-UHD v1 or v2. The evaluation datasets can be downloaded the same way as for LLaVA-1.5. Alternatively, you can use VLMEvalKit to evaluate all of the benchmarks.
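For reference, a VLMEvalKit run typically looks like the sketch below. This is only an illustration: `--data` and `--model` are VLMEvalKit's standard flags, but the model identifier `llava_uhd_v2` is a placeholder assumption, not a name confirmed by this thread — check the toolkit's supported-model list for the exact registered name.

```shell
# Install VLMEvalKit (the open-compass evaluation toolkit)
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .

# Run one benchmark. "llava_uhd_v2" is a hypothetical model id --
# substitute the name actually registered in the toolkit's config.
python run.py --data MMBench_DEV_EN --model llava_uhd_v2 --verbose
```

Results are written under the toolkit's output directory per model and dataset, so several benchmarks can be evaluated in one invocation by passing multiple `--data` entries.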
