
Clarification on model weights for EgoSchema eval #8

Open
yukw777 opened this issue Sep 9, 2024 · 1 comment
yukw777 commented Sep 9, 2024

Could you specify the exact model used for the EgoSchema eval? The paper states that the LLM backbone for the EgoSchema eval is LLaMA-2, but the README states that Vicuna weights were used. If LLaMA-2 was indeed used for the EgoSchema eval, I'm assuming llama-2-7b-chat-hf and the corresponding MiniGPT-4 weights were used (the MiniGPT-4 weights linked in the README appear to be for Vicuna 13B v0). Does this also mean that the provided pre-trained checkpoint is for llama-2-7b-chat-hf?

rxtan2 (Owner) commented Sep 30, 2024

Yes, sorry, I will update the README to reflect this. In the meantime, please use 'llama-2-7b-chat-hf'. Thank you very much!
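For anyone else running the EgoSchema eval, the fix boils down to pointing the config at the LLaMA-2 chat weights instead of Vicuna. A minimal sketch of what that might look like, assuming a MiniGPT-4-style config with a `llama_model` field; the key names and paths below are illustrative assumptions, not the repo's actual config:

```yaml
# Hypothetical config sketch (field names and paths are assumptions,
# not taken from this repo): use the LLaMA-2 7B chat backbone, in
# Hugging Face format, rather than the Vicuna weights from the README.
model:
  llama_model: "/path/to/llama-2-7b-chat-hf"        # HF-format LLaMA-2 7B chat weights
  ckpt: "/path/to/pretrained_checkpoint.pth"        # pre-trained checkpoint matching this backbone
```

The important point from the thread is that the backbone and the pre-trained checkpoint must match: the provided checkpoint corresponds to llama-2-7b-chat-hf, so pairing it with the Vicuna 13B v0 weights linked in the README will not load correctly.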
