Could you specify the exact model used for the EgoSchema eval? The paper states that the LLM backbone used for the EgoSchema eval is LLaMA-2, but the README states that Vicuna weights were used. If LLaMA-2 was indeed used for the EgoSchema eval, I'm assuming `llama-2-7b-chat-hf` and the corresponding MiniGPT-4 weights were used (the MiniGPT-4 weights linked in the README seem to be for Vicuna 13B v0). Does this also mean that the provided pre-trained checkpoint is for `llama-2-7b-chat-hf`?