Commit

Update README.md
xiangyao-6e authored Oct 11, 2024
1 parent af7395e commit 546a742
Showing 1 changed file with 2 additions and 2 deletions.
@@ -58,6 +58,8 @@ b. Open-source Models
- mPLUG-Owl3
- Ferret

To ensure a smooth evaluation process, make sure to install the specific version of the Transformers library as specified in the [**repository**](https://github.com/open-compass/VLMEvalKit/tree/main#-datasets-models-and-evaluation-results) for the model you wish to evaluate.
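For example, pinning the library before evaluating a given model might look like the following (the version number is a placeholder; use the one listed for your model in the VLMEvalKit table):

```shell
# Install the Transformers version required by the target model.
# Replace 4.33.0 with the version listed for your model in VLMEvalKit.
pip install "transformers==4.33.0"
```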

Some models were not supported by VLMEvalKit at the time of our benchmark development.

We employed the models and the associated Transformers version for the benchmark as follows:
@@ -70,8 +72,6 @@ Model configuration file can be found at `MMDocBench/vlmeval/config.py`.

Model paths should be specified in the configuration file. Additionally, we set the model's `max_new_tokens` parameter to 4096 in the config file, as some tasks in our benchmark require long model prediction outputs. For some models, such as llava and mplug_owl3, where the generation configuration is fixed, we modify the model files to support the `max_new_tokens` parameter.
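As an illustrative sketch of what such a config entry could look like (the constructor and entry names here are hypothetical; the actual entries live in `MMDocBench/vlmeval/config.py`), a model is typically registered with its local checkpoint path and a long generation budget:

```python
from functools import partial

# Hypothetical stand-in for a model constructor; the real config maps
# model names to partially-applied constructors from the vlmeval package.
def build_model(model_path, max_new_tokens=4096):
    return {"model_path": model_path, "max_new_tokens": max_new_tokens}

supported_VLM = {
    # Each entry binds a model name to its checkpoint path and sets
    # max_new_tokens=4096 so long-output benchmark tasks are not truncated.
    "example_model": partial(
        build_model,
        model_path="/path/to/checkpoint",
        max_new_tokens=4096,
    ),
}
```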

To ensure a smooth evaluation process, make sure to install the specific version of the Transformers library as specified in the [**repository**](https://github.com/open-compass/VLMEvalKit/tree/main#-datasets-models-and-evaluation-results) for the model you wish to evaluate.

### Run the evaluation

Run the evaluation with either `python` or `torchrun` as follows to evaluate one model on MMDocBench.
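Assuming VLMEvalKit's `run.py` entry point (the model name below is a placeholder; argument names follow the upstream repository), the two invocations might look like:

```shell
# Single-process evaluation of one model on MMDocBench
python run.py --data MMDocBench --model example_model --verbose

# Multi-GPU evaluation via torchrun (2 processes shown as an example)
torchrun --nproc-per-node=2 run.py --data MMDocBench --model example_model --verbose
```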
