---
title: MTEB Leaderboard
emoji: 🥇
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.20.0
app_file: app.py
pinned: false
tags: []
startup_duration_timeout: 1h
fullWidth: true
---
As of February 4, 2025, this repository is no longer under active maintenance; it has been replaced by the new version of the leaderboard, which is integrated into the mteb package. All requests for modifications to the leaderboard, and all model submissions, should start in the MTEB repository.
This repository contains legacy code for pushing and updating the MTEB leaderboard daily.
| Relevant Links | Description |
| --- | --- |
| mteb | The implementation of the benchmark. Here you can, e.g., find the code to run your model on the benchmark. |
| leaderboard | The leaderboard itself, where you can view the results of models run on MTEB. |
| results | The results of MTEB are stored here. To learn how to add results to the leaderboard, refer to the documentation: Adding a Model to the Leaderboard. |
To set up the repository:

```bash
git clone https://github.com/embeddings-benchmark/leaderboard.git
cd leaderboard

# install requirements
pip install -r requirements.txt

# fetch new results
python refresh.py

# run the leaderboard
python app.py
```

Note: if you'd like to add results for previously cached models, you may have to remove those models from `EXTERNAL_MODEL_RESULTS.json`. You can also delete `EXTERNAL_MODEL_RESULTS.json` entirely and it will be recreated, but this is much slower.
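The cache-eviction step above can be sketched in Python. This is a minimal sketch that assumes `EXTERNAL_MODEL_RESULTS.json` is a JSON object keyed by model name (an assumption about its layout, not documented here); the demo below operates on a throwaway file with hypothetical model names rather than the real cache.

```python
import json
from pathlib import Path

def evict_model(cache_path: Path, model_name: str) -> bool:
    """Remove one model's cached results so refresh.py will refetch it.

    Assumes the cache file is a JSON object keyed by model name.
    Returns True if an entry was removed, False if none existed.
    """
    cache = json.loads(cache_path.read_text())
    if model_name not in cache:
        return False
    del cache[model_name]
    cache_path.write_text(json.dumps(cache, indent=2))
    return True

# Demo on a throwaway file; the real file is EXTERNAL_MODEL_RESULTS.json.
demo = Path("demo_cache.json")
demo.write_text(json.dumps({"model-a": {"STS": 0.8}, "model-b": {"STS": 0.7}}))

evicted = evict_model(demo, "model-a")        # True: entry removed
remaining = list(json.loads(demo.read_text()))  # ["model-b"]
```

After evicting a model this way, rerun `python refresh.py` to fetch its results again; this avoids deleting the whole cache file.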