This repository has been archived by the owner on Feb 4, 2025. It is now read-only.

---
title: MTEB Leaderboard
emoji: 🥇
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.20.0
app_file: app.py
pinned: false
tags:
- leaderboard
startup_duration_timeout: 1h
fullWidth: true
---

As of February 4, 2025, this repository is no longer under active maintenance and has been replaced by the new version of the leaderboard, which is integrated into the mteb package. All requested modifications to the leaderboard, as well as model submissions, should start in the MTEB repository.

# The Legacy MTEB Leaderboard repository

This repository contains legacy code for pushing and updating the MTEB leaderboard daily.

| Relevant links | Description |
| --- | --- |
| mteb | The implementation of the benchmark. Here you can, e.g., find the code to run your model on the benchmark. |
| leaderboard | The leaderboard itself; here you can view the results of models run on MTEB. |
| results | The results of MTEB are stored here. To learn how to add results to the leaderboard, refer to the documentation: Adding a Model to the Leaderboard. |


## Developer setup

To set up the repository:

```sh
git clone https://github.com/embeddings-benchmark/leaderboard.git
cd leaderboard

# install requirements
pip install -r requirements.txt

# fetch new results
python refresh.py
# if you'd like to re-fetch results for previously cached models, you may have to
# remove those models from `EXTERNAL_MODEL_RESULTS.json`
# you can also delete `EXTERNAL_MODEL_RESULTS.json` entirely and it will be
# recreated (but this is much slower)

# run the leaderboard
python app.py
```
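If you only need to re-fetch a few models, deleting the whole cache is wasteful. A small helper like the sketch below could drop individual entries instead. Note this assumes `EXTERNAL_MODEL_RESULTS.json` is a JSON object keyed by model name — an assumption about the file layout, not a documented schema — and `drop_cached_models` is a hypothetical helper, not part of this repository.

```python
import json
from pathlib import Path


def drop_cached_models(cache_path: Path, model_names: list[str]) -> list[str]:
    """Remove the given models from the cached results file so that
    refresh.py re-fetches them on the next run.

    Assumes the cache is a JSON object keyed by model name (an
    assumption about the layout, not a documented schema).
    Returns the names that were actually removed.
    """
    data = json.loads(cache_path.read_text())
    removed = [name for name in model_names if data.pop(name, None) is not None]
    cache_path.write_text(json.dumps(data, indent=2))
    return removed
```

A call such as `drop_cached_models(Path("EXTERNAL_MODEL_RESULTS.json"), ["my-model"])` would then leave all other cached results intact, so the subsequent `python refresh.py` only re-fetches the removed models.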