
[Request] Benchmark across different supported models #46

Open
meet-minimalist opened this issue Jun 26, 2023 · 1 comment

Comments

@meet-minimalist

Hi,
I saw that you support multiple models for computing embeddings.
https://visual-layer.readme.io/docs/using-your-own-model#running-fastdup-with-a-preconfigured--model

Do you have any benchmarks, or references to papers that compare these models?

Any guidance on when to use which type of model would also help a lot.

@dnth
Collaborator

dnth commented Jun 27, 2023

Hi @meet-minimalist, as far as I know the best model varies from one case to another. For example, the dinov2 model might perform well on natural images, but not as well on niche datasets such as medical X-ray images.

For niche domains, you may find that a model trained on that domain's data performs better than a general off-the-shelf model. We don't currently have a benchmark comparison, as the results vary from one domain to another.
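If it helps, one practical approach is to run fastdup on a sample of your own data once per model and compare the resulting galleries side by side. Below is a minimal sketch, assuming the `model_path` argument of `fd.run()` accepts the preconfigured model names from the docs page linked above; the specific names used here (e.g. `"dinov2s"`, `"clip"`) are illustrative, so please check the docs for the exact supported list.

```python
# Minimal sketch: compare preconfigured models on your own dataset.
# Assumption: `model_path` accepts the preconfigured model names from the
# linked docs page; the names below are illustrative placeholders.
import fastdup

MODELS = ["dinov2s", "clip"]  # adjust to the names listed in the docs

for model in MODELS:
    # Use a separate work_dir per model so the results don't overwrite each other.
    fd = fastdup.create(work_dir=f"fastdup_{model}", input_dir="images/")
    fd.run(model_path=model)

    # Inspect the galleries to judge how well each embedding captures
    # similarity in your domain (e.g. do the reported duplicates look right?).
    fd.vis.duplicates_gallery()
    fd.vis.outliers_gallery()
```

Since there is no universal benchmark, eyeballing the duplicate and outlier galleries on your own data is usually the quickest way to decide which model fits your domain.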
