Keeping everything else the same and just switching from 8-bit codes to 16-bit codes increases the time to build and train the index by more than 25x. The index contains around 25 million vectors. I only use the GPU to train k-means; the IVFPQ is trained on an Intel Core i9-14900KS using 22 threads. Do you have any idea why there is such a significant difference in runtime?
anfatima
changed the title
Performance of 16 bit codes vs 8 bits in IVFPQ
25x performance difference in 16-bit vs 8-bit codes in IVFPQ
Jan 17, 2025
@anfatima 16-bit codes carry an enormous overhead indeed. Being MUCH slower is expected, because an 8-bit codebook fits in all kinds of caches, plus I expect there are dedicated code branches specialized for the 8-bit codebook case.
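To make the cache argument concrete, here is a back-of-envelope sketch of the PQ codebook and per-query lookup-table sizes for the two settings. The issue does not state the vector dimension, so d=128 below is a hypothetical value; M=16 subquantizers matches the config, and centroids are assumed to be float32.

```python
# Back-of-envelope: codebook and distance-table footprint of PQ,
# for 8-bit vs 16-bit codes. d=128 is an assumed dimension (not
# stated in the issue); M=16 subquantizers as in the config.

def pq_footprint(d, M, nbits, dtype_bytes=4):
    ksub = 2 ** nbits                         # centroids per subquantizer
    dsub = d // M                             # dims per subquantizer
    codebook = M * ksub * dsub * dtype_bytes  # total codebook bytes
    lut = M * ksub * dtype_bytes              # per-query distance table bytes
    return ksub, codebook, lut

for nbits in (8, 16):
    ksub, codebook, lut = pq_footprint(d=128, M=16, nbits=nbits)
    print(f"nbits={nbits}: {ksub} centroids/subq, "
          f"codebook={codebook / 2**20:.3f} MiB, LUT={lut / 2**10:.0f} KiB")
```

Under these assumptions the 8-bit codebook is ~0.125 MiB and the per-query table 16 KiB, comfortably cache-resident, while the 16-bit versions are 32 MiB and 4 MiB, which spill out of L1/L2 and thrash even L3.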
Here is the config I am using:
{
    "batch_size": 128,
    "use_gpu": true,
    "verbose": true,
    "num_kmeans_iters": 50,
    "num_centroids": 2000,
    "num_bits": 16,
    "num_quantizers": 16,
    "skip_codebook_tables": false,
    "use_precomputed_table": true
}
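Plugging the config above into simple size and cost formulas shows where the 25x could come from. The 25 M vector count is from the issue; the per-vector code size is independent of dimension. This is a rough model, not Faiss's exact implementation.

```python
# Size/cost implications of the config: num_quantizers=16,
# num_bits=8 vs 16, ~25 M vectors (from the issue).

def pq_code_bytes(M, nbits):
    return M * nbits // 8      # bytes per encoded vector

def centroids_per_subq(nbits):
    return 2 ** nbits          # k-means codebook size per subquantizer

n = 25_000_000
for nbits in (8, 16):
    per_vec = pq_code_bytes(16, nbits)
    print(f"nbits={nbits}: {per_vec} B/vector, "
          f"{n * per_vec / 2**30:.2f} GiB of codes, "
          f"{centroids_per_subq(nbits)} centroids to train per subquantizer")
```

Storage of the codes only doubles (16 vs 32 bytes per vector), but each subquantizer's k-means must now fit 65,536 centroids instead of 256, i.e. roughly 256x more point-to-centroid distance evaluations per iteration, so an end-to-end build slowdown of more than 25x is plausible even before cache effects.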