Core i9 / RTX3080 better than Xeon/V100 for nnUNet? #1261
vincenzoml started this conversation in General · 0 replies
I've been running nnUNet for a while for some "conceptual experiments" combining model checking and machine learning, on a desktop machine with a Core i9-9900 and an RTX 3080 with 12 GB of VRAM.
I then tried to accelerate training, first on a T4 (which was indeed slower in my tests) and then on a Xeon machine with a V100 GPU, where my 2D training benchmarks run at less than half the speed of the RTX: each epoch takes about 25 seconds instead of about 11.
Q0: this seems very odd to me, but maybe it is entirely expected? Which GPU would actually beat my RTX 3080 for nnUNet training?
Q1: I've been asked whether "I use the tensor cores". Does nnUNet use them? Is there anything in particular one has to do at installation time to enable them? I just did `pip install nnUNet` on a recent Python 3 installation.
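For anyone landing here with the same question: as far as I understand, nnUNet is built on PyTorch and uses automatic mixed precision, which is what engages the tensor cores; nothing special is needed at install time beyond a CUDA-enabled PyTorch. A minimal sketch (my assumption, not an nnUNet-specific API) to check whether a GPU has tensor cores and whether autocast actually produces half-precision math:

```python
# Sketch: check for tensor-core support and test torch.autocast.
# Tensor cores exist on NVIDIA compute capability 7.0 (Volta/V100) and newer;
# the RTX 3080 (Ampere) is 8.6, the V100 is 7.0, the T4 is 7.5.
import torch

if torch.cuda.is_available():
    dev = torch.device("cuda")
    major, minor = torch.cuda.get_device_capability(dev)
    print(f"{torch.cuda.get_device_name(dev)}: compute capability {major}.{minor}")
    print("tensor cores available:", (major, minor) >= (7, 0))
    # Mixed precision smoke test: inside autocast, matmuls should run in fp16.
    with torch.autocast(device_type="cuda"):
        x = torch.randn(8, 8, device=dev)
        y = x @ x
    print("autocast matmul dtype:", y.dtype)
else:
    print("no CUDA device visible")
```

Note that having tensor cores and using them are different things: if the data pipeline is the bottleneck, enabling mixed precision barely changes epoch time.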
Q2: looking at the attached graphs, isn't it odd that the machine has plenty of headroom in GPU, CPU, and memory? Where is the bottleneck? I'm running from a subdirectory of /dev/shm.
Thanks!
Vincenzo