use HAS_GPU to determine if cuda is available #364
base: main
Conversation
Documentation preview
Although this is tripping that block, I would suggest always using PyNVML to query GPU information. Specifically, what I mention in #363 (comment) can be dangerous with Dask if for some reason the …
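A hedged illustration of what querying GPU information through PyNVML could look like, per the suggestion above; the specific attributes queried here are examples, not the exact calls referenced in #363:

```python
# Illustrative sketch: enumerate devices and report memory via PyNVML.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.free} bytes free of {mem.total}")
pynvml.nvmlShutdown()
```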
Linting may be off but content looks good to me. Thanks @jperez999!
This is not ready; the failures during writing occur when you are writing a file with a client available. Will continue investigating.
Investigated: it seems that the logic for int_slice_size was not foolproof. Because of the floor divide, you can find yourself in a scenario where the df has fewer records than int_slice_size, which can result in a zero. Then, when you go to mod by zero, the thread raises an exception. I do wonder how we hit this now and not before.
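A minimal sketch of the failure mode described above; `num_threads` and the variable names here are illustrative, not the exact code from this repo:

```python
import pandas as pd

df = pd.DataFrame({"x": range(3)})  # fewer records than the intended slice size
num_threads = 4  # hypothetical worker count

# Floor division yields 0 whenever len(df) < num_threads ...
int_slice_size = len(df) // num_threads
try:
    # ... and taking a modulus by that zero raises in the worker thread.
    offset = len(df) % int_slice_size
except ZeroDivisionError:
    # Clamping to at least 1 is one possible guard against the crash.
    int_slice_size = max(len(df) // num_threads, 1)
```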
/ok to test |
I agree that this is strange. I wonder if I was wrong about …
/ok to test |
/ok to test |
This PR changes how we determine whether CUDA is available on the system. We move from numba to HAS_GPU, which uses the NVML device count: if there are no devices, CUDA is not available; otherwise, it is.
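A minimal sketch of the approach this PR describes, deriving HAS_GPU from the NVML device count rather than probing CUDA through numba; the module layout and error handling here are assumptions, not the repo's actual code:

```python
# Assume no GPU unless NVML initializes and reports at least one device.
try:
    import pynvml

    pynvml.nvmlInit()
    device_count = pynvml.nvmlDeviceGetCount()
    pynvml.nvmlShutdown()
except Exception:
    # pynvml missing, or NVML failed to initialize -> no usable GPU.
    device_count = 0

HAS_GPU = device_count > 0
```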