use dynamic CUDA wheels on CUDA 11 #2548
Conversation
/ok to test
"$ORIGIN/../../nvidia/cusolver/lib" | ||
"$ORIGIN/../../nvidia/cusparse/lib" | ||
"$ORIGIN/../../nvidia/nvjitlink/lib" | ||
) |
Notice I've added an additional `../` in here... these RPATHs were wrong in #2531 (confirmed by inspecting the paths more closely tonight).
Also confirmed these will work for both CUDA 11 and CUDA 12 wheels.
I suspect we didn't notice this in CI before because the corresponding system-installed libraries were being found instead.
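A minimal sketch of checking this by hand (the install location of `libraft.so` below is hypothetical; it depends on where the wheel places it under site-packages):

```bash
# Print the RPATH/RUNPATH entries embedded in the built library.
# The libraft path under site-packages is illustrative.
SITE_PACKAGES=$(python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
readelf -d "${SITE_PACKAGES}/libraft/lib64/libraft.so" | grep -E 'RPATH|RUNPATH'

# $ORIGIN expands to the directory containing libraft.so itself, so each
# entry has to climb exactly enough levels to reach site-packages before
# descending into nvidia/<lib>/lib.
```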
Is there a way to ensure CI will fail in the future if these are incorrect?
I've been thinking about that... there are options, but I don't love any of them.

Ways I can think of to test that an RPATH actually ends up getting used (a sketch of the first approach follows this list):

- set `LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/logs` or something to generate debug logs from `ld`, then `grep` those to figure out if libraries were loaded from the expected place
- ensure there aren't any more of these libraries on the system by deleting them (we used to do something similar for UCX)... if loading works, then it must be that the one remaining copy of e.g. `libcublas.so` was used
- hide those system locations by manipulating `LD_LIBRARY_PATH`, forcing `ld` to either find the wheel-installed libraries or raise an error
- start a long-lived process that loads the library, then use `lsof` to check that the one from site-packages was loaded (https://superuser.com/a/310205/1798685)
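A minimal sketch of the `LD_DEBUG` approach, assuming an import of `pylibraft` triggers loading `libraft` (the import and the grep pattern are illustrative):

```bash
# glibc's dynamic loader writes one log per process to ${LD_DEBUG_OUTPUT}.<pid>.
LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/ld-debug \
    python -c "import pylibraft"

# "trying file=..." lines show every path the loader attempted; a CI check
# could assert that the successful libcublas path contains "site-packages/nvidia".
grep "trying file=.*libcublas" /tmp/ld-debug.*
```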
I definitely don't think we should hold up this PR over this question, but it would be helpful to catch things like this in CI if we can find an approach that isn't too complex.
I think we might be able to try building CI images without any system CUDA libraries? If we migrate all of RAPIDS to use CUDA wheels, that could be a step in the right direction, but it might also require other non-RAPIDS packages like `cupy` to use CUDA wheels, too. 🤔
When we first put together the wheels and the images, I actually pushed for us to base the `citestwheel` images on a base image that did not contain the CUDA libraries. I don't know if that's easily discoverable in the history anywhere. In any case, @bdice's suspicion here is correct; this wound up not working because you couldn't use `cupy` (or `numba`) in those images, since they require the system libraries.
All looks fine to me. If this works, I am happy to merge it soon so we can unblock CUDA 11 wheel CI for other repos.
I'm going to trigger a
/merge
Contributes to rapidsai/build-planning#137. Follow-up to #4804.

Wheel builds here currently list out some shared libraries to exclude in `auditwheel repair`, which they pick up transitively via linking to `libraft`.

https://github.com/rapidsai/cugraph/blob/a9c923bb3f4a6a6f5a9d46337adc65d969717567/ci/build_wheel.sh#L42-L49

The version components of those library names can change when those libraries have ABI breakages, for example across CUDA major version boundaries. This proposes replacing the specific versions with wildcards, to exclude *all* versions of those libraries.

## Notes for Reviewers

This is especially relevant given rapidsai/raft#2548. For example, the latest `nvidia-cublas-cu11` has `libcublas.so.11`, while `nvidia-cublas-cu12` has `libcublas.so.12`.

Authors:
- James Lamb (https://github.com/jameslamb)

Approvers:
- Bradley Dice (https://github.com/bdice)

URL: #4877
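A sketch of what the wildcarded exclusions could look like, assuming an `auditwheel` version whose `--exclude` accepts glob patterns (the library list and output directory here are illustrative, not the exact contents of `ci/build_wheel.sh`):

```bash
# Exclude every soname version of these CUDA libraries from being vendored
# into the wheel; at runtime they come from the nvidia-*-cu{11,12} wheels.
auditwheel repair \
    --exclude 'libcublas.so.*' \
    --exclude 'libcublasLt.so.*' \
    --exclude 'libcusolver.so.*' \
    --exclude 'libcusparse.so.*' \
    -w final_dist \
    dist/*.whl
```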
Reverts #4876 now that rapidsai/raft#2548 has landed.
Reverts #599 now that rapidsai/raft#2548 has landed.

Authors:
- Bradley Dice (https://github.com/bdice)

Approvers:
- James Lamb (https://github.com/jameslamb)

URL: #601
Contributes to rapidsai/build-planning#137
Follow-up to #2531.
See the linked issue for many more details, but in short... using a dynamically-loaded libraft which has statically-linked cuBLAS causes issues for other libraries.
There are now aarch64 CUDA 11 wheels for cuBLAS and other CUDA libraries, so it's possible to have RAFT wheels dynamically link against them. This PR does that.
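For context, those library wheels are installable directly from PyPI (a minimal illustration; the actual pins RAFT uses in its dependency lists may differ):

```bash
# NVIDIA-published CUDA 11 wheels that RAFT wheels can now dynamically
# link against; deliberately unpinned here for illustration.
pip install nvidia-cublas-cu11 nvidia-cusolver-cu11 nvidia-cusparse-cu11
```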
Notes for Reviewers
This has side benefits beyond fixing the runtime issues... it also simplifies the wheel-building scripts and CMake, and makes the CUDA 11 wheels noticeably smaller 😊