
Add standardized extended floating point types with complete implementations #1011

Closed · 2 tasks done
jrhemstad opened this issue Mar 6, 2023 · 5 comments

Comments

@jrhemstad (Collaborator) commented Mar 6, 2023

As a CUDA user, I care about extended floating point types like __half or __nv_bfloat16. These types are deficient in a variety of ways: they lack functionality one would expect from such a type.

We should add equivalent types to libcu++ that have both host and device implementations.

Ideally, these should provide a seamless transition to a built-in, compiler-provided type, so that code written using cuda::float16 uses the built-in type when available and falls back to a library implementation otherwise.

CUTLASS has implementations that we should use as a starting point.

See also:

Tasks

  1. jrhemstad
  2. griwes
@jarmak-nv jarmak-nv transferred this issue from NVIDIA/libcudacxx Nov 8, 2023
@github-project-automation github-project-automation bot moved this to Todo in CCCL Nov 8, 2023
@cliffburdick commented:

In MatX we had to create our own wrappers around the __half and __nv_bfloat16 types so that all the normal C++ math operations are supported. It would be very useful if we could drop our types and use a standard version.

@jrhemstad jrhemstad changed the title Add new extended floating point types Add standardized extended floating point types with complete implementations Feb 27, 2024
@miscco (Collaborator) commented Feb 27, 2024

@cliffburdick if you drop me a link I am happy to do the leg work of porting those

@jrhemstad (Collaborator, Author) commented:

> @cliffburdick if you drop me a link I am happy to do the leg work of porting those

To be clear, we won't just be trivially porting the MatX types. We'd need to evaluate the diff between CUTLASS, MatX, and the various other solutions to design a standard solution and then build that.

@cliffburdick commented:

@miscco our half file is here: https://github.com/NVIDIA/MatX/blob/main/include/matx/core/half.h
half complex is here: https://github.com/NVIDIA/MatX/blob/main/include/matx/core/half_complex.h

I agree they should not be ported as-is. I don't know if the CUTLASS ones contain all the stuff we have, but I think the end goal would be once cuda:: has the types we can alias them using our old names and delete the files.

@jrhemstad (Collaborator, Author) commented:

Closing this as superseded by #1665

@jrhemstad closed this as not planned Aug 2, 2024
@github-project-automation github-project-automation bot moved this from Todo to Done in CCCL Aug 2, 2024
Labels: none
Projects: archived in project
Development: no branches or pull requests
3 participants