We want to better support users working with extended floating-point types like __half and bfloat16 in our libraries.
One of the major projects would be adding new extended floating-point library types to libcu++ for types like __half and bfloat16. The main goals would be:
- Work in both host and device code
- Seamlessly use the equivalent core-language type if/when the compiler supports it
- Convert to/from the existing cudart types
- Eventually supersede the cudart types
CUTLASS already has implementations of these types we could use as a starting point.
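To make the goals above concrete, here is a minimal host-side sketch of what such a library type involves at the bit level, using bfloat16 as the example (a bfloat16 value is the top 16 bits of an IEEE-754 binary32 float). This is an illustration only, not libcu++'s or CUTLASS's actual implementation; the struct name and truncating conversion are assumptions for the sketch.

```cpp
#include <cstdint>
#include <cstring>

// Illustration only: a bfloat16 value is the top 16 bits of a binary32 float.
struct bfloat16 {
    std::uint16_t bits;

    // Truncating conversion from float: keep the sign, the 8 exponent bits,
    // and the top 7 mantissa bits. (A production type would round to
    // nearest even rather than truncate.)
    explicit bfloat16(float f) {
        std::uint32_t u;
        std::memcpy(&u, &f, sizeof u);
        bits = static_cast<std::uint16_t>(u >> 16);
    }

    // Widening conversion back to float is exact: pad with 16 zero bits.
    explicit operator float() const {
        std::uint32_t u = static_cast<std::uint32_t>(bits) << 16;
        float f;
        std::memcpy(&f, &u, sizeof f);
        return f;
    }
};
```

In a real libcu++ type the conversions would additionally be marked for host/device use and would interoperate with the cudart __nv_bfloat16 type, per the goals above.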
I would be eager to see this, in particular the specializations complex<__half> and complex<bfloat16>.
Are there any plans for whether or when this might be available?
jrhemstad changed the title from "[EPIC] Extended Floating-Point Support in libcu++" to "[EPIC] Extended Floating-Point Support" on Oct 5, 2023
Tasks for new types
- numeric_limits for floating_point<M, E> #2186
- <atomic> specializations for floating_point<M, E> #2184
- <complex> specializations for floating_point<M, E> #2185
- cuda/type_traits for cuda::is_floating_point trait #2187

Tasks for existing __half/bfloat types
- cuda::(std::) types for __half/bfloat16/fp8 #525

Tasks for extended floating-point vector types
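As a rough illustration of what the numeric_limits task (#2186) would expose, the sketch below lists the key traits for a bfloat16-like format and checks them against float. The struct name and member set are assumptions mirroring std::numeric_limits; the values follow the bfloat16 format itself (1 sign bit, 8 exponent bits sharing float's range, 7 stored mantissa bits).

```cpp
#include <limits>

// Hypothetical sketch (not the libcu++ specialization): the traits a
// numeric_limits specialization for a bfloat16-like type would report.
struct bfloat16_limits {
    static constexpr bool is_specialized = true;
    static constexpr int digits = 8;         // 7 stored bits + implicit leading 1
    static constexpr int max_exponent = 128; // same exponent range as binary32
    static constexpr int min_exponent = -125;
};
```

The key property this captures: bfloat16 keeps float's exponent range (so conversions rarely overflow) while giving up mantissa precision.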