Poll: Do you use reduced-precision floating-point types in machine learning?

In HDF5 1.14.4, we added support for FP16 as a predefined datatype (a brief C sketch follows the poll options below). For HDF5 1.14.5, we're considering support for additional machine-learning-oriented floating-point types, such as BFLOAT16 and FP8. Let us know which reduced-precision floating-point types you use in your machine learning workflows, and please comment below if you have additional thoughts to share.

  • FP16
  • BFLOAT16
  • FP8 (E5M2, E4M3, or other)
  • FP4 (LLM-FP4 or other)
  • Other non-IEEE floating-point format (describe below)
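For anyone curious what the 1.14.4 FP16 support looks like in practice, here is a minimal C sketch of writing half-precision data with the new predefined types. It assumes HDF5 1.14.4+ built on a compiler with `_Float16` support; the file and dataset names are just illustrative.

```c
/* Minimal sketch: writing half-precision (FP16) data with the
 * predefined datatypes added in HDF5 1.14.4. Assumes a compiler
 * with _Float16 support; names below are illustrative. */
#include "hdf5.h"

int main(void)
{
    _Float16 data[4] = {0.5, 1.0, 1.5, 2.0};
    hsize_t  dims[1] = {4};

    hid_t file  = H5Fcreate("fp16_example.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);

    /* H5T_IEEE_F16LE is the on-disk little-endian FP16 type;
     * H5T_NATIVE_FLOAT16 describes the in-memory _Float16 buffer
     * (only valid when HDF5 was built with _Float16 support). */
    hid_t dset = H5Dcreate2(file, "fp16_data", H5T_IEEE_F16LE, space,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_FLOAT16, H5S_ALL, H5S_ALL,
             H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```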
