Future AMD GPUs to adopt BFloat16 floating point support?

Posted on Tuesday, October 22 2019 @ 10:33 CEST by Thomas De Maesschalck
TechPowerUp came across evidence suggesting that future AMD graphics architectures will support the BFloat16 floating point format. The feature is primarily useful for machine learning: BFloat16 offers a much wider dynamic range than FP16, so AI researchers don't have to fall back to the slower FP32 path:
BFloat16 offers a significantly higher range than FP16, which caps out at just 6.55 x 10^4, forcing certain AI researchers to "fallback" to the relatively inefficient FP32 math hardware. BFloat16 uses three fewer significand bits than FP16 (8 bits versus 11 bits), offering 8 exponent bits, while FP16 only offers 5 bits. BFloat16 is more resilient to overflow and underflow in conversions to FP32 than FP16 is, since BFloat16 is essentially a truncated FP32.
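The "truncated FP32" relationship described above can be sketched in a few lines of Python. This is an illustrative bit-level conversion, not AMD's hardware implementation; it simply drops the low 16 mantissa bits of an FP32 value (real hardware would typically round rather than truncate):

```python
import struct

def fp32_to_bfloat16_bits(x):
    """Truncate an FP32 value to BFloat16 by keeping the top 16 bits
    (1 sign + 8 exponent + 7 mantissa bits). Rounding is omitted for
    simplicity."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16

def bfloat16_bits_to_fp32(b):
    """Expand BFloat16 bits back to FP32 by zero-filling the low 16 bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Because BFloat16 keeps the full 8-bit FP32 exponent, a value far beyond
# FP16's ~6.55 x 10^4 maximum survives the round trip, just with reduced
# precision in the mantissa:
big = 1.0e8  # would overflow to infinity in FP16
print(bfloat16_bits_to_fp32(fp32_to_bfloat16_bits(big)))
```

Note how the conversion never touches the exponent field, which is why overflow and underflow can't occur when moving between BFloat16 and FP32 - only mantissa precision is lost.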

