The problem with using TOPS to measure AI accelerator performance

Posted on Tuesday, December 10 2019 @ 11:47 CET by Thomas De Maesschalck
EE Times has an interesting article about measuring the performance of AI accelerators. The piece argues that the frequently cited TOPS (tera operations per second) metric is not a good measure of performance:
“What customers really want is high throughput per dollar,” said Geoff Tate, CEO of AI accelerator company Flex Logix.

Tate explained that having more TOPS doesn't necessarily correlate with higher throughput. This is particularly true in edge applications, where the batch size is 1. Data center deployments can increase their throughput by processing multiple inputs in parallel using larger batches (since they have TOPS to spare), but this approach is often not feasible for edge devices.
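The point can be illustrated with a small back-of-the-envelope sketch. All numbers below are hypothetical, chosen only to show the mechanism: a chip that advertises more peak TOPS can still deliver fewer inferences per second at batch size 1 if it keeps its compute units busy less of the time (for example because it is memory-bound on small batches).

```python
# Hypothetical illustration: peak TOPS alone does not predict throughput.

def inferences_per_second(peak_tops, utilization, tera_ops_per_inference):
    """Throughput = usable TOPS divided by the work per inference."""
    return peak_tops * utilization / tera_ops_per_inference

# Assumed model cost: 0.01 tera-ops (10 GOPs) per inference.
ops = 0.01

# Chip A: modest 10 TOPS, but 60% utilization at batch 1.
chip_a = inferences_per_second(10, 0.60, ops)   # 600 inferences/s

# Chip B: headline-grabbing 40 TOPS, but only 10% utilization at batch 1.
chip_b = inferences_per_second(40, 0.10, ops)   # 400 inferences/s

print(f"Chip A: {chip_a:.0f} inferences/s")
print(f"Chip B: {chip_b:.0f} inferences/s")
```

In this sketch the 10 TOPS chip beats the 40 TOPS chip at batch 1, which is exactly why Tate argues for measuring real throughput (and throughput per dollar) rather than peak TOPS.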


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. He enjoys playing with new tech, is fascinated by science, and is passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.
