Google touting second-generation Cloud TPU board with 180 teraflops

Posted on Thursday, May 18 2017 @ 13:37 CEST by Thomas De Maesschalck
At yesterday's Google I/O event, the search giant showed off its second-generation tensor processing unit (TPU). Unlike the first generation, the new model can be used for both training and inference, and a board with four of the new Cloud TPUs promises up to 180 teraflops for machine learning tasks. Google claims its custom chip is significantly faster than GPU-based solutions and plans to make the new TPUs available to commercial developers through its cloud platform.
Google said that the new ASIC handily beats GPUs at training. The company's latest large-scale translation models take a full day to train on 32 of today's top-end GPUs; the same job runs in six hours on one-eighth of a TPU pod, presumably eight TPUs.
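As a rough sanity check on those figures, the comparison works out to a 4x wall-clock speedup and a 16x reduction in device-hours. The "eight TPUs" figure is the article's presumption, not a confirmed number, so the sketch below simply takes the quoted numbers at face value:

```python
# Back-of-the-envelope math on Google's quoted training figures:
# one full day on 32 top-end GPUs vs. six hours on (presumably) 8 TPUs.

gpu_hours = 24 * 32  # 32 GPUs for 24 hours -> 768 device-hours
tpu_hours = 6 * 8    # 8 TPUs for 6 hours  -> 48 device-hours

wall_clock_speedup = 24 / 6                # 4x faster in wall-clock time
device_hour_ratio = gpu_hours / tpu_hours  # 16x fewer device-hours

print(wall_clock_speedup, device_hour_ratio)  # -> 4.0 16.0
```

Note that device-hours ignore per-device cost and power, so the 16x figure is a throughput comparison, not a cost comparison.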

Google started deploying the first-generation TPU in 2015. It is used for a wide variety of the company’s cloud services, including search, translation, and Google Photos.
Full details are available at EE Times.

Google board with four TPUs


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. He enjoys playing with new tech, is fascinated by science, and is passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.


