Google touting second-generation Cloud TPU board with 180 teraflops
Posted on Thursday, May 18 2017 @ 13:37 CEST by Thomas De Maesschalck

Google said that the new ASIC handily beats GPUs on training. The company’s latest large language-translation models take a full day to train on 32 of the current top-end GPUs; the same job runs in six hours on one-eighth of a pod, presumably eight TPUs. Full details are at EE Times.

Google started deploying the first-generation TPU in 2015. It is used for a wide variety of the company’s cloud services, including search, translation, and Google Photos.
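
Taking the quoted training figures at face value, a rough back-of-the-envelope comparison is possible. This is only a sketch: the "eight TPUs" count is the article's presumption, and the GPU model and exact pod fraction are as quoted above, not independently verified.

# Rough comparison of the training figures quoted above.
# Assumed inputs (from the article): 32 top-end GPUs for a full day
# versus one-eighth of a pod, presumably 8 second-generation TPUs, for six hours.
gpu_count, gpu_hours = 32, 24.0
tpu_count, tpu_hours = 8, 6.0

gpu_device_hours = gpu_count * gpu_hours   # 768 GPU-hours
tpu_device_hours = tpu_count * tpu_hours   # 48 TPU-hours

print(f"GPU device-hours: {gpu_device_hours:.0f}")
print(f"TPU device-hours: {tpu_device_hours:.0f}")
print(f"Implied reduction in device-hours: {gpu_device_hours / tpu_device_hours:.0f}x")
print(f"Wall-clock speedup: {gpu_hours / tpu_hours:.0f}x, using a quarter as many chips")

On those assumptions the job finishes about 4x faster in wall-clock time while using a quarter as many chips, i.e. roughly a 16x reduction in device-hours.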