NVIDIA disses alternative architectures at machine learning event
Posted on Monday, February 19 2018 @ 14:27 CET by Thomas De Maesschalck

Microprocessor designers need to strike a balance between specialized and general-purpose architectures to succeed in deep learning, Nvidia chief scientist Bill Dally said in a talk at the inaugural SysML conference. He dismissed competing efforts in compute-in-memory, analog computing, and neuromorphic computing.

In the data center, processors with memory hierarchies optimized for specialized instructions and data types, such as the Nvidia Volta, are the best approach, Dally said. At the edge, SoCs need accelerator blocks to speed up neural network processing. More about what NVIDIA luminaries said at the machine learning event can be read at EE Times.