NVIDIA disses alternative architectures at machine learning event

Posted on Monday, February 19 2018 @ 14:27 CET by Thomas De Maesschalck
Over at the SysML conference at Stanford, NVIDIA Chief Scientist Bill Dally criticized alternative architectures, saying he doesn't believe in compute-in-memory, analog computing, or neuromorphic computing efforts. Dally argued that engineers need to focus on a balance of specialized and general-purpose architectures to meet the needs of the deep learning segment, adding that the competition may be too specialized and thus too limited in functionality.
Microprocessor designers need to adopt a balance of specialized and general-purpose architectures to succeed in deep learning, according to a talk by Nvidia's chief scientist at the inaugural SysML event. He dismissed competing efforts in compute-in-memory, analog computing and neuromorphic computing.

Processors with memory hierarchies optimized for specialized instructions and data types like the Nvidia Volta are the best approach in the data center, said Bill Dally. At the edge, SoCs need accelerator blocks to speed neural network processing, he said.
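As a rough illustration of the kind of specialized instructions and data types Dally is pointing to in Volta, the sketch below uses the chip's tensor cores through CUDA's WMMA API: one warp computes a single 16x16x16 matrix-multiply-accumulate tile with FP16 inputs and FP32 accumulation. This is a minimal, hypothetical example (the kernel name and launch configuration are ours, not anything shown at the talk).

```cuda
// Minimal sketch of Volta-class tensor-core usage via CUDA's WMMA API.
// One warp computes a single 16x16x16 tile of D = A*B + C with FP16
// inputs and FP32 accumulation. Build for sm_70 or newer, e.g.:
//   nvcc -arch=sm_70 wmma_tile.cu
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

__global__ void wmma_tile(const half *a, const half *b, float *d) {
    // Per-warp fragments that map onto the tensor-core MMA instruction.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);      // start with C = 0
    wmma::load_matrix_sync(a_frag, a, 16);    // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // FP16 multiply, FP32 accumulate
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}

// Launch with one warp for this single tile, e.g.:
//   wmma_tile<<<1, 32>>>(d_a, d_b, d_d);
```

The point of the example is the data-type specialization: the fragment types and the single mma_sync intrinsic expose half-precision matrix math that a general-purpose FP32 pipeline would otherwise emulate far more slowly.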
More coverage of what NVIDIA's luminaries said at the machine learning event can be found at EE Times.




