Is it time for Huang's Law?

Posted on Tuesday, April 03 2018 @ 16:03 CEST by Thomas De Maesschalck
Whole libraries have been written about Moore's Law, but will we soon be talking about Huang's Law? At last week's GPU Technology Conference, NVIDIA boss Jen-Hsun Huang repeatedly stressed that GPUs are governed by a law of their own. He didn't refer to it as "Huang's Law", but expressed his opinion that a new, supercharged law is in effect.

Basically, GPU performance advances much more quickly than CPU performance because GPUs rely on a parallel architecture. The innovation also isn't just about chips, but about the entire stack. As observed by IEEE Spectrum:
Just how fast does GPU technology advance? In his keynote address, Huang pointed out that Nvidia’s GPUs today are 25 times faster than five years ago. If they were advancing according to Moore’s law, he said, they only would have increased their speed by a factor of 10.

Huang later considered the increasing power of GPUs in terms of another benchmark: the time to train AlexNet, a neural network trained on 15 million images. He said that five years ago, it took AlexNet six days on two of Nvidia’s GTX 580s to go through the training process; with the company’s latest hardware, the DGX-2, it takes 18 minutes—a factor of 500.
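The quoted figures are easy to sanity-check. Assuming, purely for illustration, that Moore's law is modeled as a doubling every 18 months, the numbers line up with Huang's claims:

```python
# Sanity check of the speedup figures quoted above.
# Assumption (not from the article): Moore's law modeled as a
# doubling every 18 months.

moore_factor = 2 ** (60 / 18)      # five years = 60 months
print(round(moore_factor))         # roughly 10, Huang's Moore's-law baseline

alexnet_then = 6 * 24 * 60         # six days of training, in minutes
alexnet_now = 18                   # DGX-2 training time, in minutes
print(alexnet_then / alexnet_now)  # 480, close to the quoted "factor of 500"
```

So the 18-minute DGX-2 run works out to a 480× speedup over the two GTX 580s, consistent with Huang's rounded "factor of 500".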


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. He enjoys playing with new tech, is fascinated by science, and is passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.


