MLPerf: A new benchmark for machine learning

Posted on Thursday, May 03 2018 @ 10:15 CEST by Thomas De Maesschalck
Chip makers are starting to back MLPerf, a new suite of benchmarks for AI and machine learning computing jobs. The benchmark was created through a collaboration between Google, Baidu, and researchers at Harvard and Stanford. Other major backers include AMD, Intel, two AI startups, and two other universities. NVIDIA is missing from the list though, even though its V100 Volta chip will be a reference standard due to its very broad use in data centers for AI training.

EE Times writes that the first release of MLPerf will focus on training jobs, while later versions will add inferencing benchmarks. There is still a need for a lot more performance: a pretty interesting quote from Baidu reveals that one AI model the company really wants to train would currently require two years of computing on all the GPUs the company has!
“To train one model we really want to run would take all GPUs we have for two years,” given the size of the model and its data sets, said Greg Diamos, a senior researcher in Baidu’s deep-learning group, giving an example of the issue for web giants.

“If systems become faster, we can unlock the potential of machine learning a lot quicker,” said Peter Mattson, a staff engineer on the Google Brain project who announced MLPerf at a May 2 event.
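For context, MLPerf's training benchmarks score a system by the wall-clock time it takes to train a model to a fixed quality target. The sketch below is a minimal illustration of that idea in Python; the small MNIST model, the 97% accuracy target, and the helper names are illustrative assumptions, not MLPerf's actual reference code.

```python
# Minimal sketch of a time-to-train benchmark: measure wall-clock time
# until a model reaches a fixed quality target. Model, dataset, and the
# 97% target are illustrative assumptions, not official MLPerf code.
import time

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def evaluate(model, loader, device):
    """Return classification accuracy on the given data."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def time_to_target(target_acc=0.97, max_epochs=10):
    """Train until the quality target is reached; report elapsed seconds."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.ToTensor()
    train_ds = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test_ds = datasets.MNIST("data", train=False, download=True, transform=tfm)
    train_loader = DataLoader(train_ds, batch_size=256, shuffle=True)
    test_loader = DataLoader(test_ds, batch_size=1000)

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 10)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    start = time.perf_counter()
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        acc = evaluate(model, test_loader, device)
        if acc >= target_acc:  # the benchmark "score" is the elapsed time
            break
    return time.perf_counter() - start, acc

if __name__ == "__main__":
    elapsed, acc = time_to_target()
    print(f"reached {acc:.3f} accuracy in {elapsed:.1f} s")
```

The same measurement principle scales up to the kind of multi-week, multi-GPU training jobs the Baidu quote describes: faster hardware directly shortens the reported time-to-target.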




