NVIDIA CEO comments on importance of process technology

Posted on Thursday, August 20 2020 @ 15:14 CEST by Thomas De Maesschalck
During yesterday's second-quarter earnings call with financial analysts, NVIDIA CEO Jen-Hsun Huang received a question about the importance of process technology. As you may know, there are rumors that NVIDIA's initial Ampere GPUs aren't made by TSMC because the company failed to secure enough production capacity. Word on the street is that AMD booked a lot more capacity at TSMC than NVIDIA expected, as Ryzen CPUs are now made by TSMC, and that this forced NVIDIA to rely on Samsung's 8nm process. It's not officially confirmed yet, but Huang's statement below does hint that there may be some truth to it. Huang points out that architecture is more important than process technology, and explicitly mentions that NVIDIA is working with the world's best foundries. Of course, NVIDIA has used Samsung before, so perhaps we're reading a bit too much into this. We'll find out more in the coming weeks.
Joseph Moore [Morgan Stanley analyst]
Great. Thank you. I wonder if I could ask a longer-term question about the – how you guys see the importance of process technology. There’s been a lot of discussion around that in the CPU domain. But you guys haven’t really felt the need to be first on seven-nanometer, and you have done very well. Just how important do you think it is to be early in the new process node? And how does that factor into the cycle of innovation at NVIDIA?

Jensen Huang

Yes. First of all, thanks, Joe. Process technology is a lot more complex than a number. I think people have simplified it down to almost a ridiculous level, alright? And so, on process technology, we have a really awesome process engineering team. World-class. Everybody will recognize that it's absolutely world-class. And we work with the foundries, we work with TSMC really closely, to make sure that we engineer transistors that are ideal for us and we engineer metallization systems that are ideal for us. It's a complicated thing, and we do it at a very high level. Then the second part of it is the architecture, where the process technology meets the rest of the design process, the architecture of the chip. In the final analysis, what NVIDIA is paid for is architecture, not procurement of transistors. We are paid for architecture. And there's a vast difference between our architecture and the second-best architecture and the rest of the architectures. The difference is incredible. We are easily twice the energy efficiency all the time, irrespective of the number on the transistor side. And so, it must be more complicated than that. And so, we put a lot of energy into that. And then the last thing I would say is that going forward, it's really about data center-scale computing.

Going forward, you optimize at the data center scale. And the reason why I know this for a fact is because if you're a software engineer, you would be sitting at home right now and you would write a piece of software that runs on the entire data center in the cloud. You have no idea what's underneath it, nor do you care. And so, what you really want is to make sure that that data center is as high throughput as possible. There is a lot of code in there. And so, what NVIDIA has decided to do over the years is to take our game to a new level. Of course, we start with building the world's best processors, and we use the world's best foundries, and we partner with them very closely to engineer the best process for us. We partner with the best packaging companies to create the world's best packaging. We were the world's first user of CoWoS. And I'm pretty sure we are still the highest volume by far in 2.5D and 3D packaging. And so, we start from a great chip, but we don't end there. That's just the beginning for us. Now we take this thing all the way through systems, the system software, algorithms, networking, all the way up to the entire data center. And the difference is absolutely shocking.

We built our own data center, Selene, and it took us four weeks. We put up Selene in four weeks' time. It is the seventh-fastest supercomputer in the world, one of the fastest AI supercomputers in the world. It's the most energy-efficient supercomputer in the world, and it broke every single record in MLPerf. And that kind of shows you something about the scale at which we work and the complexity of the work that we do. And this is our future. The future is about data centers.




