AMD to make more multi-GPU cards for the high-end segment

Posted on Thursday, May 29 2008 @ 04:15 CEST by Thomas De Maesschalck
As NVIDIA keeps pumping out very fast single-GPU graphics cards, AMD says it doesn't want to build "huge" chips like its rival:
"We took two chips and put it on one board (X2). By doing that we have a smaller chip that is much more power efficient," said Matt Skynner, vice president of marketing for the graphics products group at AMD.

"We believe this is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market," he said. "Scaling that large chip down into the performance segment doesn't make sense--because of the power and because of the size."

Skynner said that AMD tries to design GPUs (graphics processing units) for the mainstream segment of the market, then ratchets up performance by adding GPUs rather than designing one large, very-high-performance chip.

Nvidia's "strategy is to design for the highest performance at all cost. And we believe designing for the sweet spot and then leveraging for the extreme enthusiast market with multiple GPUs is the preferred approach," Skynner said.

This applies to memory too. AMD thinks support for technologies like GDDR5 memory is another way to deliver good performance at a reasonable cost. "You don't need a huge chip with a huge data path to get the bandwidth. You can utilize a technology like GDDR5 to get that bandwidth," Skynner said.
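Skynner's bandwidth point is simple arithmetic: peak memory bandwidth is the effective transfer rate times the bus width. A minimal sketch of that trade-off (the clock and bus figures below are illustrative assumptions, not specifications quoted in the article):

```python
def peak_bandwidth_gb_s(effective_mt_per_s: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: megatransfers per second times bytes per transfer."""
    return effective_mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# A faster memory technology on a narrow bus can match a slower one on a wide bus.
narrow_fast = peak_bandwidth_gb_s(3600, 256)  # e.g. a GDDR5-class rate on a 256-bit bus
wide_slow = peak_bandwidth_gb_s(1800, 512)    # e.g. a GDDR3-class rate on a 512-bit bus
print(narrow_fast, wide_slow)  # both 115.2 GB/s
```

The narrower bus means fewer memory channels and pins on the die, which is part of how the chip stays small.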

Nvidia tends to favor very fast, single-chip solutions.

Nvidia, of course, has a different take on why it chooses to develop big, fast chips.

"If you take two chips and put them together, you then have to add a bridge chip that allows the two chips to talk to each other...And you can't gang the memory together," said Ujesh Desai, general manager for GeForce products at Nvidia.

"So when you add it all up, you now have the power of two GPUs, the power of the bridge chip, and the power that all of that additional memory consumes. That's why it's too simplistic of an argument to say that two smaller chips is always more efficient."
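Desai's accounting can be made concrete with a toy power budget. All wattages below are made-up illustrative numbers, not figures from either vendor:

```python
# Hypothetical power budget: dual-GPU card vs. one big GPU (all numbers illustrative).
small_gpu_w = 110      # one "sweet spot" GPU
bridge_w = 5           # bridge chip that lets the two GPUs talk to each other
extra_memory_w = 10    # duplicated memory pool (the memory can't be ganged together)
big_gpu_w = 180        # one monolithic high-end GPU

dual_card_w = 2 * small_gpu_w + bridge_w + extra_memory_w
print(dual_card_w, big_gpu_w)  # 235 vs 180
```

On these assumed numbers the dual-GPU board draws more total power, which is Desai's point that two smaller chips are not automatically more efficient; whether that holds in practice depends on the real per-chip figures.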

Desai takes this argument a bit further. "They don't have the money to invest in high-end GPUs anymore. At the high end, there is no prize for second place. If you're going to invest a half-billion dollars--which is what it takes to develop a new enthusiast-level GPU--you have to know you're going to win. You either do it to win, or you don't invest the money."
More info at CNET.

About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. Enjoys playing with new tech, is fascinated by science, and passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.