AI chips, not GPUs, are driving HBM demand

Posted on Tuesday, January 16 2018 @ 11:30 CET by Thomas De Maesschalck
While PC enthusiasts know HBM and HBM2 best from their inclusion in high-end video card designs from AMD, it seems it's primarily AI and machine learning that are driving new demand for this type of memory. HBM is not only the fastest form of DRAM on the market, it also offers the best space and power characteristics. All this makes it very attractive for next-gen supercomputers and AI systems. The big reason HBM is still a niche technology is its high cost. Volumes are currently low, which makes it hard to bring costs down, while the high pricing in turn keeps volumes low, so we get a classic chicken-and-egg problem.
HBM is often discussed in the same breath as Hybrid Memory Cube (HMC) as an avenue for getting the fastest DRAM performance. There's not a great deal of difference between the two technologies, but given that HBM has been getting wider adoption, it may win out over HMC, just as VHS eclipsed Beta.

But even if HBM is the winner, it's still a niche technology, said Jim Handy, principal analyst with Objective Analysis. "I do see it eventually becoming mainstream, but today it's really expensive technology. That's because TSVs are an expensive thing to put on silicon wafers," Handy said.
Full details at EE Times.

Samsung HBM2




