While PC enthusiasts know HBM and HBM2 best from their inclusion in high-end video card designs from AMD, it is primarily AI and machine learning that are driving new demand for this type of memory. HBM is not only the fastest form of DRAM on the market; it also offers the best space and power characteristics. All of this makes it very attractive for next-gen supercomputers and AI systems. The big reason HBM remains a niche technology is its high cost: volumes are currently low, which makes it hard to bring costs down, and those high prices in turn keep volumes low, creating a classic chicken-and-egg problem.
HBM is often discussed in the same breath as Hybrid Memory Cube (HMC) as an avenue for getting the fastest DRAM performance. There's not a great deal of difference between the two technologies, but given that HBM has been gaining wider adoption, it may win out over HMC, just as VHS eclipsed Betamax.
But even if HBM is the winner, it's still a niche technology, said Jim Handy, principal analyst with Objective Analysis. "I do see it eventually becoming mainstream, but today it's really expensive technology. That's because TSVs are an expensive thing to put into silicon wafers," Handy said.