With the industrial semiconductor market expected to grow to $55.2 billion by 2018, the sector's growth prospects are hard to overstate. As the industry grows more competitive, DRAM chip manufacturers face constant demands for higher bandwidth and lower power consumption at a relatively low price. To overcome these challenges, transformational changes are underway in the memory interface world, with companies like AMD, Samsung, Micron Technology, Xilinx, and Nvidia showing keen interest in the HBM and HMC standards. So what's all the hype about? Let's have a look.
High Bandwidth Memory (HBM)
The development of HBM began as a solution to the ever-increasing power usage and form factor of computer memory. HBM is a high-performance RAM interface that is increasingly being used with high-performance graphics accelerators and network devices in the semiconductor hardware design service markets. By stacking up to 8 DRAM dies on a circuit and interconnecting them with through-silicon vias (TSVs), HBM offers substantially higher bandwidth while using less power in a smaller form factor. Many product companies are adopting HBM to build next-gen GPUs.
- The HBM DRAM employs a wide-interface architecture to provide high-speed, low-power operation.
- Designed explicitly for high-performance GPU environments, HBM is expected to be cheaper than its incompatible rival, the HMC.
- The interface is divided into multiple channels, each completely independent of the others.
- HBM yields an overall package bandwidth of 128 GB/s per stack.
- With 128-bit channels and a total of 8 channels, HBM offers a stunning 1024-bit interface; a GPU with four HBM stacks would therefore have a 4096-bit memory bus.
- The bandwidth capacity is in the 128 GB/s to 256 GB/s range
- Future GPUs built with HBM might reach 512 GB/s to 1 TB/s of main memory bandwidth
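The figures in the list above all follow from simple arithmetic. A minimal sketch, assuming first-generation HBM signaling of 1 Gb/s per pin (a commonly cited rate, not stated in this article):

```python
# Back-of-the-envelope HBM bandwidth math from the channel figures above.
# PIN_RATE_GBPS = 1 is an assumed HBM1 signaling rate per pin.

CHANNEL_WIDTH_BITS = 128   # bits per independent channel
CHANNELS_PER_STACK = 8     # channels in one HBM stack
PIN_RATE_GBPS = 1          # assumed Gb/s per pin

# Total interface width of one stack: 128 bits x 8 channels = 1024 bits
interface_bits = CHANNEL_WIDTH_BITS * CHANNELS_PER_STACK

# Per-stack bandwidth: 1024 pins x 1 Gb/s, divided by 8 to convert to GB/s
stack_bandwidth_gbs = interface_bits * PIN_RATE_GBPS / 8

# A GPU with four stacks: 4096-bit bus, 512 GB/s aggregate
gpu_stacks = 4
gpu_bus_bits = interface_bits * gpu_stacks
gpu_bandwidth_gbs = stack_bandwidth_gbs * gpu_stacks

print(interface_bits, stack_bandwidth_gbs)     # 1024 128.0
print(gpu_bus_bits, gpu_bandwidth_gbs)         # 4096 512.0
```

Doubling the pin rate (as later HBM generations do) doubles the per-stack figure, which is where the 256 GB/s upper end of the quoted range comes from.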
Hybrid Memory Cube (HMC)
HMC is also a high-performance RAM interface designed for TSV-based stacked DRAM memory. One of the major goals of HMC is to eliminate the duplicative control logic of modern DIMMs: the design is streamlined by linking the entire stack in a 3D configuration and using a single control logic layer to handle all traffic. HMC is explicitly designed to respond to multi-core scenarios and deliver data with substantially higher bandwidth and lower overall latency.
- A 3D stack that places DRAMs on top of logic, HMC combines high-speed logic process technology and TSVs to connect multiple dies on top of each other.
- The typical bandwidth of a single 16-lane link running at 10 Gb/s per lane is 40 GB/s (aggregate of both directions)
- A 4-link cube can reach 240 GB/s memory bandwidth while an 8-link cube can reach 320 GB/s bandwidth
- HMC is expected to deliver bandwidth as high as 400 GB/s, approximately 15 times the performance of a DDR3 module
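The link figures above can be reproduced with a short calculation. This sketch assumes full-duplex links and per-lane rates of 10 or 15 Gb/s (rates taken from the HMC specification family, not stated explicitly in this article):

```python
def link_bandwidth_gbs(lanes=16, lane_rate_gbps=10):
    """Aggregate (both directions) bandwidth of one full-duplex HMC link, in GB/s."""
    per_direction_gbs = lanes * lane_rate_gbps / 8   # convert Gb/s to GB/s
    return 2 * per_direction_gbs                     # full duplex: TX + RX

# One 16-lane link at 10 Gb/s per lane -> 40 GB/s aggregate
one_link = link_bandwidth_gbs()

# A 4-link cube with 15 Gb/s lanes -> 240 GB/s
four_link_cube = 4 * link_bandwidth_gbs(lane_rate_gbps=15)

# An 8-link cube with 10 Gb/s lanes -> 320 GB/s
eight_link_cube = 8 * link_bandwidth_gbs(lane_rate_gbps=10)

print(one_link, four_link_cube, eight_link_cube)   # 40.0 240.0 320.0
```

Note that these are aggregate link bandwidths; the sustained bandwidth into the DRAM stack itself depends on the vault controllers behind the links.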
| Feature | HMC | HBM |
|---|---|---|
| Applications | High-end servers | Graphics, computing |
| DRAM interface | Chip-to-chip serial links | Wide parallel, multi-channel |
| Interface width | Up to 4 links with up to 16 lanes each | 128 bits per channel, up to 8 channels |
| Maximum bandwidth | Up to 480 GB/s | Up to 256 GB/s |
| System configuration | Point-to-point | 2.5D TSV-based silicon interposer |
- Smaller physical footprint
- Adaptable to multiple platforms
In a Nutshell
Choosing the right DRAM technology requires careful consideration and depends heavily on the application being built. HBM and HMC together are set to revolutionize memory access speed and overall performance, with their combined market expected to be worth USD 953.8 million by 2022. Since they allow simultaneous access to several memory blocks, HBM and HMC could drastically cut latency. Devices using GPUs can expect a profound performance boost, as HBM and HMC are set to overcome the critical bandwidth shortage that has been limiting the performance of modern GPUs. So with HBM and HMC, expect a massive 40-50% performance improvement in GPUs!