Samsung announced its new Flashbolt High Bandwidth Memory (HBM) at Nvidia’s GPU Technology Conference (GTC).
Flashbolt is the industry’s first product that complies with the HBM2E specification, which improves bandwidth per pin by 33%, an increase from 2.4Gbps to 3.2Gbps. Capacity per die also doubles to 16Gb.
As a result, a Flashbolt package with a 1024-bit bus will offer up to 410GB/s of bandwidth and 16GB of capacity in an 8-Hi stack configuration.
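The per-stack figure follows directly from the bus width and per-pin data rate. As a quick sanity check (plain arithmetic, not from Samsung's announcement):

```python
# Per-stack HBM bandwidth = bus width (bits) x per-pin rate (Gb/s) / 8 bits-per-byte
bus_width_bits = 1024   # HBM interface width per stack
pin_rate_gbps = 3.2     # HBM2E (Flashbolt) per-pin data rate

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # 409.6 GB/s, rounded up to ~410GB/s
```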
The company aims the product at next-generation data centers, AI/ML, and graphics applications. With four stacks and a 4096-bit memory interface, the maximum bandwidth and capacity quadruple to 1.64TB/s and 64GB, respectively. For comparison, last year at GTC, Nvidia updated the Tesla V100 with a 32GB version using 1.75Gbps HBM2 memory, delivering 900GB/s. AMD’s more recent 7nm Radeon VII uses four HBM2 packages at 2.0Gbps for 1TB/s of memory bandwidth.
| | Flashbolt (HBM2E) | Aquabolt (HBM2) | Flarebolt (HBM2) | Flarebolt (HBM2) | Flarebolt (HBM2) | Flarebolt (HBM2) |
|---|---|---|---|---|---|---|
| Bandwidth per pin | 3.2Gb/s | 2.4Gb/s | 2.0Gb/s | 2.0Gb/s | 1.6Gb/s | 1.6Gb/s |
| Number of dies per stack | 8 | 8 | 8 | 4 | 8 | 4 |
| Bandwidth per stack | 410GB/s | 307.2GB/s | 256GB/s | 256GB/s | 204.8GB/s | 204.8GB/s |
Samsung has disclosed few details of the product, such as its operating voltage or manufacturing node. For reference, last year’s Aquabolt operated at 1.2V and was built with 20nm DRAM technology. The effective bus width per package remains 1024-bit.
In general, HBM uses much less power and has a significantly smaller footprint than DDR4 or GDDR6 because it stacks multiple DRAM dies on top of each other and interconnects them with through-silicon vias (TSVs). Because of the large number of pins, HBM is usually connected to the compute die via a silicon interposer. This adds cost, however, so HBM has been constrained to high-end products. Intel instead uses its EMIB technology in its Stratix 10 FPGA and Kaby Lake-G products to connect the memory to the FPGA or GPU.
Samsung has not announced volume production of this new HBM2E memory yet, but it’s safe to assume that Flashbolt will find its way into next-gen 7nm GPUs at some point.