
Samsung's New GDDR6W Graphics Memory Rivals HBM2

Samsung's innovation offers double the capacity and bandwidth in the same footprint as GDDR6.
By Josh Norem

In the past, chip companies such as AMD have dabbled in High-Bandwidth Memory (HBM) instead of GDDR to increase memory bandwidth for GPUs. This vertically stacked memory boasts incredible bandwidth, but it's a costly endeavor. AMD abandoned it in favor of GDDR memory after its ill-fated R9 Fury and Vega GPUs. Now Samsung has created a new type of GDDR6 memory it says is almost as fast as HBM without needing an interposer. Samsung says GDDR6W is the first "next-generation" DRAM technology, and that it will empower more realistic metaverse experiences.

Samsung took its existing GDDR6 platform and built it with Fan-Out Wafer-Level Packaging (FOWLP). With this technology, the memory die is mounted to a silicon wafer instead of a printed circuit board (PCB). Redistribution layers fan out around the chip, allowing for more contacts and better heat dissipation. Memory chips are also double-stacked. Samsung says this has allowed it to increase bandwidth and capacity in the exact same footprint as before. Since there's no increase in die size, its partners can drop GDDR6W into existing and future designs without any modifications. This will theoretically reduce manufacturing time and costs.

Samsung's Fan-Out Wafer-Level Packaging allows for a smaller package thanks to the absence of a PCB. (Credit: Samsung)

The new memory offers double the I/O and bandwidth of GDDR6. Using its existing 24Gb/s GDDR6 as an example, Samsung says the GDDR6W version has twice the I/O because there are more contact points. It also doubles capacity from 16Gb to 32Gb per chip. As shown above, the height of the FOWLP design is just 0.7mm, which is 36 percent lower than its existing GDDR6 package. Even though I/O and bandwidth have been doubled, Samsung says it has the same thermal properties as existing GDDR6 designs.
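
As a rough illustration of what doubling the I/O means per package, here's a quick Python sketch. It assumes a standard x32 GDDR6 package, a common configuration but not a figure from Samsung's announcement, running at the 24Gb/s per-pin speed cited above:

```python
# Back-of-envelope per-package figures.
# Assumption (not from Samsung's announcement): a standard GDDR6 package is x32,
# and GDDR6W doubles that to x64 at the same 24 Gb/s per-pin rate.
PER_PIN_GBPS = 24           # per-pin data rate Samsung cites
GDDR6_IO = 32               # typical I/O width of a GDDR6 package (assumption)
GDDR6W_IO = GDDR6_IO * 2    # GDDR6W doubles the I/O per package

gddr6_bw_gbs = PER_PIN_GBPS * GDDR6_IO / 8     # 96.0 GB/s per package
gddr6w_bw_gbs = PER_PIN_GBPS * GDDR6W_IO / 8   # 192.0 GB/s per package
print(gddr6_bw_gbs, gddr6w_bw_gbs)
```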

Samsung says these advancements allow its GDDR6W design to compete with HBM2. It notes that second-generation HBM2 offers 1.6TB/s of bandwidth, with GDDR6W coming close at 1.4TB/s. However, that figure from Samsung assumes a 512-bit wide memory bus with 32GB of memory, which isn't something found in current GPUs. Both the Nvidia RTX 4090 and the Radeon RX 7900 XTX have a 384-bit wide memory bus and offer just 24GB of memory. AMD uses GDDR6 while Nvidia has opted for the G6X variant made by Micron. Both cards deliver around 1TB/s of memory bandwidth, so Samsung's proposed configuration would still come out ahead.
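
Those headline numbers are just bus width multiplied by per-pin data rate, divided by eight to get bytes. The sketch below reproduces them; the 22Gb/s GDDR6W per-pin rate comes from Samsung, while the 21Gb/s and 20Gb/s rates for the Nvidia and AMD cards are their published specs rather than figures from this article:

```python
def total_bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Aggregate memory bandwidth in GB/s: bus width times per-pin rate, over 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

print(total_bandwidth_gb_s(512, 22))  # Samsung's GDDR6W example: 1408 GB/s (~1.4 TB/s)
print(total_bandwidth_gb_s(384, 21))  # RTX 4090, 21 Gb/s GDDR6X: 1008 GB/s (~1 TB/s)
print(total_bandwidth_gb_s(384, 20))  # RX 7900 XTX, 20 Gb/s GDDR6: 960 GB/s
```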

The big news here is that thanks to Samsung's chip-stacking, only half as many memory chips are required to reach a given capacity compared with current packaging. This could result in reduced manufacturing costs. Overall, its maximum transmission rate of 22Gb/s per pin is very close to GDDR6X's 21Gb/s. So the gains in the future probably won't come from maximum performance, but rather from memory capacity. You could argue nobody needs a GPU with 48GB of memory, but perhaps when we're gaming at 16K that'll change.
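
The chip-count math is straightforward: a 32Gb (4GB) package holds twice what a 16Gb (2GB) package does, so a given board capacity needs half as many packages. A quick sketch, using the 24GB capacity of today's flagship cards as an example:

```python
def packages_needed(total_capacity_gb: int, package_capacity_gbit: int) -> int:
    """How many memory packages are needed to reach a given total capacity in GB."""
    return total_capacity_gb * 8 // package_capacity_gbit

print(packages_needed(24, 16))  # 24GB board with 16Gb GDDR6 chips:  12 packages
print(packages_needed(24, 32))  # 24GB board with 32Gb GDDR6W chips:  6 packages
```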

As far as products go, Samsung says it'll be introducing GDDR6W soon in small form-factor devices such as notebooks. It's also working with partners to include it in AI accelerators and similar products. It's unclear whether AMD or Nvidia will adopt it, but if they do, it'll likely be far in the future. Both companies are already manufacturing their current boards around GDDR6/GDDR6X, so we doubt they'd swap until a new architecture arrives.
