Chinese DRAM Maker Developing HBM-Like Memory

(Image credit: SK Hynix)

China is striving to develop its own equivalent of high-bandwidth memory (HBM) for artificial intelligence and high-performance computing applications, according to a report from the South China Morning Post. ChangXin Memory Technologies (CXMT) is reportedly at the forefront of this initiative.

HBM is the indisputable leader when it comes to bandwidth, as every HBM stack pairs a 1024-bit memory interface with a respectable per-pin data transfer rate. Because of the wide interface and vertical stacking, production of HBM devices does not require the most advanced lithography. In fact, global DRAM leaders are believed to use time-proven process technologies for their HBM2E and HBM3 memory devices.
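To put the interface width in perspective, here is a rough, back-of-the-envelope sketch of per-stack bandwidth. The per-pin speeds below are typical published figures for HBM2E and HBM3, used purely for illustration and not taken from the report:

```python
# Rough per-stack bandwidth estimate for a 1024-bit HBM interface.
# Pin speeds are typical published figures (assumptions for illustration).

INTERFACE_WIDTH_BITS = 1024

def stack_bandwidth_gbs(pin_speed_gbps: float, width_bits: int = INTERFACE_WIDTH_BITS) -> float:
    """Peak bandwidth in GB/s: interface width (bits) * per-pin rate (Gb/s) / 8 bits per byte."""
    return width_bits * pin_speed_gbps / 8

for name, pin_speed in [("HBM2E", 3.2), ("HBM3", 6.4)]:
    print(f"{name}: ~{stack_bandwidth_gbs(pin_speed):.0f} GB/s per stack")
# HBM2E: ~410 GB/s per stack
# HBM3: ~819 GB/s per stack
```

The point is that HBM's bandwidth comes from the sheer width of the interface rather than from exotic per-pin speeds, which is why cutting-edge lithography is not the gating factor.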

What HBM does require is sophisticated packaging technologies, as connecting eight or twelve memory devices vertically using tiny through-silicon vias (TSVs) is a complicated procedure. Still, assembling an HBM-like known-good stacked die (KGSD) module is easier than producing a DRAM device on a 10nm-class process technology.
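As a rough illustration of why stack height matters, the sketch below shows how per-stack capacity scales with the number of stacked dies. The 2 GB per-die figure is an assumption in line with common 16Gb DRAM dies, not a figure from the report:

```python
# Illustrative capacity of an HBM-style known-good stacked die (KGSD) module.
# Assumes 16Gb (2 GB) DRAM dies; actual die densities vary by vendor.

PER_DIE_CAPACITY_GB = 2  # assumed capacity of one DRAM die

def stack_capacity_gb(dies_per_stack: int, per_die_gb: int = PER_DIE_CAPACITY_GB) -> int:
    """Total capacity of one stack: number of dies times per-die capacity."""
    return dies_per_stack * per_die_gb

for dies in (8, 12):
    print(f"{dies}-high stack: {stack_capacity_gb(dies)} GB")
# 8-high stack: 16 GB
# 12-high stack: 24 GB
```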

Despite technological limitations imposed by sanctions, industry insiders questioned by SCMP suggest that CXMT could still produce its own HBM-like memory. But it remains to be seen how wide the Chinese 'HBM' interface will be and how many DRAM devices per module it will be able to stack.

CXMT is China's leading domestic DRAM producer, both in technological prowess and in manufacturing capabilities, so it is the country's best bet for developing a proprietary type of memory that competes with industry-standard HBM on bandwidth and capacity. Due to U.S. sanctions, CXMT and other Chinese chipmakers are constrained to less advanced production technologies, which puts them at a competitive disadvantage globally.

Processors for AI and HPC are essential for a variety of applications, including autonomous vehicles, so it is understandable why China wants to build its own HBM-like memory. Currently, companies like Biren can obtain HBM2E memory from leading DRAM suppliers, but if the U.S. government restricts access to this type of memory, China's only option will be to rely on its own technologies. As a result, the development of HBM-like memory is part of a larger national agenda to become self-reliant in semiconductor technology.


Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • InvalidError
    What HBM does require is sophisticated packaging technologies — as connecting eight or twelve memory devices vertically using tiny through silicon vias (TSVs) is a complicated procedure.
    If everyone and their dog is jumping onto the backside power delivery train which also requires TSVs, then the "cost-complexity" of TSVs must have come down drastically since the days where TSVs were considered exotic.

    Or China could rip pages from that multi-layer DRAM story from a few days ago and try to beat the world at making the first 16/32/64GB single-die DRAM chips.
    Reply
  • bit_user
    InvalidError said:
    If everyone and their dog is jumping onto the backside power delivery train which also requires TSVs, then the "cost-complexity" of TSVs must have come down drastically since the days where TSVs were considered exotic.
    That's one explanation. Perhaps experience with DRAM-stacking indeed helped, there.

    However, another reason why backside power delivery is suddenly happening now is that power demands have been going up, while the size of the wires used to deliver it have stopped decreasing. That means they're taking up proportionately much more area than before, which makes minimizing their intrusion into the logic layers of the die a much higher-value problem.
    Reply
  • InvalidError
    bit_user said:
    However, another reason why backside power delivery is suddenly happening now is that power demands have been going up, while the size of the wires used to deliver it have stopped decreasing. That means they're taking up proportionately much more area than before, which makes minimizing their intrusion into the logic layers of the die a much higher-value problem.
    Why wires? You can do top-side delivery with vias too and top-side power vias can be as small as the finest process you can be bothered to use for them. TSVs on the other hand only go down to about 1um in diameter, which means you will still need power routing in the bottom metal layers to fan it out.

    The alleged density gains from backside power delivery likely come primarily from having few if any power routing contentions when you can get GND from the back (don't want the IHS to be live at 1.3V 300A) and Vbunchofrails from the front.
    Reply
  • gg83
    I would say that the South China Morning Post is not even close to a reputable source of information, as it is owned by Alibaba.
    Reply