Hybrid Memory Cube Consortium Establishes a Global Standard for 320 GB/s Memory

Following a 17-month collaborative development effort, more than 100 developer and adopter members of the Hybrid Memory Cube Consortium (HMCC) have announced that they have reached consensus on a global standard for a disruptive memory computing solution. That standard clears the way for Hybrid Memory Cube (HMC) technology to be developed for a wide range of consumer and industrial applications.

The HMC Specification 1.0 enables companies to build memory that incorporates HMC's stacked, power-efficient technology in capacities of 2 GB, 4 GB and 8 GB. Those capacities may not seem all that impressive, but a memory cube with eight links can provide an astonishing peak aggregate bandwidth of 320 GB/s, more than 20 times what current-generation DDR3 memory offers.
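For a rough sense of where the 320 GB/s figure comes from, here is a minimal back-of-the-envelope sketch in Python. It assumes the HMC 1.0 link layout of 16 transmit and 16 receive lanes per link at the base 10 Gb/s lane rate, and it counts both directions toward the aggregate, which appears to be how the consortium arrives at its headline number; treat it as illustrative arithmetic rather than a definitive reading of the spec.

```python
# Back-of-the-envelope peak-bandwidth arithmetic for an eight-link HMC,
# assuming 16 transmit and 16 receive lanes per link at a 10 Gb/s lane
# rate (the 1.0 spec also defines 12.5 and 15 Gb/s rates).
LANES_PER_DIRECTION = 16
LANE_RATE_GBPS = 10     # gigabits per second, per lane
LINKS = 8

# Both directions count toward the aggregate; divide by 8 to get bytes.
link_bw_gbs = 2 * LANES_PER_DIRECTION * LANE_RATE_GBPS / 8   # 40 GB/s per link
cube_bw_gbs = LINKS * link_bw_gbs                            # 320 GB/s per cube

# A single 64-bit DDR3-1600 channel moves 1600e6 transfers/s * 8 bytes = 12.8 GB/s.
ddr3_channel_gbs = 1600e6 * 8 / 1e9

print(f"Per link: {link_bw_gbs:.0f} GB/s")
print(f"Per cube: {cube_bw_gbs:.0f} GB/s")
print(f"Versus one DDR3-1600 channel: {cube_bw_gbs / ddr3_channel_gbs:.0f}x")
```

Against a single 12.8 GB/s DDR3-1600 channel, that works out to roughly 25x, which is presumably where the "more than 20 times" comparison comes from; against a dual-channel DDR3 setup it would be closer to 12-13x.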

The finalized specification defines advanced short-reach (SR) and ultra short-reach (USR) interconnections across physical layers (PHYs) for applications that require tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and test-and-measurement equipment.

HMC technology also holds promise in overcoming the "memory wall" challenge: the capabilities of conventional memory architectures are being outstripped by the demands of high-performance computers and networking equipment. Getting past that wall requires memory that delivers increased bandwidth and density at lower power consumption, and doing so was, in fact, one of the key motivations for forming the organization.

"The consensus we have among major memory companies and many others in the industry will contribute significantly to the launch of this promising technology," said Jim Elliott, Vice President, Memory Planning and Product Marketing, Samsung Semiconductor, Inc. "As a result of the work of the HMCC, IT system designers and manufacturers will be able to get new green memory solutions that outperform other memory options offered today."

The HMCC is confident that the Hybrid Memory Cube architecture will "leap beyond current and near-term memory architectures in the areas of performance, packaging and power efficiency." The consortium now aims to increase data rates from 10, 12.5 and 15 Gb/s to 28 Gb/s for SR links, and from 10 Gb/s to 15 Gb/s for USR links. The next revision of the HMC Specification is expected to gain consortium agreement by Q1 2014.
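For perspective on those targets, the sketch below extrapolates the same arithmetic to the planned lane rates. It assumes the 16-lane-per-direction, eight-link layout of the 1.0 spec carries over unchanged to the faster rates, which the consortium has not confirmed, so read the outputs as hypothetical ceilings rather than announced figures.

```python
# Hypothetical peak aggregate bandwidth at the planned lane rates,
# assuming the HMC 1.0 link layout (16 lanes each way, eight links)
# is kept as-is; that unchanged layout is an assumption, not a
# published detail of the next spec revision.
def cube_bandwidth_gbs(lane_rate_gbps: float,
                       lanes_per_direction: int = 16,
                       links: int = 8) -> float:
    """Peak aggregate bandwidth in GB/s, counting both directions."""
    return links * 2 * lanes_per_direction * lane_rate_gbps / 8

for rate_gbps in (10, 15, 28):  # current base rate, planned USR, planned SR
    print(f"{rate_gbps:>2} Gb/s lanes -> {cube_bandwidth_gbs(rate_gbps):.0f} GB/s")
```

Under those assumptions, a cube with 28 Gb/s SR links would approach 896 GB/s, nearly triple the 1.0 spec's headline figure.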

Additional information and technical specifications can be found at HybridMemoryCube.org.


  • athulajp
    Imagine combining this with the technology Volta is using to stack DRAM. I wonder how much bandwidth you could get then.

    Still, this is impressive, very much so. But I wonder what's going to hit the regular market first, this or DDR4?
  • vaughn2k
    If desktop PCs would benefit from this, then desktop PCs definitely won't die; there's still a lot of room for improvement!
  • chicofehr
    This would be great for high-end video cards. I wonder if it would speed up DirectCompute, OpenCL and 3D rendering.
  • CaedenV
    As cool as this is (and it is pretty cool), memory speed is not a major issue for future computing. I am wondering if this tech will ever get cheap enough, or gain enough support, to really get used in anything. It seems like the regular updates to the DDR standards, and the slow move to mainstream GDDR (good move, Sony!), will be able to keep up with CPU and GPU processing capability moving forward.

    Still, I am really stoked to see stacked architecture starting to get somewhere. To think a year ago everyone was talking about how it would be impossible, and now in this week alone there have been articles about two companies starting stacked implementations. Once you get power consumption and leakage down low enough, heat becomes less of an issue, so you can stack at least a few layers and still get adequate heat dissipation. I can't wait to see what this kind of stacked electronics brings about!

    It is the holy grail for SoC-style computing because you can fit more stuff in essentially the same footprint. It also acts as a way around the latency and timing issues involved with many-core CPU designs, because you can put the IO for a lot of cores in a physically closer area, which should open the way for 20+ core designs. Have perhaps a traditional dual- or quad-core design for day-to-day work, and then something like Knights Corner for programs that are optimized for many-thread CPUs, where all of the cores are tiny, simple, low-power cores, but the sheer volume of them makes for impressive compute capacity. Maybe that is where this new memory tech helps? Something where you are feeding information to tens or hundreds of cores rather than your normal 4-16 of them.
  • CaedenV
    athulajp: "Imagine combining this with the technology Volta is using to stack DRAM. I wonder how much bandwidth you could get then. Still, this is impressive, very much so. But I wonder what's going to hit the regular market first, this or DDR4?"
    DDR4 is due out this year and is already in production. We should start seeing consumer chips support it with the release of Broadwell chips next year.
    Personally, I think DDR4 is going to have a short lifespan. We have finally hit a point where your average consumer can cram way more RAM into their system than they will ever practically need for the life of the system. I am not saying that we will never need 16 GB-32 GB of RAM in a home or gaming computer... just that we will not need it within the useful life of today's equipment. With that in mind, I think it would make a ton more sense to go Sony's route, with a central pool of super-high-speed memory (be it XDR or GDDR) which can be used by the system, iGPU, or GPU, and then have either no RAM or just a little bit of insanely fast RAM as a cache on the actual units. 8 GB of GDDR would cost a pretty penny to put in a computer system, but for enthusiasts it would be well worth the money, and the cost would go down if it became more commonly used.
    I know there is practically zero chance of that ever happening... but it is probably more likely than this new tech getting off the ground.
  • hannibal
    It depends a lot on how many contact pins this new memory type demands. If it can achieve higher speeds with fewer contacts, it will become cheaper to produce and also quite useful in mobile environments, where space is always an issue.
  • wanderer11
    Wouldn't 320 GB/s be 200 times DDR3, not 20 times? 320/20 = 16 GB/s. DDR3 at 1600 MHz is 1.6 GT/s.
  • dark_knight33
    vaughn2k: "If desktop PCs would benefit from this, then desktop PCs definitely won't die; there's still a lot of room for improvement!"
    The myth that the desktop PC will 'die' is nothing more than FUD. Long live the PC!
  • InvalidError
    CaedenV: "Personally, I think DDR4 is going to have a short lifespan. We have finally hit a point where your average consumer can cram way more RAM into their system than they will ever practically need for the life of the system."
    The main advantage of DDR4 is clock speed, not size. The main reason DDR4 "supports" twice the memory density of DDR3 is that the smallest DRAM size has doubled, so size descriptions have been bumped up one notch.

    With Broadwell's IGP promising 4-5X the HD 4000's performance, DDR4's ~3.2 GT/s will be very much welcome.

    vaughn2k: "If desktop PCs would benefit from this, then desktop PCs definitely won't die; there's still a lot of room for improvement!"
    Stacked dies with (ultra) short-range interconnects are better suited to eDRAM-like applications where the memory chip gets mounted on the same substrate as whatever it talks to... you could see future APUs/GPUs with a few of those chips mounted directly on the CPU/GPU substrate for the frame buffer, or possibly stacked with the CPU/GPU die itself.
  • Vorador2
    I guess the main problem for this technology is yield. Such a complicated structure with transistors stacked in layers will be hard to reliably build with current technology.

    Still, it would solve some of the problems that high speed memory development is facing now.

    By the way, don't expect to use GDDR as main memory, because it suffers from high latency. There's a reason it's not used for CPUs anywhere.