Hybrid Memory Cube Consortium Establishes a Global Standard for 320 GB/s Memory
The Hybrid Memory Cube Consortium (HMCC) has established a global standard for a disruptive memory computing solution that aims to enable memory to reach a peak aggregate bandwidth of 320 GB/s.
Following a 17-month collaborative development effort, more than 100 developer and adopter members of the Hybrid Memory Cube Consortium (HMCC) have announced that they have reached consensus on a global standard for a disruptive memory computing solution. That solution will allow the development of Hybrid Memory Cube (HMC) technology for a wide range of consumer and industrial applications.
The HMC Specification 1.0 currently enables companies to build memory that incorporates HMC's stacked, power-efficient technology in capacities of 2 GB, 4 GB and 8 GB. While this may not seem all that impressive, a memory cube with eight links can provide an astonishing peak aggregate bandwidth of 320 GB/s, more than 20 times what is offered by current-generation DDR3 memory.
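As a back-of-the-envelope check, here is one plausible way the 320 GB/s figure decomposes. The per-lane rate matches the 10 Gb/s SR speed cited later in the article, but the lane count per link and the full-duplex accounting are assumptions, not details given here:

```python
# Rough sketch of the 320 GB/s peak aggregate bandwidth claim.
# Assumed layout: 8 links, 16 lanes per direction per link,
# 10 Gb/s per lane, with both directions counted (full duplex).
links = 8
lanes_per_direction = 16
gbit_per_lane = 10   # Gb/s per lane (matches the SR rate quoted below)
directions = 2       # full duplex: transmit + receive

total_gbit = links * lanes_per_direction * gbit_per_lane * directions
total_gbyte = total_gbit / 8  # convert Gb/s to GB/s

print(total_gbyte)  # 320.0 GB/s aggregate
```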
The finalized specification provides advanced short-reach (SR) and ultra short-reach (USR) interconnections across physical layers (PHYs) for applications requiring tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and test and measurement.
HMC technology also holds promise in overcoming the "memory wall" challenge that stems from the capabilities of conventional memory architectures being outstripped by the demands of high-performance computers and networking equipment. Overcoming this wall requires memory that can provide increased bandwidth and density at lower power consumption, and doing so was, in fact, one of the key motivations for forming the organization.
"The consensus we have among major memory companies and many others in the industry will contribute significantly to the launch of this promising technology," said Jim Elliott, Vice President, Memory Planning and Product Marketing, Samsung Semiconductor, Inc. "As a result of the work of the HMCC, IT system designers and manufacturers will be able to get new green memory solutions that outperform other memory options offered today."
HMCC is confident that the Hybrid Memory Cube architecture will "leap beyond current and near-term memory architectures in the areas of performance, packaging and power efficiency." The consortium now aims to increase data rate speeds from 10, 12.5 and 15 Gb/s to 28 Gb/s for SR and from 10 Gb/s to 15 Gb/s for USR. The next revision of the HMC Specification is expected to gain consortium agreement by Q1 2014.
Additional information and technical specifications can be found at HybridMemoryCube.org.
Still, this is impressive, very much so. But I wonder what's going to hit the regular market first, this or DDR4?
Still, I am really stoked to see stacked architecture starting to get somewhere. To think a year ago everyone was talking about how it would be impossible, and now in this week alone there have been articles about two companies starting stacked implementations.

Once you get power consumption and leakage down low enough, heat becomes less of an issue, so you can stack at least a few layers and still get adequate heat dissipation. I can't wait to see what this kind of stacked electronics brings about! It is the holy grail for SOC-style computing because you can fit more stuff in essentially the same footprint.

It also acts as a way to get around the latency and timing issues involved with many-core CPU designs, because you can put your IO for a lot of cores in a physically closer area, which should open up the way for 20+ core designs. Have perhaps a traditional dual- and quad-core design for day-to-day work, and then something like Knights Corner for programs that are optimized for many-thread CPUs, where all of the cores are tiny, simple, low-power cores, but the sheer volume of them makes for impressive compute capacity. Maybe that is where this new memory tech helps? Something where you are feeding information to tens or hundreds of cores rather than your normal 4-16 of them.
DDR4 is due out this year and is already in production. We should start seeing consumer chips start supporting it with the release of Broadwell chips next year.
Personally I think DDR4 is going to have a short lifespan. We have finally hit a point where your average consumer can cram way more RAM into their systems than they will ever practically need for the life of the system. I am not saying that we will never need 16GB-32GB of RAM in a home or gaming computer... just that we will not need it within the useful life of today's equipment. With that in mind, I think it would make a ton more sense to go Sony's route with a central stock of super high-speed memory (be it XDR or GDDR) which can be used by the system, iGPU, or GPU, and then have either no RAM or just a little bit of insanely fast RAM as a cache on the actual units. 8GB of GDDR would cost a pretty penny to put on a computer system, but for enthusiasts it would be well worth the money, and the cost would go down if it became more commonly used.
I know there is practically 0 chance of that ever happening... but it is probably more likely than this new tech getting off the ground.
The myth that the desktop PC will 'die' is nothing more than FUD. Long live the PC!
The main advantage of DDR4 is clock speeds, not size. DDR4 "supports" twice the memory density of DDR3 mainly because the smallest DRAM size has doubled, so size descriptions have been bumped up one notch.
With Broadwell's IGP promising 4-5X HD4000's performance, DDR4's ~3.2GT/s will be very much welcome.
Stacked dies with (ultra) short range interconnects are better suited for eDRAM-like applications where the memory chip gets mounted on the same substrate as whatever it talks to... you could see future APU/GPUs with a few of those chips mounted directly on the CPU/GPU substrate for the frame buffer or possibly stacked with the CPU/GPU die itself.
Still, it would solve some of the problems that high speed memory development is facing now.
By the way, don't expect to use GDDR as main memory, because it suffers from high latency. There's a reason it's not used for CPUs anywhere.
Isn't DDR3 bandwidth ~10/11GB/s?
Depends on how wide the interface is... 12.8Gbps per channel for 1600MT/s DIMMs; that's 51.2Gbps for a quad-channel CPU.
GT/s is not GB/s, and in the case of DDR3, GT/s is per bit of the interface, which is 64 bits wide per channel. DDR3-1600, for example, has 12.8GB/s per channel; DDR3-2400 is 19.2GB/s. With dual-channel memory being common and DDR3-1333 to DDR3-1600 being typical, most modern CPUs have about 22GB/s to 26GB/s of maximum theoretical bandwidth, with realistic bandwidth then depending on the memory controller.
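The per-channel math above can be sketched in a few lines (the helper name and defaults here are just for illustration):

```python
# DDR3 peak bandwidth: transfer rate (MT/s) x bus width (bytes) x channels.
def ddr3_bandwidth_gbs(mts, channels=1, bus_bits=64):
    """Peak theoretical bandwidth in GB/s for a DDR3 configuration."""
    return mts * (bus_bits // 8) * channels / 1000

print(ddr3_bandwidth_gbs(1600))              # 12.8 GB/s, single channel
print(ddr3_bandwidth_gbs(2400))              # 19.2 GB/s, single channel
print(ddr3_bandwidth_gbs(1600, channels=2))  # 25.6 GB/s, dual channel
```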
12.8GB/s, not 12.8Gb/s, for DDR3-1600 per channel. A lot of memory companies and others down the supply chain and more etc. say Gb/s, but that's a huge mistake for memory in most contexts that it's used in.
DDR3-1600MHz = 1.6GT/s, a transfer rate of 102.4 Gbit/s or 12.8 GB/s.
http://en.wikipedia.org/wiki/List_of_device_speeds
I obviously meant GB/s.
I don't know about you, but for me, typing multiple capitals in a row requires extra concentration to override the single-capital reflex from normal writing. It slips by easily when tired.
It's more likely that the desktop is on its way out.
Technology is being reduced in size, and what you have in the market now is not a real reflection of our technological capabilities and latest scientific knowledge (far from it). But as the prices of technology come down, newer technology is replacing the old faster... and therefore the 'revisions' you see are appearing faster... albeit I would prefer they make quantum leaps on a regular basis (we certainly have the means and the know-how to do it, but the monetary system prevents it).
Lies, the PS4 has it and it's "perfect" according to Epic. /end sarcasm
That's the REAL problem - software support. It's not the hardware. For example, getting software developers to support OpenCL or DirectCompute is like pulling teeth. A lot of them have really been dragging their feet. We need to tap into the hardware we've got before we decide to add any more dedicated chips to the mix.
Another more recent example of x86 using GDDR5: Xeon Phi.
One of the major issues with using GDDR5 on PCs is that GDDR5 signaling is intended for fixed memory configurations soldered directly to the PCB. A good chunk of what enables GDDR5 to run 2-3X faster than DDR3 is the lack of CPU/GPU socket, DIMM slot interface and associated PCB between the GDDR5 dies and the CPU/GPU.
I doubt many enthusiasts would be willing to make that sacrifice (soldered CPU+RAM) to get GDDR5 in their gaming PC... but with Haswell/GT3, I think we are already sort of going there. It may only be a matter of a few more years before Intel decides to put enough eDRAM or equivalent on their CPUs to forgo conventional system RAM altogether for most applications.