When are non-buffered, non-ECC 16GB DDR3 modules coming?

Status
Not open for further replies.

JonneH

Honorable
Nov 7, 2012
2
0
10,510
Hello, I'm curious if anyone can tell me when normal non-ECC DDR3 16GB modules are coming. It would be nice to have 64GB of memory on a usual 4-slot motherboard.

Right now you have to buy a socket 2011 motherboard if you want more than 32GB of memory.
 
My best 'guess' is that it will be a while, and by that I mean at least a couple of years, if not longer. Even the current consumer (non-ECC) 8GB/stick densities are only so-so; the ICs can be a problem, with too many errors and spotty overall reliability. That said, in the past three to four months the new ICs seem to have improved 8GB/stick kits.

The 'need' just isn't there on the consumer side, and the folks who need 16GB/stick density need it in more commercial settings, i.e. professionals working with very large projects (SQL, video, etc.).

Further, in mission-critical and professional settings where 64GB, 128GB, or more can be properly utilized, folks also need more 'computing power' to go along with the increased memory capacity, which can only be provided by additional cores and CPUs: MP (2-4 socket) systems with 8 cores and threads per CPU.

Reliability: again, the vast majority of folks who need huge amounts of RAM also want errors corrected. Errors occur in all forms of RAM, and currently ECC is the only solution that corrects them.

I also get that folks may want large RAM drives, but you don't want one packed full of errors, not to mention, for say a 64GB RAM drive, the added 5+ minutes at shutdown, nor the risk of permanent data loss from an unforeseen sudden shutdown (e.g. 41/63).

Next is price: 16GB/stick RAM is very expensive, and if the past is a good indication of the future, RDIMMs will remain considerably less expensive until UDIMM/non-ECC demand is sufficient to lower prices.

So for affordable 16GB/stick consumer non-ECC sticks, don't hold your breath; more than likely it will be after Haswell.
 

JonneH

Honorable
Nov 7, 2012
2
0
10,510
I run many virtual OSes for testing software, often with RAM drives. I know you have to go socket 2011 to get 8+ memory slots, and you can even get 48-slot motherboards.

But it's sad that Ivy Bridge-E is being delayed until late 2013, so you have to go with a 32nm processor. And in about 6 months there is going to be a new architecture with the upcoming Haswell processor, but socket 2011 users will still be stuck with the old Sandy Bridge.

I need more single-thread performance rather than more cores, so I wish there were a way to have 64 or 128GB with 22nm Ivy Bridge, and in 6 months with Haswell. Is that possible, or do I have to wait for Ivy Bridge-E?
 

InvalidError

Titan
Moderator
For 16GB modules to come out, DRAM manufacturers would need to start pumping out 8Gbit RAM ICs. Right now, the largest RAM ICs are only 4Gbit, with Samsung also selling 8Gbit stacked-die DRAMs. I doubt we will see 8Gbit DDR3 dies; that will most likely happen with DDR4.

The reason DRAM manufacturers cannot simply go with 32-chip DIMMs (or 16 stacked chips) is that this would double the number of loads on the address/control busses from 32 to 64 DRAM pins. The CPU's address/cmd output drivers aren't designed for that, and driving bigger capacitive IO loads on the bus would likely come at the expense of (much) lower DRAM clock rates.
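To make the arithmetic above concrete, here is a quick back-of-the-envelope sketch (the 16-chip standard layout is the usual non-ECC DIMM arrangement; the figures are illustrative, not from a datasheet):

```python
# Rough DIMM capacity arithmetic: chips per module x density per chip.

def dimm_capacity_gb(chips: int, density_gbit: int) -> int:
    """Total module capacity in GB from chip count and per-chip density in Gbit."""
    return chips * density_gbit // 8  # 8 Gbit = 1 GB

# A standard non-ECC unbuffered DIMM carries 16 DRAM packages (8 per side).
print(dimm_capacity_gb(16, 4))  # 8  -> today's biggest unbuffered stick (4Gbit ICs)
print(dimm_capacity_gb(16, 8))  # 16 -> needs 8Gbit ICs that don't exist yet in DDR3
print(dimm_capacity_gb(32, 4))  # 16 -> possible with 4Gbit ICs, but doubles bus loading
```

The third line is exactly the option the post rules out: capacity-wise it works, but 32 packages per module doubles the address/control bus load the CPU has to drive.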

This is why large DIMMs for servers are registered and buffered. The buffers distribute signals on the DIMM so the CPU only drives a single address/command load per DIMM (instead of 18-36 for ECC), and the registers give an extra cycle to move commands across the busses, accommodating longer distances between the CPU and DRAM without sacrificing (too much) clock rate, at the expense of extra latency.

If you think dual-channel DDR3 sucks due to the 8GB/DIMM and two-DIMMs-per-channel limit, signs indicate that Broadwell (Haswell's DDR4 successor) will only support a single DIMM per channel at 2400MT/s. So even though 8Gbit DDR4 DRAMs might be out by then, the limit will still be 32GB for dual-channel, due to LGA1150 (?) Broadwell only supporting two 16-chip DIMMs total.
 

ezakimak

Honorable
Nov 21, 2012
3
0
10,510
I don't see why Intel continues to disable ECC support on "consumer grade" CPU models. I recall back in the day, with my dual Celeron and P3 CPUs, using ECC successfully and being glad I could, back when a mere 256MB was considered a lot of RAM. Now, with even 4GB modules, I'd like the option of using ECC memory, even for a "mere" desktop. (My current desktop has 24GB.)

The feature is on the die; it makes no sense to me why they disable it. Is that the only distinguishing feature between "consumer" and server Xeon packaging, or something?

 

COLGeek

Cybernaut
Moderator
I have an Asus P8B WS with a Xeon CPU in my workstation rig. The mobo will support both non-ECC and ECC memory with the appropriate CPU. Just passing that along.

Have a happy Thanksgiving!!!
 

ezakimak

Honorable
Nov 21, 2012
3
0
10,510


Sure, but how much did that Xeon cost vs. a standard i9x0?
You already have to pay a premium for the ECC RAM; why also pay a premium for the CPU just to use the ECC feature?
 

ezakimak

Honorable
Nov 21, 2012
3
0
10,510
OC'ing is where the price difference matters. My i7-930 cost $294, but I can *easily* clock it to 3.35GHz, faster than the i7-975 costing $1000. The $700 price difference pays for the rest of the components.
AFAICT, the *only* differences between the i7 and Xeon counterparts are:
- Xeon enables ECC
- Xeon enables multi-CPU operation for >1 socket motherboards
- Xeon disables changing bclk (locking out OC'ing)

For server and workstation applications I completely understand locking the frequency down (reliability) and enabling multi-CPU operation.
What I don't understand is why they disable ECC functionality on the non-Xeon CPUs, especially now that 4+GB of RAM is commonplace. Sure, few mainstream desktops may use ECC RAM, but preventing the possibility seems unnecessary, especially when the sticks cost the same for the same rated speed.
 

We agree. I won't use 'RAM drives' with my SQL, and certainly never for anything but testing; we've used RAM cache for TEMP and that's it. If anyone in my office suggested that I OC my servers, I'd keep giving them a strange look while handing them a box to pack up their stuff and pointing at the door. Also, the world is changing, and the Broadwell CPUs will supposedly be soldered to the MOBO...nuts; see - http://www.tweaktown.com/news/26947/intel_s_haswell_could_be_the_end_of_the_road_for_upgrading_your_cpu/index.html

The prices for the Xeons aren't justified, and all of these CPUs come from the same litho, so I guess Intel is simply greedy. However, if you don't like Intel's policies, then go the AMD route; all of their CPUs support ECC.

So which is it, a Toy or a Tool that you need? If it's a Toy, then SB-E, and know it'll be a PITA; otherwise pucker up and get the E5s.
 

InvalidError

Titan
Moderator

Not really that 'nuts' IMO. Intel knows their current CPUs deliver more performance than most people can shake applications at, and most users will never need to upgrade their CPU during their PC's useful life. With all IO controllers and voltage regulators integrated into the CPU, there isn't much left for motherboard manufacturers to do to distinguish themselves, and they need to reduce costs to keep PCs cost-competitive against mobile devices. Soldering the CPU, with all IO/VRM integrated, shaves $30-40 off the manufacturing/packaging costs, saves several square inches of board space, and removes the risk of user error bending or breaking socket pins. Removing one layer of solder joints and the mechanical interface should also improve signal integrity, power regulation, heat transfer and reliability.

The move may not be welcomed by enthusiasts, but enthusiasts represent only ~5% of the PC market. For people who simply want a PC to get their everyday stuff done, there is no real negative effect. An increasingly large proportion of people choose laptops over desktops anyway, which makes upgrading a non-option in most cases due to lack of BIOS support for anything faster, regardless of whether or not the CPU is soldered.

The desktop PC is becoming little more than a commodity, and as with nearly anything else that drops to that level, cost-cutting ends up being a much greater concern than upgradability.

For the "mainstream" segment, going with soldered CPUs has many more advantages than inconveniences IMO.

Real enthusiasts, and people who genuinely need more performance than Intel's soldered mainstream Broadwell chips will offer, will likely still have the option of K/X-series chips or taking the jump to Xeon territory.
 
Chris and I have been discussing this topic already, and for mass-production PCs it's not an issue, other than the 'savings' being debatable once you factor in RMA deconstruction. However, for the enthusiast, and I am one of them, it's going to hamper choices. Intel also doesn't need 10 varieties of essentially the same CPU, but they do need to keep folks like myself happy, which they're not, and the end result is reduced sales; no doubt a global depression/recession is the root issue. My biggest Intel complaint currently is that Intel is deliberately stretching CPU cycle times after AMD's epic CPU failures of late.

Next, think about this one: let's say Intel decides the 'unit' is no longer the 'CPU' and instead it's the 'MOBO + CPU', and later the 'MOBO + CPU + GPU(s)', and companies like ASUS, Gigabyte, MSI, etc. are left producing GPUs and peripherals instead.

As far as computing power goes, that all depends on what your needs are, but my need is and will always be my time. Plenty of folks are producing home movies, and it ain't fast even on my 6-core SB-E; Quick Sync is fine for your phone, not your 'TV.'

Frog and the frying pan: folks don't really notice day-to-day changes, like a frog in a slowly heated pan. I recall text-only and very basic graphical UIs, years later Nintendo-level graphics, years later DirectX-level graphics. What I want is surreal-to-realistic real-time graphics, and neither CPUs nor GPUs are anywhere close to that immersive level, unless 0.25 FPS (or less) is your thing.

MOBOs are all the 'same'? Well, that's hardly the case, and it's not worth my time proving it to you. Simple example: this OC is impossible on anything other than an outstanding MOBO, yeah, the silicon lottery is in play -> http://valid.canardpc.com/show_oc.php?id=2320509

The 'Desktop PC' has always been a commodity, just like phones and tablets or whatever is coming down the pike positioned for mass sales. Chris had some more rumor-mill info suggesting there will be non-BGA processors (Broadwell), but it's far from confirmed, and frankly, in my experience, I'll know for sure once I have one in my hands, same as everyone.
 

InvalidError

Titan
Moderator

I doubt there is much "RMA deconstruction" since repairing boards with anything but the most trivial faults (such as a bricked BIOS) quickly ends up costing more than chucking the whole board and replacing it with a new one. Removing a BGA chip from a PCB, cleaning the PCB so a new BGA can be soldered, and cleaning the CPU so new balls can be attached is far more expensive than writing both parts off: a Broadwell motherboard will likely have a write-off value under $40, and Intel's write-off cost for mass-produced mainstream Broadwell chips will likely be closer to $70, neither worth a delicate $200 salvage procedure.
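The economics in that paragraph are simple enough to sketch out (the dollar figures are the post's rough estimates, not real Intel/OEM numbers):

```python
# Illustrative RMA economics: write off a soldered CPU+board vs. salvage it.
# All dollar figures are rough assumptions taken from the post above.

board_writeoff = 40   # assumed write-off value of a mainstream Broadwell board
cpu_writeoff = 70     # assumed Intel write-off cost for a mainstream chip
salvage_cost = 200    # assumed cost of BGA rework (desolder, clean, re-ball, re-solder)

writeoff_total = board_writeoff + cpu_writeoff
print(writeoff_total)                  # 110: total cost of scrapping both parts
print(salvage_cost > writeoff_total)   # True: salvage costs more than writing off
```

Under those assumptions, scrapping both parts is almost half the cost of a salvage attempt, which is the post's point about why nobody would bother reworking a soldered board.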

Soldering the CPU to the motherboard also avoids problems with people putting a good CPU on a defective board, killing the CPU, RMAing the board and killing the replacement with the now-bad CPU, then RMAing the CPU and killing the replacement on the bad board, rinse and repeat. This removes a lot of second-guessing from the RMA process, and likely many of the most expensive and frequent causes of both RMAs and RMA disputes.


With Broadwell, the CPU will effectively be just about everything, with the motherboard being little more than a passive backplane and power distribution between the CPU and the IO slots/connectors.

Office desktops, terminals and non-gaming desktops/laptops represent more than 90% of the non-server/workstation PC market and in most instances require nowhere near the performance of Intel's HD2500. Many people are already satisfied with the HD2500's performance for low-end gaming. Broadwell will be two generations beyond HD4000, and if Intel triples performance twice, a ~10X-faster HD4000 should have most mid-range gaming needs covered.

I suspect AMD sees the collapse of low-end GPU sales coming as well, judging from the leak of HD88xx launch prices being $50-70 lower than the HD78xx's.


What I do notice among my friends and family is an increasing number of them sticking with their 3-5-year-old PCs and laptops because the old machines are still capable of doing everything they need reasonably quickly. Even on THG you see lots of people still (mostly) happy with their 4-7-year-old PCs who are simply looking for the final push to convince themselves to go for an upgrade they might not really need yet - at least not enough to feel comfortable calling the shot on their own.

While things do change, most people are not running into brick-wall limits anywhere near as quickly as they did 20 years ago, when every new application domain (MIDI to MP3, MJPEG/MPEG1 to MPEG2, xvid/divx to x264, at increasingly high resolutions) required tenfold advances in processing power just to become possible, never mind practical.

Between my old P4/Northwood and my new i5-3470, we are only talking about a ~10X raw peak performance increase over 10 years. Not much to write home about compared to the ~1000X increase between 1990 and 2000. It feels more like the PC industry is frozen in place than evolving at a breakneck pace. Most people have simply run out of uses for extra MIPS.
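The gap between those two decades is even starker when annualized; a quick sketch using the post's round numbers:

```python
# Compound annual growth rate implied by an overall speedup over a span of years.

def annual_growth(speedup: float, years: float) -> float:
    """Annualized growth as a fraction, e.g. 0.26 means +26% per year."""
    return speedup ** (1.0 / years) - 1.0

# ~10X from a Northwood P4 to an i5-3470 over ~10 years:
print(f"{annual_growth(10, 10):.0%}")    # 26% per year
# ~1000X between 1990 and 2000:
print(f"{annual_growth(1000, 10):.0%}")  # 100% per year, i.e. a doubling every year
```

So the 2002-2012 decade averaged roughly a quarter more performance per year, versus a doubling every year through the 1990s.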


For video transcoding, QuickSync on HD4000 is twice as fast as the next best thing and several times faster than any software or GPU-accelerated transcoder. For actual editing, mileage varies.

Performance-wise, as far as Intel's mainstream plans for Broadwell go, they are MAINSTREAM. If your computing needs are not within what Intel targets as mainstream, you are free to look at non-mainstream Intel CPUs, or go with AMD if AMD survives that long and has viable products by then. Since you are already doing so by using an SB-E, you will probably be able to do so again with Haswell/Broadwell-E, unless Intel decides to go BGA for 1P Xeons as well.


Desktop PCs used to be a luxury. The first PC I ever had access to (an 8088) cost over $5000. Most people had little to no use for PCs until online services like Compuserve, AoL and friends started making lots of noise around 1992, and even then, PCs were still a major expense at ~$2000 for something reasonably usable. That would still qualify as a luxury rather than a commodity PC for most people today.

A commodity is something that is the same across the market and seemingly interchangeable for most people. A $2000 gaming PC is not interchangeable with a mainstream $500 PC for the ~5% PC-gaming enthusiast crowd. But for 80-90% of people, even a $500 laptop would be perfectly adequate and, for the most part, indistinguishable from a $2000 model.

When I say that PCs have become a commodity, I mean that the amount of processing power available even from the lowest-end Trinity or Ivy Bridge chips is still enough for most people not to really care about anything more for the foreseeable future.


With Haswell and Broadwell having their VRM integrated into the CPU package, motherboard manufacturers will have even less influence over overclocking outcomes than ever before. Also, before worrying about mainstream Broadwell's overclocking results, you need to consider the possibility that mainstream/soldered Broadwell chips may have no overclocking support whatsoever.
 

Listen, I really don't have the energy or desire to debate you on this one, so believe whatever you want; I really don't care.

On the low end (~<$80), disposal is the RMA process, and your version of low/mid/high is different from mine. If anything, I 'get' money and business; it's all about ROI and agreements for recovery costs, not to mention the end result is refurbished units.

The only part of your reply that caught my eye is transcoding. Never, ever transcode your family movies using Quick Sync unless you're fine with a poorer-quality render and artifacts galore; use CPU renders instead. Quick Sync = phone/tablet render quality.

Most importantly, I've learned, as should you, that second-guessing what Intel will or won't do, when quite frankly they (Intel) haven't fully decided, is a complete exercise in futility. I recall all of the crappy and hateful comments I received when I pre-announced both the temperature & TDP issues with IB; as it turned out, I was correct.

As of now all of this is pure rumor mill and speculation.
 

InvalidError

Titan
Moderator

Sandy Bridge's QuickSync did get somewhat of a bad reputation from quirks and half-baked drivers. Negative comments about Ivy Bridge's QuickSync are much sparser and not as critical.


Well, Intel integrating the VRM into the CPU package with Haswell is very much confirmed, and that alone robs motherboard manufacturers of one of their traditional key distinguishing features. There isn't much to be seen between the x16 slot and the top of the board once you ignore the VRM caps, coils, MOSFETs, PWM controllers and associated heatsinks, if applicable. The bottom half is almost exactly the same across all manufacturers and models that simply expose the CPU/IO-hub features, give or take a few connector/header rotations and translations.

Where Intel is going seems crystal clear to me: turning mainstream PCs into an SoC business to cut costs and enable new form factors. Partly to fulfill their 10+-year-old corporate dream of putting PCs into nearly everything, and partly because if Intel does not go there first, something with an ARM-based chip on it most likely will.

While ARM may not meet your personal computing requirements, and Android/RT software may not be up to par with its PC counterparts YET, it is only a matter of time before mobile platforms and their software become good enough for most people to give up on traditional PCs, albeit with the addition of a USB/Bluetooth/dock keyboard+mouse, an external HDMI/DP/Thunderbolt display, and possibly external storage. The biggest crippling factor for developers right now is the very limited 0.5-1GB of RAM on most devices in the wild, a large chunk of which is used by the OS. Once 2GB becomes the norm, things should get a lot more entertaining.
 

InvalidError

Titan
Moderator

Yes, but those are BUFFERED DIMMs, and to get to a 1TB memory configuration you need to use FBDIMM riser cards and stuff those with a ton of DIMMs, buffered or not, depending on the FBDIMM controller chip used on the riser. So you have an x16 FBDIMM interface between the riser card and the CPU, and the FBDIMM chip breaks that into 2-4 DDR3 channels with 2-4 DIMMs each, which is how you get 1TB of RAM into a server with a pair of FBDIMM slots.
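The fan-out multiplies up quickly. As an illustrative sketch using the upper end of the 2-4 ranges in the post (not any specific server's spec):

```python
# How a pair of FBDIMM slots can fan out to 1TB of RAM (illustrative topology).
# Channel and DIMM counts use the top of the 2-4 ranges described above.

fbdimm_slots = 2        # FBDIMM interfaces between the CPU/board and riser cards
channels_per_riser = 4  # DDR3 channels the riser's controller chip breaks out
dimms_per_channel = 4   # DIMMs hanging off each channel behind the riser
gb_per_dimm = 32        # 32GB buffered server DIMMs

total_gb = fbdimm_slots * channels_per_riser * dimms_per_channel * gb_per_dimm
print(total_gb)  # 1024 GB, i.e. 1TB
```

With the lower end of the ranges (2 channels, 2 DIMMs each) the same pair of slots would top out at 256GB, which is why the controller chip on the riser matters.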
 

InvalidError

Titan
Moderator

To get 16GB non-buffered DIMMs out, they would need to make 8Gbit DDR3 dies to keep the number of bus loads per DIMM slot at the expected maximum of 16-18, if you want to be able to use both slots on each channel. Also, the way the DDR3 bus is arranged in PCs, it is optimized for 8-bit-wide DRAM packages (one strobe signal per 8-bit data group), so using 4-bit dies may cause problems beyond the doubled control-line load, which seems to be what I'M thinks they have solved. (When the CPU reads data from the DRAMs, the DRAMs drive the strobe signal to tell the CPU when to latch data in, but with two 4-bit dies sharing the same strobe signal there may be momentary driver conflicts on the strobe line if the dies are not perfectly matched for timing.)

Right now, the only production 8Gbit chips I have read about are DDR4, meaning that those 32GB server DIMMs from over a year ago must be using die-stacking in their DDR2/DDR3 packages. Those 36-chip modules must therefore carry 72-144 total DRAM dies, 2-4X as many as what desktop CPUs are designed for.
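The 72-144 die count works out as follows. (Sketch of the arithmetic above; the 32-data-plus-4-ECC package split is my assumption about the standard registered ECC layout, not stated in the post.)

```python
# Die-stacking arithmetic for a 32GB, 36-package registered ECC DIMM.

module_gb = 32
data_chips = 32    # assumed: 36 packages total = 32 for data + 4 for ECC
total_chips = 36

# Capacity each DRAM package must hold, in Gbit:
gbit_per_package = module_gb * 8 // data_chips
print(gbit_per_package)  # 8 Gbit per package

# With only 4Gbit or 2Gbit DDR3 dies in production, each package stacks dies:
for die_gbit in (4, 2):
    dies_per_package = gbit_per_package // die_gbit
    total_dies = total_chips * dies_per_package
    print(die_gbit, dies_per_package, total_dies)
    # 4Gbit dies -> 2 per package ->  72 dies on the module
    # 2Gbit dies -> 4 per package -> 144 dies on the module
```

That 72-144 range is 2-4X the 36 loads a conventional single-die module would present, matching the post's figure.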
 

brendanh

Distinguished
Mar 19, 2007
2
0
18,510
jaquith's post above (7 Nov 2012) says that although it may make commercial sense to produce 16GB unbuffered modules by now (16GB buffered sticks are available at a consumer-friendly price point of ~$75), from a technical perspective, unbuffered (non-ECC) DIMMs larger than 8GB are unlikely to be reliable. The new AMD motherboard chipset A88X has stated support for unbuffered 16GB DIMMs, but there are as yet none on the market. To gain a competitive advantage over Intel, AMD should have provided support for ECC memory, allowing 128GB and beyond.
 

InvalidError

Titan
Moderator

The chipset does not support any RAM whatsoever since the memory controller is integrated in the CPU. Memory type support is dictated entirely by which CPU you have.
 

MrMusAddict

Honorable
Jun 13, 2013
13
0
10,510
I know I'm a bit late to the party, but I am an employee of Crucial, a manufacturer of RAM and SSDs. I can safely say that there are currently no plans for us to manufacture single 16GB unbuffered non-ECC DIMMs. We are focusing on DDR4 (which will be sold in capacities up to 16GB apiece).

If 16GB DDR3 modules do go into production at other manufacturers, I expect them to be ludicrously expensive, sort of like 4GB DDR2 modules: they exist, but in small quantities and for way too much money comparatively.
 

Tradesman1

Legenda in Aeternum
Most manufacturers have dropped the idea, in part due to a lack of mobos and CPUs that could support them, and we won't be seeing new ones coming out with DDR4 on the upswing. The reason 4GB DDR2 is expensive is that nobody makes many, same with all DDR2; production lines are dominated by DDR3 and are gearing up for DDR4. Once DDR4 becomes mainstream, expect the price of DDR3 to shoot up, as DDR2 did after DDR3 became mainstream. Simple supply and demand: not much demand, so fewer are made, but the cost is higher. Wait till we see what DDR2 jumps to when it becomes the third thought when production is discussed ;)
 