
More than 64GB with Intel Core i7-3930K??

June 26, 2012 2:59:37 PM

Over here it says that the Intel Core i7-3930K supports a maximum of 64GB of RAM, dependent on memory type:
http://ark.intel.com/products/63697

By "dependent on memory type" they probably mean that it's less than 64GB if you have the wrong type, but never more than 64GB regardless of the memory type... otherwise they'd have said something like "max = 128GB (dependent on memory type)".

Okay then,
why do so many of the MSI mobos with the X79 chipset, built for Core i7 CPUs, say that their max supported memory is 128GB??
http://eu.msi.com/product/mb/X79A-GD45--8D-.html#/?div=...

Are there any Core i7 CPUs that can support more than 64GB? Maybe if it was registered RAM?

I'd find it hard to believe that MSI would build a mobo that's "overspecced" relative to every CPU on the market.

I'd really like to be able to put more than 64GB of RAM in my Core i7-3930K, but I also don't want to waste money on anything that won't work.


June 26, 2012 3:31:56 PM

Do you REALLY need 64GB of RAM?
June 26, 2012 3:36:42 PM

Odd... I would assume all Intel 64-bit CPUs would have the same 64-bit implementation...

FYI, no one's implemented all 64 bits of memory addressing yet (no need for THAT much), but I'd still be shocked if this varied by model.
June 26, 2012 3:43:07 PM

Hi gamerk316,
Thanks for your response.
Could you please elaborate a bit more?

What do you mean by "same 64-bit implementation", and what do you mean by "no one's implemented all 64 bits of memory addressing yet (no need for that much)"?

Are you saying that there's no need for more than 64GB of RAM? Because I use lots of supercomputers with more than 70GB of RAM and lots of supercomputers that have more than 256GB of shared memory.

June 26, 2012 5:17:03 PM

Let's see if I can clear a couple of things up for you. On the listing for the Intel® Core™ i7-3960X Extreme Edition you will see the supported memory for the processor's memory controller. Now when you pull up the MSI board and it lists that the board supports 128GB, you are seeing what the board is capable of doing *if* the processor supported that much.

If you are looking for more than 64GB of RAM support, you might want to use one of the Intel® Xeon® E5s: http://ark.intel.com/compare/64587,63696 Here you can see the difference between the Intel Core i7-3960X and the Intel Xeon E5-2643.

June 26, 2012 6:44:26 PM

healthyman said:
Hi gamerk316,
Thanks for your response.
Could you please elaborate a bit more?

What do you mean by "same 64-bit implementation", and what do you mean by "no one's implemented all 64 bits of memory addressing yet (no need for that much)"?


Neither AMD nor Intel has bothered to implement all 64 bits required for full 64-bit addressing, due to the hardware cost. AMD's implementation of x86-64 currently implements 52 bits of physical and 48 bits of virtual memory addressing, for a theoretical memory limit (in HW) of 4PB, rather than the maximum 16EB available if the full 64-bit spec were implemented. I know Intel only supports a theoretical 256TB of RAM in HW right now (I forget how many bits that works out to, and I'm too lazy to do the math...).
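As an aside, the address-space figures in that post are easy to sanity-check with a quick calculation (pure arithmetic, not tied to any particular CPU):

```python
# Addressable memory for a given number of address bits is 2**bits bytes.
def addressable_bytes(bits: int) -> int:
    return 2 ** bits

TiB = 2 ** 40
PiB = 2 ** 50
EiB = 2 ** 60

print(addressable_bytes(48) // TiB)  # 48-bit addressing -> 256 TiB
print(addressable_bytes(52) // PiB)  # 52-bit physical   -> 4 PiB
print(addressable_bytes(64) // EiB)  # full 64 bits      -> 16 EiB
```

So the 256TB figure quoted for Intel works out to 48 address bits, and the 4PB and 16EB figures line up with 52 and 64 bits respectively.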
June 26, 2012 6:52:07 PM

I guess they figure that if you need that amount of RAM, you're also likely to be better off with a Xeon system.
June 26, 2012 7:34:09 PM

I agree with IntelEnthusiast.

If you're going for that much RAM, you might as well buy a "tank" of a CPU and go for the Xeon, especially for the type of work you are doing (HPC).

Xeons are built to be very stable and have more tests done on them than the consumer processors.
June 26, 2012 7:36:37 PM

gamerk316 said:
Neither AMD nor Intel has bothered to implement all 64 bits required for full 64-bit addressing, due to the hardware cost.

Adding bits to address computations and IMC is trivial. The main limiting factor is the number of devices that can be practically attached to the memory busses because there is a practical maximum to how many loads (chips) can be connected to each address and control line.

The drivers on most of today's CPUs can drive a maximum of 32 chips/loads, and at 16 chips per DIMM, that's two DIMMs per channel. The biggest DDR3 chips are 4Gbit, which limits maximum DIMM size to 8GB. 4 channels × 2 DIMMs per channel × 8GB per DIMM = 64GB max for non-buffered, non-registered LGA2011.

To go beyond that, you need to go Xeon with buffered/registered DIMMs. On registered/buffered DIMMs, the address/control signals pass through a register/buffer that distributes the signal across the DIMM, so the CPU's drivers only 'see' one load per DIMM (the buffer/register chip) instead of 16, which allows each channel to have 4-8 slots instead of the 1-2 on mainstream boards.
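The arithmetic in that post can be sketched out explicitly; the constants below are just the figures quoted above (32-load drivers, 16 chips per DIMM, 4 Gbit parts, quad-channel LGA2011), not values read from any datasheet:

```python
# Figures quoted above: drivers handle 32 loads, 16 chips per DIMM,
# biggest DDR3 chips are 4 Gbit, LGA2011 has 4 memory channels.
MAX_LOADS = 32
CHIPS_PER_DIMM = 16
CHIP_DENSITY_GBIT = 4
CHANNELS = 4

dimms_per_channel = MAX_LOADS // CHIPS_PER_DIMM    # 32 / 16 = 2
dimm_gb = CHIPS_PER_DIMM * CHIP_DENSITY_GBIT // 8  # 16 * 4 Gbit / 8 = 8 GB
total_gb = CHANNELS * dimms_per_channel * dimm_gb  # 4 * 2 * 8 = 64 GB
print(total_gb)  # 64
```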
June 26, 2012 8:58:16 PM

gamerk316:
That's interesting... but I don't see how that matters, as I am not looking for more than 128GB of RAM...

To everyone: I should have mentioned at the beginning that I will NOT be able to get a Xeon. I don't have enough money to get two CPUs, and it's not possible to put a single Xeon into a motherboard that will support 3 GPUs.

IntelEnthusiast: Could you please confirm whether or not there exists a Core i7 that supports more than 64GB of RAM, now that you know I can't go for a Xeon?

IntelEnthusiast: Are any Core i7s planned to support more than 64GB in the future?
I asked MSI why they made a motherboard designed exclusively for the Intel X79 chipset, for Core i7 CPUs, have a maximum capacity of 128GB RAM, and they said it was so that it would be "future proof" for future Intel CPUs... based on what InvalidError said in the last post, it seems it's just a matter of designing the right driver... meaning that even the Core i7 3930K might one day be able to support the RAM.

InvalidError: So theoretically, once a driver for the Core i7 3930K is made to allow more than 64GB of RAM, I'll be able to put more RAM in if the motherboard supports it?



June 26, 2012 9:10:03 PM

healthyman said:
InvalidError: So theoretically, once a driver for the Core i7 3930K is made to allow more than 64GB of RAM, I'll be able to put more RAM in if the motherboard supports it?

The kind of "drivers" I was referring to is not software. I was talking about the IO blocks (IOBs) built into the silicon die that drive the IO lines up/down to signal 1s and 0s on the memory bus lines. Those things have a limited amount of current drive/sink capacity that determines how quickly lines can go up/down. Each chip attached to each line contributes extra parasitic capacitance that increases the load those IOBs need to drive, and when the driver's source/sink capacity is exceeded, signals get garbled, memory gets corrupted and the computer crashes.

You will probably have to wait for Haswell's successor with DDR4 support and 16Gbit DRAM chips for 64GB memory support on mainstream boards/CPUs and 128GB on Extreme (quad-channel) CPUs.
June 26, 2012 9:19:16 PM

One more thought on this, though I don't have much knowledge of it:

Basically, if you were to get 10 low-end computers (say, with 32GB of RAM each) and hook them all up together with Linux (via Gig switches/NICs), you would have a very powerful rendering farm.

Something like how SETI@Home works (not exactly the same though).

I know for Jurassic Park (yeah I know it's old) they did this to render some dino scenes.

It's probably out of the question, but I just thought I'd throw it in there.
June 26, 2012 10:38:34 PM

Intel isn't known for being accurate with this spec. Back when the 8GB DDR3 modules came out, people put in 32 or 64GB of RAM while Intel's website said the maximum was 16 or 32GB (depending on the processor). Sandy Bridge parts were all showing 16GB at that time, which makes a kind of sense since the largest sticks were 4GB (4×4=16).

Here is one such thread, and surprisingly Intel's website now states 64GB instead of the previous 32:
http://www.tomshardware.com/forum/322755-28-memory-3960...

We would need to know how many address lines come out of the processor to determine the true maximum memory. You can probably find that in a white paper somewhere.
June 27, 2012 5:35:12 AM

popatim said:
Intel isn't known for being accurate with this spec. Back when the 8GB DDR3 modules came out, people put in 32 or 64GB of RAM while Intel's website said the maximum was 16 or 32GB (depending on the processor). Sandy Bridge parts were all showing 16GB at that time, which makes a kind of sense since the largest sticks were 4GB (4×4=16).

Here is one such thread, and surprisingly Intel's website now states 64GB instead of the previous 32:
http://www.tomshardware.com/forum/322755-28-memory-3960...

We would need to know how many address lines come out of the processor to determine the true maximum memory. You can probably find that in a white paper somewhere.


DDR3 is supposed to support up to 16 GB modules using regular unbuffered desktop memory: two 8-chip ranks of 8 Gbit ICs. The largest ICs currently made are only 4 Gbit, which makes for up to 8 GB desktop modules. Eventually the OP's board will support 128 GB, once 8 Gbit-equipped 16 GB modules are available. The likely reason Intel doesn't say "this board supports 128 GB using 16 GB modules" right now is that somebody would go out and buy the currently available 16 GB modules, which are registered memory instead of unbuffered and thus will not work with the i7 CPU.

The OP is correct: registered memory, which the i7 does not support, allows for much larger memory sizes. The largest registered DDR3 module I have seen is 64 GB and costs many thousands of dollars. You could get 512 GB of RAM in eight DIMM slots using those modules, and it would cost what a very nice new car goes for.
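The module-capacity arithmetic here follows the same ranks × chips × density pattern described in the post; this is just that calculation written out, not an official formula:

```python
# DIMM capacity = ranks * chips per rank * chip density (Gbit), / 8 for GB.
def dimm_capacity_gb(ranks: int, chips_per_rank: int, density_gbit: int) -> int:
    return ranks * chips_per_rank * density_gbit // 8

print(dimm_capacity_gb(2, 8, 4))  # current 4 Gbit ICs -> 8 GB unbuffered module
print(dimm_capacity_gb(2, 8, 8))  # future 8 Gbit ICs  -> 16 GB module
print(8 * 64)                     # eight 64 GB registered DIMMs -> 512 GB
```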
June 27, 2012 6:05:14 AM

This also depends on which version of Windows you have, right?
June 27, 2012 12:11:01 PM

...and there was a time when people ran Windows 98 with 64MB RAM!!! :D 
June 27, 2012 12:53:04 PM

amuffin said:
This also depends on which version of Windows you have, right?


Yes. 32-bit versions support a little over 3 GB, as PAE is not supported. For 64-bit, Windows 7 Home Basic supports 8 GB, Home Premium supports 16 GB, and the other versions of Windows 7 support up to 192 GB of RAM. If you want to run more than 192 GB of RAM, you need Windows Server Datacenter or Enterprise edition.
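Those per-edition limits can be restated as a simple lookup. The numbers are the ones quoted in this post (with the remaining 64-bit edition names filled in for illustration); check Microsoft's published memory-limits documentation before relying on them:

```python
# Physical-memory limits (GB) for 64-bit Windows 7 editions, as quoted above.
WIN7_X64_RAM_LIMIT_GB = {
    "Home Basic": 8,
    "Home Premium": 16,
    "Professional": 192,
    "Enterprise": 192,
    "Ultimate": 192,
}

ram_gb = 128  # e.g. the 128GB the OP's board advertises
ok = [ed for ed, limit in WIN7_X64_RAM_LIMIT_GB.items() if limit >= ram_gb]
print(sorted(ok))  # only the 192 GB editions could address 128GB
```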

$hawn said:
...and there was a time when people ran Windows 98 with 64MB RAM!!! :D 


There was a time when people ran Apple ][s with 64 K of RAM.
June 27, 2012 1:02:32 PM

$hawn said:
...and there was a time when people ran Windows 98 with 64MB RAM!!! :D 

Kind of sad, heh?

64MB used to be enough to have a reasonably usable system back then but today, many current desktop OSes will refuse to install/boot with any less than 512MB and will be only marginally usable due to excessive swapping.

Gotta love feature creep.

15 years ago, most programs were largely self-contained, so the OS wasn't loading tons of unnecessary stuff to get any particular job done. Today's OSes and UIs load hundreds of libraries/DLLs simply because every application integrates some bits of trivial functionality that it wants at least sort-of-seamlessly integrated. Back in the day, program associations were done by simply setting a program for a given extension, but today most applications opt for shell integration using DCOM objects, which uses hundreds or thousands of times more RAM and disk space. Multiply this by thousands of features and you get the bloat we have today.
June 27, 2012 2:47:24 PM

InvalidError said:
Kind of sad, heh?

15 years ago, most programs were largely self-contained, so the OS wasn't loading tons of unnecessary stuff to get any particular job done. Today's OSes and UIs load hundreds of libraries/DLLs simply because every application integrates some bits of trivial functionality that it wants at least sort-of-seamlessly integrated. Back in the day, program associations were done by simply setting a program for a given extension, but today most applications opt for shell integration using DCOM objects, which uses hundreds or thousands of times more RAM and disk space. Multiply this by thousands of features and you get the bloat we have today.


But it does make it so much easier for programmers, right? :) 
Anyway, since RAM is so cheap these days I guess it doesn't matter :)  But yeah, it's kinda stupidly nostalgic :) 
June 27, 2012 3:42:48 PM

$hawn said:
But it does make it so much easier for programmers, right? :) 

If by 'easier' you mean hair-raising when you try hunting bugs in your software only to discover that the actual bug is in a library that your application is indirectly using through multiple levels of separation and abstraction through a library of a library of a library of... that you did not expect to have anything to do with your own project, yes, 'easier' :p 

In today's software environments, it is very difficult for programmers to know everything that actually gets involved in running any given piece of software because so much stuff has ridiculous/unexpected/unnecessary/plain-stupid inter-dependencies.

$hawn said:
Anyway, since RAM is so cheap these days I guess it doesn't matter :)  But yeah, it's kinda stupidly nostalgic :) 

Just imagine how much faster, power-efficient and possibly more reliable things would be if we could go back to those more minimalist days while still applying the tighter quality/design controls required to manage today's hopelessly complex software.
June 27, 2012 3:56:00 PM

InvalidError said:
If by 'easier' you mean hair-raising when you try hunting bugs in your software only to discover that the actual bug is in a library that your application is indirectly using through multiple levels of separation and abstraction through a library of a library of a library of... that you did not expect to have anything to do with your own project, yes, 'easier' :p 


Whoa... that's a really long sentence there :) . I only write very small programs, and for me depending on libraries is a boon. AFAIK, libraries are usually thoroughly tested, but then you probably write much more complex programs, so you'd know better :) 

InvalidError said:
Just imagine how much faster, power-efficient and possibly more reliable things would be if we could go back to those more minimalist days while still applying the tighter quality/design controls required to manage today's hopelessly complex software.


True... btw you remind me of my RTOS classes :p  But if we tried to write everything ourselves without depending on external libraries, it would take ages to develop new software. Not to mention the horrors of having to test each and every software module...
June 27, 2012 4:28:31 PM

$hawn said:
I only write very small programs, and for me depending on libraries is a boon. AFAIK, libraries are usually thoroughly tested, but then you probably write much more complex programs, so you'd know better :) 

It isn't so much about the number of libraries being (explicitly) used as it is about the dependencies they may have between each other.

While your program may explicitly use only one library, that library and your software itself make calls to OS/UI libraries through APIs, and those OS/UI libraries may make calls to other libraries, which make calls to other libraries, which make calls to drivers, which make calls to kernel libraries, etc. So your single-library "simple" program may end up indirectly depending on 100 libraries. This is particularly true with Windows, where each COM library may register multiple resources that get automatically negotiated by object brokers when applications request a particular resource, and any application can register its own objects to intercept calls and insert its own middleware in the stack to do intermediate processing that the original requesting program may not be interested in or even aware of.
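As a toy illustration of that point, the dependency graph below is entirely made up, but it shows how a program that explicitly links a single library can transitively pull in many more:

```python
# Hypothetical dependency graph: each library lists what it links against.
deps = {
    "myapp":   ["libfoo"],          # the program explicitly links one library
    "libfoo":  ["libui", "libnet"],
    "libui":   ["libgfx", "libfont"],
    "libnet":  ["libssl"],
    "libssl":  ["libc"],
    "libgfx":  ["libc"],
    "libfont": ["libc"],
    "libc":    [],
}

def transitive_deps(name: str, graph: dict) -> set:
    """Walk the graph to collect every library reachable from `name`."""
    seen = set()
    stack = list(graph[name])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph[dep])
    return seen

print(len(transitive_deps("myapp", deps)))  # 1 explicit link, 7 actual dependencies
```

On a real system you can see the same effect with tools like `ldd` on a binary: the list of shared objects actually loaded is far longer than what the program links directly.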

With firmware-style software development, like lightweight RTOS environments, you can still use libraries, but you have much tighter control over how they depend on each other, since the environment only contains things you explicitly put in it for specific reasons, unlike general-purpose OSes where you have an unknown amount of "random" stuff added for reasons you have absolutely no knowledge of and features you might never use.

Back in the Win95 days, who would have imagined that deleting a directory with 10k files in it, which used to be nearly instantaneous (a few seconds) back then, would turn into a 10-minute operation today, 15 years later, with 100x faster CPUs and 5x faster HDDs? This is just one of the more blatant examples of progress gone wrong.
June 28, 2012 2:20:10 AM

Hey, thanks for the detailed explanation of the inner workings of modern software :)  Nice to gain some new knowledge every now and then :) 