Memory??

I am buying a GA-MA770T-UD3P motherboard. It says I can overclock to 1666, but it says to buy 1333/1066. What memory am I supposed to buy? I am going with the AMD Phenom II X3 710: 2.6GHz, 3 x 512KB L2 cache, 6MB L3 cache, Socket AM3, 95W, triple-core. Please help.

Thanks
  1. If I'm right, here is the memory I'm going with: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Dual Channel Kit Desktop Memory, Model F3-10666CL8D-4GBHK
  2. One more question: I have an Apevia 500-watt Java power supply - will this be enough to run this?
  3. AMD DDR2 boards only support two DIMMs at 1066 or higher; I don't know what the similar 'speed limit' is for DDR3, but I'm betting there is one... They don't support EPP/XMP either, so any high-speed memory will have to be set up by hand to take advantage of it... There is no real-world advantage to faster RAM - only to lower-latency RAM; the only reason you need faster RAM on systems with MCHs (northbridges with memory controller hubs) is that, if you want to run any system bus speed past 400 (1600+ FSB), the lowest memory multiplier is 2 - so DDR2-800 will only let you take the system clock to a twitch above 400 - if you want to run 412-415 or higher, you need 1066...
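The multiplier arithmetic above can be sketched in a few lines - purely illustrative, assuming the lowest memory multiplier of 2 stated in the answer (real boards expose a handful of dividers in the BIOS, and `max_base_clock` is just a made-up helper name):

```python
# Base-clock ceiling implied by the RAM's rated data rate (MT/s).
# Assumes the lowest available memory multiplier is 2, per the
# answer above; memory data rate = base clock * multiplier.
def max_base_clock(ram_rating_mts, lowest_mult=2):
    # rated speed caps the base clock at rating / multiplier
    return ram_rating_mts / lowest_mult

print(max_base_clock(800))   # 400.0 -> DDR2-800 caps the system clock near 400
print(max_base_clock(1066))  # 533.0 -> headroom for 412-415+ overclocks
```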
  4. So is the RAM I posted going to work for me, or do I need a different set? Not very computer literate, heh :)
  5. Best answer
    Gimme a day or so here - I have a little tool that takes RAM at various speeds and latencies, 'normalizes' them to one speed for comparison, and then 'weights' them for 'bang for your buck' - I currently have it full of G.Skill tri-channel for X58s; I'll plug in some of the G.Skill at NewEgg, and see what we get...
  6. Which is better for CAS latency, 8 or 9?
  7. :lol: 7!

    The latencies
    Quote:
    Noun
    latency (plural latencies)
    (electronics) A delay, a period between the initiation of something and the occurrence.

    are 'waiting periods' between the physical operations involved in accessing the RAM. Accessing DRAM is not an arbitrarily fast process - at least not compared with the speed at which the processor runs and accesses its registers and internal cache. The RAM on a DIMM's chips is arranged as an array of banks of rows by columns. It takes physical time, measured in nanoseconds, to select a bank, then a row, then a column, before the contents can actually be read by the memory controller. When these periods are 'set' in the BIOS, they are expressed as counts of memory clock cycles, e.g., CAS-7 (CAS: column address strobe) or tRCD-7 (RAS-to-CAS delay - the wait between selecting the row and selecting the column). The lower these numbers (for a given clock speed), the faster the RAM can respond to queries from the CPU for its contents - but therein lies the 'rub'! There is now a huge range of RAM clock speeds - a ratio of at least two to one, as you can buy 'lowly' 1066, and, if you spend enough, 2133 - so it becomes harder and harder to compare these 'waiting periods': they are expressed in clock counts whose physical (nanosecond) value varies by up to that same two-to-one ratio!
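Since cycle counts only compare directly at the same clock, converting CAS to nanoseconds puts kits on a common footing - a minimal sketch (the specific kit numbers below are illustrative examples, not recommendations):

```python
# Convert a CAS cycle count to an absolute latency in nanoseconds.
# DDR transfers twice per clock, so a DDR3-1333 kit runs a ~666 MHz
# memory clock; latency_ns = cycles * cycle_time.
def cas_ns(ddr_rating_mts, cas_cycles):
    clock_mhz = ddr_rating_mts / 2           # actual memory clock
    return cas_cycles * 1000.0 / clock_mhz   # ns per cycle * cycles

print(round(cas_ns(1333, 8), 2))   # DDR3-1333 CL8 -> ~12.0 ns
print(round(cas_ns(1600, 9), 2))   # DDR3-1600 CL9 -> 11.25 ns
```

So a 'higher' CAS number on faster RAM can still mean a shorter real-world wait - which is why the cycle count alone doesn't answer the "8 or 9?" question.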

    CPUs with on-die memory controllers are much more responsive to low-latency RAM than to faster-clocked RAM; I recently wrote this in explanation:
    Quote:
    "There is a place where high speed, versus low latency, will be an actual advantage (as opposed to high 'synthetic' benchmark tests that really don't relate at all to the real world - always reminds me of a manufacturer of high-performance heads for cars who reminds us: "Remember: We don’t race flow benches!") - any operations that require large, sustained, reads from and writes to RAM - like, as I mentioned, video transcoding... I always consider my 'pass/fail' system stress test to be: watch/pause one HDTV stream off a networked ATSC tuner, while recording a second stream off a PCI NTSC tuner, while transcoding and 'de-commercialing' a third stream to an NAS media server... But, for the vast majority of people, for the vast majority of use, this is not the case. What's going on behind the scenes: the task scheduler is scurrying around, busier than a centipede learning to tap-dance, counting 'ticks': ...tick... yo - over there, you gotta finish up, your tick is over, push your environment, that's a good fella; oops - cache snoops says we've got an incoherency - grab me a meg for him from over there; ...tick... you - get me the address of the block being used by {F92BFB9B-59E9-4B65-8AA3-D004C26BA193}, will 'ya; yeah - UAC says he has permission - I dunno - we'll just have to trust him; damnit - everybody listen up, we've got a pending interrupt request, everyone drop what you're doing, and you - over there - query interrupt handler for a vector - this is important!!! ...tick.... This is why (aside from the obvious matter of access architecture) that swap files are optimized in 4k 'chunks'..."