Troubleshooting: DDR3 2400 C11 slower than DDR3 2133 C10 with A6-6400K?

bubbalex

Reputable
Feb 22, 2015
3
0
4,520
I am playing around with my kids' budget Minecraft/homework build running an A6-6400K at 4.4GHz on an Asus A88XM-A, and am stumped by a reduction in performance going from DDR3 2133 C10 to DDR3 2400 C11.

I ran 3DMark Cloud Gate three times with the iGPU at stock and then again at 900MHz with the DDR3 2133 sticks, then installed the DDR3 2400 kit and re-ran the benchmark under the same settings. I am consistently seeing a 100-200 point reduction in 3DMark score with the 2400 sticks. Also, the iGPU isn't stable at 900MHz with the 2400 sticks. I triple-checked in the BIOS and with CPU-Z that the sticks were running at the advertised frequency and timings. Here are the specs for the two RAM kits I'm comparing:

http://www.newegg.com/Product/Productcompare.aspx?Submit=ENE&N=100007611%2050008476%20600006050%20600006142&IsNodeId=1&bop=And&CompareItemList=147%7C20-231-653%5E20-231-653%2C20-231-665%5E20-231-665&percm=20-231-653%3A%24%24%24%24%24%24%24

I am running the 2400 RAM using the default DOCP settings. The mobo NB setting is on "auto".

Can anyone tell me why I'm seeing reduced performance with this supposed upgrade? Per the various frequency vs. CAS charts out on the internet, I was under the impression that 2400 CAS 11 was faster than 2133 CAS 10. Is this a problem with the NB or memory controller not being able to handle the 2400 frequency perhaps?
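For reference, here's the math I was going by. Effective first-word latency is CL divided by the I/O clock, which works out to CL × 2000 / data rate (in MT/s). A quick back-of-envelope check in Python:

```python
# True CAS latency in nanoseconds = CL * 2000 / data rate (MT/s).
# (Factor of 2000: DDR transfers twice per clock, and 1 MT/s = 1e6 transfers/s.)
def cas_latency_ns(cl, data_rate_mts):
    return cl * 2000 / data_rate_mts

print(f"DDR3-2133 C10: {cas_latency_ns(10, 2133):.2f} ns")  # 9.38 ns
print(f"DDR3-2400 C11: {cas_latency_ns(11, 2400):.2f} ns")  # 9.17 ns
```

So on paper the 2400 C11 kit should be both slightly lower latency and higher bandwidth than 2133 C10, which is why the lower score surprised me.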

Thanks in advance!
 
Tradesman1

Legenda in Aeternum
Under DOCP the timings may not be optimal; AMD mobos tend to loosen the timings for high data rate DRAM. Also, the MC may not be strong enough to drive the 2400 effectively. For that, you can try increasing the CPU/NB voltage by about +0.05V (the CPU/NB feeds the MC, i.e. the memory controller).
 

bubbalex

Reputable
Feb 22, 2015


Thanks for the tip. I'll try upping the voltage tonight and report back. Sounds like my guess was on the right track: I'm hitting the limits of the memory controller, which I believe is officially rated for 1866.
 

bubbalex

Reputable
Feb 22, 2015
Solved! Thanks for the help!

Here's what I did and the results in case it can help someone else in the future:

BIOS: The A88XM-A is a budget board, so the OC settings are somewhat limited. But it does have two relevant settings under the DIGI+ VRM section: "CPU/NB Current Capability" and "CPU Current Capability". I assume they are both just indirect references to voltage settings. I kept everything else at "Auto" and set those two to "110%". That little nudge was enough to resolve the issue.

Results: Compared to my previous 3DMark Cloud Gate runs with the 2133 RAM, I saw a 1.4% average increase in total score at stock iGPU frequency and a 2.7% higher average score with the iGPU OC'ed to 900MHz. The system was also stable at 4.4GHz/900MHz under a combined FurMark CPU+GPU stress test, which it wasn't before.

Note: I think the % increase in 3DMark score is larger at the 900MHz iGPU frequency because, at the higher iGPU clock, memory bandwidth rather than the iGPU itself becomes the predominant bottleneck in the system, so the faster RAM pays off more.
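To put rough numbers on that bottleneck idea (back-of-envelope only, assuming dual-channel operation and a standard 64-bit, i.e. 8-byte, bus per channel):

```python
# Theoretical peak bandwidth for dual-channel DDR3:
# data rate (MT/s) * bytes per transfer per channel * number of channels.
def peak_bandwidth_gbs(data_rate_mts, channels=2, bus_bytes=8):
    return data_rate_mts * 1e6 * bus_bytes * channels / 1e9

print(f"DDR3-2133 dual channel: {peak_bandwidth_gbs(2133):.1f} GB/s")  # 34.1 GB/s
print(f"DDR3-2400 dual channel: {peak_bandwidth_gbs(2400):.1f} GB/s")  # 38.4 GB/s
```

That extra ~12% of theoretical bandwidth would mostly show up once the iGPU is clocked high enough to actually be starved for memory, which matches the bigger gain at 900MHz.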
 
Solution