Long frame times are most jarring to me when there's a lot of on-screen movement. While slowing down usually helps mask this phenomenon somewhat, that's not really a viable workaround in first-person shooters and racing games.
We've established that it's difficult to record evidence of this phenomenon in multi-card configurations. But Fraps does make this possible in single-GPU systems. We're using it today to record performance in Battlefield 3.

It's difficult to generalize, but many folks can tolerate a 20 FPS minimum. So, we set an upper limit of 50 ms per frame to ensure reasonable fluidity. Beyond that point, each additional millisecond per frame becomes a far more intrusive distraction.
The sad fact is that even with an average of 50 FPS (shown on the previous page), our fastest memory configuration can't reliably keep the A10-5800K's on-board graphics processor under 50 ms per frame.
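The 50 ms cutoff translates directly from the 20 FPS floor (1000 ms ÷ 20 frames). A minimal sketch of how one might flag over-threshold frames in a Fraps-style frame-time log (the frame-time values below are illustrative placeholders, not our measured data):

```python
# Sketch: flag frames that exceed the 50 ms (20 FPS) fluidity threshold.
# Frame times are in milliseconds; these values are made up for illustration.
frame_times_ms = [18.2, 21.7, 54.3, 19.9, 61.0, 22.4, 17.8, 49.5]

THRESHOLD_MS = 50.0  # 1000 ms / 20 FPS

# Collect every frame that took longer than the threshold to render.
slow_frames = [t for t in frame_times_ms if t > THRESHOLD_MS]

print(f"{len(slow_frames)} of {len(frame_times_ms)} frames over {THRESHOLD_MS} ms")
print(f"worst frame: {max(frame_times_ms):.1f} ms "
      f"({1000 / max(frame_times_ms):.1f} FPS instantaneous)")
```

Even a handful of such spikes is noticeable in motion, which is why we look at worst-case frame times rather than averages alone.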


Of course, maximum rendering times get worse as resolution increases. Memory latency could be an issue, but even pricey low-latency kits are barely better than the DDR3-1600 CAS 7 config we tested, or this setup's DDR3-2133 CAS 9 arrangement.
- Memory Scaling On AMD's Trinity Architecture
- Test System And Benchmarks
- Results: 3DMark And Aliens Vs. Predator
- Results: Battlefield 3 And F1 2012
- Battlefield 3, Frame By Frame
- Results: Skyrim And StarCraft II
- Power, Average Performance, And Efficiency
- When Does Spending 50% More Become A Great Value?
Right there. An APU is not a top-tier gamer, so incremental improvement really matters. I could not care less about going from 60FPS to 80FPS, but 30FPS to 40FPS, the same relative improvement, is a really big deal.
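The arithmetic behind that point: converting FPS to per-frame render time shows why the same relative jump matters more at the low end. A quick sketch:

```python
# Per-frame time saved by each FPS jump (times in milliseconds).
def frame_time_ms(fps):
    return 1000.0 / fps

# 60 -> 80 FPS and 30 -> 40 FPS are both +33% relative improvements,
# but the absolute frame-time saving is twice as large in the APU case.
high_end = frame_time_ms(60) - frame_time_ms(80)
apu_case = frame_time_ms(30) - frame_time_ms(40)

print(f"60 -> 80 FPS saves {high_end:.1f} ms per frame")
print(f"30 -> 40 FPS saves {apu_case:.1f} ms per frame")
```

Roughly 4.2 ms saved per frame in the first case versus 8.3 ms in the second, and the saving comes off a much longer baseline frame time, where it is actually perceptible.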
Oh you are so close yet so far to knowing what you're talking about...
You would be well served to learn about the Von Neumann architecture and more precisely the Von Neumann bottleneck.
http://en.wikipedia.org/wiki/Von_Neumann_architecture
The biggest bottleneck in any architecture is the shared communication between components; data throughput is crucial to every part of a system. Beyond that, the latency of components relative to their bandwidth is the real Achilles' heel of the computer, and THAT is why CPUs have L1/L2/L3 cache: ultra-low-latency memory, usually around 1.5/5/7.5 ns respectively. When you combine that low latency with bandwidth of roughly 76,000/44,000/22,000 MB/s, versus normal DDR3-1600 on Sandy/Ivy Bridge, you have a system that appears not to be bottlenecked by RAM. As for a Trinity AMD system, the reason you see such massive gains going from DDR3-1600 to DDR3-2133 is that the GPU can't get by on the tiny amount of storage in the L1/L2 caches; it has to address a large 512 MB to 3 GB pool to crunch massive amounts of parallel data, and is therefore limited by the aggregate throughput of the system's memory. Hypothetically, if you kept increasing the memory's data rate, you would see performance gains up to the point where the GPU's instruction units could no longer make use of the available bandwidth.
Having said all that, until DDR4 is out we can't say for certain that it won't have a huge impact on both AMD and Intel systems. If DDR4 manages to lower latency or greatly increase bandwidth, you will see gains, especially if it achieves both at the same time. Oh, and to correct your first inaccuracy: DDR4 will run at lower voltage than what is currently available, so it will use less electricity than DDR3-2400, providing more performance per watt of energy used.
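The bandwidth and latency trade-off described above is easy to put in numbers. A rough sketch of the standard arithmetic for dual-channel DDR3 (the CL values below are illustrative picks, apart from the article's own DDR3-1600 CAS 7 and DDR3-2133 CAS 9 configs):

```python
# Theoretical peak bandwidth and first-word CAS latency for dual-channel DDR3.
# Bandwidth = data rate (MT/s) x 8 bytes per transfer x number of channels.
# CAS latency in ns = CL cycles / memory clock, where the memory clock in MHz
# is half the DDR data rate.

def peak_bandwidth_gbs(data_rate_mts, channels=2):
    return data_rate_mts * 8 * channels / 1000.0  # GB/s

def cas_latency_ns(data_rate_mts, cl):
    return cl / (data_rate_mts / 2.0) * 1000.0

for rate, cl in [(1600, 7), (2133, 9), (2400, 10)]:
    print(f"DDR3-{rate} CL{cl}: {peak_bandwidth_gbs(rate):.1f} GB/s peak, "
          f"{cas_latency_ns(rate, cl):.2f} ns CAS")
```

The absolute latency of these kits lands in a narrow band (roughly 8.3 to 8.8 ns here), while peak bandwidth climbs from 25.6 to 38.4 GB/s, which is consistent with the bandwidth-starved IGP scaling seen in the benchmarks.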
The question is ... does the performance with higher-speed memory continue to scale as the *SIMD Engine Array* is overclocked?
Inquiring minds would like to know ...
Individuals who would use faster memory for gaming are likely to want to push their mid/high-range card to its limits. Do you plan on doing a similar piece for AMD CPUs as you did in the Intel article "Does Memory Performance Bottleneck Your Games?"
Also, I would like to see an Nvidia card at play as well. Maybe a 650 Ti or 660 Ti? In addition, it would be nice to see the memory scaling difference between AMD and Nvidia GPUs in a single review.
Thanks.
Considering how DDR3-2400 is only a tiny fraction better than DDR3-2133, it's safe to assume memory stops being the bottleneck around that point. DDR4 will not noticeably improve performance, or even power consumption, since memory consumes an almost negligible amount of electricity to begin with.
It's back to looking at better GPUs and CPUs for better performance.
Bottleneck hierarchy has always been GPU > CPU > RAM.
The CPU has always been more reliant on RAM than the GPU, but an APU is basically a GPU+CPU in one, so RAM is more important; as we've seen, though, only up to DDR3-2133. After that, diminishing returns skyrocket.
I didn't know that nVidia made APU's?
The more you know... /rollseyes/
still, 15 GB/s out of DDR3-2400 RAM is just sad. i expect amd to improve in the next-gen apus. the igpus deserve the extra memory bandwidth.
i wonder how cpu overclocking (along with igpu and ram oc) affects games like skyrim, starcraft, and f1. those seemed more memory-sensitive.
RAM speeds above DDR3-1333 don't bottleneck any current CPU in terms of gaming.
Hardly any game goes above 2 GB of RAM used, so 4 GB is all you need; 95% of the time you buy 8 GB simply because it's cheap.
And that's all there is to say about memory performance.
That's the main reason you see a lot of streamers stream at the lowest possible settings: it makes the game look like crap, but provides the fluidity required to avoid the frustration of watching a slideshow in which you can't act, and then losing the game.