It seems like you're more focused on office work than heavy gaming. Either way, you're shooting yourself in the foot with a Pentium. I'd also opt for a power supply that's 80 Plus Bronze certified; here's one at a great price. If you're not gaming much, that 6870 is overkill. I revised your build a bit and added a case for you.
Integrated graphics have gotten better, but they still can't play highly demanding games on high settings. I would say to stick with the A10, and when he can put together enough money, go for a strong GPU. Yes, the A10 will bottleneck it when you get it, but at least you'll be running games faster from the start.
If he can stretch his budget to the $750 mark (not a great place, but a lot better than the $500 mark), I would suggest a better build.
If he's not going to game much, then the A10-5800K is not a good option, as it has a 125 W TDP. Better to go for a cheap Pentium or an i3.
Wrong on many accounts. Readers, disregard this comment: it states the wrong TDP and doesn't understand what TDP means either. FYI, TDP does not equal power consumption. Furthermore, Pentiums suck for gaming in most modern games.
Oh sure, their FPS numbers are often almost as high as those of the i3s, but FPS is not real-world. The test above measures average frame latency, which is much more real-world (it takes into account variation in FPS over small time frames, the most noticeable kind, as well as some forms of stutter), and Pentiums do very poorly in real-world performance. FPS is almost as bad as a synthetic.
OP's best options for a CPU are an overclocked A8-5600K or an i3, preferably an Ivy Bridge i3. FYI, Trinity is very power-efficient. It uses less power at idle than even Ivy Bridge dual-core CPUs such as the Pentiums and i3s, and has reasonable power consumption at load, so anyone who tries to say that its power consumption is too high should get a clue.
OP, since you have an Intel preference, I'd recommend an i3. That's not to say that AMD's Trinity APUs make bad options; they're just not what you asked for.
Where did you get that chart anyway? That chart is misleading.
It's impossible for an A8 Llano to beat a G2120 when gaming.
There's been a benchmark of the i3 and Pentiums in BF3, and they perform just as well, with only a few FPS of difference.
The chart is absolutely correct; what you read is misleading. FPS doesn't take stuttering, sub-second frame rate variation, and a few other issues into account, and these are issues that Pentiums struggle with badly (as I can confirm from personal experience). This is what separates Pentiums from i3s, as I said: similar FPS, dissimilar experience. Pentiums' and Celerons' only having two threads hurts a lot in many games, despite the fact that many games supposedly don't scale much beyond two threads.
I can theorize reasons for this, and IMO the most likely is that the Pentiums have high performance per core but poor multi-tasking: even low-resource threads from background tasks and some lighter game tasks, although not demanding, do not like the constant context switching. The cache may also be a problem, as in some cases Pentiums and Celerons have significantly less cache, and gaming is generally very sensitive to cache capacity and performance.
Furthermore, your disbelief that a very low-end Intel dual-core CPU, without even Hyper-Threading, could be beaten by a CPU with double the core count and only somewhat inferior performance per core is humorous, to say the least.
Also, BF3 MP, unlike BF3 SP, is extremely CPU-bound and extremely well-threaded. The A8-5600K would beat any and all i3s in many BF3 MP situations with many players and very intensive maps.
There really is no sense in getting a Trinity APU when you can get a discrete card with it.
Again, the problem is that you're looking at mere FPS, which is not a real-world measurement. Measuring frames per second is like measuring a person's growth from birth to death in five-year increments: it will tell you how much they've grown every five years, but if you try to derive an average growth per year from it, you can bet that real-world growth over smaller time frames will look very different, so any such average would be wildly inaccurate.
Measuring average frame latency is far more accurate. It gives you the average time each frame needed, with the top 1% excluded since they include bad outliers. Even more accurate is measuring the percentage of frames over certain thresholds: 50 ms, 33.3 ms, 16.7 ms, and 8.3 ms.
Pentiums do poorly in this because, relative to their FPS numbers, they have very high stutter and frame rate variation compared to most triple- and quad-threaded CPUs.
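To make the metrics above concrete, here's a minimal Python sketch of how you could compute them from a list of frame times. The frame-time numbers are made up for illustration; the 1% outlier cutoff and the millisecond thresholds follow the description above:

```python
# Frame times in milliseconds from a benchmark run (hypothetical data).
frame_times_ms = [14.2, 14.8, 15.0, 15.1, 15.3, 15.5, 16.0, 16.9, 33.5, 52.1]

# Average frame latency, with the worst 1% of frames excluded as outliers.
ordered = sorted(frame_times_ms)
cutoff = max(1, int(len(ordered) * 0.99))
avg_latency = sum(ordered[:cutoff]) / cutoff

# Percentage of frames slower than each threshold
# (50 ms ~ 20 FPS, 33.3 ms ~ 30 FPS, 16.7 ms ~ 60 FPS, 8.3 ms ~ 120 FPS).
thresholds = [50.0, 33.3, 16.7, 8.3]
over = {t: 100.0 * sum(1 for ft in frame_times_ms if ft > t) / len(frame_times_ms)
        for t in thresholds}

print(f"average frame latency (worst 1% excluded): {avg_latency:.1f} ms")
for t in thresholds:
    print(f"frames over {t} ms: {over[t]:.1f}%")
```

The point of the threshold counts is exactly the stutter argument: two CPUs can average the same FPS while one delivers far more frames over 33.3 ms or 50 ms, which is what you actually feel in-game.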
EDIT: Furthermore, there is a lot of sense in getting APUs. They overclock to between the average performance of the i3s and the i5s, with ~i3 power consumption at idle and ~i5 consumption at gaming load (most of your time is usually spent at idle, so it's a minor loss for most people), and they perform between them too, despite the architectural flaw in the front end of each module, the crap cache, and a full process-node disadvantage. APUs also make the best entry-level systems. No Intel-based system can touch a good APU system for overall performance, power efficiency, and connectivity in the sub-$400 market.
The only problem with TechReport is that its testing isn't really real-world either.
A G2120 with a 7950, and an A8-3850 with a 7950, with everything at max settings.
No one is going to run that setup. Although it does give you food for thought.
It's real-world in that it tests true CPU performance, so you know what graphics card to get to best complement the CPU. It clearly shows how something like the 7850 may be ideal for the A8s and A10s (without overclocking), whereas the 7950 is better suited to the i5s and i7s (with overclocking), and I'd put the factory-overclocked 7850s or the reference 7870 around ideal for the i3s, going strictly by the chart (granted, I'm placing the 7950 as the maximum and assuming no faster card could avoid a total CPU bottleneck, so it's not the most ideal methodology, just a basic example).
EDIT: I'd like to point out that this is only true for the specific circumstances tested in each game; changing the settings may make a faster or slower graphics card more usable on a given CPU, with some dependence on other factors such as system memory performance (although for most DDR3 systems that's not a huge deal, it can still be a factor).
The point is that using an absurdly high-end video card, although not real-world in the sense that no system with these CPUs would actually use one, does accurately demonstrate the real-world performance limits of the tested CPUs, and is thus a valid part of a real-world test aimed at determining a CPU's performance ceiling. It's not like FPS, which is literally an in-between of synthetic and real-world benchmarks, making it a poor thing to measure regardless of how real-world the tested systems are.