
AMD A6 APU vs FX-4100 vs Phenom II 955?

Tags:
  • CPUs
  • AMD
May 31, 2012 12:41:23 AM

Hi, I'm building a new PC on a budget of around $500. Here's what I'm getting:
AMD Radeon 6770 1GB
4GB DDR3 RAM
Cooler Master case
a 500GB HDD and a DVD R/W
I still haven't made up my mind about the CPU (and the mobo too, since that depends on the CPU), so which of these 3 is better? The AMD A6 (I don't remember the exact model number the seller told me, but its price is nearly the same as the other two), the FX-4100, or the Phenom II X4 955 (if I can find one, since it's discontinued). Some might suggest an Intel i3, but I don't want a dual core even if it's slightly better than the 3 I mentioned, because games will surely need more than 2 cores in the future. I will be using my PC mainly for internet and gaming.

Sorry if there are any typos, I'm on my phone right now...


May 31, 2012 12:46:18 AM

If you do end up taking the FX route, it might be worth the extra ~$30 for the FX-4170 over the FX-4100.
May 31, 2012 12:49:16 AM

I'd probably go with the A6 and crossfire it with the discrete card. Couldn't tell you which models are supported, but it's probably the best bang for your buck at such a low price point.

Also, don't skimp on your memory like that. The difference between 4GB and 8GB is only about $20. Is it really worth $20 to handicap your system's performance like that?
May 31, 2012 11:56:06 AM

willard said:
I'd probably go with the A6 and crossfire it with the discrete card. Couldn't tell you which models are supported, but it's probably the best bang for your buck at such a low price point.

Also, don't skimp on your memory like that. The difference between 4GB and 8GB is only about $20. Is it really worth $20 to handicap your system's performance like that?

OK, I guess that's what I'm getting. Thanks.
September 24, 2012 12:14:04 PM

Hey dude, sorry for the late reply. I prefer the A6 because I have one: an A6-3650, 2.6 GHz with Radeon HD 6530D graphics in the APU.
It's a better option if you don't have a graphics card right now, because the A6 can run games decently even without a card. Later you can get an HD 6450, HD 6570 or HD 6670 and crossfire it with the APU graphics for even more performance.
September 24, 2012 12:39:57 PM

You cannot crossfire a 6770 with an A6 CPU, and if you are getting a 6770 then the A6 is the worst choice of the three, as it has the weakest CPU and a good built-in GPU that you will not use. Out of the three I would go with the Phenom II X4, but unless it's under two-thirds the price of an i3 I would go Intel 100%, as even the Pentium G-series CPUs run games that can use 4 cores better than any of the CPUs you mentioned.
September 24, 2012 12:52:47 PM

If you're planning on the FX, wait for the Piledriver release in about a month or so. However, I'd really recommend an i3 for you.

The performance of an IB i3, which has Hyper-Threading, is almost as good as that of the Phenom/FX/A6 in games. The i3 will continue to be better than the AMD chips in gaming for the near future, plus it's also faster in general day-to-day stuff, while consuming around half the power of the AMD chips. It's really a no-brainer :)  In the end, for games it's the GPU that matters most.

In case you haven't read,
http://www.tomshardware.com/reviews/gaming-cpu-review-o... :) 
September 24, 2012 1:54:05 PM

simon12 said:
You cannot crossfire a 6770 with an A6 CPU, and if you are getting a 6770 then the A6 is the worst choice of the three, as it has the weakest CPU and a good built-in GPU that you will not use. Out of the three I would go with the Phenom II X4, but unless it's under two-thirds the price of an i3 I would go Intel 100%, as even the Pentium G-series CPUs run games that can use 4 cores better than any of the CPUs you mentioned.


Hey dude, what I said was the HD 6670, not the HD 6770...
September 24, 2012 2:22:33 PM

$hawn said:
If you're planning on the FX, wait for the Piledriver release in about a month or so. However, I'd really recommend an i3 for you.

The performance of an IB i3, which has Hyper-Threading, is almost as good as that of the Phenom/FX/A6 in games. The i3 will continue to be better than the AMD chips in gaming for the near future, plus it's also faster in general day-to-day stuff, while consuming around half the power of the AMD chips. It's really a no-brainer :)  In the end, for games it's the GPU that matters most.

In case you haven't read,
http://www.tomshardware.com/reviews/gaming-cpu-review-o... :) 


He said he doesn't want an i3 and gave his reasons for it. Yes, an i3 will shine in titles that are optimally coded for it, but in titles that actually work on AMD modulation and older architectures, the FX-41XX and PII 955BE are still much better options.
September 24, 2012 10:53:12 PM

If you are going with the 6670, the A6 might be the best. If you are going with a 6770 then the 955 is the best, the 4100 isn't that bad either.
September 25, 2012 1:47:35 AM

If you really want to go with AMD I would go with the Phenom II 955. Really though, for the price I would put a little more money into it and get an i3. You say you don't want a dual core because in the future games will use more cores, but you're fooling yourself with false hopes of future-proofing. There is no such thing as future-proofing. We aren't at the point yet where most games use more than two cores; most games only use two, and by the time we hit that point you'll probably be doing a new build anyway. The dual-core i3 will handle most games just fine, and in most cases it will outperform all those AMD CPUs you listed. I would just go with the i3 now and take the better performance.
September 25, 2012 5:39:53 AM

I posted tests I did with F1 2010/11/12, a game that scales well up to quad cores. The Pentiums invariably struggle and can get outdone by the older Athlon X3s and X4s; the i3's HT helps it a little, but it still struggles, while the FX 81XX and i7s sit comfortably around the 70 FPS mark. Games are moving towards multi core optimization, and those that do it well will bring a dual core to its knees (Metro 2033).
September 25, 2012 6:31:30 AM

Very few games will max out an i3. The only ones that will give it a hard time are Metro 2033, BF3 in multiplayer and Skyrim. Other than those highly demanding games, the i3 can handle and outperform the APU and Bulldozer. Games are moving towards using more cores, but we aren't at that point yet. Like I said, if you think you are future-proofing, you're not. By the time we hit that point all these CPUs will be obsolete.

In case you missed it, AMD still has nothing that competes with Intel until the third tier down.

http://www.tomshardware.com/reviews/gaming-cpu-review-o...
September 25, 2012 6:57:53 AM

I did post gaming benches yesterday showing games that scale well across cores, and the results were quite to the contrary, but it is tedious trying to get that message across, almost as tedious as going through the same rhetoric.

And in case you missed it, the OP says he doesn't want an i3.
September 25, 2012 1:17:05 PM

sarinaide said:
I did post gaming benches yesterday showing games that scale well across cores, and the results were quite to the contrary, but it is tedious trying to get that message across, almost as tedious as going through the same rhetoric.

And in case you missed it, the OP says he doesn't want an i3.


:pfff:  :pfff: 
September 25, 2012 2:35:00 PM

It really doesn't help posting smilies, nor does it help posting just about every skewed website bench, showing yourself to be handy with copy and paste but not much good when it comes to actually doing anything yourself. I have worked as a freelance hardware tester and have tried to remain as objective about products as I can, to give choices to those who may want options outside the land of Intel which you now occupy. If you actually do the testing yourself, you will find what you said above to be correct in about 10% of instances, namely Crysis 2 and Skyrim (world is ending).

Why don't you try looking at things outside this Intel bubble you so affectionately cling to?
September 25, 2012 3:53:11 PM

sarinaide said:
It really doesn't help posting smilies, nor does it help posting just about every skewed website bench, showing yourself to be handy with copy and paste but not much good when it comes to actually doing anything yourself. I have worked as a freelance hardware tester and have tried to remain as objective about products as I can, to give choices to those who may want options outside the land of Intel which you now occupy. If you actually do the testing yourself, you will find what you said above to be correct in about 10% of instances, namely Crysis 2 and Skyrim (world is ending).

Why don't you try looking at things outside this Intel bubble you so affectionately cling to?


Ha, that's the pot calling the kettle black. Lol, skewed benchmarks; why are they skewed, because they don't show your precious AMD Bulldozer in the perfect light, because they show it for what it is, a crappy Core 2 Duo equivalent? I would never believe private benchmarks, especially not from a die-hard AMD fanboy like you; it's too easy to skew the numbers. You can keep trying to convince us that Bulldozer and the APUs are great, but the benchmarks from reliable sources don't lie. So you can keep doing your phony benchmarks and trying to convince us, but the only one you're fooling is yourself. I don't know what your fetish is with AMD, but I find it funny and entertaining.
September 25, 2012 4:58:57 PM

sarinaide said:
nor does it help posting just about every skewed website bench

It makes sense to claim bias when a data point falls far outside the trend. Claiming bias when the industry overwhelmingly finds the exact same result is nothing but denial.
September 25, 2012 5:16:57 PM

But it doesn't show overwhelmingly, it shows selective benches and the results are somewhat dubious. I don't deny that intel is possibly the better route with more features and growing platforms but the results are so badly done. Worked through a bunch with colleagues, about 25% of the results show consistencies, the rest were way off the mark.
September 25, 2012 11:21:32 PM

willard said:
It makes sense to claim bias when a data point falls far outside the trend. Claiming bias when the industry overwhelmingly finds the exact same result is nothing but denial.


Exactly right on all points, that is the epitome of fanboyism.
September 26, 2012 1:47:43 AM

sarinaide said:
But it doesn't show overwhelmingly, it shows selective benches and the results are somewhat dubious.

"Doesn't show overwhelmingly" and "doesn't meet the arbitrarily high bar I've set for it" do not mean the same thing. I read a lot of BD reviews, and I didn't see much in the way of compliments. I saw a lot in the way of "why is this eight core chip getting beaten by a four core chip clocked 500 MHz lower" and "wow, Bulldozer has a lower IPC than Phenom II?" The single threaded benchmarks were outright hysterical.

BD was a flop and Interlagos was worse, barely keeping pace with Intel's Xeons released twenty months before. BD is so obviously inferior that AMD had to market them using a value angle compared to a $1000 processor from the previous generation of chips, rather than dare compare it to the chips they priced it next to.

BD only excels in perfectly threaded applications, which are still somewhat rare for the average user. The shared FP scheduler gives the chip a Jekyll and Hyde personality with floating point heavy workloads (common for the average user) costing as much as 50% of the chip's performance.

Quote:
Worked through a bunch with colleagues, about 25% of the results show consistencies, the rest were way off the mark.

I'm sorry, but you're going to need to cough up some credentials or post your data if you expect me to accept your casual dismissal of the entire body of evidence that BD is inferior. I've seen this bias defense from AMD fanboys before. In no case have I been convinced that there's a coordinated conspiracy among dozens of independent websites to suppress favorable benchmarks for AMD, or to only use benchmarks which somehow favor Intel.

If AMD's chips were faster than Intel's, the benchmarks would show it. BD just didn't live up to the hype.
September 26, 2012 10:12:06 AM

If your justification on flop is not beating prior generations or cheaper chips of the current gen then look no further than 2011 with benches claiming a i7 3770K to be better than a 3960X in most, does that make it a flop no it doesn't. Sure a few benches and games are better optimized to AMD's archaic K2 architecture but most benches were also done October 2011, since then a series of patches, hotfixes and revisions to most apps have actually shown improvements, some significant.

While I didn't at any point say better they are certainly not across the board slower than Phenoms, and certainly not 55% slower than Intel as some have said. I will say IPC is around 12-15% slower, and that is against a company with all the wealth and around 6 years working on their architecture. I was against AMD releasing BD when they did, what was released was a product well below the initial engineering specs.

Power and heat, while yes that is not changing a great deal with AMD, they are compensating the loss of IPC with higher clocks, when a FX 8150 was on engineering specs supposed to be a 4.2ghz chip up to 4.5ghz on TC, at those clocks the difference in performance is significant.

I wasn't one that put expectations on BD, so for that I don't regard it a flop, perhaps underwhelming and badly marketed but that is history, BD was step one in a new architectural direction for AMD so to expect step one to be exceptional would be over ambitious.
September 26, 2012 3:56:23 PM

sarinaide said:
If your justification on flop is not beating prior generations or cheaper chips of the current gen then look no further than 2011 with benches claiming a i7 3770K to be better than a 3960X in most

Now that's interesting, because the 3770k wasn't released until late April of 2012 and benchmarks were only leaked a week or two before. Also, I'd say not beating your own chips from previous generations or cheaper chips of the current gen is a pretty goddamn big flop.

The 3770k isn't a flop because it's an incredibly fast chip that mops the floor with every other mainstream chip ever made. Now why don't you post a "BD isn't a flop because..." and try not to make me laugh. Here, I'll get you started.

BD isn't a flop because it cost significantly more than the 2500k at launch while performing significantly worse in the overwhelming majority of applications?

BD isn't a flop because its TDP is almost twice as high as competing chips?

BD isn't a flop because the shared FP scheduler cripples performance on the most common workloads?

BD isn't a flop because it offers better value than chips with legendarily poor value marketed to people with more money than sense?

BD isn't a flop because the chips were so bad compared to similarly priced chips that AMD had to cut the price twice in the first year of sales?

BD isn't a flop because it gets beaten in single threaded benchmarks by AMD's own chips from the previous generation?

BD isn't a flop because it gets beaten in multi threaded benchmarks by chips with half as many cores?

BD isn't a flop because Interlagos was barely able to keep pace with Intel's nearly two year old offerings operating at a significantly lower TDP, and was dominated in 100% of benchmarks by the SB based Xeons?

Quote:
Sure a few benches and games are better optimized to AMD's archaic K2 architecture but most benches were also done October 2011

And again, rationalization of your untenable belief that Bulldozer isn't as slow as every benchmark ever done has shown. A rational man looks at a pile of benchmarks showing that A is slower than B and accepts that A is slower than B. You're trying to blame everything but A for its slowness. This is commonly known as denial.

It's not even a question of optimization anyway, it's a question of a flawed design prioritizing server workloads over desktop workloads, dramatic reduction in IPC over the previous generation, a failure to significantly reduce TDP and inability to produce a chip that can compete on any level with Intel's top offerings. Even the Windows 8 threading optimizations (which are the largest performance gain to be had and by a large margin) only had a minor impact on performance, and only in some cases.

Optimization might make things better, but no amount of software changes can fix what's wrong with the hardware. If your workload is floating point heavy then BD is garbage and there's nothing you can do about it. If your workload doesn't parallelize well, then BD is garbage and there's nothing you can do about it. AMD's solution of "throw more cores at it" looks good on paper but fails to deliver in almost every way.

Like I said, Bulldozer is good at exactly one thing. Perfectly parallel integer math. This is an exceptionally rare type of workload, present almost exclusively in rendering and conversion software.

Quote:
While I didn't at any point say better they are certainly not across the board slower than Phenoms, and certainly not 55% slower than Intel as some have said.

I said they lose 50% of their performance in FP heavy workloads, which is true. A module is incapable of scheduling floating point work on both of its cores simultaneously. Half of your cores sit idle during floating point work, thus you lose half of the performance. Nobody said they were 55% slower than Intel. Bulldozer is 50% slower than itself when presented with workloads that Intel's chips have no problems whatsoever with.
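If you want to see what I mean, here is roughly the kind of quick-and-dirty test I'm talking about. This is only a sketch of the idea; the loop bodies, constants and iteration count are arbitrary, and real numbers will vary by chip and compiler. On a module design you would expect the FP region's wall time to climb once two threads share a module's FP hardware, while the integer region keeps scaling.

Code:
/* Sketch: compare integer vs floating-point thread scaling.
 * Build (assuming GCC with OpenMP): gcc -O2 -fopenmp fp_vs_int.c -o fp_vs_int
 * If FP wall time grows noticeably from 4 to 8 threads while integer time
 * stays roughly flat, contention on shared FP hardware is the likely cause. */
#include <stdio.h>
#include <omp.h>

#define ITERS 200000000LL  /* arbitrary workload size */

static double fp_work(void)
{
    double x = 1.0001;
    for (long long i = 0; i < ITERS; i++)
        x = x * 1.0000001 + 0.0000001;  /* dependent FP multiply-add chain */
    return x;
}

static unsigned long long int_work(void)
{
    unsigned long long x = 3;
    for (long long i = 0; i < ITERS; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;  /* integer LCG */
    return x;
}

int main(void)
{
    for (int threads = 1; threads <= 8; threads *= 2) {
        omp_set_num_threads(threads);

        double t0 = omp_get_wtime();
        #pragma omp parallel
        {
            volatile double sink = fp_work();  /* volatile keeps the work */
            (void)sink;
        }
        double fp_time = omp_get_wtime() - t0;

        t0 = omp_get_wtime();
        #pragma omp parallel
        {
            volatile unsigned long long sink = int_work();
            (void)sink;
        }
        double int_time = omp_get_wtime() - t0;

        printf("%d threads: FP %.2fs  INT %.2fs\n", threads, fp_time, int_time);
    }
    return 0;
}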

Quote:
I will say IPC is around 12-15% slower

It's actually about 17% lower IPC than Phenom according to these benchmarks:
http://www.pcper.com/reviews/Processors/AMD-FX-Processo...

A nearly 20% cut in IPC is massive.
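To put the size of that gap in perspective, here's the back-of-the-envelope version, treating single-threaded performance as roughly IPC times clock. The 17% figure is the one from the review above; the clock speeds are just example values, so treat this as a sketch rather than a measurement.

Code:
#include <stdio.h>

int main(void)
{
    /* Rough model: single-threaded performance ~ IPC * clock.
     * IPC deficit is the ~17% cited above; clocks are example values only. */
    double baseline_ipc = 1.00;                /* normalized Phenom II IPC     */
    double bd_ipc       = 1.00 * (1.0 - 0.17); /* ~17% lower IPC               */
    double baseline_clk = 3.7;                 /* example Phenom II clock, GHz */
    double bd_clk       = 3.6;                 /* example FX base clock, GHz   */

    double relative   = (bd_ipc * bd_clk) / (baseline_ipc * baseline_clk);
    double needed_clk = baseline_ipc * baseline_clk / bd_ipc;

    printf("Relative single-threaded performance: %.2f\n", relative);
    printf("Clock needed just to break even: %.2f GHz (+%.0f%%)\n",
           needed_clk, (needed_clk / bd_clk - 1.0) * 100.0);
    return 0;
}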

Quote:
and that is against a company with all the wealth and around 6 years working on their architecture.

So Intel making a good product somehow prevented AMD's engineers from being competent and caused them to lose 20% of their single threaded performance? I'm sorry, I just don't follow.

Bulldozer's IPC being lower has not one goddamn thing to do with Intel. It's a flawed design that AMD took from start to finish. Unless Intel was in there sabotaging designs, then you should probably stop trying to blame AMD's abject failure in IPC on Intel.

Quote:
I was against AMD releasing BD when they did, what was released was a product well below the initial engineering specs.

As a result of the flawed design of the Bulldozer module, not due to being rushed. This kind of high level design gets finalized years in advance. AMD made their bed, and now they've got to sleep in it. Another year wouldn't have changed anything. You also have to consider how the market changes while they wait. Sure, they could have made Bulldozer way better by waiting another couple of years, but then they'd still be delivering 2011 technology in 2013. The longer you wait, the worse the chip looks, because your competitors have had more time to produce products that don't suck.

Hell, I doubt AMD could have made a 2500k competitor if they'd delayed anyway. Even if they meet all the goals they have for Piledriver, it will still be slower than a 2500k on numerous workloads because of the underlying flaws in the architecture. The Bulldozer module itself is the problem. For it to perform well you have to feed it perfectly threaded integer workloads, which, as I've explained over and over again, are rare for the average user.

Quote:
Power and heat, while yes that is not changing a great deal with AMD, they are compensating the loss of IPC with higher clocks

Big deal. They failed at IPC so they were forced to increase clock speed, which increased heat. This is a bad thing. You're basically saying "Yeah, this part of the chip sucks, but at least they went and sacrificed performance elsewhere to make it suck a tiny bit less!" Intel's chips run cooler and faster each generation, AMD's lose performance and run just as hot as before.

Quote:
I wasn't one that put expectations on BD, so for that I don't regard it a flop, perhaps underwhelming and badly marketed but that is history

So underwhelming performance and grossly misleading advertising to try to cover it up doesn't constitute a flop?

Quote:
BD was step one in a new architectural direction for AMD so to expect step one to be exceptional would be over ambitious.

How about expecting it to be adequate? Not worse than the previous generation? A step forward, not backward? Not priced higher than chips which mop the floor with it in practically every test?
September 26, 2012 8:58:10 PM

Well from what I have heard about steamroller is the module design is completely different, if you had to take a die chart of a Thuban add two cores each core with multiple FPU in itself to process two instructions per core, and a front end that is similar to what Haswell will have I don't really believe AMD really put much time into Piledriver. While having seen the official ES chips performance better yes but nothing really fixed on the power and heat.

From what is said of Steamroller AMD are still not giving up on the mid 4ghz standard operating perimeters but reduce power and heat, I am sure to achieve that they will need better transistors.

I would like to just go on the "certain workloads" point, while most of the FX 81XX chips I use in builds for private and clients the systems are used in high calculations per second professional level systems, which the Zambezi does pretty well, In gaming terms the FX 4170 compared to every other FX does one thing the others don't do and that is hold consistent frame rates. I do believe this is a lot to do with being late and many programs are not going to make schedule and compiler changes this late into the game. Windows 8 and beyond we will have to wait and see what happens.
September 26, 2012 10:57:28 PM

sarinaide said:
Well from what I have heard about steamroller is the module design is completely different

Actually, it's very much the opposite. Steamroller is an incremental upgrade over Piledriver, which is an incremental upgrade over Bulldozer. Steamroller will continue to share the FP scheduler, which will continue to be a performance problem.

Quote:
From what is said of Steamroller AMD are still not giving up on the mid 4ghz standard operating perimeters but reduce power and heat

Clock speed is meaningless by itself, and pointing to AMD's higher clocks smacks of not understanding chip performance at a basic level. Clock speed is but one of many factors influencing performance. I don't care if AMD runs their chips at 1000 GHz, I care who has the fastest chip.

Quote:
I would like to just go on the "certain workloads" point, while most of the FX 81XX chips I use in builds for private and clients the systems are used in high calculations per second professional level systems

I'm sorry, I'm calling bullshit. Nothing you've said up to this point has given me any indication you have a professional's knowledge in this area. Want to counter? Kindly explain what kind of system you'd build me for high performance database access, and what factors are at play. Anyone making "builds for private and clients" used for "high calculations per second professional level systems" should be able to answer this off the top of their head.

Pro tip: The term you're looking for is FLOPS, not "calculations per second." A calculation is far too vague a term to be useful for comparisons.
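The back-of-the-envelope version, for anyone following along: theoretical peak FLOPS is just execution units times clock times FLOPs per unit per cycle. Every number in the sketch below is a placeholder to show the shape of the calculation, not a vendor-verified spec, and sustained throughput is always well below the theoretical peak.

Code:
#include <stdio.h>

int main(void)
{
    /* Theoretical peak = FP units * clock * FLOPs per unit per cycle.
     * All inputs are illustrative placeholders, not datasheet values. */
    double fp_units        = 4.0;  /* e.g. one shared FP unit per module */
    double clock_ghz       = 3.6;  /* base clock, GHz                    */
    double flops_per_cycle = 8.0;  /* per-unit issue width, placeholder  */

    printf("Theoretical peak: %.1f GFLOPS\n",
           fp_units * clock_ghz * flops_per_cycle);
    return 0;
}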

Quote:
which the Zambezi does pretty well

Only if you don't compare it to the cheaper chips that perform better.

Quote:
In gaming terms the FX 4170 compared to every other FX does one thing the others don't do and that is hold consistent frame rates. I do believe this is a lot to do with being late and many programs are not going to make schedule and compiler changes this late into the game.

You should have some idea what you're talking about before you start speaking. As a professional software developer, what you just said is so wrong it makes my head hurt.

Quote:
Windows 8 and beyond we will have to wait and see what happens.

No we won't, Windows 8 has been available for testing for a long time now. Spoiler alert, BD still sucks on Win8.
September 27, 2012 3:14:16 PM

willard said:
Quote:
In gaming terms the FX 4170 compared to every other FX does one thing the others don't do and that is hold consistent frame rates. I do believe this is a lot to do with being late and many programs are not going to make schedule and compiler changes this late into the game.

You should have some idea what you're talking about before you start speaking. As a professional software developer, what you just said is so wrong it makes my head hurt.

I posted this from work yesterday between builds, so I didn't really get to go into this. So, here's a quick peek into how optimization in software development actually works.

First of all, software as a whole is never, ever optimized for a specific architecture. In very rare cases this might be done, but it's practically unheard of. Even in applications that demand unbelievable levels of performance you don't see these kinds of optimizations. They are just not cost effective. In some cases you'll see problem points in the code that have different code paths for different architectures, but again, this is pretty rare.

It makes a lot of sense if you think about it. You're unlikely to get very much performance out of targeting a specific architecture for optimization. Maybe a few tenths of a percent of overall performance at best. The reason is you really can't get around doing what you have to do, and if the architecture is bad at it then it's just bad at your software.

You also run into the problem of negatively impacting performance on other platforms. So, you'd really have to maintain separate code paths for each architecture you wanted to optimize. In essence, you have to design the software over again for each architecture. This is an expensive proposition, and with a very poor payoff.

The most common advice given to novice programmers in regards to optimization is simply "Don't do it." More experienced programmers eventually move on to "Don't do it yet." It's easy for somebody who doesn't understand software development to think you should invest lots of time into optimization as you develop to be sure the software is fast, but you end up wasting huge amounts of time optimizing things that will have virtually no impact on performance.

It's entirely possible to optimize 90% of the code in a program to run in 0.00000000000000001% of the time it took before, and still not have a noticeable impact on performance because the slowness is a result of the last 10% that wasn't optimized. In fact, there is typically a very small amount of code responsible for the majority of the work, except in cases like heap lock contention.
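If you want the arithmetic behind that, it's essentially Amdahl's law. A quick sketch with made-up numbers matching the example above: the 90% of the code you optimized only accounts for a small slice of the runtime, so the total barely moves.

Code:
#include <stdio.h>

int main(void)
{
    /* Amdahl's law sketch: the hot 10% of the code (left untouched) accounts
     * for 90% of the runtime; the cold 90% of the code gets a huge speedup. */
    double hot_runtime_share  = 0.90;  /* runtime spent in the untouched hot path */
    double cold_runtime_share = 0.10;  /* runtime spent in the optimized code     */
    double cold_speedup       = 1e6;   /* absurdly good optimization of cold code */

    double new_runtime = hot_runtime_share + cold_runtime_share / cold_speedup;
    printf("New runtime: %.2f of the original (%.2fx overall speedup)\n",
           new_runtime, 1.0 / new_runtime);
    return 0;
}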

To be honest, a lot of what you said was utter nonsense. Here are some of my favorite bits.

Quote:
in titles that actually work on AMD modulation and older architectures, the FX-41XX and PII 955BE are still much better options.

You should probably look up the word "modulation" in a dictionary before you use it again, because there's no such thing as modulation in relation to CPU architectures.

Quote:
Games are moving towards multi core optimization

And this is my first tip that you don't understand the software side either. The term you're looking for is parallelization, and not all games need it or can make use of it. The GPU is still the deciding factor in framerate for the majority of games.

Quote:
I have worked as a freelance hardware tester and have tried to remain as objective about products as I can, to give choices to those who may want options outside the land of Intel which you now occupy

What you're doing is very, very, very far from objective. Objective would be accepting test results and moving on, not attempting to discredit every review that showed BD for what it is and lying about your credentials.

Quote:
I don't deny that intel is possibly the better route with more features and growing platforms

I'd love for you to clarify this, specifically the platform part, because it looks to me like you're talking out of your ass trying to backtrack a bit to paint yourself as not a fanboy, but just can't bring yourself to admit that Intel chips are faster. How exactly has the Intel platform grown, and why is that beneficial to users?

Quote:
If your justification on flop is not beating prior generations or cheaper chips of the current gen then look no further than 2011 with benches claiming a i7 3770K to be better than a 3960X in most

I think this is my favorite. In one sentence you manage to lie about 3770k benches, get the release date wrong and claim that a chip being slower than both the competition and your own previous chips does not constitute a flop. If Intel released a 4770k that performed on the level of a 2500k you'd be screaming bloody murder about the flop. But AMD does it and somehow you no longer judge processors by their performance. Odd.

Quote:
certainly not 55% slower than Intel as some have said.

There you go lying again. This specific logical fallacy is known as a "straw man" argument.

Quote:
I will say IPC is around 12-15% slower

I don't need no benchmarks, I'll just decide how the chips perform :lol: 

Quote:
that is against a company with all the wealth and around 6 years working on their architecture

And apparently Intel's success caused AMD's chips to perform worse in absolute terms :lol: 

Quote:
Well from what I have heard about steamroller is the module design is completely different

And here's your lack of knowledge coming up again. Did you not even consider looking into this? Not sure what a chip they'll be releasing two years after Bulldozer has to do with this debate (you know, about Bulldozer), though. Methinks you're desperately grasping at straws at this point.

Quote:
I am sure to achieve that they will need better transistors.

Really now? Care to explain that? I'd love to see your fumbling explanation for yet another thing you don't understand and only said to make it look like you knew what you were talking about.

Quote:
most of the FX 81XX chips I use in builds for private and clients the systems are used in high calculations per second professional level systems

Just couldn't resist lying about your credentials again, could you? It's hilariously obvious you've never built a professional system. I doubt you even know what a 2U is, or what QPI is, or what characteristics in a chip are desirable (hint, bulldozer's heat is a massive liability in high performance computing).

Quote:
I do believe this is a lot to do with being late and many programs are not going to make schedule and compiler changes this late into the game.

And here you try to speak with authority about something you are completely and totally clueless about.

I may be an asshole, but you appear to be a liar and a fraud. When I speak, I do so from years of experience as a computer professional. When you speak, you do so from your ass.

Now I'd appreciate it if you'd stop pretending you were a professional hardware tester, professional systems engineer (lol @ BD in professional systems, I'd literally sue you for incompetence if I were your imaginary client) or that you understand how software development works.

It's obvious to everyone here you're nothing but an AMD fanboy with delusions of grandeur. Before you decide to get into a debate like this again, I'd very strongly recommend learning what the hell you're talking about.
September 27, 2012 4:43:36 PM

Wow... just wow. Couldn't have said it better, especially this part:

Quote:
I may be an asshole, but you appear to be a liar and a fraud. When I speak, I do so from years of experience as a computer professional. When you speak, you do so from your ass.

Now I'd appreciate it if you'd stop pretending you were a professional hardware tester, professional systems engineer (lol @ BD in professional systems, I'd literally sue you for incompetence if I were your imaginary client) or that you understand how software development works.
September 27, 2012 5:51:45 PM

You guys gotta stop arguing. Heavy FPU performance hasn't been relevant in general computation for ten years. The only heavy floating-point code nowadays comes from synthetics and specialized workloads. Gaming is one of the heavier users of floating point, and it still only makes up around 5% of all operations done by the CPU. The 4 shared floating-point units in Bulldozer were powerful enough to replace the 6 inside the Phenoms with comparable performance. Floating point is clearly not a severe bottleneck in the CPU.
September 27, 2012 5:59:29 PM

and also @willard, you don't need to be so condescending. There are many places where people specially optimize code for the hardware, much more than just general code, as 90% of all CPUs are embedded. The compilers used are specially optimized for whatever architecture you want to run on; you can get 20-30 percent improvements in some code by switching compilers for different hardware.

I don't feel like getting into an argument, but the fact that you called him out like that while you yourself have many flawed points is quite unjustified. Some of your points are good, and I'm not going to argue about Intel being better, pretty much everyone agrees on that, but I do have to say you lack a lot of the knowledge in regards to CPU architecture and design to make the claims you are making.
September 27, 2012 11:19:37 PM

No, Willard is absolutely right on all the points he made. Maybe like me he's sick of Sarinaide's idiotic fanboy crap he posts in every thread about AMD. Maybe if he was more objective and didn't make stupid, fanboy-laced comments like those mentioned above, people wouldn't have a problem with him.
September 28, 2012 6:23:15 AM

This is where the "I am going to make it my mission to turn people against AMD" post of yours is needed, and you talk about objectivity.

The case in point: yesterday's reviews of the Trinity APUs. You will still go around recommending i3s by default, just because. Anyway, I think this just about puts paid to the necessity of keeping this going.
September 28, 2012 3:00:52 PM

You mean like how you parade around with your phony benchmarks and outrageous claims trying to convince everyone that Bulldozer isn't garbage? I am objective; like I said, I call it like I see it. I've been plenty critical of IB, especially using thermal paste instead of fluxless solder on the IHS and the stupid push-pin heatsink. You, though, go around with phony benchmarks and unproven credentials making claims out of your butt trying to defend your buy. Face it, you are what Willard said: a fanboy in denial. BTW, congratulations, once again your AMD fanboy fetish ruined another thread.
September 28, 2012 3:14:56 PM

esrever said:
The 4 shared floating-point units in Bulldozer were powerful enough to replace the 6 inside the Phenoms with comparable performance. Floating point is clearly not a severe bottleneck in the CPU.

Give it integer math and it acts like an 8 core chip. Give it floating point math and it acts like a 4 core chip. What do you call this if not a bottleneck as a result of the shared floating point unit? I'm not sure why you think its ability to barely keep pace with a two year old chip with 25% fewer cores clocked 10% lower is somehow a good thing.

Quote:
and also @willard, you don't need to be so condescending.

But then what will I do while I drink my coffee in the morning?

Quote:
There are many places where people specially optimize code for the hardware, much more than just general code, as 90% of all CPUs are embedded.

I'm sorry, I thought we were talking about desktop processors used in desktop computers running desktop applications. I fail to see how bringing up embedded devices is relevant. Of course you're going to optimize your brains out on embedded devices. It's a massively resource constrained system that usually needs to target exactly one architecture and frequently has real time processing requirements. This is basically the opposite of what a desktop computer is and requires a different strategy.

Quote:
The compilers used are specially optimized for whatever architecture you want to run on; you can get 20-30 percent improvements in some code by switching compilers for different hardware.

I was specifically referring to him claiming the programmers needed to go back and optimize the code for Bulldozer. Telling the compiler to optimize for it is as simple as setting a flag and hitting the compile button.

Manual optimization, like I said, is basically never done. Compiler optimizations tend to be architecture agnostic to prevent sacrificing performance or support on every other chip. BD specific optimizations are things like using FMA operations instead of two FP operations for the same result. BD happens to be the first AMD chip to support FMA, so a K8 chip couldn't even run the code.
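For what it's worth, the "flag" in question looks something like the sketch below. The option names are GCC's as best I recall (-march=bdver1 being the first-generation Bulldozer target), the loop is just a generic multiply-add candidate, and whether the compiler actually emits FMA here depends on version and settings, so take it as an illustration rather than a guarantee.

Code:
/* Same source, two compile lines; with a Bulldozer target the compiler may
 * contract a*b+c into a fused multiply-add. Flags are GCC-style examples:
 *
 *   gcc -O2                fma_demo.c -o fma_generic
 *   gcc -O2 -march=bdver1  fma_demo.c -o fma_bdver1
 *
 * Compare the generated code, e.g.: objdump -d fma_bdver1 | grep -i fmadd */
#include <stdio.h>

#define N 1024

/* dot product: each iteration is a multiply feeding an add, the classic
 * fused multiply-add candidate */
static double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

int main(void)
{
    static double a[N], b[N];
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = (N - i) * 0.25;
    }
    printf("dot = %f\n", dot(a, b, N));
    return 0;
}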

Quote:
I don't feel like getting into an argument, but the fact that you called him out like that while you yourself have many flawed points is quite unjustified.

I stand by every point I've made.

Quote:
Some of your points are good, and I'm not going to argue about Intel being better, pretty much everyone agrees on that, but I do have to say you lack a lot of the knowledge in regards to CPU architecture and design to make the claims you are making.

Really? I guess I was too busy building assemblers and virtual machines to learn anything about the architecture I was producing bytecode for and simulating. I suppose my years of experience writing and debugging x86 ASM left me with little to no understanding of x86. I probably didn't learn anything about optimizations either while working as a professional software developer (you know, with a degree from my years of training in these areas you say I have little knowledge of).

You know, I think I'm going to trust my own expertise over that of some guy who just comes in and says "no you're mean and wrong."

Quote:
This is where the "I am going to make it my mission to turn people against AMD" post of yours is needed, and you talk about objectivity.

I'm not trying to turn anyone against AMD, I'm trying to bash some sense into your lying fanboy head. My posts have been filled with facts and knowledge. Yours have been filled with lies and hand-waving dismissals of all evidence that doesn't agree with your opinion.

Quote:
The case in point: yesterday's reviews of the Trinity APUs. You will still go around recommending i3s by default, just because.

Yep, I sure will. It's not like I've been recommending the Llano APUs over the i3s already. Hell, it's not like I did it in the third post of this very thread.

Wait...

Quote:
Maybe like me he's sick of Sarinaide's idiotic fanboy crap he posts in every thread about AMD. Maybe if he was more objective and didn't make stupid, fanboy-laced comments like those mentioned above, people wouldn't have a problem with him.

Pretty much. I take offense to fanboys in general (fanboyism is the antithesis of logic), and I have no qualms about calling out fanboys that lie to justify their fandom.

If you love AMD fine, go buy AMD chips and be happy. Just don't make *** up to support your love and tell the world they're wrong for not accepting your lies.
September 28, 2012 3:16:40 PM

rds1220 said:
I am objective; like I said, I call it like I see it. I've been plenty critical of IB, especially using thermal paste instead of fluxless solder on the IHS and the stupid push-pin heatsink.

Oh my god that stupid heatsink. I've lost count of how many I've broken because the pins never want to all go in straight. I buy aftermarket coolers for every build just so I won't have to deal with that hunk of ***.

I'll admit, one area AMD has Intel beat in is their stock HSF. Now if only they could make something worth buying to put it on...
September 28, 2012 3:20:43 PM

willard said:
Quote:
Maybe like me he's sick of Sarinaide's idiotic fanboy crap he posts in every thread about AMD. Maybe if he was more objective and didn't make stupid, fanboy-laced comments like those mentioned above, people wouldn't have a problem with him.
Pretty much. I take offense to fanboys in general (fanboyism is the antithesis of logic), and I have no qualms about calling out fanboys that lie to justify their fandom.

If you love AMD fine, go buy AMD chips and be happy. Just don't make *** up to support your love and tell the world they're wrong for not accepting your lies.

I feel the same way and agree with you there.
September 28, 2012 3:30:37 PM

willard said:
Oh my god that stupid heatsink. I've lost count of how many I've broken because the pins never want to all go in straight. I buy aftermarket coolers for every build just so I won't have to deal with that hunk of ***.

I'll admit, one area AMD has Intel beat in is their stock HSF. Now if only they could make something worth buying to put it on...


I only broke one in all the builds I've done, but I've come close to breaking a lot more. I have a pair of really small, narrow needle-nose pliers that I use to bend the clear plastic part back into shape when the pins get crooked. For myself I always buy an aftermarket cooler, and the same goes for the gaming builds I do for people. The problem is that most of the builds I do are for everyday users, so something like a Hyper 212 would be an unnecessary added cost. Plus, when I do builds for everyday users I usually use a micro-ATX desktop case, so an aftermarket cooler wouldn't really fit anyway.
September 28, 2012 3:41:37 PM

I only do builds for friends, and have yet to do a non-gaming build. Basically every one of the computers I've built over the years is overclocked to a certain degree. The exception is my wife's computer, which uses an ancient Core 2 Duo and the stock HSF.
September 28, 2012 3:52:49 PM

Willard, I did post screenshots and FRAPS results not that long ago, along with Noob and others, regarding BF3 performance. Those results were dismissed because they didn't concur with x, y and z, yet the other users were getting similar results. I raised it with my liaison at AMD, and basically they told me that AMD doesn't engage in marketing campaigns of that nature because the costs are too high.

I have admitted that Intel wins the bench stakes overall, but merely stated that the difference is not automatic. Different countries have different costs, and an AMD chip may better fit a budget and give similar performance at a lower cost; in some countries AMD is more expensive than Intel, so it is not viable at any level. I also never disputed that there are synthetics which stress the architecture beyond its capabilities to the point where it looks bad, but there are plenty of synthetics that do represent the FX chips according to their pricing; the FX 81XX often comes up between the i5s and i7s, so in that regard, if a person is already on an AMD platform it makes sense to stay.

In gaming performance it is rather mixed, and it again depends on platform, country and cost. There are titles that show good results for FX parts, namely Metro 2033, Codemasters games, and certain RPGs like Dragon Age, but then there are bad results too, with the 980s and 1100Ts able to outgun the most expensive FX chip.

Power and heat: regardless of how one looks at it, if you compare to Intel then it's bad, but stock idle and load power on the 8150 was lower than on an 1100T; it's the overclocked power consumption that is ridiculous.

What the FX has done well is improve the memory controller, in that the FX with the right RAM, especially 1866, gives a smoother, more responsive system than the 1100T.

All in all, while the FX didn't comprehensively beat the Phenom II, notably the higher-end Phenoms, it still offers better all-round performance bar a few games. Relative to Intel, again it depends on needs; it is still very capable of doing many day-to-day tasks well, but a person running a word processor doesn't need a 125W TDP part, so in that regard a lower-level APU or Intel part is more sensible. Extreme gamers should take the most expensive setups affordable on the Intel side, while mainstream users can find what they need from either manufacturer.