
Is the AMD FX 8350 good for gaming - Page 5

April 9, 2013 5:37:30 PM

whyso said:

2. Very true. But if you are running a system 24/7 rendering/encoding, it will add up.


I hear this often. I would like to know how many people are actually running a 3570k, 3770k, or an FX-8350 at full load, 24/7, for years on end. Personally, I don't know anyone.
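As a back-of-the-envelope check on how much it would actually add up, assuming a ~40 W delta at full load and a $0.12/kWh rate (both round, assumed numbers):

```python
# Sketch: yearly cost of an extra ~40 W of draw running 24/7.
# The 40 W delta and the $0.12/kWh rate are assumptions, not measurements.
extra_watts = 40
hours_per_year = 24 * 365
price_per_kwh = 0.12  # USD, assumed

extra_kwh = extra_watts * hours_per_year / 1000
print(f"{extra_kwh:.0f} kWh/year, roughly ${extra_kwh * price_per_kwh:.2f}/year")
```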
April 9, 2013 6:18:40 PM

8350rocks said:
Your assumptions are all false:

1.) Intel build quality is poorer; their chips are not SOI...the reason they cost more is the onboard graphics that no one uses.

2.) The difference in power consumption over the course of a year is equivalent to turning on an additional 40W light bulb in your home.

3.) The gaming myth has been debunked already...games like Crysis 3, Planetside 2, Bioshock Infinite, Metro 2033, Tomb Raider, and others are all within the margin of error between the i5-3570k and the FX 8350, and the FX 8350 even beats the i7-3770k in some games. Skyrim is the only outlier, so don't bother citing it as an example...

4.) For the extra $130 difference between the FX8350 and i7-3770k I can buy an H100i cooling system and still come out cheaper than the i7-3770k and its better stock cooler.

5.) If Intel had superior build quality, why does AMD hold EVERY world record for overclocking, where build quality really comes directly into play? They hold them by 1+ GHz, by the way (the highest record is 8.76 GHz, where Intel is at 7.18 GHz), not some trivial margin of 100 MHz or so. AMD also holds the world record for the highest overclock with all cores active (8 cores on the FX8350 @ 8.176 GHz).


I know all of that (though most of it is false). And yes, they do have a world record for clock speed, but that isn't what matters most. What matters is the code, which is where Intel excels, and the physical build quality. And Intel onboard video does actually come into play. Most people don't know this, but Intel's onboard graphics assists in any way possible to make the computer run faster. For example, Windows Explorer: it helps with Windows Aero and allows the GPU to not have to focus on it as much.

The other thing is that the power-consumption point is false. Many studies have shown that Intel's power consumption is lower. The reasons are the physical build and the coding.

The code is built more efficiently and can run processes faster than anything else because of the way it's executed, so it doesn't need as much power to do it. So cool, you hold a world record for clock speed, but the factor that comes into play is a test of actually executing operations, e.g. Super Pi, which calculates up to 32 (or 64) million digits of pi in multiple loops. Then tell me who wins.

Also, the 8 cores are better for video editing, which is what they're made for. Intel Hyper-Threading is what helps games. It's actually more efficient than 8 separate cores running individually. There are a few games out there that don't benefit from it (ArmA), but you can still just shut it off.

The physical point is false too. Yeah, there were a couple of downgrades, but still, the overall quality of the physical components is much better. This is also where heat comes into play. From my own study, I've found that AMD (and this is probably well known) tends to heat up a lot. Many studies have shown that AMD chips draw a score of watts or more extra and also generate more heat. One counterpoint: my friend (and I'm getting the same) has an ASUS SABERTOOTH motherboard with thermal plating and an i7-3770k with the stock fan, and running BF3 at max settings the fan never went above half speed and the chip went to 1°C, which is still dangerous but absolutely amazing.

With games, a CPU can never run a game on its own, and it depends on what kind of GPU you get. Now of course AMD would obviously work best with AMD, because they make CPUs and GPUs that can easily function with each other, which honestly is an unfair advantage. However, if you, say, ran it with an Nvidia card, it would be fairer. Then you have the games that best fit certain GPUs, and then the motherboards with certain specs, etc. Simply put, there are MANY factors that can affect the results. So I did some research and found that, yes, AMD did do Great (notice the capital G) in many games; however, in comparison, most games run better on Nvidia cards (which is where this factor comes in). Furthermore, Intel CPUs work better with Nvidia cards than AMD does with them, and even an Intel CPU can do great with an AMD GPU. And after my studies (again), Intel came in first, winning in most games (about 80%, and these are the biggest games on the market). HOWEVER, AMD did put up a great fight, broke out and did fantastic in multiple categories, and came in runner-up behind Intel in others.

With that, the Intel and Nvidia setup actually outran most of the all-AMD setups (CPU and GPU) and proves to be the better match. Now, yes, it does cost more money, but that physical aspect comes into play: Nvidia and Intel are much better with their physical components than AMD and deserve the extra money.

EDIT: And please do not TL;DR me. I'm putting a lot of valuable information into this.

EDIT 2: I'm getting off for tonight, can't wait to see what "information" you'll have for me tomorrow. :) 
April 9, 2013 9:23:06 PM

1.) Intel uses bulk wafers...SOI > bulk...they only manage to make it work by using their proprietary tri-gate process.

This should explain the difference:

http://en.wikipedia.org/wiki/Silicon_on_insulator

AMD's wafers and internal components ARE physically higher quality than Intel's. That's why AMD products can take more voltage and OC higher. Overclocking ability is a direct indicator of build quality.

2.) Temps have nothing to do with quality...it's entirely about voltage and resistance. Ohm's law says that the more energy passes through a circuit with resistance, the more heat is generated, in proportion to the energy transferred over the circuit and the length of the path.

You can read about Ohm's law here:

http://en.wikipedia.org/wiki/Ohms_law
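A quick sketch of the relation, with arbitrary numbers (P = V·I, and with V = I·R that becomes P = I²·R, so heat scales with the square of the current for a given resistance):

```python
# Power dissipated as heat in a resistive element: P = I^2 * R.
# Values here are arbitrary illustrations, not CPU measurements.
def dissipated_power(current_a: float, resistance_ohm: float) -> float:
    return current_a ** 2 * resistance_ohm

for amps in (50.0, 100.0):
    print(f"{amps:5.1f} A through 0.01 ohm -> {dissipated_power(amps, 0.01):6.1f} W")
```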

3.) Coding and build quality have nothing to do with power consumption; it has everything to do with the chip's voltage operating range. If a chip is designed to operate at 125W, then it consumes more power than a chip that operates at 95W. It's as simple as that. Many Intel chips have lower wattage ratings and lower TDP.

4.) Coding has nothing to do with ILP, DLP, or TLP. Intel's protocol is designed to take serial (single-file) instructions and break them down quickly; this is called SIMD. AMD also has SIMD capability, but it excels at executing multiple streams of data and instructions at once...far better than Intel, as a matter of fact. This is called MIMD.

This explains SIMD:
http://en.wikipedia.org/wiki/SIMD

This explains MIMD:
http://en.wikipedia.org/wiki/MIMD

That explains the internal architecture differences between the 2 companies.
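To put the SIMD/MIMD distinction in concrete terms, here is a rough Python sketch; NumPy's vectorized ops and a process pool are only software stand-ins for the hardware concepts, not how either CPU actually implements them:

```python
# SIMD: one instruction applied to many data elements at once.
# MIMD: independent instruction streams working on independent data.
import numpy as np
from multiprocessing import Pool

def simd_style(a, b):
    # A single vectorized add over whole arrays (single instruction, multiple data).
    return a + b

def mimd_task(args):
    # Each worker runs its own code path on its own data
    # (multiple instructions, multiple data).
    name, data = args
    return name, sum(x * x for x in data)

if __name__ == "__main__":
    a = np.arange(8)
    b = np.arange(8)
    print("SIMD-style:", simd_style(a, b))

    tasks = [("evens", list(range(0, 10, 2))), ("odds", list(range(1, 10, 2)))]
    with Pool(2) as pool:
        print("MIMD-style:", pool.map(mimd_task, tasks))
```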

5.) Games do not use Hyper-Threading...at all. There is literally zero benefit to HT in gaming. Meanwhile, AMD's integer cores can be used to run calculations in games. See, the problem is that games use too much of a single Intel core for a "virtual core" to be able to operate at any feasible rate, because "virtual cores" are a background process and cannot operate efficiently without tapping necessary resources that the 4 real cores are using. AMD does not have this issue, because the 8 real cores can be operated independently without tapping resources from another core. Two cores share a floating point unit, and though floating point calculations are shared with the GPU, the CPU will perform some of those calculations under heavy load in games like Crysis 3. That's why AMD benches so well on games like that. Also, a CPU cannot be at 1°C unless the ambient temperature is about 33-34°F, because it cannot be cooler than the ambient temperature unless you're using liquid nitrogen or liquid helium coolant. His temp sensor was wrong.

Info on HyperThreading:
http://en.wikipedia.org/wiki/Hyper-threading

Quote:
Overall the performance history of hyper-threading is mixed. As one commentary on high performance computing notes:


Hyper-Threading can improve the performance of some MPI applications, but not all. Depending on the cluster configuration and, most importantly, the nature of the application running on the cluster, performance gains can vary or even be negative. The next step is to use performance tools to understand what areas contribute to performance gains and what areas contribute to performance degradation.[12]


Bold face type is mine.

Here is a list of applications supporting HT:

http://www.tomshardware.co.uk/hyper-threading-core-i7-9...

Notice there are only about 10-12 games on the list? The rest are all applications for business, productivity, or media. HT is no real advantage in gaming.
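If you want to test it on your own box rather than take the list's word for it, a crude sketch is below: time the same CPU-bound toy workload with a worker count equal to physical cores and then to logical (SMT) threads and compare. It assumes the third-party psutil package for the physical-core count, and the workload is a toy, so treat any result as indicative at best:

```python
# Crude SMT scaling check: same fixed batch of CPU-bound jobs, run with
# N physical cores vs N logical threads. psutil is assumed to be installed.
import time
from multiprocessing import Pool
import psutil

def burn(n: int) -> int:
    # Toy CPU-bound work; nothing like a real game or renderer.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int) -> float:
    jobs = [2_000_000] * 16  # fixed total work for every run
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    physical = psutil.cpu_count(logical=False) or 1
    logical = psutil.cpu_count(logical=True) or physical
    print(f"{physical} physical cores: {timed_run(physical):.2f} s")
    print(f"{logical} logical threads: {timed_run(logical):.2f} s")
```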

6.) AMD's "load sharing" software allows ANY CPU to take advantage of it...AMD, unlike Nvidia and Intel, does not participate in proprietary-only processes. That's why AMD is so much better on Linux than Intel/Nvidia. Plus, games run better based on who sponsors them and which developer's kit they use; some run better on AMD, others run better on Nvidia.

This talks about AMDs App Acceleration:
http://www.amd.com/us/products/technologies/amd-app/Pag...

7.) Your "benchmarks" were not really well executed, and which AMD cards vs. which Nvidia cards? A GTX 680 is not fair to run against an HD 7870, even an XT. You have to look at GPU benchmarks to compare the "equality" of your systems. If I put an HD 7990 in my system and benched it against a GTX 660 Ti, of course I would win, even with an i7-3930 in the Intel system, because the GPU would bottleneck the CPU.

Your assumptions are wrong.
April 10, 2013 5:00:20 AM

GOM3RPLY3R said:

I know all of that (however most of which is false). And Yes they do have a world record for clock speed, but that actually doesn't even matter the most. The thing that matters is code, which is what Intel exceeds in, and physical build quality.


Precisely: the overclocking world record was achieved because of the better build quality of the AMD chip.

GOM3RPLY3R said:

And Intel onboard video does actually come into play. Most people do not know this, but the onboard Intel graphics actually assists in any way possible to make the computer run faster. For example, windows explorer. It helps with windows areo and allows the GPU to not have to focus on it as much.


I fail to see your point, for several reasons. Intel graphics is bad for most gaming titles. I am pretty sure that the OP will be buying a discrete card. AMD graphics (both integrated and discrete) also handle things such as Aero acceleration. Aero is not used when gaming at full screen. Aero is disabled in W8. Aero does not even exist in other OSs.

GOM3RPLY3R said:

The code is built more efficiently and can run processes faster than any other because of the way its executed, thus, it doesn't need as much power to do it. So cool you hold a world record for Clock Speed, but the factor that comes into play is the fact of test of actually executing operations. I.E. Super P.I which you can calculate up to 32 (or 64) million digits of pi, and it does it in multiple loops. Then tell me who will win then?


Super Pi uses 25% of a desktop i5/i7 chip, but only 12.5% of an eight-core FX chip.
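The arithmetic behind that, plus a toy single-threaded stand-in for Super Pi (the real program is closed source and uses a far better algorithm; the Leibniz loop here is illustration only):

```python
# A single-threaded benchmark occupies one core, so its "chip utilization"
# is 1/N on an N-core CPU. The pi loop is a toy stand-in for Super Pi.
def single_thread_utilization(cores: int) -> float:
    return 100.0 / cores

def leibniz_pi(terms: int) -> float:
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print("4-core i5/i7:", single_thread_utilization(4), "% of the chip busy")
print("8-core FX:   ", single_thread_utilization(8), "% of the chip busy")
print("toy pi estimate:", leibniz_pi(1_000_000))
```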

GOM3RPLY3R said:

Also with the 8 Cores, that's better for Video editing, which is what its made that way for. Intel Hyper Threading is what helps games. Its actually more efficient than 8 separate cores running individually.


HT can increase, decrease, or leave performance unchanged. There are examples where activating HT decreases the performance of some applications. Why? Because, with HT activated, a 4-core i7 performs like a 5-6 core chip, but it is not a real 5-6 core chip: there are only 4 physical cores, and the rest are virtual.

It does not scale like a chip with 8 physical cores. That is why the FX-8350 can be 30-70% faster than an i7-3770k in apps that load all 8 cores.
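The scaling argument can be put in numbers with Amdahl's law; the parallel fraction and the "effective core" figure for HT below are hypothetical, only to show why 4 cores plus SMT does not behave like 8 real cores:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), p = parallel fraction of the
# work, n = number of (effective) cores. The values below are hypothetical.
def amdahl_speedup(p: float, n: float) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assume 90% of the workload parallelizes
print("4 cores, no HT:         ", round(amdahl_speedup(p, 4.0), 2), "x")
print("4 cores + HT (~4.8 eff):", round(amdahl_speedup(p, 4.8), 2), "x")
print("8 real cores:           ", round(amdahl_speedup(p, 8.0), 2), "x")
```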

GOM3RPLY3R said:

The Physical is false also. Yeah there were a couple of downgrades, but still, the overall quality of the physical components is much better. This is also where the heat comes into play. From my study, I've found that AMD (and this is probably known) tends to heat up a lot. Many studies have shown that AMDs take up a score of watts or more and also generate more heat. Something that is counteract-ant to that is that my friend (and im getting that same) has an ASUS SABERTOOTH Mobo with thermal plating and his i7-3770k with the fan that came with it: and running BF3 max settings, the fan never went above half and it went to 1ºC, which is still dangerous but is absolutely amazing.


Precisely; it is the other way around. The better physical build is the reason why AMD's thermal tolerances are higher than Intel's. It can run hotter without problems.

GOM3RPLY3R said:

With the games, the CPUs can never run on their own with a game and it depends on what kind of GPU you get. Now of course, AMDs would obviously work the best because they make CPUs and GPUs that can easily function with each other, which honestly is an un-fair advantage. However, if you say ran it with and Nvidia card then it would be more fair.


How can you say this after telling us above about the supposed advantages of Intel graphics on Intel chips? And why would it be fairer to run AMD with Nvidia, when Nvidia GPUs are usually optimized for Intel CPUs? {*}


{*} I understand why they are collaborating so closely. Intel cannot compete with AMD on graphics and needs Nvidia. Nvidia cannot make X86 chips and needs Intel to compete with AMD...
April 10, 2013 5:58:55 AM

AMD's thermal tolerances are lower than Intel's. Tjmax for the 8350 is 90°C; Ivy Bridge is 100-105°C. If there is no discrete GPU present (which was exactly my point, because it will cost money), the i7 can use OpenCL acceleration while the FX cannot.



That's a sizable advantage without needing a discrete GPU. You also aren't considering that for strictly CPU tasks the HD 4000 is perfectly sufficient, while the 8350 will require a dGPU, which raises cost and power consumption (but only when a GPU is not really needed for the system and the HD 4000 is sufficient; if the work requires a GPU then the advantage is lost).

I suppose the question is how does trigate stand up to SOI/finfet? (I honestly have no idea).

Yes, Super Pi uses 25% of an Intel i5/i7 vs. 12.5% of an FX, but that's forgetting how much faster the i5/i7 is going to be. (If you still don't see how this is a poor comparison: run Super Pi on an 8-core Jaguar chip (the PS4) vs. the i7. It's still 25% vs. 12.5%, but the i5/i7 is going to blow the Jaguar chip out of the water, being more than two times as fast per core.)

Games do use Hyper-Threading. It's why an i3 is much, much better than a Pentium for gaming in multithreaded games, despite only a small increase in clock speed. It's why in games like Crysis 3 the i7 is a little better than the i5. Games don't use it as well as the modular FX architecture, though. That article is more than 2 years old.

AMD had a real chance with GCN in laptops (which is a little more efficient than Kepler in the smaller chips). (If I could have found a 7850M or 7870M I would have bought it right away.) The problem was that they had few design wins and their drivers were crap (Enduro issues with the 7970M for almost a year). Hopefully the 384-core 'Solar System' chips and Richland CrossFire will change this. Also, AMD's turbo in mobile is crap: Intel lets their CPUs draw enough power to turbo and throttles based on temperature, while AMD throttles based on power usage, so many times your chip is at only 55 degrees but still throttling.



April 10, 2013 7:54:41 AM

whyso said:
Amd thermal tolerances are lower than intel. Tjmax for 8350 is 90, ivy bridge is 100-105. If there is no discrete gpu present (which was exactly my point because it will cost money) the i7 can use openCL acceleration while the FX cannot.


AMD can take full advantage of AMD Radeon App Acceleration, which is a similar technology. Intel has not yet adapted their technology to take advantage of this process...so the benefit is considerably less for them. Though the opportunity is still available.


whyso said:
Thats a sizable advantage without the need for a discrete gpu. You also arn't considering that for strictly cpu tasks the hd4000 is perfectly sufficient while the 8350 will require an dgpu which will raise costs and power consumption (but only when a gpu is not really needed for the system and the hd4000 is sufficient- if the work requires a gpu then the advantage is lost).


Agreed...and if you're going to shell out $309 for the i7-3770k, you're not going to run the Intel HD 4000 iGPU. You'd be a fool to do so, as you would gain no graphical performance to match that chip's capability. I have yet to see an i7 build that runs Intel integrated graphics...thus, on their high-end chips, any advantage is nearly always lost. (So, if you're Intel, why bother to include it?)

whyso said:
I suppose the question is how does trigate stand up to SOI/finfet? (I honestly have no idea).


Intel has already stated they will be moving away from tri-gate on bulk to FinFET/SOI for any architecture beyond Haswell, because the quality of wafer they use is insufficient to support processes smaller than 14 nm.

whyso said:
Yes, super pi is 25% of an intel i5/i7 vs 12.5% of a FX but that forgetting how much faster the i5/i7 is going to be. (If you are still not seeing how this is a poor comparison running super pi on an 8 core jaguar chip-ps4 vs the i7, its still 25% vs 12.5% but the i5/i7 is going to blow the jaguar chip out of the water, being more than two times as fast [per core]).


There are several benchmark sites already moving away from Super Pi as a benchmark, because it is becoming a less and less useful indicator of real-world performance. Years ago it was great at showing single-thread performance; since that is becoming less relevant with each passing day, many no longer include this benchmark in their CPU reviews.

whyso said:
Games do use hyperthreading. Its why an i3 is much much better than a pentium for gaming despite only a small increase in clock speed in multithread games. It's why in games like crysis 3 the i7 is a little better than the i5. Games don't use it as well as the modular fx architecture though. That article is more than 2 years old.


Yes, it is dated, but I could not find a more recent list of HT supported software, so I took what I could find. There are some games that use it, but there are a great many more that do not, and there are even some that run drastically better with HT disabled...so, as shown in my quote above (from a senior intel employee, no less), even intel concedes that not all applications benefit from HT technology and in some cases it is a hindrance, rather than a boon.

whyso said:
AMD had a real chance with GCN in laptops (which is a little more efficient than kepler in the smaller chips). (If I could have found a 7850m or 7870m I would have bought it right away). The problem was that they had few design wins and their drivers were crap (enduro issues with the 7970m for almost a year). Hopefully the 384 cores 'solar system' chips and richland crossfire will change this. Also AMD turbo in mobile is crap, intel lets their cpus draw enough power to turbo and throttle based on temperature, amd throttles based on power usage, so many times your chip is at only 55 degrees but throttling.


This architecture will be changing drastically in the next few months; I agree that the Solar System chips and Richland will make a large impact here. I disagree with AMD updating this segment last, relative to the desktop market, but I can understand why they did it. They placed higher importance on maintaining their largest market-share segments over trying to conquer a new segment. It makes business sense, but I would have preferred to see them launch the mobile segment at the same time to take more of that market more quickly. However, with limited R&D money, you have to plug the biggest holes first, and I understand why they did it the way they did.



April 10, 2013 12:29:28 PM

whyso said:
Amd thermal tolerances are lower than intel. Tjmax for 8350 is 90, ivy bridge is 100-105.


AMD does not measure Tj but Tc (for several technical reasons) and obtains Tj values from an estimation formula which uses a different scaling than Intel's.

Moreover, those technical specifications are not a substitute for real tests. If Intel had better thermals, as you suggest, then it would break the overclocking world record with simpler cooling. The fact is that the record belongs to AMD.
April 10, 2013 2:15:15 PM

I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-...

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).
April 10, 2013 4:18:27 PM

whyso said:
I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-...

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).


No, it will be a PD-SOI process...from GloFo...the next step down is supposed to be 20nm PD-SOI as well.

The only commercial applications that would require i7-3770k muscle are rendering machines and workstations, which clearly would already have a workstation GPU to render.

For raw number crunching a business would save itself a ton of money and just use something like an i3, or have a server setup to use something like Opterons or POWERPC or Xeon CPUs that were only going to be used for those specific functions.
April 10, 2013 4:24:48 PM

whyso said:
I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).


What "temperature threshold"? And how is it measured/estimated? You cannot compare apples to oranges.

The situation is similar to Intel TDP vs AMD TDP. You cannot compare them directly because each company defines/measures TDPs differently.
April 10, 2013 5:30:23 PM

8350rocks said:
whyso said:
I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-...

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).


No, it will be a PD-SOI process...from GloFo...the next step down is supposed to be 20nm PD-SOI as well.

The only commercial applications that would require i7-3770k muscle are rendering machines and workstations, which clearly would already have a workstation GPU to render.

For raw number crunching a business would save itself a ton of money and just use something like an i3, or have a server setup to use something like Opterons or POWERPC or Xeon CPUs that were only going to be used for those specific functions.


EVERYWHERE I look I see 28 nm BULK.

I'm talking about fairly low budget but intensive number crunching (like in academic institutions where xeons may be too expensive and not required but you still need to run a lot of computationally sensitive data 24/7)
April 11, 2013 7:30:44 AM

whyso said:
8350rocks said:
whyso said:
I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-...

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).


No, it will be a PD-SOI process...from GloFo...the next step down is supposed to be 20nm PD-SOI as well.

The only commercial applications that would require i7-3770k muscle are rendering machines and workstations, which clearly would already have a workstation GPU to render.

For raw number crunching a business would save itself a ton of money and just use something like an i3, or have a server setup to use something like Opterons or POWERPC or Xeon CPUs that were only going to be used for those specific functions.


EVERYWHERE I look I see 28 nm BULK.

I'm talking about fairly low budget but intensive number crunching (like in academic institutions where xeons may be too expensive and not required but you still need to run a lot of computationally sensitive data 24/7)


Those places run the FX8350/8320 or something similar with a bargain bin GPU...it saves them more money than buying a i7-3770k.

Educational arenas, where money is tight, are not going to buy intel just because, they'll buy the most budget friendly solution that will do the task. AMD is far more budget friendly, even with a $50 GPU you're still saving $80 before you account for more expensive motherboards with intel product.

http://www.investorvillage.com/smbd.asp?mb=476&mid=1246...

http://news.softpedia.com/news/AMD-and-GlobalFoundries-...

Sorry, I was incorrect, it's not Partially Depleted Silicon on Insulator they're using...(PD-SOI) it's Fully Depleted Silicon on Insulator (FD-SOI). They already use it in their 28nm wafer. They have been since 2003 making mostly SOI wafers, because GloFo was AMD's Fab business and AMD has solely used SOI since 2003.

Intel still uses TriGate finfet on bulk wafers.
April 11, 2013 8:34:40 AM

The reason why low power is so important is because of heat, noise, and obviously electricity use. You can build an HTPC with a 35w intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD as I've read in Tom's hardware's 0 db PC build. It's possible, but just a lot easier to do it with an intel, and cooler too.

I agree that with a 3570k or 8350, power consumption doesn't matter TOO much, because if you get either one of those processors you're going to overclock and give that advantage away, and the difference amounts to a cup of coffee once a month. However, power consumption does give us an idea about a chip's efficiency, or the performance you get per watt. And this is where the big advantage lies. It says a lot about an Intel chip that Intel can do so much more with so much less power. How fast would an Intel chip be if it could use well over 100W? Very fast. But that's not what they're trying to do; it is what AMD is trying to do, though, and they still can't get up to par with Intel even with power consumption out of the picture.

I agree about the HD 4000 graphics; they hardly benefit at all, if any. They're a waste, and Intel should offer the 3570k and 3770k without the graphics for $50 less. If they did, they'd never sell another 3570k with an iGPU again, but they would sell more of them overall. They do make a 3350P with the iGPU disabled, although I think the iGPU is still on the chip, just disabled.

If anyone still thinks that an 8350 is completely equal to an Intel 3570k or 3770k in gaming, just take a look at Tom's new article that came out today, comparing Nvidia and AMD cards in SLI using an 8350 and a 3770k. Clearly both Nvidia and AMD cards run better on an Intel. Especially AMD's cards, which is surprising. But Nvidia cards run better than AMD cards on an 8350 because they demand less from the CPU. Very interesting indeed. You can clearly see the quality difference between the two chips. Is it worth the extra 20 bucks for the 3570k? IMO, yes it is. I know the article used a 3770k, but there is really no difference between them in the games they tested.

Read the article: if you want 10% more performance in pretty much every game tested, then you'll want the best. If you couldn't care less about 10 FPS out of 100 FPS, then you'll be rewarded with a better price.

And Super Pi is a very important benchmark, as it demonstrates single-threaded capability, which is a very large slice of the pie. This benchmark is important because it is indicative of the performance you can expect from programs like LAME and iTunes, single-threaded games, and so much other software; much of Windows 7 also uses a single core. To say single-threaded programs are being phased out is true, but to say single-core performance isn't important at all is completely untrue. It's still very important, and it will remain that way for years; although it is being phased out, it will always matter to some extent.
April 11, 2013 12:04:54 PM

ericjohn004 said:
The reason why low power is so important is because of heat, noise, and obviously electricity use. You can build an HTPC with a 35w intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD as I've read in Tom's hardware's 0 db PC build. It's possible, but just a lot easier to do it with an intel, and cooler too.


I know lots of fanless motherboards using AMD chips: C60, C70, E240, E350... Some time ago the E350 was very popular among HTPC builders. I haven't checked lately.

There are lots of fanless motherboards using the Intel Atoms: 425, 525, 2500... but those cannot be used for multimedia or HTPC. There is an MSI fanless motherboard that includes an Intel Celeron 847, but I know little about it.

AMD also showed how you can build a fanless system using a powerful Trinity A10 chip:

http://news.softpedia.com/news/AMD-Demonstrates-Fanless...

Finally, there are people running FX-8350 with passive cooling.
April 11, 2013 12:18:21 PM

ericjohn004 said:
The reason why low power is so important is because of heat, noise, and obviously electricity use. You can build an HTPC with a 35w intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD as I've read in Tom's hardware's 0 db PC build. It's possible, but just a lot easier to do it with an intel, and cooler too.


Well, if you spring for a well insulated case, you have less issue with noise, though building a 0 db HTPC is an interesting idea...the APU solutions AMD offer would be quite a bit of bang for your buck with some good capability if you need it in a pinch, and they're not known to run particularly hot...

ericjohn004 said:
I agree that with a 3570k or 8350, power consumption doesn't matter TOO much because if you get either one of those processors, your going to overclock and give that advantage away and the difference amount to a cup of coffee once a month. However, power consumption does give us an idea about a chips efficiency or the amount of power you have per watt. And this is where the big advantage lies. It says a lot about an Intel chip that Intel can do so much more, with so much less power. How fast would an Intel chip be if it could use well over 100w? Very fast. But that's not what they're trying to do, it is what AMD is trying to do though, and they still can't get up to par with an Intel even with power consumption out of the picture.


Actually, what a chip is designed to consume has little bearing on speed. There are 95W AMD chips, like the FX-6300, which run comparable races against the mid-range 84W Intels, especially once you get into overclocking. Power consumption between the two amounts to about the equivalent of turning on an extra 40W light bulb in your home (you won't notice the difference...). AMD's current architecture is designed around a certain power requirement, but that doesn't mean Intel would run faster if their chips consumed more power; it just means they would consume more power.

ericjohn004 said:
I agree with the HD4000 graphics, they hardly benefit at all, if any. They're a waste, and Intel should offer the 3570k and 3770k with out the graphics and for 50$ cheaper. If they would, they'd never sell another 3570k with an iGPU again but they would sell more of them. Although they do make a 3350p with the iGPU disabled although I still think it's on the chip, just disabled.


That little trinket on the die also allows Intel to claim their i7s have more transistors than the FX-8xxx series...in reality, the difference isn't much. I am not sure why they did it the way they did, but it was clearly a poor choice. I suppose manufacturing them all the same way saves them money, and it likely does, but it also drives up the cost of the product, since they only offer it one way.

ericjohn004 said:
If anyone still thinks that an 8350 is completely equal to an Intel 3570k or 3770k in gaming, just take a look at Tom's new article that came out today. Comparing Nvidia and AMD cards in SLI, using an 8350 and 3770k. Clearly Nvidia and AMD cards run better on an Intel. Especially AMD's cards, which is surprising. But Nvidia cards run better on an 8350 because they require less power from the CPU. Very interesting indeed. But you can clearly see the quality difference in between the two chips. Is it worth the extra 20 bucks for the 3570k? IMO, yes it is. I know the article had a 3770k but their is really no difference between them in the games they tested.


I read the article this morning, as a matter of fact, and the 3770k was only marginally faster in most tests, while in some the FX8350 was marginally faster...in the first benchmark they ran, the AMD was faster, as I recall (I think it was BF3...?). But it only proves that the 10% margin of error holds (10% of 100 FPS = 10 FPS difference). So, to reasonably conclude that one had a dramatic advantage, you'd need to show a greater than 10% difference, and even then you could only say definitively that one was marginally better, above the margin of error, at that specific task.
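The margin-of-error point in plain arithmetic; the 10% threshold is the assumption I am working with here, not a statistical property of those benches:

```python
# Treat a benchmark delta as meaningful only if it clears an assumed 10%
# margin-of-error threshold (the threshold itself is an assumption).
def meaningful_difference(fps_a: float, fps_b: float, margin: float = 0.10) -> bool:
    return abs(fps_a - fps_b) > margin * max(fps_a, fps_b)

print(meaningful_difference(100, 108))  # False: inside the assumed 10% margin
print(meaningful_difference(100, 115))  # True: outside it
```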

ericjohn004 said:
Read the article, if you want 10% more performance in pretty much every game tested, then you'll want the best. If you could care less about 10FPS out of 100FPS, then you'll be rewarded with a better price.


I addressed this above.

ericjohn004 said:
And superPi is a very important benchmark, as it demonstrates single threaded capability, which is a very large slice of the pie. The reason why this benchmarks is important is because it is indicative of the performance you can expect to get out of programs like lame, and iTunes, single threaded games, and so many other pieces of software and also much of Windows 7 uses a single core. To say single threaded programs are being phased out is true, but to say a single cores performance isn't important at all is completely untrue. It's still very important, and it will remain this way for years, although it is being phased out it will always be important to some extent.


If all you did was download stuff on itunes and play flash games on facebook or something...sure, SuperPi is your best benchmark. Unfortunately...in this day and age...multi-tasking, gaming, multimedia, rendering, encoding, ripping DVD/BRD has made a single thread benchmark less and less relevant. As I said earlier, quite a few websites have stopped even bothering to run SuperPi, because while it is good at what it does...it doesn't cover many angles at all.

April 13, 2013 9:12:23 PM

juanrga said:
whyso said:
If comparing overclocking we are comparing max overclocking (or max safe overclocking). If one is a better overclocker than that should be taken into account. This is why amd gpus are generally superior to nvidia gpus. They have more overclocking headroom and scale better with overclocking. Generally we can boost a gcn gpu by a greater % than a kepler gpu at max overclocks. There is nothing biased against this at all.


You did not get the point. It was not about comparing the overclocking capabilities of both chips, but about him pretending that you can only compare Intel to AMD at the same clocks: e.g. an AMD at stock clocks vs. an overclocked Intel, which is not only ridiculous but also biased.

whyso said:

The FX was only working one or two cores at that speed. It was running at over 2 volts. It was using liquid helium. That's hardly representative of consumer use.


No. I was referring to the eight-core record

http://news.softpedia.com/news/All-8-Cores-of-the-AMD-F...

And did you read what I said about the relation between a world-record and what one can obtain at home?



The last thing I'll say on the subject: I wasn't "pretending" the chips needed to be compared at the same relative clock speed. All I'm saying is that it would make sense to measure how both CPUs perform at the same clock speeds, given that in most cases 4.8-5.0 GHz seems to be the top-end overclock for both CPUs on air. Yes, I know there are setups running at 5.2+ GHz with an FX, but this is not the norm, and the power draw and heat are tremendous.

That's all I was saying.
April 14, 2013 9:33:10 AM

juanrga, I have to say, you are really taking this to another level. I can really tell you are such a fanboy.

1. The world record for clock speed doesn't matter. And yes, it can withstand higher temps BECAUSE of the build quality. You're right on that, but it's more because the build quality is focused on heat rather than performance. And yeah, you got a higher clock speed, but it's the code that matters, and I think you're disregarding that fact.

2. About Aero: yeah, it's disabled in full screen, but what about windowed? I know not everyone runs in windowed mode, but for all the games I play it usually increases the FPS by anywhere from 10 to 80. And yeah, that "won't matter" when I get my ass-kicking PC, but it's just natural for me to play that way; I get annoyed in full screen. So your fact about Windows Aero isn't fully accurate. And what dumbass would get Windows 8 for a gaming PC? It's terrible for it. On Intel being bad for most gaming titles, you are right; however, when Intel runs its graphics with a discrete GPU in, it tries to pick up anything that isn't being handled by the main GPU, i.e. all the background processes. This goes back to my point about windowed mode. Sometimes I run a game on one screen and want to check Facebook or a map or something on another. It really comes in handy then.

3. Percentages, again, don't really matter. It's again about the code. I can run Super Pi and force it onto all cores. Then, in a comparison where both run at the same clock (4.0 GHz for example), an i7-3770k can still get about the same results as the 8-core FX chip, simply because of the way it's coded and the physical build.

4. What apps use 8 cores? Almost NONE. The extra cores at this point in time don't matter whatsoever, unless you use them to run cool benchmarks and do cool 8000-step math equations. On Hyper-Threading, you made the point that, activated, it can decrease performance, and that the 8-separate-core combo is better. I have to say: 1. You can just disable it. 2. Some applications actually work better with Intel HT on 4 cores than on AMD's 8 separate cores. And 3. Yeah, it'll be faster with apps that use all eight, but unless you're an "Extreme Computer Scientist," I don't see that advantage in my ArmA game.

5. Back to that physical build stuff. Yeah, it can run hotter without problems, but think about how the workload is processed. Your AMD will probably run at 4.5 GHz, and mine will run the same. And yeah, you have more cores, but the problem with that is you have more of a heat problem. So AMD uses less expensive, more heat-resistant materials that won't come close to the performance you'd get with regular materials. And thus you need the "superior 8-core power" just to get performance similar to the Intel processors.

6. On your usage question, I can say this: AMD's CPUs and GPUs are more suited to each other, so the workflow is more committed and can be processed in a 'better' manner, and it would be fairer in a testing environment. It would be fairest to run AMD with AMD, an AMD GPU with Intel, Intel with Nvidia, and an AMD processor with Nvidia. And yeah, they're more optimized, but they're not made for each other (AMD and AMD), and that's where the AMD-plus-AMD vs. Intel-plus-Nvidia combo comes into play.

Putting this together with what ericjohn said, what do you have to say to your AMD fans?

ericjohn004 said:
The reason why low power is so important is because of heat, noise, and obviously electricity use. You can build an HTPC with a 35w intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD as I've read in Tom's hardware's 0 db PC build. It's possible, but just a lot easier to do it with an intel, and cooler too.

I agree that with a 3570k or 8350, power consumption doesn't matter TOO much because if you get either one of those processors, your going to overclock and give that advantage away and the difference amount to a cup of coffee once a month. However, power consumption does give us an idea about a chips efficiency or the amount of power you have per watt. And this is where the big advantage lies. It says a lot about an Intel chip that Intel can do so much more, with so much less power. How fast would an Intel chip be if it could use well over 100w? Very fast. But that's not what they're trying to do, it is what AMD is trying to do though, and they still can't get up to par with an Intel even with power consumption out of the picture.

I agree with the HD4000 graphics, they hardly benefit at all, if any. They're a waste, and Intel should offer the 3570k and 3770k with out the graphics and for 50$ cheaper. If they would, they'd never sell another 3570k with an iGPU again but they would sell more of them. Although they do make a 3350p with the iGPU disabled although I still think it's on the chip, just disabled.

If anyone still thinks that an 8350 is completely equal to an Intel 3570k or 3770k in gaming, just take a look at Tom's new article that came out today. Comparing Nvidia and AMD cards in SLI, using an 8350 and 3770k. Clearly Nvidia and AMD cards run better on an Intel. Especially AMD's cards, which is surprising. But Nvidia cards run better on an 8350 because they require less power from the CPU. Very interesting indeed. But you can clearly see the quality difference in between the two chips. Is it worth the extra 20 bucks for the 3570k? IMO, yes it is. I know the article had a 3770k but their is really no difference between them in the games they tested.

Read the article, if you want 10% more performance in pretty much every game tested, then you'll want the best. If you could care less about 10FPS out of 100FPS, then you'll be rewarded with a better price.

And superPi is a very important benchmark, as it demonstrates single threaded capability, which is a very large slice of the pie. The reason why this benchmarks is important is because it is indicative of the performance you can expect to get out of programs like lame, and iTunes, single threaded games, and so many other pieces of software and also much of Windows 7 uses a single core. To say single threaded programs are being phased out is true, but to say a single cores performance isn't important at all is completely untrue. It's still very important, and it will remain this way for years, although it is being phased out it will always be important to some extent.


***And that last footnote that you made: since AMD makes Intel and Nvidia rely on each other, it's no different from the AMD CPU and GPU relying on each other. You basically just answered your own question.***
April 14, 2013 9:55:41 AM

8350rocks said:
ericjohn004 said:
And superPi is a very important benchmark, as it demonstrates single threaded capability, which is a very large slice of the pie. The reason why this benchmarks is important is because it is indicative of the performance you can expect to get out of programs like lame, and iTunes, single threaded games, and so many other pieces of software and also much of Windows 7 uses a single core. To say single threaded programs are being phased out is true, but to say a single cores performance isn't important at all is completely untrue. It's still very important, and it will remain this way for years, although it is being phased out it will always be important to some extent.


If all you did was download stuff on itunes and play flash games on facebook or something...sure, SuperPi is your best benchmark. Unfortunately...in this day and age...multi-tasking, gaming, multimedia, rendering, encoding, ripping DVD/BRD has made a single thread benchmark less and less relevant. As I said earlier, quite a few websites have stopped even bothering to run SuperPi, because while it is good at what it does...it doesn't cover many angles at all.



I agree with what you're saying, 8350, but you're just stating the obvious for no reason. There are plenty of sources out there that show comparisons with 3D rendering/gaming/etc. He was basically just agreeing with what I said earlier.
April 14, 2013 12:54:58 PM

8350rocks said:
whyso said:
8350rocks said:
whyso said:
I believe that the 28nm process used for SR and Kaveri will be bulk but I'm not sure.
http://www.anandtech.com/show/6201/amd-details-its-3rd-...

Depends really, there are PLENTY of applications that need pure CPU grunt and really don't care about graphics other than something that can do a decent job of displaying to the screen.

I'm not sure about the tj and thermals but with notebooks AMD has a much lower temperature threshold than intel chips (which will run at 95+ degrees).


No, it will be a PD-SOI process...from GloFo...the next step down is supposed to be 20nm PD-SOI as well.

The only commercial applications that would require i7-3770k muscle are rendering machines and workstations, which clearly would already have a workstation GPU to render.

For raw number crunching a business would save itself a ton of money and just use something like an i3, or have a server setup to use something like Opterons or POWERPC or Xeon CPUs that were only going to be used for those specific functions.


EVERYWHERE I look I see 28 nm BULK.

I'm talking about fairly low budget but intensive number crunching (like in academic institutions where xeons may be too expensive and not required but you still need to run a lot of computationally sensitive data 24/7)


Those places run the FX8350/8320 or something similar with a bargain bin GPU...it saves them more money than buying a i7-3770k.

Educational arenas, where money is tight, are not going to buy intel just because, they'll buy the most budget friendly solution that will do the task. AMD is far more budget friendly, even with a $50 GPU you're still saving $80 before you account for more expensive motherboards with intel product.

http://www.investorvillage.com/smbd.asp?mb=476&mid=1246...

http://news.softpedia.com/news/AMD-and-GlobalFoundries-...

Sorry, I was incorrect, it's not Partially Depleted Silicon on Insulator they're using...(PD-SOI) it's Fully Depleted Silicon on Insulator (FD-SOI). They already use it in their 28nm wafer. They have been since 2003 making mostly SOI wafers, because GloFo was AMD's Fab business and AMD has solely used SOI since 2003.

Intel still uses TriGate finfet on bulk wafers.


I don't know. At my university, every computer I can find (with the exception of really old Athlon 64 machines) is using an Intel chip, from Pentium to Core 2 Duo to i7-2600. These are the general-use computers. The academic/professional-use computers all use Intel too.

Really, for most places a dGPU is not needed. That is a $50 saving toward an Intel system.
April 14, 2013 6:30:12 PM

GOM3RPLY3R said:
juanrga, I have to say, you are really taking this to another level. I can really tell you are such a Fan-Boy.


Calling others names will not eliminate the facts... which you continue to ignore.

GOM3RPLY3R said:

1. The World Record for clock speed doesn't matter. And yes it can withstand higher temps BECAUSE of the build quality. Your right on that, but its more because of a build quality more focused on heat rather than performance. And yeah we got a higher Clock Speed, but its the code that matters, and I think your disregarding that fact.


I (and other people too) already explained what the world record shows. It is about build quality. Intel does not hold the world record, because its build quality is poorer.

GOM3RPLY3R said:

2. About Aero, yeah its disabled in Full Screen, but what about windowed? I know everyone doesn't just run in windowed, however, for all the games I play, It usually increases the FPS from about 10 to anywhere up to 80. And yeah that "wont matter" when I get my ass-kicking PC, but its just natural for me to play in that, I get annoyed in Full Screen. So your fact about Windows Aero isn't fully accurate. And what Dumb-Ass would get Windows 8 for a Gaming PC? It's terrible for it. With the Intel being bad for most gaming titles, you are right, however, when Intel runs its Graphics with GPU in, it tries to focus on anything that isnt being focused on by the main GPU. So i.e all the background processes. This goes back to me with windowed. Sometimes, I may run on one screen and want to check Facebook or a Map or something on another. It really comes in handy then.


Are you trying to say that Intel is better for gaming because you play games in a window and get Aero acceleration from the HD graphics? When Aero is just fancy effects for the desktop?

And then you try to support your 'special' opinion by insulting the millions of users who game on W8.

And finally you add how wonderfully the HD graphics work alongside a discrete GPU, when there are hundreds of people disabling it because it generates problems when gaming. WOW!

GOM3RPLY3R said:

3. Percentages, again, don't really matter. It's again with the code. I can run Super Pi and force it to go on all Cores. Then in a comparison between the two that both are to run at the same (4.0 for example), an i7-3770k still can get about the same results as the 8 Core FX Chip, simply because of the way its Coded and Physical Build.


Let me assume that you force a single-threaded program onto four cores. Even accepting that, you would only be using half (50%) of the FX chip, but 100% of the i7 chip. As you see, again, it has little to do with "coding" and "physical build".

GOM3RPLY3R said:

4. What apps run 8 cores? Almost NONE. The extra cores at this point in time do not matter what soever, unless you use it to run cool bench marking and trying to do cool 8000 step math equations. With Hyper-Threading, you made the point that activated it can decrease performance, and stating that the 8 Separate Core Combo is better. I have to say, 1. You can just disable it. 2. Some applications actually work better with Intel HT on 4 cores than separate AMD 8 cores. And 3. Yeah It'll be faster with apps that run all eight, but unless your an "Extreme Computer Scientist," I don't see that advantage with my ArmA Game.


There are several applications that use eight cores. Most are professional applications and some are for scientists (I don't know what you mean by "Extreme Computer Scientist").

Moreover, you again miss the point that some people multitask, i.e. they run several applications at once. Therefore it does not matter if a single application cannot use the eight cores, because several applications together can. In fact, FX users know that their chip works better than the i7 at multitasking (Intel has flag problems).

A new generation of eight-core games is being developed because the next consoles will be eight-core designs. You continue denying this fact as well.

GOM3RPLY3R said:

5. Back to that Physical Build stuff. Yeah it can run hotter without problems, but think about the process of the workload. Your AMD will probably run at 4.5 Ghz, and mine will run the same. And yeah you have more Cores, but the problem with that is, you have more of a heat problem. So AMD will use less expensive and more heat resistant materials that won't come close to the performance if you had regular materials. And thus you need the "Superior 8 Core Power" so you can get a similar performance as the Intel Processors.


You continue denying facts. It is not AMD who is using "less expensive" materials, but Intel, who uses poorer materials and builds to save some bucks. That is why they cannot reach AMD's clocks.

Now just answer this for me: if Intel chips are so incredibly superior in terms of performance, why does Intel need to cheat with its compiler and generate biased benchmarks? And why do the fastest supercomputers in the world use AMD chips?

GOM3RPLY3R said:

6. With your usage question, I can say that. AMD's CPUs and GPUs are more suited towards each other so the workflow is more committed an can be processed in a 'better' manner. And it would be more fair in a testing environment. It would be most fair to run AMD with AMD, AMD GPU with Intel, Intel with Nvidia, and AMD processor with Nvidia. And yeah their more optimized, but there not made for each other (AMD and AMD), and that's where the AMD and AMD vs. Intel and Nvidia Combo comes into play.


Did you read the recent Tom's Hardware review showing how Nvidia can run better on AMD than AMD on AMD?
April 14, 2013 9:38:58 PM

GOM3RPLY3R said:
juanrga, I have to say, you are really taking this to another level. I can really tell you are such a Fan-Boy.

1. The World Record for clock speed doesn't matter. And yes it can withstand higher temps BECAUSE of the build quality. Your right on that, but its more because of a build quality more focused on heat rather than performance. And yeah we got a higher Clock Speed, but its the code that matters, and I think your disregarding that fact.


Oh, but it DOES matter. See, Intel uses a tri-gate-on-bulk process to try to get the most from the bulk wafer. Bulk is the cheapest wafer you can buy, the lowest quality bin, and Intel uses tri-gate to try to squeeze the most out of it. Funny that they charge the most for the cheapest silicon, huh? AMD uses bulk for non-critical parts that don't require high performance; otherwise they use SOI, Silicon On Insulator, which means the silicon has an extra insulating layer in it to keep it heat resistant, making it perform better...for longer.

Quote:
2. About Aero, yeah its disabled in Full Screen, but what about windowed? I know everyone doesn't just run in windowed, however, for all the games I play, It usually increases the FPS from about 10 to anywhere up to 80. And yeah that "wont matter" when I get my ass-kicking PC, but its just natural for me to play in that, I get annoyed in Full Screen. So your fact about Windows Aero isn't fully accurate. And what Dumb-Ass would get Windows 8 for a Gaming PC? It's terrible for it. With the Intel being bad for most gaming titles, you are right, however, when Intel runs its Graphics with GPU in, it tries to focus on anything that isnt being focused on by the main GPU. So i.e all the background processes. This goes back to me with windowed. Sometimes, I may run on one screen and want to check Facebook or a Map or something on another. It really comes in handy then.


Intel does not currently support onboard graphics plus a discrete GPU; it causes a plethora of issues with the hardware, and they have outright stated that onboard graphics should be disabled if you're using a discrete GPU. You've received bad information somewhere.

Quote:
3. Percentages, again, don't really matter. It's again with the code. I can run Super Pi and force it to go on all Cores. Then in a comparison between the two that both are to run at the same (4.0 for example), an i7-3770k still can get about the same results as the 8 Core FX Chip, simply because of the way its Coded and Physical Build.


The physical build is inferior. PERIOD. Anyone who knows ANYTHING about composition of materials will not argue that intel has a superior quality wafer...it's the most easily disproved claim you make. You keep talking about "how it's coded"...you do realize coding is not a part of a CPU right? It's programming language. If you're talking about protocols...then intel is designed for single threaded applications, I have reviewed this multiple times in this thread alone. Also, SuperPi is a single core benchmark...it cannot be "forced" onto more cores...it is designed specifically to test single core performance. Bad information. The only reason the i7-3770k competes with the FX8350 in many categories is frankly, because it is that good at single threaded apps, and this allows it to overcompensate in highly threaded apps.
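
To illustrate the point about SuperPi being inherently single-core: a strictly serial pi calculation like the sketch below only ever loads one core, no matter how many the chip has, which is exactly why that kind of test reads on single-core speed. This is a minimal Python sketch with an arbitrary, assumed digit count, not the actual SuperPi algorithm.

Code:
import time

def arctan_inv(x, scale):
    # arctan(1/x) * scale using integer arithmetic; every term depends on the
    # previous one, so the work is strictly serial and runs on a single core.
    total = term = scale // x
    x2, k, sign = x * x, 3, -1
    while term:
        term //= x2
        total += sign * (term // k)
        k += 2
        sign = -sign
    return total

def pi_digits(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239), with guard digits.
    scale = 10 ** (n + 10)
    return (16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)) // 10 ** 10

start = time.time()
pi_digits(10000)  # assumed digit count, just big enough to take measurable time
print("10,000 digits on one core took {:.2f}s".format(time.time() - start))

Run that on two chips at the same clock and you are comparing exactly the single-threaded throughput this part of the argument is about.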

Quote:
4. What apps run 8 cores? Almost NONE. The extra cores at this point in time do not matter whatsoever, unless you use them to run cool benchmarking and cool 8000-step math equations. With Hyper-Threading, you made the point that, activated, it can decrease performance, and that the 8-separate-core combo is better. I have to say: 1. You can just disable it. 2. Some applications actually work better with Intel HT on 4 cores than on 8 separate AMD cores. And 3. Yeah, it'll be faster with apps that run all eight, but unless you're an "Extreme Computer Scientist," I don't see that advantage with my ArmA game.


Actually, games like Crysis 3 that "support" HT actually run better without it on...google it and look at the YouTube videos...the facts are there. HT is a way to rook people out of more money for what is basically software trying to do the work of a core in a background operation...all the while robbing the hardware of performance on the foreground operation. HyperThreading is an industry-wide inside joke...Intel has nearly admitted as much openly.
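
For anyone who wants to check the HT claims on their own machine rather than in YouTube videos: time a fixed amount of CPU-bound work at different worker counts and see how much the extra logical threads actually add. This is a generic, illustrative sketch (chunk sizes and worker counts are assumptions), not the methodology of any benchmark cited in this thread.

Code:
import time
from multiprocessing import Pool, cpu_count

def burn(n):
    # Pure CPU-bound busywork so each worker fully loads a core.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2000000] * 32  # fixed total work split into chunks (assumed sizes)
    for workers in (1, 2, 4, cpu_count()):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(burn, jobs)
        print("{:2d} workers: {:.2f}s".format(workers, time.time() - start))

If going from 4 workers to the full thread count barely moves the number, the extra threads (logical or physical) are not buying much for that kind of workload.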

Quote:
5. Back to that physical build stuff. Yeah, it can run hotter without problems, but think about the process of the workload. Your AMD will probably run at 4.5 GHz, and mine will run the same. And yeah, you have more cores, but the problem with that is you have more of a heat problem. So AMD uses less expensive, more heat-resistant materials that won't come close to the performance you'd get with regular materials. And thus you need the "Superior 8 Core Power" to get performance similar to the Intel processors.


TDP and core voltage have a direct correlation to heat. At no point does the number of cores come into play. PERIOD.

Again, the BS about materials...look man...I posted a link to wikipedia that explained the difference between SOI and bulk for you...and you still sit here and try to tell me, wrongly, that I am wrong and you are right. Show me one shred of evidence that says bulk wafers are better than SOI, or that intel uses anything other than bulk wafers. You can't find it...you know why? Because it doesn't exist.

Quote:
6. With your usage question, I can say that. AMD's CPUs and GPUs are more suited to each other, so the workflow is more committed and can be processed in a 'better' manner. And it would be fairer in a testing environment. It would be fairest to run AMD with AMD, AMD GPU with Intel, Intel with Nvidia, and AMD processor with Nvidia. And yeah, they're more optimized, but they're not made for each other (AMD and AMD), and that's where the AMD-and-AMD vs. Intel-and-Nvidia combo comes into play.

Putting this together with what ericjohn said, what do you have to say, you AMD fans?


Just because intel is more optimized for single threaded apps does not mean they will be better at anything multi-threaded now or in the near future. You need to come up with some facts to support your statements.

ericjohn004 said:
The reason low power is so important is heat, noise, and obviously electricity use. You can build an HTPC with a 35W Intel in it and it'd be completely silent without needing a fan. It'd be a lot harder to do the same thing with an AMD, as I've read in Tom's Hardware's 0 dB PC build. It's possible, but it's just a lot easier to do with an Intel, and cooler too.

I agree that with a 3570K or 8350, power consumption doesn't matter TOO much, because if you get either one of those processors you're going to overclock and give that advantage away, and the difference amounts to a cup of coffee once a month (a rough cost sketch follows after this post). However, power consumption does give us an idea of a chip's efficiency, or the amount of performance you get per watt. And this is where the big advantage lies. It says a lot about an Intel chip that Intel can do so much more with so much less power. How fast would an Intel chip be if it could use well over 100W? Very fast. But that's not what they're trying to do; it is what AMD is trying to do, though, and they still can't get up to par with an Intel even with power consumption out of the picture.

I agree about the HD 4000 graphics; they hardly provide any benefit at all. They're a waste, and Intel should offer the 3570K and 3770K without the graphics for $50 cheaper. If they did, they'd never sell another 3570K with an iGPU again, but they would sell more 3570Ks overall. They do make a 3350P with the iGPU disabled, although I still think the iGPU is on the chip, just disabled.

If anyone still thinks that an 8350 is completely equal to an Intel 3570K or 3770K in gaming, just take a look at Tom's new article that came out today, comparing Nvidia and AMD cards in SLI using an 8350 and a 3770K. Clearly both Nvidia and AMD cards run better on an Intel, especially AMD's cards, which is surprising. But Nvidia cards run better on an 8350 than AMD cards do, because they require less from the CPU. Very interesting indeed. You can clearly see the quality difference between the two chips. Is it worth the extra 20 bucks for the 3570K? IMO, yes it is. I know the article used a 3770K, but there is really no difference between them in the games they tested.

Read the article. If you want 10% more performance in pretty much every game tested, then you'll want the best. If you couldn't care less about 10 FPS out of 100 FPS, then you'll be rewarded with a better price.

And SuperPi is a very important benchmark, as it demonstrates single-threaded capability, which is a very large slice of the pie. The reason this benchmark is important is that it is indicative of the performance you can expect to get out of programs like LAME and iTunes, single-threaded games, and so many other pieces of software; much of Windows 7 also uses a single core. To say single-threaded programs are being phased out is true, but to say a single core's performance isn't important at all is completely untrue. It's still very important and will remain so for years; even as it is phased out it will always be important to some extent.

***And that last footnote that you made: since AMD makes both Intel and Nvidia rely on each other, it's no different than the AMD CPU and GPU relying on each other. You basically just answered your own question.***


Already addressed this earlier.
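
On the power-consumption point quoted above, the "cup of coffee" figure is easy to sanity-check. A back-of-the-envelope sketch with assumed, illustrative numbers (the wattage gap, daily hours, and electricity price are not measured values):

Code:
# Rough yearly cost of a CPU power-draw gap (all numbers are illustrative assumptions).
watt_gap = 60          # assumed extra draw under load, in watts
hours_per_day = 4      # assumed hours of load per day
price_per_kwh = 0.12   # assumed electricity price, $/kWh

kwh_per_year = watt_gap / 1000.0 * hours_per_day * 365
cost_per_year = kwh_per_year * price_per_kwh
print("{:.0f} kWh/year, about ${:.2f}/year (${:.2f}/month)".format(
    kwh_per_year, cost_per_year, cost_per_year / 12))

Under those assumptions it works out to well under a dollar a month, which is the same ballpark as the claim.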
April 15, 2013 7:40:24 AM

https://www.youtube.com/watch?v=rIVGwj1_Qno
https://www.youtube.com/watch?v=4et7kDGSRfc
https://www.youtube.com/watch?v=eu8Sekdb-IE
April 16, 2013 5:18:01 PM

GOM3RPLY3R said:


I'm just going to say that it's sad that almost all of the results were done with AMD GPUs. That's not very fair. None of these links show consistent results with all the same GPUs. However, I did find that, from all of the videos combined (as a reference average), Intel did win overall.


Considering Tom's Hardware just published an article showing that AMD GPUs perform better with intel CPUs and Nvidia GPUs perform better with AMD CPUs...it doesn't surprise me that Intel won slightly. If they had used Nvidia GPUs AMD would have won outright without any doubt...LOL.
April 16, 2013 5:50:40 PM

But when they did the 8350 vs. the 3570K on the GTX 670, Intel still won... even by a little more than in the 8350 vs. 3770K comparison on the 7970...
April 16, 2013 6:06:03 PM

8350rocks said:


To reply to you (I didn't feel like picking out quotes and wasting space) I say this:

The heat does matter, yes, and they do use bulk wafers, yes. However, while Intel can only work up to about 75ºC and AMD can go up to almost 100ºC, the i7-3770k does come with a fan that usually never lets it go above about 55ºC (if you are a normal middle-class citizen who takes proper care of your computer). As I said before, my friend had his fan running, not even at max, and with his thermal-plated ASUS SABERTOOTH mobo his stayed at a frosty 1ºC running BF3 on max settings. Even with that and the videos that Sam posted, from all the benchmarks and stuff I collected, Intel still wins overall, with a physical build that's more for performance.

The Kernel for Intel is built more efficiently (still better than even the new kernels that AMD released) and it can run better with most games today.

I'm happy that AMD got a world record for heat and CPU speed, but what it really comes down to is the kernel and what the build is made for. AMD is geared more towards video editing, higher clocks, and high heat resistance.

And this is where I'm going to end it. As I usually say, it completely depends on what you're doing. If you're doing video editing, playing games that are better for AMD, or have a tight budget, it would make sense to get the AMD CPU/GPU. If you are mostly or totally a gamer and do normal everyday processes like web searches, Skype, etc., then it would make sense to get an Intel. However, if you are a fanboy, just please look at facts. I fully support AMD, and if I were to get into video editing, or eventually wanted to make a cheap PC for video recording and editing, I am all for AMD, but otherwise, Intel.



April 16, 2013 10:45:17 PM

His CPU cannot be running at 1C...it would have to be 34F in his home...I wish my AC was that good.

His thermal sensor readings are off...should be much closer to 20-23C depending on ambient temps.

Unless he's running something like liquid helium or liquid nitrogen or dry ice cooling system
April 17, 2013 12:36:47 AM

1C? You just got docked a billion smart points. I'm "unfollowing this thread." This is a crazy argument if you believe the CPU was running at 1C. Damn, I feel stupid for even hearing that. I'm not even joking. Like 8350rocks said, his thermal sensor reading is either way off, he's running LN or you're a bad troller. Or maybe a little bit of everything.
April 17, 2013 11:25:26 AM

GOM3RPLY3R said:

And this is where I'm going to end it. As I usually say, it completely depends on what you're doing. If you're doing video editing, playing games that are better for AMD, or have a tight budget, it would make sense to get the AMD CPU/GPU. If you are mostly or totally a gamer and do normal everyday processes like web searches, Skype, etc., then it would make sense to get an Intel. However, if you are a fanboy, just please look at facts. I fully support AMD, and if I were to get into video editing, or eventually wanted to make a cheap PC for video recording and editing, I am all for AMD, but otherwise, Intel.


The FX-8350 is as good as the i7-3770k for playing current games, because the difference is minimal, as shown before. However, the eight-core FX has two advantages. First, the next generation of games will be highly threaded; it is no coincidence that game developers chose an eight-core chip for the PS4. Second, the AMD socket will allow for further upgrades.

Your claim that the i7-3770k is better for "normal everyday processes like web searches, Skype, etc." is completely false.
April 17, 2013 1:34:46 PM

griptwister said:
1C? You just got docked a billion smart points. I'm "unfollowing this thread." This is a crazy argument if you believe the CPU was running at 1C. Damn, I feel stupid for even hearing that. I'm not even joking. Like 8350rocks said, his thermal sensor reading is either way off, he's running LN or you're a bad troller. Or maybe a little bit of everything.


lmao @ docked 1 billion smart points...hahahaha! That was funny!
April 18, 2013 5:40:46 PM

griptwister said:
1C? You just got docked a billion smart points. I'm "unfollowing this thread." This is a crazy argument if you believe the CPU was running at 1C. Damn, I feel stupid for even hearing that. I'm not even joking. Like 8350rocks said, his thermal sensor reading is either way off, he's running LN or you're a bad troller. Or maybe a little bit of everything.


I'm not kidding. He has a huge case that's over 2 feet tall and about eight inches wide, and it has one 180mm fan, four 120mm fans, and six 80mm fans. Then he has an ASUS SABERTOOTH mobo with thermal plating, and the Intel fan that came specially made for it.
April 18, 2013 5:45:13 PM

juanrga said:
GOM3RPLY3R said:

And this is where I'm going to end it. As I usually say, it completely depends on what you're doing. If you're doing video editing, playing games that are better for AMD, or have a tight budget, it would make sense to get the AMD CPU/GPU. If you are mostly or totally a gamer and do normal everyday processes like web searches, Skype, etc., then it would make sense to get an Intel. However, if you are a fanboy, just please look at facts. I fully support AMD, and if I were to get into video editing, or eventually wanted to make a cheap PC for video recording and editing, I am all for AMD, but otherwise, Intel.


The FX-8350 is as good as the i7-3770k for playing current games, because the difference is minimal, as shown before. However, the eight-core FX has two advantages. First, the next generation of games will be highly threaded; it is no coincidence that game developers chose an eight-core chip for the PS4. Second, the AMD socket will allow for further upgrades.

Your claim that the i7-3770k is better for "normal everyday processes like web searches, Skype, etc." is completely false.


Yes, it is great; the AMD does exceed the Intel by a lot in some titles, but in a much smaller number of them than the games where Intel is very dominant. As an educated estimate, I would say 7/10 of all games today run better with an Intel and an Nvidia rather than the AMD combo, even if in some cases only by the smallest amount. And the AMD is great with everyday processes too. I think I may have worded that wrong. Both are good at what they're specialized to do, but both can, at the same time, handle those web searches and such.
April 18, 2013 7:19:56 PM

GOM3RPLY3R said:
griptwister said:
1C? You just got docked a billion smart points. I'm "unfollowing this thread." This is a crazy argument if you believe the CPU was running at 1C. Damn, I feel stupid for even hearing that. I'm not even joking. Like 8350rocks said, his thermal sensor reading is either way off, he's running LN or you're a bad troller. Or maybe a little bit of everything.


I'm not kidding. He has a huge case that's over 2 feet tall and about eight inches wide, and it has one 180mm fan, four 120mm fans, and six 80mm fans. Then he has an ASUS SABERTOOTH mobo with thermal plating, and the Intel fan that came specially made for it.


Unless it's 34F in his computer room...(if he keeps his PC in a walk-in freezer, for example, this may be possible...though that would be stupid in and of itself...)...his CPU temp cannot be lower than the ambient room temperature. The only way that could be possible would be if he were running liquid nitrogen or liquid helium, or some other high-end cooling system. Otherwise, it is not possible because of laws of physics and chemistry that have been around for a few hundred years and have not been disproven.

Unless he is using "magic dust" to keep his PC Cool...it simply is impossible.

Let me repeat that...IMPOSSIBLE.

The answer to this is one of the following:

A.) His temp sensors are way off.

B.) Something isn't properly calibrated to the temp sensor.

C.) He is running a ridiculous cooling system that is typically reserved for world record OC attempts.

I will allow you to think on that for a while...unfortunately though...one of the 3 above is the answer...and his CPU does not really idle at 1C. It is not feasible in this universe...maybe in another one out there somewhere...but not here.
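
One practical way to rule out option A or B is to dump whatever the OS-level sensors report and compare it against a separately measured room temperature. A minimal sketch using psutil; its sensors_temperatures() call is Linux-only and depends on the platform's drivers, and the room temperature below is an assumed value:

Code:
import psutil

ROOM_TEMP_C = 21.0  # measure this with an ordinary thermometer (assumed value here)

# Print every temperature sensor the OS exposes and flag readings below ambient.
for chip, readings in psutil.sensors_temperatures().items():
    for sensor in readings:
        label = sensor.label or chip
        flag = "  <-- below ambient, reading is suspect" if sensor.current < ROOM_TEMP_C else ""
        print("{:<12s} {:<16s} {:5.1f} C{}".format(chip, label, sensor.current, flag))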
April 20, 2013 4:50:40 AM

I'm not sure why you think not. The only way I think this may be possible is this: since the actual thermal plates are made of metal, the cool air that's constantly being sucked in and around the plates cools them, and then, vice versa, the plates eventually slightly cool the air around them that is being sucked in. Also, if air is sucked in very quickly through a small space (under and around the fan and thermal plate), it naturally gets cold. So the rapid air suction under the fan and the cooled thermal plates are what cause the processor to get so cold. Also, the cooled thermal plates would then be more tolerant of receiving more heat, taking away more hot air.

Thank You Science. ^_^
April 20, 2013 6:17:22 AM

By the way, doesn't it feel like we're kind of dragging this topic out?... Just saying.
April 20, 2013 7:54:37 AM

GOM3RPLY3R said:
By the way, doesn't it feel like we're kind of dragging this topic out?... Just saying.


Considering the CPU is constantly generating heat, your explanation does not jibe with the law of conservation of energy. That heat has to go somewhere!

Let me break this down:

(1) If he is using air cooling then, by default...the coolest his CPU could be, is ambient temperature in the room. This is because, without external cooling sources to lower the temperature of the CPU beyond the temperature of the air in the room, the CPU cannot be cooler than the same air used to cool it.

(2) If he could theoretically arrive at 1C with air cooling only...then world record OC attempts wouldn't bother to use LN2 or LH2 or DICE or any other high end cooling systems to keep their CPU cool...they would just buy a $30 fan and rock on.

(3) I will prove this to you with an example. OK...? Go out to a car, turn on the car, and run the FAN; make 100% sure that you are not running the A/C compressor. The coolest temperature you will get from the FAN is the ambient temperature outside. If your theory actually worked for CPUs, it should work in a car too, right? A car is made of metal, and all that metal could be cooled by the fans at the radiator. Unfortunately...your idea is wrong...you can sit in that 90-degree car and run the FAN, and the coolest it will ever get is 90 degrees. However, as soon as you engage an external cooling system (i.e. the A/C compressor)...the air inside that car can easily drop below the ambient temperature. This is because an additional cooling agent has been introduced into the system, which provides the cooling required to lower the temperature.

(4) Your hypothesis is wrong...air cooling alone cannot lower something below the ambient temperature in the room. See, the law of conservation of energy says:

Quote:
Energy is neither created, nor destroyed


http://en.wikipedia.org/wiki/Conservation_of_energy

The above link will elaborate more clearly, but what is happening is that the energy generated as heat cannot be removed without an external source to artificially move it elsewhere or change it into a different form. You must burn more energy to counteract the energy being spent as heat.
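
To put rough numbers on the same point: with a heatsink and fan, the die temperature is approximately ambient plus power times the cooler's thermal resistance, so it can approach ambient but never drop below it. The values below are assumed, illustrative figures (the thermal resistance is a typical ballpark for a stock air cooler, not a measured spec):

Code:
# T_die ~= T_ambient + P * R_theta for air cooling (illustrative, assumed values).
t_ambient = 21.0   # room temperature, C (assumed)
r_theta = 0.35     # heatsink+fan thermal resistance, C per watt (assumed typical stock cooler)

for power in (5, 40, 77):  # assumed idle, light-load, and near-TDP package power in watts
    t_die = t_ambient + power * r_theta
    print("{:3d} W -> about {:.1f} C (never below the {:.0f} C ambient)".format(
        power, t_die, t_ambient))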
April 20, 2013 8:14:04 AM

I'm loving your science, dude. However, it happened. I actually touched his CPU when it was at idle; it felt like a freezer. I have to say, science is right, but there are many things within it that make it false.

How come, when a thermometer is placed behind a fan, it reads cooler than a thermometer in front of it? We can take space as an example: there is virtually no air in space, thus it is so cold because of the lack of molecules to be heated.

THUS, with a fan, the air behind it is cooler because of the displaced air molecules. There are fewer molecules behind than in front, meaning the back is cooler. NOW, with the CPU fan, those little fans push out LOADS of air, more than my regular fan at home. Then think about the very small space it is in. It is harder for a substance, even air, to move in smaller places; thus there is a large lack of air in there. Also, it's very dark, so there's little to no light energy that can heat it up. And the thermal plates are practically touching the heat sink and displacing the heat extremely well.

Thus, in the end, there is little air to heat, so it becomes colder. Even though the CPU is producing heat, there is much more air being cooled than being heated, plus the heatsink and thermal plates drastically displace that heat energy into the metal.
April 20, 2013 8:43:51 AM

You cannot fundamentally change the laws of thermodynamics. You cannot "trick" them either. Everything from chemistry 101 to quantum physics uses them as a fundamental base to build on. Therefore, they are not "false" ever. If you were right and his CPU runs at 1C, then you just disproved Einstein's theories of General and Special Relativity; you also just debunked Newtonian physics, thermodynamics, basic chemistry, and any other science that is based on thermodynamic principles.

So, all-knowing science master of the world...how do you account for the way the world operates...because evidently the last 1000 years of science are irrelevant according to you.

You're not right...you're wrong...and I have explained this to you, I have lost count how many times.

UNLESS AN EXTERNAL CHEMICAL SOURCE MAKES HIS CPU COOLER THAN AMBIENT TEMPERATURE, THEN YOU'RE NOT BEATING SCIENCE, YOU'RE JUST DENYING THE FACTS!

Why don't you google the question..."Can my CPU be cooler than ambient temperature with just a cooling fan?" See what you get...

As a matter of fact...here, see for yourself. The world is not wrong...you are.
April 20, 2013 10:23:04 AM

Then how come it WAS at 1ºC (34ºF), and when I touched it, it was like a freezer, and his room temp was 70ºF?
April 20, 2013 2:45:48 PM

GOM3RPLY3R said:
http://www.youtube.com/watch?v=AbMYV8Djt7k

This is probably the only video of its kind that did CPU comparison the right way.


(1) If you touched his heatsink/fan system, you did not get an accurate judgement...

(2) Compressed air could be used to manipulate temperatures for a short period of time. I seriously think he's either playing a trick on you, or his temp sensor is screwed up and you didn't really experience what you thought. You realize extreme heat can feel like extreme cold and vice versa. If his temp sensor was off and his CPU was really at 60C (far more likely...), that's 140F, and it could have felt cold but really was not.

(3) So, you're telling me that 1 CPU benchmark out of hundreds of millions was correctly performed and the rest were all done incorrectly. Further, you are asserting that some schmuck on the internet with no previous benchmark experience is the one who executed this..."perfect CPU benchmark"?

April 20, 2013 3:24:44 PM

It is possible to use processes such as evaporative cooling to cool below ambient.

I don't know if you've seen them, but in greenhouses you have things like these:



The cardboard is soaked with water and air is blown through. The water uses some of the thermal energy in the air to evaporate (since it takes heat to evaporate water). The air leaves the cardboard mesh cooler than when it went in (and thus below ambient). These things can cool quite a bit. (Though they aren't used as much as they should be, because higher humidity lowers their effectiveness and they increase moisture content, which isn't always good.)
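A rough sketch of how far a pad like that can actually cool the air (the effectiveness and wet-bulb numbers are assumed for illustration, not measured): the outlet air can only approach the wet-bulb temperature, so the drop is a handful of degrees, not tens.

# Direct evaporative cooler sketch: outlet air approaches the wet-bulb
# temperature. Effectiveness and wet-bulb value are assumptions, not measurements.
t_dry_bulb_c = 22.0  # room air (assumed)
t_wet_bulb_c = 16.0  # depends on humidity; assumed for a moderately humid room
effectiveness = 0.8  # typical pad effectiveness (assumed)

t_out_c = t_dry_bulb_c - effectiveness * (t_dry_bulb_c - t_wet_bulb_c)
print(f"Cooled air: {t_out_c:.1f} C")  # about 17.2 C: below ambient, but nowhere near 1 C

The drier the air, the lower the wet-bulb temperature and the bigger the drop, which is why these work well in greenhouses and poorly in humid rooms.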

April 20, 2013 4:11:53 PM

Jesus, this topic has really gone the distance, from some guy asking if the 8350 is good for gaming to God knows what. Laugh out loud, seriously.
April 20, 2013 5:54:02 PM

Seriously...you felt so compelled to post in this thread just to swear about it? Next time why not hold that thought...? Kids may read this, you know...?
April 20, 2013 5:58:04 PM

8350rocks said:
GOM3RPLY3R said:
http://www.youtube.com/watch?v=AbMYV8Djt7k

This is probably the only video of its kind that did CPU comparison the right way.


(1) If you touched his heatsink/fan system, you did not get an accurate judgement...

(2) Compressed air could be used to manipulate temperatures for a short period of time. I seriously think he's either playing a trick on you, or his temp sensor is screwed up and you didn't really experience what you thought. You realize extreme heat can feel like extreme cold, and vice versa? If his temp sensor was off and his CPU was really at 60ºC (far more likely...), that's 140ºF, which could have felt cold but really was not.

(3) So, you're telling me that 1 CPU benchmark out of hundreds of millions was correctly performed and the rest were all done incorrectly. Further, you are asserting that some schmuck on the internet with no previous benchmark experience is the one who executed this..."perfect CPU benchmark"?



There are many "correct" CPU benchmarks out there. I just find this one to be the best CPU comparison, since it used multiple CPU-intensive benchmarks that really stress the processor itself rather than, say, game frame rates and other things.
April 20, 2013 5:58:13 PM

whyso said:
It is possible to use processes such as evaporative cooling to cool below ambient.

I don't know if you've seen them, but in greenhouses you have things like these:



The cardboard is soaked with water and air is blown through. The water uses some of the thermal energy in the air to evaporate (since it takes heat to evaporate water). The air leaves the cardboard mesh cooler than when it went in (and thus below ambient). These things can cool quite a bit. (Though they aren't used as much as they should be, because higher humidity lowers their effectiveness and they increase moisture content, which isn't always good.)



Yeah, but you wouldn't use such a contraption to cool a CPU, and additionally...it might make a 2-3ºC difference, but it wouldn't take anything from 22-23ºC down to 1ºC.
April 20, 2013 7:15:51 PM

Lol, I'll tell you what really happened, gomer. Either your friend can't read temps worth crap, or you made that up. No modern-day i7 CPU could ever run at 1ºC unless it was downclocked extremely and you had a good heat spreader.
April 20, 2013 7:18:37 PM

griptwister said:
Lol, I'll tell you what really happened, gomer. Either your friend can't read temps worth crap, or you made that up. No modern-day i7 CPU could ever run at 1ºC unless it was downclocked extremely and you had a good heat spreader.


All I can say is that it DID run at 1ºC, and I saw and felt it with my own eyes and hands. We even put a thermometer up to it and it said about 35ºF.
April 20, 2013 7:25:10 PM

Then he turned a can of compressed air upside down and sprayed the heatsink with it for a little while, and you got your 35ºF reading from that.

35ºC sounds far more like it...are you sure the thermometer wasn't set to C instead of F?
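For what it's worth, here's a quick unit check, just arithmetic and no claims about anyone's hardware: a reading of 35 on a Fahrenheit scale is about 1.7ºC, which is what the "freezer" story requires, while a reading of 35 on a Celsius scale is 95ºF, a perfectly ordinary near-idle temperature.

# Quick unit sanity check for the disputed reading.
def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c):
    return c * 9.0 / 5.0 + 32.0

print(f"35 F = {f_to_c(35):.1f} C")  # about 1.7 C (what the 'freezer' story requires)
print(f"35 C = {c_to_f(35):.1f} F")  # 95.0 F (an ordinary near-idle reading)
print(f"60 C = {c_to_f(60):.1f} F")  # 140.0 F (the sensor-error scenario mentioned earlier)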