
When will AMD release mobos with PCIe 3.0?

February 29, 2012 2:37:54 PM

I don't see any PCIe 3.0 motherboards from AMD yet. Are they waiting for a new line of processors to come out first? Apparently AMD is releasing their Vishera CPUs in Q3 of this year, but those are still going to run on AM3+.


February 29, 2012 4:17:27 PM

"AMD’s Komodo CPUs were supposed to have up to ten Piledriver x86 cores based on the Bulldozer architecture and carry a new FM2 socket design. The chips were also reported to have an integrated PCI Express 3.0 controller "
http://www.pcwhipped.com/amd-alters-its-cpu-roadmap-vis...
February 29, 2012 4:18:06 PM

AMD doesn't have PCIe 3.0 in the works right now, as far as I'm aware. Don't worry about it because PCIe 3.0 doesn't matter too much for us consumers right now and probably won't for a while.

Either way, AMD's processors aren't looking too good right now; Intel is better for most uses. If you don't do highly threaded work (most applications and games are only single or dual threaded), then I don't recommend going AMD.

As I said, PCIe 3.0 doesn't seem to be a high priority for AMD. Don't expect it until the end of 2012, at the earliest.
February 29, 2012 4:22:18 PM

Komodo was discontinued. Besides that, eight cores and six cores are already more than most people can use reasonably unless they are doing some serious multitasking. Most software (games especially) uses only one or two threads, so going beyond four cores suffers rapid diminishing returns in performance for the majority of computer users.

Ten cores would mean that half of the cores or more would sit more or less idle most of the time, making them next to useless outside a server or other professional environment. Going beyond four is a poor idea for most people, and going beyond six means you must be doing some highly parallel work to keep those extra cores from idling and wasting potential performance.

Most people benefit more from a smaller number of faster cores (Intel's current approach) than from a larger number of slower cores (AMD's current approach).
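
To put rough numbers on that diminishing-returns point, here is a minimal sketch using Amdahl's law (a standard scaling model, not something benchmarked in this thread); the 50% parallel fraction is an illustrative assumption, not a measured figure:

```python
# Rough illustration of diminishing returns from extra cores (Amdahl's law).
# The parallel fraction is an assumed, illustrative value, not a benchmark.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only part of the work can be spread across cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    p = 0.5  # assume half the workload parallelizes (illustrative for 2012-era games)
    for cores in (1, 2, 4, 6, 8, 10):
        print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
    # Climbs quickly to ~1.6x at 4 cores, then crawls toward a 2x ceiling:
    # the "rapid diminishing returns" described above.
```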
February 29, 2012 4:51:05 PM

AMD? Who knows; they've fallen on their own samurai sword. If you want PCIe 3.0 today, it's going to be on Intel LGA 2011, or on Ivy Bridge whenever Intel gets in the mood to release it (it's been delayed). If I were Intel, I'd drag my feet and profit as much as I could from AMD's demise; Tick-Tock is dead.

Basically, x8/x8 PCIe 2.x is fine for ANY GPU out there, and I cannot imagine PCIe 2.x x16 being saturated any time soon; IMO that's many, many years off...
February 29, 2012 5:26:57 PM

blazorthon said:
Komodo was discontinued. Besides that, eight cores and six cores are already more than most people can use reasonably unless they are doing some serious multitasking. [...]


Not exactly. There's nothing wrong with having a lot of cores, like the FX-8150. The FX-4100 is clocked at about the same frequency as the FX-8150, and the 8150 is way faster. If you're gaming, definitely go with the 8150 over, say, the 4100. You're going to get a lot better FPS and speed even though the games out there only use 2-4 cores.
February 29, 2012 11:20:03 PM

The speed benefit of the FX-8150 over the FX-4100 comes from it having four modules (eight cores) instead of four cores split across two modules.

Basically, there are simply more resources beyond the individual cores available. The same isn't as true for Intel. Increasing core count beyond 4 on Intel doesn't improve performance because Intel doesn't have a module architecture like FX. To be honest, I think the scaling of AMD's FX CPUs with greater numbers of cores peaks at 8 cores anyway. That might be why AMD canceled Komodo, or at least put it on indefinite hold.

There are also more cores for other programs and such to run on, leaving more performance per core for the game itself. FX 6 and 8 core chips are still inferior to i3s and i5s despite the i3 and i5 having fewer cores.

A dual core i3 (all i3s should be dual core) will beat any FX processor because its performance per core is so much higher, something like 50% higher at the same clock frequency. Games are often only single or dual threaded, with quad threaded and better being less common, although games are becoming better threaded.

There's nothing wrong with having huge core counts, but with most software and games it doesn't scale performance nearly as well as improving performance per core does, unless you have software that actually utilizes those cores. Most software doesn't, and most software that does isn't used by most people.
March 1, 2012 12:28:42 AM

blazorthon said:
The speed benefit of the FX-8150 over the FX-4100 comes from it having four modules (eight cores) instead of four cores split across two modules. [...]


No, the reason the FX-8150 is faster and better than the 4100 is the number of threads it has: 8 cores and 8 threads. No CPU being built has fewer threads than its core count. You can take a high-end 6 core CPU from Intel and it will have 12 threads. Always go with a higher core count processor. Having a 6 or even an 8 core processor is not overkill for gaming or even browsing the web.
March 1, 2012 5:12:00 AM

All of the six core Intel CPUs are shown to be identical to their quad core i7 brethren in gaming. The FX's extra cores don't matter a whole lot in games that can't use the cores. The FX is faster because if a game uses four or fewer threads, each thread can have more resources to itself while four of the cores are closer to idling than the other four.

Windows 7 doesn't do a great job of this, but Windows 8 is supposed to do a better one, which should show why the 8 core and 6 core FX parts are better than the quad core FX, while the same is not true for 6 core i7s vs. quad core i7s, or six core Phenom IIs vs. quad core Phenom IIs and Core 2 Quads.

You could have a 16 core Interlagos CPU for all it matters; it won't be faster than the 8 core FXs, even at the same clock frequency, simply because games just don't use that many cores/threads.

Browsing the web? Are you kidding me? Firefox uses one thread, so it won't see any benefit going from a quad core to even a 16 core of the same architecture. Chrome is single threaded too, but spawns many processes so it can make some use of multiple cores; going beyond a quad core simply means you have so much performance that the light load of web browsing can't load up the CPU anyway, so once again, increased core counts won't help much here either.

Internet Explorer does a similar thing to Chrome, but IE sucks in so many other ways that I don't care. I'm not sure about Opera, but I think it is like Firefox. Maybe I'll check later.

For regular web browsing, even with large numbers of tabs, anything more than a dual core is unnecessary. I have my older laptop with a Turion 64 X2 @ 2.00GHz, and I use as many as hundreds of tabs without a slowdown until I run out of memory, which isn't an easy thing to do in Firefox anyway.

Even a Phenom II, a Core 2 Duo, or even an FX would be something like twice as fast per core, maybe a little less or more depending on their clock frequencies. Intel's i3s and up would be around triple, maybe more, the performance I have for lightly threaded work such as Firefox web browsing. Despite all of this, I can do it without a problem.

That tells me that even a quad core is way overkill for web browsing, even with single threaded browsers like Firefox and well threaded browsers like Chrome right now, let alone six or more cores of ANY modern-ish architecture. Please remember that my CPU is a mobile, cut-down Athlon 64 X2... not even a mid-range CPU for its time, let alone a low end CPU now. It is slower than the slowest processors you get in notebook/laptop computers now. Even the Celeron G530 ($52 on Newegg, a 2.4GHz dual core Sandy Bridge CPU for LGA 1155) can shred my CPU by like 100%.
March 1, 2012 2:17:15 PM

Do you have any idea how many programs would need to be open to make a difference between a quad core and an eight core CPU? More cores do NOT make the OS any smoother when it already has more performance than it knows what to do with.

I have Firefox, Chrome, and a VM with 512MB of memory allocated and Chrome running inside it, all running at once on my little laptop with an old dual core and 2GB of DDR2-667, and it doesn't slow me down much at all. Even then, the biggest performance hits I'm taking are caused by lack of memory.

If I had 4GB of memory then everything would be even smoother. Having an FX-8150 wouldn't make much difference at this point. It takes more intensive work than web browsing and such to stress even a single, slow core, let alone eight cores faster than mine. I estimate the FX-8150 to be about twice as fast per core as my laptop's CPU, and with four times as many cores it should be very roughly eight times faster.

You don't think that my browser will run even twice as fast, do you? I ask because it simply won't. Will it run faster? Of course. Much faster? Not until I have even more tabs open than I do now and/or cycle through more tabs at a time.

If I had eight Firefox instances, each with over a hundred tabs, and 16GB of memory so the memory doesn't bottleneck me, then I would obviously see a difference. However, I highly doubt either you or the OP will do something like that.

Stuff like web browsing also has other bottlenecks, specifically the internet connection's latency and bandwidth. The workload you have described would not show much difference, if any at all, going from the FX-4100 to the FX-8150. The encoding might run faster, but not the OS and not Firefox.

You can argue that it will be smoother, but I can tell you why it won't.
March 1, 2012 2:41:02 PM

I'll say that if you try running several programs more intensive than web browsing and the like, then yes, you will see a performance difference. However, I have a machine with a Phenom II X6 1090T, it is hard for me to load it up with casual work, and it is slower than an FX-8120 or FX-8150.

Now, if I have something like several multi-GB archives being extracted to and from different hard drives (so that the hard drives don't bottleneck it much), then yes, I can load up my Phenom II X6, once I also have some other things going like a larger VM, Firefox and Chrome both with many tabs open, and a few other things running.

You underestimate the performance of quad core CPUs and of eight core CPUs, but you also overestimate the impact the extra cores have, because you don't seem to realize that your example programs are light and the OS is also not very heavy, unless you have Vista. You also underestimate the impact of other bottlenecks on the system.

There are memory capacity, bandwidth, and latency. There are the bandwidth and latency of the Internet. There are the hard drives and other storage media. There are also the graphics cards. Then there is the CPU, assuming you don't have other problems like an overstressed PCI bus (although that isn't common, especially with the increasing rarity of PCI cards), and more.

The remaining bottlenecks are mainly software. If you are having such problems that it takes you an eight core CPU to show benefit over a quad core CPU for regular, casual work, then there is obviously something going on with your machine.

Perhaps you have Vista and/or too much background work being done. You might want to look into optimizing your system because it is probably wasting a lot of performance on stuff that shouldn't be running. You might even have malware problems, or your anti-malware is garbage like Norton or McAfee.
March 1, 2012 5:17:26 PM

blazorthon said:
All of the six core Intel CPUs are shown to be identical to their quad core i7 brethren in gaming. The FX's extra cores don't matter a whole lot in games that can't use the cores. [...]


The thing is that when there are more cores, there are more threads. There's no such thing as overkill with more cores, with Intel or AMD. You're going to get more threads, and that means faster speeds.
March 1, 2012 6:50:14 PM

diablo24life said:
The thing is that when there are more cores, there are more threads. There's no such thing as overkill with more cores, with Intel or AMD. You're going to get more threads, and that means faster speeds.


No, there's room for more threads to work at once. How many "threads" a CPU has just means how many it can run at once. When we say threads in the software sense, we mean the actual threads being run. Yes, Windows can spread itself across more than four threads, but its most intensive threads can't do that, so there is little difference for the OS going from 4 to 8. All Windows can do is run more of its small threads at once, and really, that doesn't make a difference; it just means there is something like 0.3% more CPU power left for other stuff on each core.

You don't seem to understand how hardware and software work very well. A thread count in the hardware sense is just how many threads the CPU can work on at once. A thread count in the software sense is how many cores the program can effectively use. If you have a CPU that supports more threads than the software uses, it won't perform noticeably better than a CPU with the same per-core speed and only as many cores as that software has threads.

Web browsing works with relatively small amounts of data and is bottlenecked by the browser itself, the hard drive/storage medium where the browser and its cache are located, and the internet connection's bandwidth and latency. Getting a faster CPU with more cores makes no difference if you are bottlenecked by any of the above, or if you don't use hundreds of tabs in a browser that makes use of multiple cores (as I said, Firefox does NOT make use of multiple cores right now, nor will it in the near future).

A CPU with more threads available than the software uses just means that some of the other stuff, like OS work and background tasks, can execute on a different core than the software in question, and all of it can work at once. This does not make a notable difference unless you have a LOT of background work being done.

Like I said, the only ways something like that would make a difference is if you have a setup with Vista and/or crap like Norton/McAfee and/or malware problems, to name a few possibilities. Also, if you have unnecessary services and such running then you are losing performance and might notice a difference from a CPU upgrade. However, the same difference would be noticed if you fixed the problems at the source instead of working around them.
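
For anyone who wants to see that hardware-versus-software thread distinction in practice, here is a minimal sketch (plain Python 3 standard library; the chunk sizes and counts are arbitrary, illustrative values): the OS reports how many threads the CPU can run at once, but a job only speeds up as far as the number of independent chunks the software actually splits it into.

```python
# The OS reports the hardware thread count, but a fixed number of software
# work chunks caps the useful parallelism regardless of how many workers we add.

import os
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """A purely CPU-bound chunk of work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int, chunks: int = 4, n: int = 2_000_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"hardware threads reported by the OS: {os.cpu_count()}")
    for workers in (1, 2, 4, 8):
        print(f"{workers} workers -> {timed_run(workers):.2f}s")
    # Past 4 workers the time stops improving: the job only has 4 chunks,
    # so any extra hardware threads simply sit idle.
```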
March 1, 2012 8:58:09 PM

blazorthon said:
No, there's room for more threads to work at once. How many "threads" a CPU has just means how many it can run at once. When we say threads in the software sense, we mean the actual threads being run. [...]


ROFL, uh, what are you trying to say here? What is your point with these 4 paragraphs you wrote?
March 1, 2012 9:34:25 PM

If you don't understand the point yet then I guess you could leave. What would you rather have, a 16 core Interlagos CPU at 2.6GHz, or a quad core i5-2400 at 3.1GHz, ignoring any price difference?
March 1, 2012 9:55:49 PM

blazorthon said:
If you don't understand the point yet then I guess you could leave. What would you rather have, a 16 core Interlagos CPU at 2.6GHz, or an i5-2400 at 3.1GHz, ignoring any price difference?


I started this thread. It takes you 4 paragraphs to make an "attempt" at explaining something. You claim to know everything about computers, yet you don't even know that more cores equal better performance because there will be more threads. Have you ever heard of the Core 2 Duo? Did you know that was a 2 core, 4 thread processor? That's the reason they were more successful than a dual core. Do you even know what the job of threads is in a processor? Gotta love these 12 year old IT techs who come on Tom's Hardware and forget what the heck they were talking about, then decide that writing 5 paragraphs must mean they know something.
March 1, 2012 10:13:17 PM

MY GOD, did you just tell me that a Core 2 Duo is a 4 thread CPU?! It is NOT! That would mean it had hyper-threading, and NO Core 2 CPU had hyper-threading. All Core 2 CPUs have equal numbers of threads and cores. Core 2 Duos have two threads and two cores; Core 2 Quads have four cores and four threads. I know a LOT about CPUs. For example, a Core 2 Quad is actually two Core 2 Duo dies on a single chip.

Multiple threads per core on Intel CPUs is called Hyper-threading, which first debuted on some older P4 CPUs. After the P4, it was also used on some Extreme Edition Pentium Ds that had two cores and four threads. After that came Core 2, which ditched hyper-threading completely. Hyper-threading was next seen in Nehalem dual core i3s and i5s as well as quad core and six core i7s.

It has remained on the Sandy Bridge i3s and i7s, but there is no dual core Sandy Bridge i5, so no Sandy Bridge i5 has it. Ivy Bridge is expected to continue Sandy's pattern of Hyper-threaded dual core i3s and hyper-threaded quad core i7s. The Sandy Bridge-E six and quad core i7s also have Hyper-threading; as of now all i7s have it, as do all i3s.

You do NOT understand CPUs much at all, despite you claiming you do.

Having more threads helps with dual core CPUs because they don't have more cores and threads than most machines know what to do with. It helps the quad core i7s only if you do some seriously intensive multi-threaded work, because even less software supports the 8 threads that quad core desktop i7s offer than supports four. Going beyond four cores has been shown to be inferior to improving the performance of each thread for software that does not use more threads than are available to it. For example, a dual core i3 at 3GHz beats eight core FX CPUs in gaming because games only use one or two threads and the i3's two physical cores are faster than two FX cores are.

The Hyper-threaded threads are also much slower than physical cores, at best delivering around 30% of the performance of the physical cores on the same processor. Because of this, Windows was optimized to schedule intensive tasks onto the physical threads/cores and mainly use the hyper-threaded threads when the physical thread is waiting for something.

A hyper-threaded thread actually shares a core with a physical thread and can only use the parts of the core that the physical thread is not using. If both the physical and hyper-threaded thread of a core need the same resources at the same time, then the hyper-threaded thread has to wait for the physical thread to finish some work before it can execute.
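
To get a feel for how much those shared-resource logical threads actually add, here is a small, hedged sketch; it assumes Python 3 plus the third-party psutil package for the physical-core count, and the workload and resulting ratio are illustrative rather than a reproduction of the ~30% figure above.

```python
# Compare throughput with one worker per physical core versus one per logical
# (SMT) thread. Requires the third-party "psutil" package (pip install psutil).
# Results vary by CPU and workload; on non-SMT CPUs both runs are the same.

import time
from concurrent.futures import ProcessPoolExecutor

import psutil

def burn(n: int) -> int:
    """Purely CPU-bound work so the cores, not I/O, are the limit."""
    total = 0
    for i in range(n):
        total += (i * i) % 1_000_003
    return total

def chunks_per_second(workers: int, n: int = 3_000_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # One chunk per worker so every worker stays busy the whole time.
        list(pool.map(burn, [n] * workers))
    return workers / (time.perf_counter() - start)

if __name__ == "__main__":
    physical = psutil.cpu_count(logical=False) or 1
    logical = psutil.cpu_count(logical=True) or 1
    print(f"{physical} physical cores : {chunks_per_second(physical):.2f} chunks/s")
    print(f"{logical} logical threads: {chunks_per_second(logical):.2f} chunks/s")
    # On an SMT CPU the second figure is usually somewhat higher, but nowhere
    # near double, because the logical threads share each core's resources.
```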

Please answer the question I asked: which would you prefer for a gaming computer, a 16 core Interlagos (a Bulldozer server CPU made from two Valencia 8 core dies on a single chip), or a quad core i5 (a single die, not two dual core dies on one chip like the Core 2 Quads)?

Sorry, but you are NOT going to outsmart me about CPUs. I have designed some hardware myself as a hobby; I know how it works.
March 1, 2012 10:28:32 PM

blazorthon said:
MY GOD, did you just tell me that a Core 2 Duo is a 4 thread CPU?! It is NOT! That would mean it had hyper-threading, and NO Core 2 CPU had hyper-threading. [...]


Are you serious? So any processor with 4 threads has HT? Come on. The Core i3 does not have HT; that's only in the i5 and i7. You might want to go back and double-check that. And why are you comparing Intel to AMD? We were talking about the 8150 and 4100, LMAO. God, is there any limit to these 14 year olds' embarrassment threshold these days? Are you aware that the 8150 benches an average score of 8000 and Intel's i7-2600K averages 9800? AMD and Intel have two different architectures, genius. Then explain why the i3 fails miserably compared to an i5 or i7? The whole world is laughing at you, buddy. Are you seriously 12 years old?
March 1, 2012 10:30:00 PM

I would also like to bring to your attention the fact that a P4 with hyper-threading enabled could actually be slower than with it disabled if the software and OS were not specifically coded for it. Having more threads does not help without software and OS support. Having more threads per core can even be detrimental to performance, so no, more threads is not a deciding factor in performance simply by being a larger number.

Most software is not coded to use many threads, but hyper-threading doesn't normally hurt performance anymore because modern OSs are aware of it. However, it doesn't help performance if the software, and not just the OS, isn't also able to use the extra threads. Most software can't, so your argument is invalid for that software. Besides that, an i7 with four cores and four virtual threads beats an AMD Interlagos or Magny-Cours CPU that has 8 cores or even more.

The performance per core of the i7 is so great that larger numbers of AMD cores can't always beat it unless they are more than double its core count. Enable its hyper-threading and the quad core i7s will beat the eight core FXs, Valencias, and Interlagos CPUs, despite the fact that the virtual threads only increase performance by about 30%.

If you want, I can go MUCH further into detail and even explain why the Intel CPUs beat AMD CPUs that have more cores by delving into the architectures themselves. You're not arguing with some idiot, you are arguing with an expert, and you're not getting anywhere with it. I still have my Pentium 3 machine from a decade ago that I built myself. I used to have my even older Pentium 2 machine, but I recycled it already because it failed on me.

I am not a gaming idiot either. My usual desktop has an AMD Phenom II X6, which sits between an i5 and an FX-8150 in multi-threaded performance despite its inferior gaming performance compared to cheaper Phenom II quad cores and even cheaper i3s. I picked it because when I start up many virtual machines with several browsers, compress/decompress several archives, burn a CD, and watch a movie all at the same time, I like it not slowing down much. It doesn't even have a video card; it uses the motherboard's integrated Radeon 4290... That is even weaker than Intel's integrated HD 2000 and HD 3000, and neither of those is fast enough for modern gaming.

I will tell you right now that the 16 core Interlagos at 2.6GHz will not even come close to the quad core i5-2400 for almost all desktop workloads, including gaming and other more intense work. The 16 core CPU will only win in situations that use more than 8 of its 16 threads, maybe even 10, to beat the i5. Not much home software can use that many threads.
March 1, 2012 10:40:46 PM

blazorthon said:
I would also like to bring to your attention the fact that a P4 with hyper-threading enabled could actually be slower than with it disabled if the software and OS were not specifically coded for it. [...]



Seems like you forgot your entire point? When I asked you before what you were going on about, you couldn't even answer the question because you forgot. You were too busy writing 4 paragraphs. Usually people who write 4 paragraphs to sum up one point are bullshitting. Have you ever read an ebook before? Really doesn't look like you can dig yourself out of this one, buddy. I think I'll stick with an 8150 over a 4100 any day, you know, seeing how it has an average bench of 8000 compared to 4000.
March 1, 2012 11:26:12 PM

The 8150 is pointless; the 8120 is the same CPU with a lower multiplier. Up the multiplier and it's the same. Like many AMD CPUs, the 8150 isn't really even a higher-binned chip; you just pay to have it pre-overclocked. The 8120 will use the same amount of power as the 8150 and provide the same performance.

I did not forget my point, I got sick of restating it. I explained why more threads does not necessarily mean more performance. If you had read them then you should have known the whole point.

The i5 does NOT have hyper-threading; it is a quad core CPU with only four threads. The i3 is a dual core CPU with hyper-threading and has two cores and two threads per core. You do not know what you are talking about. You could simply go to newegg.com and look at the LGA 1155 i3s to know I am right; they are ALL dual core. LGA 1155 is the name of the socket used by Sandy Bridge, and it is used by the H61, H67, P67, and Z68 chipsets, all of which support the same processors but with different connectivity, budget, IGP, and overclocking options.

Celeron = single or dual core CPU without hyper-threading. one or two physical threads and no logical threads (the proper name for hyper-threaded threads).

Pentium = dual core CPU with more L3 cache and higher clock rates than the Celeron at the same power usage, and still doesn't have hyper-threading. two physical threads and no logical threads.

i3 is a dual core CPU with more cache and higher clock rates than a Pentium and the i3s have Hyper-threading. 2 physical threads and 2 logical threads.

i5 has four cores with similar clock rates to the i3s and more L3 cache than the i3s, but is without hyper-threading. 4 physical threads and no logical threads.

i7 has slightly higher clock rates and more L3 cache than i5 and has hyper-threading. 4 physical threads and four logical threads.

This is the complete desktop Sandy Bridge lineup. There are also the LGA 2011 Sandy Bridge E processors that go with the X79 chipset that are basically LGA 1155 i7s except with more L3 cache, and instead of a 128 bit memory controller they have a 256 bit memory controller. Some LGA 2011 i7s also have 6 cores instead of 4. They also have hyper-threading. Sandy Bridge E is actually a single die with eight cores and 20MB of cache, but some cache and two or four cores are disabled, depending on the processor.

Sandy Bridge has about 50% more performance per core than FX processors at the same clock frequency. That means a six core FX at 3GHz is about equal to a Sandy Bridge i5 (all of which have four cores) running at 3GHz.

You can further work out from this that, at best, the eight core FX CPUs are only about 25% faster than an i5 when both are at the same clock frequency. That also shows how an i7 is faster than an FX eight core. This is why FX processors are called crap by most people: they have worse IPC (Instructions Per Clock, basically performance per Hz) than the Core 2 and Phenom II architectures (both have about 15% or so more IPC than FX, despite Core 2 being from 2006 and Phenom II being almost as old).
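
Taking the 50% per-core figure above at face value, the totals are easy to spell out (these are the post's own rough numbers, not benchmarks, and they only describe the perfectly threaded best case):

```python
# Back-of-the-envelope totals using the ~50% per-core advantage quoted above.
# These are the post's rough assumptions, not measured results.

SB_PER_CORE = 1.5   # one Sandy Bridge core, measured in "FX-core units"
FX_PER_CORE = 1.0

i5_quad  = 4 * SB_PER_CORE   # 6.0 units
fx_six   = 6 * FX_PER_CORE   # 6.0 units -> roughly equal, as stated above
fx_eight = 8 * FX_PER_CORE   # 8.0 units

print(f"i5 (4 cores)  : {i5_quad:.1f}")
print(f"FX-6 (6 cores): {fx_six:.1f}")
print(f"FX-8 (8 cores): {fx_eight:.1f} -> about {fx_eight / i5_quad - 1:.0%} ahead of the i5")
# Roughly a quarter to a third ahead at best, and only when all eight cores
# are busy; in lightly threaded work the faster individual cores win.
```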

AMD screwed up making the Bulldozer architecture and that is why it has horrible performance per core, forcing AMD to have large core counts to stop Intel from winning in EVERYTHING at the same price point. This method works well in servers, but not so much in the desktop space of most home users because, like I said, most software does not make good use of large amounts of cores.

I apologize because I have been rather rude in explaining this, but I am right. I'm not one of the top 300 members of this site out of about 700,000 members by not understanding computer technology.
March 2, 2012 2:18:58 AM

blazorthon said:
The 8150 is pointless; the 8120 is the same CPU with a lower multiplier. Up the multiplier and it's the same. [...]


ROFL, this guy is cutting and pasting other topics from Tom's Hardware onto this thread, lmao. No wonder what you say makes no sense. So do you still think the 8150 is worse than the 4100? You do realise that ALL FX processors are the EXACT same; they just change the clock speed and disable cores if they're going below 8 cores. It's funny you say that, because you just said that the i3 has HT, which is wrong.
March 2, 2012 2:36:53 AM

I never said that the 8150 is worse than the 4100. Of course I know that they all have the same 8 core die; the 6 and 4 core FXs just have one or two modules disabled. The i3 has Hyper-threading; you are wrong, and if you simply go to newegg.com (the favorite site of most Tom's members for buying new computer parts) you can learn this fact. If you simply look through ANY article here on Tom's you will know I'm right. If you ask anyone else, including the moderators here on Tom's, you will know that I am right.

The 8150 isn't worse than the 4100, quite the contrary, but it is a poor purchase nonetheless. If you truly want an 8 core CPU (it doesn't help much unless you do massive amounts of work or work that is highly threaded), then you should have gotten the 8120 instead and upped its multiplier; it would then have been identical to the 8150 in everything but name, and almost $100 cheaper.

AMD is known not to do much binning on their processors the way Intel does, so lower end versions of the high end CPUs are identical to the high end CPUs, just with a BIOS setting, the CPU multiplier, set to a lower number. The voltage could also be lower, but that is easily fixed by inputting the voltage of the 8150 if necessary, and that is an easy figure to look up.

All i3s have Hyper-threading. All of them. You can check anywhere you want, be it Newegg as I suggested, by asking another member of Tom's, or by going to Wikipedia or even Intel's own website.

In fact, I have no idea what could have given you the idea that i3s don't have Hyper-threading. All of them have it and if you check ANYWHERE you should see that I'm right.

The Nehalem/Westmere (roughly the same architecture, Westmere is a die shrink of Nehalem) i3s have it, the Sandy Bridge i3s have it, and the Ivy Bridge i3s have it. There are no other i3s.

Even all of the mobile i3s from each architecture have hyper-threading. There is no i3 that lacks it.

Also, I am not cutting and pasting anything nor am I bringing up other topics here. Everything I have said backs up what I am trying to explain to you about cores and threads, but you aren't listening.
March 2, 2012 3:39:09 AM

All i3s have hyper-threading.
March 2, 2012 3:43:50 AM

I checked Wiki's list to make sure and double checked with Intel's own specifications sheets on intel.com, all i3s have Hyper-Threading.

Even the first generation i3s have it.
March 2, 2012 3:48:53 AM

Hyper-threaded dual cores are better than ones without it, most of the time. They aren't worse than without it like some of the P4s and Pentium D EEs were, back when support for it wasn't finalized.

Hyper-threading increases multi-threaded performance. Since four threads is noticeably better than two, it helps a little, but not nearly as much as going to an i5 that has four physical cores and no hyper-threading instead of two physical and two logical threads.

Hyper-Threading is what keeps an i3 from being completely beaten by FX quad core CPUs in highly threaded workloads.

Point is that Hyper-Threading is cheap to implement and is better than nothing. It keeps quad cores from killing i3s.
March 2, 2012 4:45:48 AM

Quote:
Well, HT is only a novelty trick and is only as fast as the cores that enable it, and a lot of the time it even slows down the cores because it is like splitting a core in half. In short, HT is a marketing gimmick much like PhysX.


Hyper-Threading allows resources not in use by one thread to be used by another thread. Some work on a CPU is spent waiting for data. During this time the CPU core has some ALUs not in use. Hyper-threading allows those unused ALUs to be used during the waiting period, thus making more efficient use of the CPU.

Believe it or not, a lot of CPU time is spent waiting for data. Hyper-Threading doesn't speed up the first thread on each core because that thread can't do anything without the data it is waiting for, but it allows another thread to do some work during the otherwise wasted CPU time.
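
Here is a minimal sketch of that "work during the waiting" idea, using only the Python 3 standard library; the sleep() call is an illustrative stand-in for a thread stalled waiting on data, not a model of real memory latency.

```python
# A wait-heavy job keeps speeding up as more threads are added, because another
# thread can run while one is stalled -- the same idea Hyper-Threading exploits
# inside a single core with its otherwise idle execution units.

import time
from concurrent.futures import ThreadPoolExecutor

def waity_task(_) -> int:
    """Mostly waiting (standing in for a stall or I/O), plus a small burst of work."""
    time.sleep(0.05)           # the "waiting for data" part
    return sum(range(10_000))  # the small burst of actual computation

def timed_run(workers: int, tasks: int = 64) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(waity_task, range(tasks)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        print(f"{workers} threads -> {timed_run(workers):.2f}s")
    # 1 thread: ~3.2s of mostly waiting; 8 threads: ~0.4s, because other
    # threads do their work while one is stalled.
```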

CPUs are constantly switching through different threads because there are many things running at once even within the OS. Granted, they are small things and don't really see benefit from any performance increase because they are so light, but they do provide an excellent example.

To see all of the threads you have running, you can go to Task Manager and enable the thread count column. Some single threaded software might show something like dozens of threads, but those are not the threads actually doing the work, just different parts of the program and even other things that are simply linked to it. Firefox still only uses one thread to render all open web pages, even though the task manager might list more than one thread. If you don't believe me, check with Mozilla and they might explain it to you.
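
For anyone who would rather check this programmatically than in Task Manager, a small sketch follows; it assumes Python 3 with the third-party psutil package, and the names and counts it prints will obviously differ per machine. As noted above, a high thread count does not mean that many threads are doing real work.

```python
# List the processes reporting the most OS threads, like the Task Manager
# column mentioned above. Requires "psutil" (pip install psutil). A big number
# here mostly means many idle helper threads, not lots of parallel work.

import psutil

def top_by_threads(limit: int = 10):
    """Return the `limit` running processes that report the most OS threads."""
    procs = []
    for p in psutil.process_iter(attrs=["pid", "name", "num_threads"]):
        info = p.info
        if info.get("num_threads") is None:
            continue  # process vanished or access was denied while iterating
        procs.append(info)
    procs.sort(key=lambda i: i["num_threads"], reverse=True)
    return procs[:limit]

if __name__ == "__main__":
    for info in top_by_threads():
        name = info.get("name") or "?"
        print(f"{name:<30} pid={info['pid']:<7} threads={info['num_threads']}")
```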
March 2, 2012 12:04:18 PM

People pick the i5 because it overclocks and has four real cores, not because Hyper-threading is "busted". If it were busted, then why is it that without Hyper-Threading the i7-2600 is slower than the FX-8150 in highly threaded work, but with Hyper-threading, the i7 is faster?

Hyper-threading has been confirmed not to be a huge benefit, but it matters for people who use it, and the i3's hyper-threading is more likely to see use by an average person simply because it totals only four threads on two physical cores, an easy enough amount to load up, whereas the i7 has four physical and four logical threads. It takes very large workloads to fully utilize the i7.

The i5 is better than the i3 because real cores are better than logical threads from Hyper-threading, but the logical threads still help the i3.
March 2, 2012 5:10:52 PM

blazorthon said:
I never said that the 8150 is worse than the 4100. Of course I know that they all have the same 8 core die; the 6 and 4 core FXs just have one or two modules disabled. [...]


You said that more cores won't increase performance; you fail to realise that they do. Now you're trying to take back your word? What happened to the crap you were spewing about cores not increasing performance? At least you're starting to come around now.
March 2, 2012 6:20:56 PM

I didn't say that more cores don't increase performance. I said that increasing core counts to ridiculous numbers doesn't increase performance MUCH. Going up to four will increase performance noticeably over two. Going past four, not as much. Going past 6, the differences start to be minuscule for most users, even most enthusiasts.

Increasing core count beyond four has been proven not to help most software if it isn't able to directly use more than four threads. Different cores of different architectures have different performance. For example, as I already stated, a Sandy Bridge i5 is roughly equal to a six core FX regardless of how well threaded the software is, because the i5's cores are about 50% faster than the FX cores.


@LongPastPNR

Now you are just spouting nonsense. I explicitly explained when Hyper-Threading improves performance and why it does. If you don't like the truth, then you had better avoid it like the plague.


P4 did not always have HTT (the proper acronym for Hyper-threading is HTT, meaning Hyper-Threading Technology, because HT is reserved for AMD's HyperTransport technology); only some models had it. The P4s with Hyper-Threading are much more primitive than the current implementations in Nehalem/Westmere/Sandy Bridge/Ivy Bridge.

P4s also ran Windows 2000/Millennium/XP, none of which were properly optimized for it. P4s also had software that wasn't optimized for it. Besides that, Hyper-Threading had nothing to do with the poor performance of the P4 compared to the Athlon 64s and Athlon FXs of the day; P4 was a poorly designed architecture. What Hyper-Threading did was allow a P4 to be better in situations where Hyper-Threading helped, which were very common because the P4 spent a lot more time waiting than most other architectures due to its massive pipeline.

Hyper-Threading often didn't help the P4 in other situations because Windows was stupid and treated it like a dual core. The main thread and the hyper-threaded thread would fight for control of the core, significantly slowing both down in those situations. XP is a lot better about this than older Windows versions, but it wasn't until Windows Vista and 7 that it was more or less perfected. Windows Vista/7, and to some extent XP, are now "aware" of hyper-threading and know how to use it to its utmost potential, which isn't a huge boon, but still better than nothing. It is enough for the i7 to beat an FX-8150. Without Hyper-threading, the i7 would just beat the FX-6100, and not by much. That is roughly a 30% improvement in multi-threaded performance where all eight threads are in use.

I could go on and on about the P4 architecture and every little thing that Intel did wrong with it if necessary.
March 2, 2012 7:28:52 PM

blazorthon said:
I didn't say that more cores don't increase performance. I said that increasing core counts to ridiculous numbers doesn't increase performance MUCH. [...]


Yes, you did say that more cores don't increase performance. You must have a huge problem with denial. Also, what EXACTLY was the main point you were trying to make in your initial post??? Cores = performance and speed. WHY DO YOU THINK AMD might release a 10-core processor??? Do some research before trying to sound like a know-it-all IT guy. And that CPU is aimed at gamers and desktop performance, not server stations or heavy graphics design. SO THERE YOU GO NOOB
a b V Motherboard
March 2, 2012 9:52:56 PM

AMD CANCELED the 10 core processor for a very good reason. It would have been NO better than the eight-core FXs, which aren't much different from the six-core FXs. FX scales slightly better with added cores for lightly threaded work than Intel's and AMD's previous architectures, but only SLIGHTLY, and even that better scaling runs out beyond the eight-core FXs.

FX scales a little better purely because it is modular and two cores share resources, not because more cores improve performance for most software. The ONLY way for more cores to substantially affect performance is if, and ONLY IF, the software supports large numbers of threads.

Why do you think that AMD canceled the 10 core desktop processors? Tell me why the i3s, which are just dual core CPUs with Hyper-threading, are faster for gaming than the FX-8150. Please, explain THAT. You need to do some research. In fact, going above the FX-4100 shows LITTLE performance difference in gaming and other lightly/single threaded work. It only helps to have large core counts if you have software that uses large core counts or you use a LOT of lightly threaded software.

The number of cores used by a program depends on how many threads it was designed to use. Most of the time it is much more difficult to code a program to use more than one thread, let alone more than two. That is why software is moving forward in core utilization so slowly.
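For anyone curious what "coding a program to use more threads" actually demands, here is a minimal sketch in Python. The workload (summing squares) is made up purely for illustration; the point is the extra splitting, dispatching, and merging steps that single-threaded code never needs:

# Minimal sketch of spreading a toy workload across worker processes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

def parallel_sum(n, workers=4):
    # 1. Split the work into chunks, 2. hand them to workers, 3. merge the results.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(i * i for i in range(1_000_000))

And that is the easy case: a workload with no shared state. Games and ordinary desktop software rarely split up this cleanly.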

I never said that more cores don't increase performance; I said they don't increase performance for most software, gaming included. Have you actually tested any of this? I ask because I have. I have a six-core AMD Phenom II 1090T overclocked to 4GHz. Since it is about twice as fast per Hz as my laptop's CPU, clocked twice as high, and has three times as many cores, and that machine has six times more RAM that is twice as fast, my desktop should be about twelve times faster than my laptop (2 x 2 x 3 = 12), right? Guess what: it ISN'T anywhere near that much faster for lightly threaded and single-threaded work.
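To put rough numbers on the diminishing returns, here is a quick sketch using Amdahl's law. The 90% parallel fraction is an assumed figure for illustration, not a measurement of any real program (for most games it would be far lower, which makes the drop-off even steeper):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup on `cores` cores if only `parallel_fraction`
    of the work can actually be split across them."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.90  # assumed: 90% of the work is parallelizable
for cores in (1, 2, 4, 6, 8, 10):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")
# Roughly 1.00x, 1.82x, 3.08x, 4.00x, 4.71x, 5.26x: more than doubling
# the core count from 4 to 10 gains only about another 1.7x.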

You don't seem to understand software, hardware, or operating systems much if you think that stuff will run faster JUST because it has more cores. As I've said before, a quad-core i5 is roughly equal to a six-core FX despite having fewer cores and no Hyper-Threading.

A ten core CPU in a desktop is next to useless compared to faster CPUs with lower core counts. That is why all AMD CPUs lose in gaming and lightly threaded work compared to similarly priced Intel CPUs right now, and it's been like that ever since Core 2 came out back in 2006/2007.

A ten core CPU would help people who do a LOT of highly threaded work, but that doesn't include the majority of computer users. That is why servers tend to have more cores: everything they tend to do uses as many threads as it can get. The same goes for a lot of workstation software.

Going beyond 16 cores (or two eight-core CPUs) requires a server OS or a non-Windows OS because Windows 7 supports only up to 16 cores. Most people simply can't utilize that many cores. AMD also canceled the Komodo ten-core CPUs because of power usage. It would probably have been a 150W TDP part, and that is a lot of power. We already have 125W CPUs (such as most of AMD's better CPUs) and that is already a lot of power; most people do not want one that uses even more, especially since it wouldn't be much use to them.

Even just to fit within a 150w TDP it would also probably need to have lower clock frequencies than the other CPUs. Lowering clock frequency would reduce performance in anything that doesn't use all ten cores and anything that did use all ten cores still wouldn't see much improvement over the 8 core FXs. It would roughly match a quad core i7 with Hyper-Threading enabled in workloads that can use all ten threads.

A CPU that just matches the i7s while using almost twice as much electricity is not an attractive option, especially since its performance in single and lightly threaded work would be deplorable, slightly worse than the FX-4100's.

AMD has learned this and decided that ten core FXs are not good for desktops. AMD made a smart move here by canceling a CPU that would not work well in the environment it was planned on being a part of.

Cores don't equal performance and speed; they only translate into performance and speed where software uses them all. Even if you can find software that uses ten cores, a 15-20% performance boost over the eight-core FXs is hardly a game changer and isn't worth passing up an already better i7. A ten-core CPU would also force AMD to lower the prices of the other FXs for it to be an attractive option, because it would need to be cheaper than the i7 or it is not going to sell much, for what should be obvious reasons.

A ten-core FX would fight with the i7s in multi-threaded performance, use a lot of power, and completely fail at lightly threaded performance. The i7, on the other hand, uses a similar amount of power to the FX-4100 (often less) and would have hugely higher performance per core, almost double at that point, because of the clock rate cuts needed to keep the ten-core FX from using more power than all but expensive CPU coolers, power supplies, and motherboards can handle.

On top of that, the ten-core FX would bring in a much smaller profit because it would cost a lot more to make due to its huge die size. As a die gets larger, its cost climbs much faster than the area does: fewer dies fit on a single wafer, and yields drop because the larger a die is, the more likely it is to contain a defect that forces cores, cache, etc. to be disabled, or the entire chip to be thrown away.
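Here is a rough sketch of why that happens. The wafer size, die areas, and defect density are assumed round numbers for illustration, and the yield formula is the simple Poisson model, not anyone's actual fab data:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough geometric estimate of how many whole dies fit on a wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_mm2=0.002):
    """Poisson yield model: a bigger die is more likely to catch a defect."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

for area in (315, 390):  # assumed: roughly an 8-core FX die vs. a hypothetical 10-core die
    good = dies_per_wafer(area) * yield_fraction(area)
    print(f"{area} mm^2: about {good:.0f} good dies per wafer")
# Fewer candidate dies AND a lower yield per die, so the cost per good die
# climbs much faster than the ~25% increase in area alone would suggest.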

On top of that, AMD's chips are mainly made by GlobalFoundries, a fabrication company known for lower yields and poorer transistor performance than Intel and other companies. All of this adds up to a very expensive processor that under-performs in most software even compared to Phenom II and Nehalem, and is roughly on par with the i7s only in the most highly threaded work.

The low yields mean low stock and poor binning, so clock rates would be similar to the server Interlagos Opterons, and even those Opterons use half the power. It would be a disaster for AMD in almost every possible way.

AMD has enough problems with the eight-core dies that many of them are turned into six-core and quad-core CPUs, so why exacerbate that with a ten-core die? Like I said, complete disaster. I guess you didn't think about all of this before making your claim. I can explain even more reasons why it wouldn't be a good idea, but I think this is enough to get the point across.

Honestly, I'm not trying to make you feel/look stupid, but I'm trying to explain how this stuff works. Technology just isn't as simple as you seem to think it is. It is fairly complex.
March 2, 2012 10:25:03 PM

Again, you didn't even answer my question about what your initial point is, LOL, instead you go on about AMD's 10 core CPU. In fact AMD has not canceled anything; it's still unknown what they're doing for the Q3 launch this year. Also the Core i3 is different than a dual core, buddy. Standard dual cores only have 2 threads; the i3 has 4, so what you said was false. AMD is not increasing the voltage of their new CPUs past 125 watts, so even if they did bring out a 10 core it would be past a minimum TDP of 125. AMD is keeping their processors on the same nm this launch, so how could it cost more power?? Wouldn't the 10 core obviously outperform the 8 core FX?
a b V Motherboard
March 2, 2012 10:59:35 PM

The i3 is a standard dual core CPU with Hyper-Threading. It wins against FX because of its performance per physical core in single/dual-threaded work, and Hyper-Threading keeps it right behind the FX-4100 in highly threaded work. Glad to see that you finally admitted that i3s have Hyper-Threading (this is why they have four threads, as I stated earlier). However, even if you disable Hyper-Threading (leaving the i3 with only its two physical threads), it can and will outperform FX in gaming. It will be closer, but it will still win. Hyper-Threading doesn't improve per-thread performance; it allows resources being wasted by one thread to be utilized by another thread so that aggregate throughput is increased.
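A toy way to picture that last point (the numbers are invented for illustration): suppose a single thread stalls, waiting on memory, branch misses and so on, for some fraction of its cycles. A second hardware thread can soak up part of those idle cycles:

# Toy model of SMT: a second thread reuses cycles the first thread wastes on stalls.
def core_throughput(stall_fraction, smt=False):
    """Relative throughput of one core; 1.0 means no cycle is ever wasted."""
    busy = 1.0 - stall_fraction          # work the first thread gets done
    if not smt:
        return busy
    # The second thread can only use the leftover (stalled) cycles,
    # and it stalls just as often, so it recovers only part of them.
    recovered = stall_fraction * (1.0 - stall_fraction)
    return min(1.0, busy + recovered)

for s in (0.2, 0.3, 0.4):   # assumed stall fractions
    gain = core_throughput(s, smt=True) / core_throughput(s) - 1
    print(f"stall {s:.0%}: SMT gains about {gain:.0%}")
# Prints gains of roughly 20%, 30% and 40%: the more a core sits idle
# (like the old P4 with its long pipeline), the more a second thread can fill in.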

The ten core on the same 32nm process as the FX-8150 would outperform the 8150 in highly threaded work despite its lower clock frequency. Raising clock frequency increases power draw much faster than linearly (the voltage has to rise along with it), so the clock doesn't need to come down much to cut power usage significantly. Once again, I'm not trying to make fun of you, but this shows that you just don't understand the technology very well. Increasing core count increases power usage more or less linearly, but increasing clock frequency drives it up far more steeply.
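To make the cores-versus-clocks point concrete, here is a rough sketch built on the standard dynamic-power relation P ≈ C x V^2 x f (switched capacitance, voltage, frequency). The baseline and the voltage/frequency pairings are invented for illustration, not AMD's real numbers:

# Dynamic power grows linearly with cores but with voltage^2 * frequency,
# so clock (and voltage) bumps cost far more than extra cores do.
def relative_power(cores, freq_ghz, volts, base=(8, 3.6, 1.2)):
    b_cores, b_freq, b_volts = base  # assumed baseline: 8 cores at 3.6GHz, 1.2V
    return (cores / b_cores) * (freq_ghz / b_freq) * (volts / b_volts) ** 2

print(relative_power(10, 3.6, 1.2))   # +2 cores, same clock:           ~1.25x power
print(relative_power(8, 4.2, 1.35))   # same cores, ~17% higher clock:  ~1.48x power
print(relative_power(10, 3.0, 1.1))   # 10 cores, clocked/volted down:  ~0.88x power
# Dropping the clock (and the voltage it needs) claws back a lot of power,
# which is why a ten-core part would have to run slower to stay near 125-150W.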

I would expect the ten core to have a base clock somewhere between 2.2GHz and 3GHz, with Turbo going higher. I base this on the 12/16-core Interlagos CPUs because it would probably be similar to them, setting aside any design improvements AMD might have made. I say similar because the 12/16-core Interlagos CPUs are dual-die packages instead of a single large die. The 16-core parts go up to 140W TDPs; a single large-die ten core would be somewhere around there at similar clock frequencies.

Also worth noting is that these server processors are more efficient than the desktop versions, which is why a desktop ten core would use even more power. The ten core would outperform the eight-core FXs and should catch up to the i7s in highly threaded work.

I talked about the ten core CPU and why it isn't practical because you brought it up again. Yes, AMD confirmed canceling it. AMD is supposed to launch Piledriver and Trinity this year. Piledriver is supposed to max out at eight cores, Trinity will probably max out at four, but I haven't looked into this for confirmation. At most, Trinity won't have more than 6 and that I can guarantee.

Voltage and wattage are very different. Voltage times amperage equals wattage. Although increasing voltage obviously increases wattage, if you look at that formula, the two are also obviously not equal.
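In other words (with numbers picked purely for illustration):

\[
P = V \times I, \qquad \text{e.g. } 1.25\,\text{V} \times 100\,\text{A} = 125\,\text{W}
\]

The same 125W could just as well come from a slightly higher voltage at a lower current, which is exactly why a wattage (TDP) figure tells you nothing direct about voltage.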

You meant maximum TDP (I think) and it wouldn't be the first time a CPU had a 150w TDP.
a b V Motherboard
March 2, 2012 11:04:16 PM

Quote:
Using sales lingo and marketing rhetoric does not help your argument. HT sucked on the Pentium 4 because the Pentium 4 sucked, and HT is better on the i3 because the i3 chip is better; see the signifiers? The i7 2600K is faster because it is just a faster chip, not because Intel made a marketing mantra out of HT.


Hyper-threading on the P4 sucked for the reasons I specified. Yes, P4 sucked and as I said I could go on and on about why it sucked, but that doesn't change anything. If you run Windows 7 on a P4 with Hyper-Threading, it will be better than it was on XP and Windows 2000. I have tested this myself. The whole system might be slower because Windows 7 is a heavier system than XP (more bloat, etc.), but the difference between Hyper-threading enabled and disabled shows performance improvements because of the OS optimizations.

Marketing has nothing to do with what I said.
a b V Motherboard
March 2, 2012 11:13:08 PM

@diablo24life

The purpose behind the post you are questioning was exactly what it implied if you read it: explaining part of how cores and threads work in the different processors and operating systems to affect performance.

I was trying to explain why you were wrong about cores and provide credibility, although it seems to have gone over your head.
March 2, 2012 11:47:16 PM

blazorthon said:
The i3 is a standard dual core CPU with Hyper-Threading. It wins against FX out of it's performance per physical core in single/dual threaded work and hyper-threading keeps it right behind the FX-4100 in highly threaded work. Glad to see that you finally admitted that i3s have Hyper-Threading (this is why they have four threads, as I stated earlier). However, even if you disable Hyper-Threading (leaving the i3 with only it's two physical threads), it can and will outperform FX in gaming. It will be closer, but it will still win. Hyper-Threading doesn't improve per thread performance, it allows resources being wasted by a thread to be utilized by another thread so that aggregate throughput is increased.

The ten core on the same 32nm process as the FX-8150 would outperform the 8150 in highly threaded work despite it's lower clock frequency. Increasing clock frequency increases power exponentially so it doesn't need to go down too much to lower power usage significantly. Once again, I'm not trying to make fun of you, but this shows that you just don't understand technology too much. Increasing core count increases power usage more or less linearly, but increasing clock frequency increases power usage exponentially.

I would expect the ten core to have a base clock frequency range from 2.2GHz and 3GHz standard and turbo going higher. I base this number on the 12/16 core Interlagos CPUs because it would probably be similar to them, foregoing any improvements made in the design by AMD. I say similar because the 12/16 core Interlagos CPUs are dual die CPUs instead of a single large die. The 16 core CPUs go up to 140w TDPs, the single, large die 10 core would be somewhere around there at similar clock frequencies.

Also worth noting is that these server processors are more efficient than desktop versions, hence the higher power usage of the 10 core CPU. The ten core would outperform the eight core FXs and should catch up to the i7s in highly threaded work.

I talked about the ten core CPU and why it isn't practical because you brought it up again. Yes, AMD confirmed canceling it. AMD is supposed to launch Piledriver and Trinity this year. Piledriver is supposed to max out at eight cores, Trinity will probably max out at four, but I haven't looked into this for confirmation. At most, Trinity won't have more than 6 and that I can guarantee.

Voltage and wattage are very different. Voltage times amperage equals wattage. Although increasing voltage obviously increases wattage, if you look at that formula, the two are also obviously not equal.

You meant maximum TDP (I think) and it wouldn't be the first time a CPU had a 150w TDP.


A standard dual core? An i3? No, a standard dual core would be an E5300. An i3 is definitely not standard with 4 threads and the added technology it has. BTW, the name Piledriver is just the name of their new cores; the LAUNCH of the new desktop CPUs is codenamed "Vishera". AMD or Intel would not release a new CPU higher than 125 TDP on an AM3+ socket; you won't see anything go beyond 125. They're trying to shrink energy cost, FYI, lmao. That's why they try to decrease the nm size, because of lower power consumption. You're the one who said a 10 core wouldn't outperform, say, the 4100 because not many programs require that amount of cores. Whenever there are more cores your cache is going to be higher; cache isn't divided up per individual core. You might want to double check what you're talking about. A 10 core or 8 core processor will beat a 4 core any day.
a b V Motherboard
March 3, 2012 12:34:26 AM

The only added technology an i3 has is Hyper-Threading. That is why it has four threads. Disabling Hyper-Threading will leave it with only two threads; there is no other "added technology" that makes it non-standard for a dual core CPU. I know that Piledriver is just the name of the cores, but FX is also often referred to by its architecture's name. The new Vishera platform might still be referred to by AMD as FX in the model names, and it is more relevant to call the chips by the architecture name than the platform name. It is more relevant because the architecture is the entire design of the CPU; it defines the CPU. Vishera is just the platform name. It doesn't define the CPU, it defines the platform.



I also doubt that AMD will ever release something that goes beyond 125W on a desktop socket; I was using it to make a point as to why the ten-core Komodo was impractical and why it was canceled. Intel also doesn't seem to want to go above 130W TDPs right now (Intel doesn't do 125W; they tend to do 130W for their most power-hungry CPUs), but they have done it in the past. There are 150W P4s and Pentium Ds, and one or two 150W Core 2 Quads.

You don't shrink the process size (the nm figure, which roughly describes how small the transistors' features are) just to reduce power usage; it is done to increase transistor density. That allows more transistors to fit in a given space and/or the same number of transistors in a smaller space.
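As an idealized example of what a shrink buys (real processes never scale perfectly, so treat this as a rough upper bound):

\[
\text{density ratio} \approx \left(\frac{45\,\text{nm}}{32\,\text{nm}}\right)^{2} \approx 2.0
\]

so a 45nm-to-32nm shrink can fit roughly twice as many transistors in the same area, or the same design in about half the space.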

I said that the hypothetical ten core FX would be faster than everything except the i7s with Hyper-Threading enabled in highly threaded work and that it would roughly match the i7s in said work. The i7s are quad cores with Hyper-Threading, but their performance per core is greater than FX so they are faster than FX with similar core counts. Hyper-Threading allows them to beat FX CPUs that have many more cores than the i7. However, the i7 will roughly match a ten core FX that is similar to the hypothetical specs I mentioned.

Actually, the FX quad-, six-, and eight-core CPUs all have the same amount of shared L3 cache: 8 megabytes. The same goes for the Valencia Opterons, regardless of core count. All of the Interlagos Opterons have 16MB of shared L3 cache, once again regardless of core count.

All FX CPUs have 8MB of L3 cache per die. Intel has different amounts of cache depending on if it is a Celeron single core, Celeron dual core, Pentium, i3, i5, or i7. Celerons adhere to your claim, but the rest don't. Celerons have 1MB of L3 per core. Pentiums all have 2MB of L3. All i3s have 3MB of L3. All i5s have 6MB of L3. All i7s have 8MB of L3.

These rules only apply to Sandy Bridge CPUs. Nehalem/Westmere is kind of a mess without general rules, and Ivy Bridge might have different rules for its L3 cache. All other levels of cache, L1 and L2, are identical across all Sandy Bridge cores: two 32KB L1 caches (instruction and data) and 256KB of L2 per core.

Cache within a module is shared between its two cores, so if one of the cores is not in use, the other core gets to use the leftover cache and other resources. A great example of this is the considerably improved per-core performance, gaming performance, and gaming power efficiency when one core out of each module of an eight-core FX is disabled.

By disabling a core from each module, the leftover resources are used by only one core, improving that core's performance. For example, instead of sharing two megabytes of L2 cache (that is how much each module has) and the module's FPU resources between two cores, the one remaining core has all of it. Having half of the cores disabled also decreases power usage, allowing for a higher overclock. Granted, this reduces highly threaded performance, but that doesn't matter as much for gaming. Another site proved this too; I believe it was X-bit labs that demonstrated it. I still find it odd that other sites didn't follow this phenomenon up in more detail, but oh well.

No, a processor with more cores does NOT necessarily beat one with fewer cores. You need to check. An i3 will beat the FX-8150, which has four times as many cores, in gaming. This has been proven many times even on this very website, tomshardware.com; the testers of this site, such as Cleeve (who does a lot of the work here on Tom's), have already done several tests, and many other sites have too. FX never wins in gaming, for the reasons I have already explained. FX matches Intel in a few games, but it does not win over Intel. For gaming, Intel is always at the top, whereas AMD is only sometimes at or near the top with Intel.
March 3, 2012 1:09:26 AM

The amount of L3 cache for the FX is the same, but the amount of L2 cache is different depending on the number of cores. That's why the 8150 outperforms the 4100, along with its increased cores and threads. The i3 is not a basic dual core. How could something with added technologies and HT be compared to an E5300? It can't be, because it runs circles around it. ROFL, oh really eh? The i3 can beat the 8150? Google AMD vs Intel gaming and search for icpworlds channel. They compared even a 6100 to the i3 and the i3 couldn't even beat it in its higher strength. i3s are complete crap for gaming; I've had one. And it sounds to me like you don't speak from experience. I have the exact same setup, only switching my CPU, and the 8150 kills it.
a b V Motherboard
March 3, 2012 2:06:10 AM

You repeated what I said about the L2 cache and L1 cache of FX: it is 2MB of L2 per dual-core module. However, that L2 cache is shared only between the two cores within a module, so adding more modules doesn't improve any individual core's cache, because cache is not shared across modules. The 8150 outperforms the 4100 for the following reasons:
1. It has more cores. This allows some work load that would normally be stuck with four cores, such as background tasks, to spread out a little more across the eight cores. This allows the more intense software, such as the lightly/single threaded games, to utilize what few cores it can use slightly more because there is less usage from other stuff on each core.

2. The 8150 can allocate the more intense threads to a single module whilst the less intensive threads share a module, further improving the single/lightly threaded performance slightly more.

3. The 8150 has a higher Turbo Core than the 4100.

The i3 has no added technology beyond Hyper-Threading to differentiate it from other dual core CPUs. If you disable Hyper-Threading, it no longer presents two physical cores as four logical threads; it has just two threads like any other dual core processor that lacks Hyper-Threading. The i3 outperforms them all simply because it has a more efficient architecture with higher IPC, aka Instructions Per Clock.

One example of the i3's advantages is its faster cache compared to AMD and previous Intel designs. AMD tries to use huge caches to make up for the cache's low performance. This helps, but not enough; a small, fast cache will usually beat a larger, slower cache. The big-cache approach is akin to using many slow cores to fight a CPU with fewer, faster cores: it only pays off in situations that use most of the cache. Most programs don't care, and for those that do, Intel has its L3 cache, so it doesn't matter much. I can get more technical and explain this further if necessary.

The i3 also makes more efficient use of memory bandwidth and memory latency than AMD and previous Intel designs. The i3 is simply a better design than the rest. Remember, different CPU architectures perform differently. Also, just because something performs significantly faster than another doesn't mean that they aren't both dual cores, the same goes for any other core count. For example, the Core 2 CPUs such as the e5300 you mention fly circles around the Pentium Ds that they replace. Both are standard, dual core CPUs, one architecture is simply better. The same reason allowed Athlon 64s to beat Pentium 4s and Pentium Ds (both have the same architecture, it is called Netburst).

I don't know why the review you list shows the opposite of every other review that I've seen and my own tests. Perhaps there were some other variables. Thank you for bringing it up, I will look into it and see if I can find out what happened there. Please understand that some reviews are done improperly and when they don't follow the overwhelming majority, there is probably something wrong with them. This is not always the case and perhaps I am wrong, so like I said, I'll look into it, but every review I've seen and my own tests show otherwise. Perhaps they only tested it in the few highly threaded games, perhaps the Intel system had a bottleneck, perhaps they know of some optimizations that most people don't. Whatever the case, it would be a lot easier and faster for me to confirm your findings if you give me a link to this review.

i3s are crap? Did you pair it with a weak graphics card, or run it on the integrated graphics? Integrated graphics (except for AMD's APU families) is not strong enough for gaming. In fact, Intel's graphics sometimes has errors with modern games and can't play them at all. Intel's graphics is better for machines that don't game. It is great for watching even 1080p movies and TV or regular work such as web browsing (I do this even on the MUCH weaker GMA 950 from an older system I have; my laptop also has weaker graphics than Intel's HD 2000/3000). The HD 3000 can even play 3D 1080p movies and TV, although that is something I have not tested myself and I can't tell you how good or bad it is. Considering that 3D is only twice as graphics-intensive as regular 2D movies and TV, it can probably handle it easily, but like I said, this is something I can't confirm personally.

However, i3s are more than enough for even full, maxed out 1080p gaming if you pair them with a video card that can also handle it. For example, an i3 is powerful enough to have even the big Radeon 6900 cards and equivalent Nvidia cards. However, I wouldn't pair them with more than a Radeon 6870 simply because if I can afford that good of a video card, then surely I can afford an i5 too.

I speak from experience. FYI, two GTX 295s in SLI are bottlenecked by an i3-2100, but one GTX 295 is okay for an i3. Also FYI, the GTX 295 is a rough equivalent for the Radeon 6970/4870X2 and GTX 570/480 cards in performance. Although it has a little bit too little memory for some games at or beyond 1080p now, it has the raw GPU performance for even the top games maxed out at 1080p/1920x1200 and even beyond it in most games besides the most intense games like Metro 2033 and BF3.

There is a HUGE difference between gaming and TV regardless of resolution. TV/movies are far less intense than even older games to the point where even my crap GMA 950 can do 720p/1080p TV, but not even gaming at the minimum resolutions and settings in any modern game, so don't get confused there. I assume you already know at least this much and I apologize if it seems like I am patronizing you, but I had to say it to make sure you knew.

Besides, how can you switch between an i3 and an FX-8150 in the same setup? They don't work on the same motherboards; there is no cross-socket compatibility. Do you mean the 4100? I already stated that the 8150 is often superior in gaming to the 4100, and the 4100 is compatible with the same motherboards as the 8150, so I have to assume that you meant the 4100.
a b V Motherboard
March 3, 2012 2:29:07 AM

Quote:
The only reason the Pentium 4 would have been a bit faster on Win7 is because Win7 is a snappier, better, more optimized OS than XP, so any chip will perform a bit better on Win7. Logic failed, you shot yourself in the foot again.


Why did I say that the P4 would have more efficient Hyper-Threading on Win 7 than on XP? Oh yeah, I said Hyper-Threading would be more efficient on Win 7 because Win 7 has optimizations for it. Your logic is a copy of mine except with incorrect terms; you are the one who shot yourself in the foot here.

Even on Windows 7, the P4's more primitive implementation of Hyper-Threading is still not as efficient as the implementation used in the Sandy Bridge i3s and i7s. The same goes for XP: the newer implementation is more efficient on XP than the P4's implementation is on XP. Technology does this thing called advancing and progressing over time; perhaps you should look it up. Hyper-Threading is more EFFICIENT on the newer processors because it is implemented better. That the base speed of the i3s and i7s is better than the P4s and PDs is because of their superior architectures, process nodes, cache, etc.

My logic is not failing, because it is not only right but backed by Intel themselves on their own website and by every reviewer who cared to compare the (relatively) ancient Netburst-era processors with Hyper-Threading to the modern processors that have it. Unfortunately, not many seemed to care, but oh well. I guess I was lucky that my old P4 630 HT from my Dell Dimension E510 hasn't failed on me yet. I don't have the i3 anymore so I can't give you a comparison, but I can show the difference in efficiency between Windows 7 and XP if you like, although it might take a few days because I would need to set up two operating systems and then benchmark them both.
a b V Motherboard
March 3, 2012 2:45:28 AM

Quote:
LOL GTX 295 gets owned by an Ati Radeon HD 5870 LOL - http://www.youtube.com/watch?v=UMtFVAEm3h4


LOL, you are an idiot. Look at the FPS given at the end of the video. The GTX 295 has higher FPS, and that is performance; FPS means Frames Per Second in the context of video card performance. The power usage is higher because the GTX 295 is a less power-efficient card, but it has higher performance, with the 5870 right behind it. Learn to read benchmarks before making posts about them. Unlike you, Cleeve and the other members of Tom's know that the GTX 295 is equal to the Radeon 6970, Radeon 4870X2, GTX 570, and GTX 480 in raw performance.

If you read the benchmarks at the end of the video, you will see that the Radeon 5870 has an average FPS just under 30FPS and the GTX 295 is just under 35FPS.
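Doing the arithmetic on those two averages:

\[
\frac{35\ \text{FPS}}{30\ \text{FPS}} \approx 1.17
\]

so in that particular run the GTX 295 comes out roughly 15-20% ahead, which is the size of gap described further down.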

Power usage, well it's not an efficient card because it is older. It has two inefficient GPUs and the 5870 has one more efficient GPU, but the aggregate performance of the two GPUs of the GTX 295 is slightly greater than the single GPU of the 5870. The Radeon 4870X2 uses similarly high amounts of power and the GTX 480 is also pretty inefficient and could use a similar amount of electricity.

Any real gaming enthusiast and game savvy person knows how to read benchmarks, but I see that you DON'T. There is a difference between playing games and fundamentally understanding them and the video cards. The GTX 295 is superior to the Radeon 5870 in performance, not so much in power usage.

You can go to the Graphics Card Hierarchy here at Toms Hardware to find out for yourself. No gaming enthusiast is stupid enough to think that power usage is performance, so you are not a real enthusiast.
a b V Motherboard
March 3, 2012 2:56:06 AM

Quote:
It doesn't matter what CPU is used; in Win7 it will perform better than in XP, end of story, because Win7 is just a better evolution in OS design, HT or not. LOL, HT is a joke.


Wrong again. Windows 7 usually is slower than XP because it is a much heavier, more bloated OS. That is why XP requires something like an old few-hundred-MHz CPU whilst 7 needs at least a 1.8GHz or so CPU, as listed by Microsoft. Windows Vista is even worse. XP also uses far less RAM (which shows that it is less bloated).

I have tested all of this myself as well. I first tested it on my laptop (Windows 7 was slower than XP and used about four times more RAM), then on my old Compaq that had a Pentium D. The Pentium D machine was even more noticeably slower because a Pentium D is slower than a Turion 64 x2 such as the CPU in my laptop.

Now I know for a fact that you know little to nothing about Windows AND CPUs unless you are pretending to be ignorant, nice going there.
a b V Motherboard
March 3, 2012 2:59:41 AM

Quote:
Now you are starting in with the ad hominem attacks, and that 5870 was damn near neck and neck for most of the benchmark run, no need to try and fool me. GTX 295 = 2x GTX 260, and it is outdated, super hot, has terrible scaling and massive power draw, and the 5870 is pretty well as fast LOL.


The 5870 is also outdated, yet you brought it up. The GTX 295 may be outdated, but it is still slightly faster than the 5870 (about 10-20% faster) and among the fastest graphics cards in the world even today. This is because it is a flagship dual GPU card whereas the 5870 and 6970 are both only flagship single GPU cards. I say "only" because they don't win. The 6970 is about identical to the GTX 295 with the 5870 trailing right behind the GTX 295 and 6970 and the other similarly performing cards.

Yes, 30FPS compared to 35FPS is pretty close, but 35 is still a win and you said that the 5870 is the winner so you were flat out wrong. Besides, if the scaling on the 295 is that bad, then why is it WINNING? Yes, it has poor multi-GPU scaling, but it is still enough to pull ahead of the 5870 and 6950 (6950 and 5870 are rough equivalents, also similar to the Radeon 4850X2 and the GTX 560 TI and the GTX 470).

I can keep going with graphics and the internals of a graphics card just as well as a CPU. You can't prove someone like me wrong because I am right. If I am not sure, then I say I am not sure, I don't say I'm right.
a b V Motherboard
March 3, 2012 3:04:14 AM

Quote:
Funny, 'cause I tested the same and found Win7 to be way snappier on a low-end netbook. WinXP was good in its day, but it is long in the tooth and a glimmer of what it once was, now under the shadow of Win7.


It depends on whether or not you updated XP fully to SP3 with all updates. XP before SP2 is kind of crappy, a lot slower and considerably less stable. I suspect that you didn't have SP3 and full updates.

XP SP3 has greater throughput performance (games and such other intense work) than Windows 7, but if the processor is enough for Windows 7 then it can be a little snappier. However, XP SP3 is much better than SP2 which is WAY better than SP1 or XP without a Service Pack update.

Windows 7 is also only better than XP SP3 if the machine has 1GB or more RAM, preferably of DDR2 instead of DDR or SDRAM.
a b V Motherboard
March 3, 2012 3:13:25 AM

Quote:
Everyone had SP3 and I am not a noob.


I didn't call you a noob, and not everyone had SP3. Maybe your XP setup simply accumulated too much crap. A system slows down over time, especially if you don't do the kind of maintenance that, if done improperly, can ruin the computer. One example is the registry. Every time a program is installed or saves certain settings, entries go into the registry. Uninstalling the program will delete most of the registry keys it created (it is very rare for an uninstaller to get them all), but the registry does not shrink after uninstalling, so once it grows, it stays larger and slightly slower.

The registry is accessed thousands of times a second. Some programs can shrink it, but it is a slightly dangerous process that can destroy it if done improperly or an error occurs. There are many other things that slow a computer down over time that go unnoticed except for the performance difference. The only way to compare two operating systems is to install an OS and then set it up, then test it. You can't test it after it has been running off and on for years.
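If anyone wants to see how much their own registry has sprawled, here is a small read-only sketch (Windows only, using Python's built-in winreg module; the branch chosen is just an example). It only counts keys and does not touch or "clean" anything:

# Count how many keys live under a registry branch (read-only, Windows only).
import winreg

def count_keys(root, path=""):
    total = 1
    try:
        with winreg.OpenKey(root, path) as key:
            subkeys, _values, _modified = winreg.QueryInfoKey(key)
            for i in range(subkeys):
                name = winreg.EnumKey(key, i)
                total += count_keys(root, path + "\\" + name if path else name)
    except OSError:
        pass  # some keys are off-limits; skip them rather than fail
    return total

if __name__ == "__main__":
    # Example branch only; walking a full hive can take a while.
    print(count_keys(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows"))

Run it before and after installing and uninstalling a few programs and you can watch the count creep upward, which is the growth described above.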
a b V Motherboard
March 3, 2012 3:16:46 AM

Quote:
Ya, the 5870 is the winner because it is cheaper, cooler running, takes less power, and matches the performance in most games next to the GTX 295, and does it all with one cool GPU, not the overheating double barrels of the quasi-SLI GTX 295. Everyone knows that the Radeon 58xx was far superior to the Nvidias of the time and it took two of the Nvidia GPUs to compete, a la GTX 295, LOL. GTX 295 is a failure ;-)


The GTX 295 didn't compete with the Radeon 5870... It competed with the 4870X2. Yet another fail from YOU. The GTX 470 competed with the Radeon 5870 and they are roughly equal in performance. Nvidia also had the even faster GTX 480, although it was an inefficient video card that used a lot of power and underperformed compared to its projected performance. However, it was still king of the hill among single-GPU cards until the GTX 580 was made, a *perfected* version of the GTX 480 that performed where the 480 should have, with the correct power usage. The 580 went uncontested as the fastest single-GPU video card on the planet until January of this year; it only lost its title when the Radeon 7970 came out. The Radeon 7950 also came out a month after the Radeon 7970 and it matches the GTX 580 in performance.

Excluding the Radeon 4870X2 and GTX 295, all of the above video cards from Nvidia and AMD are single GPU cards.

You show more ignorance of computers with every post.

EDIT: The only thing that AMD had that could compete with the GTX 580 in raw performance was the Radeon 5970, a dual GPU card. So if dual GPU cards means fail, then it was AMD that failed. The 5970 is also less future proofed most of the time because most of them only have 1GB of memory per GPU (the 5970 has 2GB total, 1GB per GPU. GPUs don't share memory like multi-CPU servers share memory between CPUs so they each need their own).

The GTX 580 has 1.5GB instead of 1GB and thus can do higher resolutions and quality settings than the 2GB 5970. There are some rare 4GB 5970s that have 2GB per GPU, but there are also 3GB GTX 580s that are still better.
a b V Motherboard
March 3, 2012 3:18:03 AM

Quote:
Clean install Apples to Apples test and good night to you I have had enough of this LOL.


All right, I'll bite: What machine did you use, exact model and/or specifications please.

Also, what have you had enough of? You are easily proven wrong just by looking at the Graphics Hierarchy Chart and the Gaming CPU Chart and a review or two on Hyper-Threading, all of this is available on Tom's.

Also, Tom's already did a comparison of performance between XP, Vista, and 7. Guess what won? XP. There is a reason the most respected tech site published an article that contradicts you, and so do my own tests: you are wrong.

XP doesn't have native support for anything higher than DX9.0c, but it can get support for DX10 and DX11 through some third-party programs. Unfortunately, Tom's didn't test those programs, but the DX9 workloads that were tested were fastest on XP, proving me right about it.

Besides, I have an XP machine with 512MB memory. Guess how much XP itself uses on it? Less than 70MB, usually closer to 50MB. 512MB isn't even enough to run Windows 7 and Vista comfortably, yet I can do a lot with the spare 440-460MB I have left over with XP.

Windows 7 can't ever use less than 148MB, and only highly customized versions can even do that (for example, eXperience's Windows 7 Lite version can do 148MB; it is the lightest). However, my XP can run in even 128MB and be usable, while Win 7 isn't usable with 148MB; not even the eXperience version is. eXperience needs 256MB at a minimum to be usable; it can just barely boot at 148MB. I've tested all of this too. The problem with the eXperience version is that it is an illegal cracked copy, so it can't be used legally on a standard machine unless you modify it yourself. Given your track record here, I assume that modifying an OS installation ISO is beyond your skill level, so please forgive me if you are capable of it.

7 wastes RAM. A normal Win 7 can't do less than 1GB very well at all most of the time, but sometimes can do 512MB... No less. XP can do 512MB just fine whilst Windows 7 struggles. Windows 7 starter might be able to do 512MB, but I'm not sure and that I can't test. Starter is an OEM only version so it's not quite as simple as going to Microsoft.com and downloading it like the retail versions.
March 3, 2012 6:41:55 PM

blazorthon said:
You repeated what I said about the L2 cache and L1 cache of FX, it is 2MB per dual core module. However, the L2 cache is only shared between the two cores in each module so even having more modules doesn't make a per core improvement due to cache being shared between modules. The 8150 outperforms the 4100 because of the following reasons:
1. It has more cores. This allows some work load that would normally be stuck with four cores, such as background tasks, to spread out a little more across the eight cores. This allows the more intense software, such as the lightly/single threaded games, to utilize what few cores it can use slightly more because there is less usage from other stuff on each core.

2. The 8150 can allocate the more intense threads to a single module whilst the less intensive threads share a module, further improving the single/lightly threaded performance slightly more.

3. The 8150 has a higher Turbo Core than the 4100.

The i3 has no added technology beyond Hyper-Threading to differentiate it from other dual core CPUs. If you disable Hyper-Threading then it will not have two physical threads and two logical threads, it will have two physical threads just like any other dual core processor that lacks Hyper-Threading. The i3 outperforms them all simply because it has a more efficient architecture that improves it's IPC, aka Instructions Per Clock.

One example of the i3's advantages is it's faster cache than AMD and previous Intel designs. AMD tries to use huge caches to make up for the cache's low performance. This helps, but not enough. A small, fast cache will usually beat a larger, slower cache. This method is akin to using multiple slow cores to fight a CPU with fewer faster cores, it only works in situations that use the most of the cache. Most programs don't care, and for those that do, Intel has it's L3 cache so it doesn't matter much. I can get more technical and explain this further if necessary.

The i3 also makes more efficient use of memory bandwidth and memory latency than AMD and previous Intel designs. The i3 is simply a better design than the rest. Remember, different CPU architectures perform differently. Also, just because something performs significantly faster than another doesn't mean that they aren't both dual cores, the same goes for any other core count. For example, the Core 2 CPUs such as the e5300 you mention fly circles around the Pentium Ds that they replace. Both are standard, dual core CPUs, one architecture is simply better. The same reason allowed Athlon 64s to beat Pentium 4s and Pentium Ds (both have the same architecture, it is called Netburst).

I don't know why the review you cite shows the opposite of every other review I've seen and of my own tests; perhaps there were other variables. Thanks for bringing it up, I'll look into what happened there. Please understand that some reviews are done improperly, and when one disagrees with the overwhelming majority, something is probably wrong with it. That's not always the case and perhaps I'm wrong, but every review I've seen and my own tests say otherwise. Maybe they only tested the few highly threaded games, maybe the Intel system had a bottleneck, maybe they know of optimizations most people don't. Either way, it would be much easier and faster for me to check your findings if you give me a link to the review.

i3s are crap? Did you test with only a weak graphics card, or on the integrated graphics alone? Integrated graphics (except for AMD's APU families) isn't strong enough for gaming; in fact, Intel's graphics sometimes has rendering errors in modern games or can't run them at all. Intel's graphics is better suited to machines that don't game. It's great for watching even 1080p movies and TV, or for ordinary work like web browsing (I do this even on the MUCH weaker GMA 950 in an older system of mine, and my laptop's graphics is weaker than Intel's HD 2000/3000). The HD 3000 can reportedly even play 3D 1080p movies and TV, although I haven't tested that myself and can't tell you how well it works. Given that 3D is only about twice as demanding as regular 2D movies and TV, it can probably handle it easily, but as I said, that's something I can't confirm personally.

However, an i3 is more than enough even for maxed-out 1080p gaming if you pair it with a video card that can handle it; it's powerful enough to feed even the big Radeon 6900 cards and their Nvidia equivalents. That said, I wouldn't pair one with anything beyond a Radeon 6870, simply because if I can afford a card that good, I can surely afford an i5 too.

I speak from experience. FYI, two GTX 295s in SLI are bottlenecked by an i3-2100, but a single GTX 295 is fine with an i3. Also FYI, the GTX 295 is a rough performance equivalent of the Radeon 6970/4870X2 and the GTX 570/480. Its memory is a bit short for some games at or beyond 1080p these days, but it has the raw GPU power for even the top games maxed out at 1080p/1920x1200, and beyond that in most titles other than the most intense ones like Metro 2033 and BF3.

There is a HUGE difference between gaming and TV regardless of resolution. TV and movies are far less demanding than even older games, to the point where even my crappy GMA 950 can handle 720p/1080p TV but can't run any modern game even at minimum resolution and settings, so don't get those confused. I assume you already know this much, and I apologize if it sounds patronizing, but I had to say it to be sure.

Besides, how can you switch between an i3 and an FX-8150 in the same setup? They don't fit the same motherboards; there's no cross-socket compatibility. Do you mean the FX-4100? I've already said the 8150 is often better in gaming than the 4100, and the 4100 uses the same motherboards as the 8150, so I have to assume you meant the 4100.


Thanks for bringing that up, actually. The amount of L2 cache increases performance overall, not just for each individual core; you really don't understand what L2 cache is for. L1 cache is nothing compared to L2. Why do you think the FX-6100 outperforms the 4100? In gaming, with games only using 2-4 cores, and even in simple applications, the 6100 comes out ahead, and that's because of its cache and thread count; the extra cache and threads carry over. Lol, this is quite hysterical, though you keep going off on tangents that are completely off topic. In case you're unaware, I have an Intel and an AMD motherboard at the same performance and price point for comparison. You haven't even tested them out; you most likely don't have the resources.