AMD FX-8370E Review: Pulling The Handbrake For More Efficiency
Tags:
-
CPUs
-
Components
-
Processors
-
AMD
FormatC
September 22, 2014 11:55:24 AM
Going more slowly is more efficient. That's what AMD must have thought when it designed the new FX-8370E processor, closing a gap in the company's line-up. We evaluate whether this CPU really is more efficient, and what happens when we overclock it.
AMD FX-8370E Review: Pulling The Handbrake For More Efficiency : Read more
The_Doc
September 22, 2014 12:45:37 PM
How to start a benchmark review? But of course, let's show how powerful AMD is in single core!
I think we all get that Vishera isn't exactly wonderful in single-core operations, but:
A) I have yet to see any software which requires A LOT of single-core power. It's 2014; if something is still single-core, it probably doesn't need all that power, or is old enough to make even Vishera good at it.
B) You are comparing a 2012 architecture to a 4790K. It's like comparing a Pentium 4 to a Pentium G3258.
husker
September 22, 2014 1:48:46 PM
Article quote: "However, it's probable that AMD sent us a sample chosen specifically for this purpose. Plus, there is almost certainly variance from one -8370E to the next. And so it's hard to know if the FX-8370E is actually better."
If you presuppose that your sample is tainted, why bother doing the testing and the article in the first place? Perhaps this is a case where you should purchase the product off the shelf in order to better serve your readers.
1991ATServerTower
September 22, 2014 2:21:54 PM
oxxfatelostxxo
September 22, 2014 2:54:33 PM
"If you pre-suppose that your sample is tainted why bother to do the testing and the article in the first place. Perhaps this is a case where your should purchase the product of the shelf in order to better serve your readers."
1) almost every vendor does this, cpus, graphics, ect..
2) the chip they received is exactly what you get when you buy it off the shelf, however every cpu/gpu ect varies by a small amount. The vendors simply make sure that review sites get the top end of that group. In all honesty we are probably talking 3% performance from the majority at most.
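The binning claim is easy to illustrate with a toy simulation: draw a large population of chips with a small, made-up performance spread, then compare the median retail chip against the "golden" top slice a vendor might hand-pick for reviewers. All numbers here are hypothetical; only the shape of the argument matters:

```python
import random

# Hypothetical illustration: simulate per-chip performance spread.
# The 1.5% standard deviation is made up; real silicon variance differs.
random.seed(42)
retail = [random.gauss(100.0, 1.5) for _ in range(10_000)]  # relative perf, %

retail.sort()
median = retail[len(retail) // 2]
top_bin = sum(retail[-100:]) / 100  # the top 1% a vendor might ship to reviewers

print(f"median retail chip: {median:.1f}")
print(f"average of top 1%:  {top_bin:.1f}")
print(f"gap: {top_bin / median - 1:.1%}")  # works out to just a few percent
```

Even hand-picking the very best 1% of a population with this spread only buys a few percent over the median chip, which is the comment's point.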
Chris Droste
September 22, 2014 4:50:16 PM
hmp_goose
September 22, 2014 4:58:03 PM
While it's nice to see a sweet spot staked out for the OC, and really nice to hear about how much smaller the heatsink can be, what I'd like to see is whether the E OCs cooler / at less wattage than the two non-Es. I like to think an 8350 is better than an 8320 if you care about power consumption at all, and I want to see if the trend continues with the 8370 & 8370E…
MeteorsRaining said:
The price point is a deal breaker. It's a fairly good CPU for AMD builders, but you can't give it the budget-builder tag; you get an i5 non-K at that price. It's moving into higher (cost-wise) territory with a weak arsenal.
Article quote:
"Yes 4.5 GHz and higher is possible, but at a certain point you're going to spend too much on a beefy motherboard and high-end cooler, negating the value of overclocking outright."
Far too many people forget the whole cost of OCing a chip. Sure, a 4.5 GHz 83XX can slightly beat a stock i5, but at what cost? The 6300 is a far more compelling CPU for tweakers. If you're lucky on a few sales, you can get the chip, cooler, and motherboard for the same $200. And as pointed out here, unless you're pairing it with a top-shelf GPU, you won't see any gaming benefits with a pricier platform.
The_Doc said:
B) You are comparing a 2012 architecture to a 4790K. It's like comparing a Pentium 4 to a Pentium G3258.
This is AMD's latest offering. The Haswell refresh is Intel's latest offering. Whatever the products' pedigrees, why shouldn't the two latest SKUs be compared?
Reply to RedJaron
Keenan Johnson
September 22, 2014 8:19:17 PM
The only issue I see with this CPU is its price. I really would not personally see a reason to go with this over the old 8320, or even the 8320E, which are both priced very well for their performance and will overclock similarly to this one, although the non-E variants seem to run hotter. At stock, the 8370E will likely match the 8320, but costs too much more. The TDP number to me only tells me how strong a cooler I need WHEN I overclock. Bottom line is: what does it cost, and how fast is it for that cost? At $200, I'd have a hard time not forking out the extra $40 for the i5-4670K. Again, not due to its TDP rating, but what it can do for the money I spend.
Amdlova said:
The CPU price is not the problem, but a good motherboard for the CPU costs too much. I don't want a crap north chipset with a crap south chipset, and I look at the 990FX with fear. AMD needs to update the chipset...
Agreed; this CPU needs a new (limited range of) mobo to operate, which makes it a minus point.
Anyway, we need to keep advocating good balanced builds more often.
I see lots of people waste money on one (OP) part only to be limited by other parts in their system
(the true potential of the system is nowhere to be seen).
Reply to rdc85
Keenan Johnson
September 22, 2014 8:52:59 PM
rdc85 said:
Amdlova said:
.........
.... (the true potential of the system is nowhere to be seen)
Agreed. Too many people, including some that I personally know, will throw a high-end K chip in their rig, match it with a $120 GPU while not wanting to overclock said CPU, and then get mad because they can't max out new titles. Recently, a friend's brand-new i7 rig was outrun by my overclocked FX rig in a bet on the Metro: LL benchmark, thanks to his GTX 650 GPU vs. my heavily overclocked R9 280X.
He honestly thought he would win, and he was not happy with his purchase after that. It took a while to explain to him (he's very new to the PC gaming world) that his prebuilt "gaming" rig was hideously imbalanced. It happens all too often; the new-to-PC guys who buy some of those prebuilts get ripped pretty hard sometimes.
rmpumper
September 22, 2014 10:00:13 PM
Shin-san
September 22, 2014 11:28:43 PM
Something tells me that Bulldozer might have been designed to get into game consoles. It's an architecture that's easy for people to learn, but hard to extract full power from.
However, it seems that AMD won't be making any new CPU architectures until 2016. I'm doubtful that AMD will manage to push the clock any further in the near-future, though 5 GHz is possible. A 200W part will make your PC a space heater.
For the 2016 build, there's a chance that AMD may be revamping the CPU drastically, but there's also the chance that AMD will just give up. The third alternative is that they will release a CPU update for game consoles.
I'm also doubtful about the hybrid x86/ARM chip they want to make. In theory, it's sound, but I'm thinking of the complications from programming the thing, plus the potential for bugs.
vaughn2k
September 23, 2014 2:38:55 AM
tomc100
September 23, 2014 3:02:46 AM
Lee Yong Quan
September 23, 2014 3:27:19 AM
xenol
September 23, 2014 4:46:18 AM
The_Doc,
Single-core performance is the most important benchmark you can have. It doesn't apply only to programs that use a single thread; it's a very important tool for determining GAMING performance.
It's the explanation for why most games run better on Intel hardware. If a game can only use about THREE cores of a CPU, then the per-core performance needs to be high or you might get bottlenecking.
In SKYRIM, for example, some benchmarks show a 45% difference between a Haswell i5 and the FX-8350.
Reply to photonboy
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_9...
IMO, W1zzard on techpowerup said it best in (the last) three words after benchmarking the gtx 980.
Reply to Au_equus
Andy Chow
September 23, 2014 8:02:25 AM
silverblue
September 23, 2014 10:07:16 AM
I had the same thought. I'd like to think they didn't so there's something left on the table to improve upon.
The -E processors are Piledriver with the power usage generally under control, assuming retail and review samples perform identically. It's sad that it's taken two years to get something that might perform close to an SB i5 for similar power usage in heavier workloads; still, it's no Steamroller FX, but would that even help outside of the one area that FX is already relatively strong?
rwinches
September 23, 2014 11:44:09 AM
kenryk01
September 23, 2014 12:45:15 PM
"New" 32nm cpus in 2014... Honestly, what is AMD thinking? Broadwell will send them bankrupt if they continue with the now almost two and a half years old Piledriver microarchitecture. I know they brought some improvements to it, but it just can't compete with the 22nm and, soon enough, 14nm Intel products. Even the mobile chipsets are moving from 28nm to 20nm. Come on, AMD, we could really use the competition right now.
coolitic
September 23, 2014 1:37:23 PM
Keenan Johnson
September 23, 2014 8:42:02 PM
rmpumper said:
I will get downvoted for this comment, but I have to say this: I find it funny that AMD fans are always claiming that power consumption is irrelevant, while at the same time AMD is doing everything it can to improve efficiency.
No down vote, but I'd like to clarify a bit. I'm not a fanboy by any means, and if I sent that vibe out, that's not what I meant to do. I actually had some buyer's remorse before I had my rig all set up, as I opted for the cheap 8320 ($140 at the time) over an i5-3570K so I could step up my GPU and have a decent Asus board. Now that it's running, I'm OK with it, as it's much faster than my old rig. If Intel had an i7 in my price range for a new build with good performance for its cost, but with a TDP of 140 watts, that would not deter me at all. The only CPUs I think are too much are the FX-93xx, as there is no affordable way to cool them, lol.
With all that out of the way, this is a reasonable CPU option for those who care more about efficiency or are mobo-TDP-limited and want to upgrade from a Bulldozer CPU, but I still think it costs too much for its performance. AMD will do the most damage right now (Piledriver) in the price-to-performance aspect of things, not flat-out speed and efficiency. This one is approaching i5 territory for price, and the i5 is faster. In my humble opinion, they would have been better off retiring some of the older models and sending the new ones in.
Keenan Johnson said:
rdc85 said:
Amdlova said:
.........
.... It happens all too often, the new to PC guys who buy some of those prebuilts get ripped pretty hard sometimes
....the problem is that too many (or most) review sites "only" test or benchmark new parts with the highest-end system they have
(when they test a CPU they use the highest GPU they have, a 290/780 Ti; when they test a GPU they use an i7 only).
Nothing wrong with this; their intent and method are correct
(to find out how good the new part is without it being bottlenecked by other parts in the system),
but they seem to forget/neglect to bench/review using a mid-level system to promote balanced builds.
It could be that they lack time or funds, or simply don't care...
It's fine for people who are already informed, but it's not educating for new people.
Edit: on another thought, maybe this is a reason why I wait for and trust Tom's reviews more than other sites' reviews.
Reply to rdc85
"""Depending on the game in question, AMD’s new processor has the potential to keep you happy around the AMD Radeon R9 270X/285 or Nvidia GeForce GTX 760 or 660 Ti level.
A higher- or even high-end graphics card doesn’t make sense, as pairing it with AMD's FX-8370E simply limits the card's potential."""
Wow, what a horrifically misleading over simplification of the issue. I expect better from Toms hardware articles and reviews than this. This is the sort of poorly derived philosophy you can scrap from the side of the forum barrel filled with amateurs.
The CPU should be selected based on the FPS one wants to get in the type of games and conditions someone wants to play, period, the GPU has nothing to do with this as no amount of GPU big or small can solve a compute side problem for performance.
The GPU should be selected based on the VISUAL QUALITY (that includes resolution) one wants to play at, (factoring in the desired FPS). This part of the component selection has nothing to do with the CPU.
Match the CPU to the compute workload. Match the GPU to the render workload. Any philosophy that attempts to match the CPU to the GPU, or vice versa, is fundamentally flawed and I am ashamed to be reading this sort of drivel on what is supposed to be one of the most highly regarded hardware review sites around.
The R7 250X, R9 270, and R9 290, will all achieve approximately the same FPS when connected to 720P, 1080P, and 1440P resolution displays respectively, all other things being equal. Note, the size and cost of the GPU here basically quadruples from the 250X to the 290, yet all 3 configurations produce the same FPS with all other conditions being equal (same CPU). The thing that changes with higher end GPUs, is the visual quality available at the desired resolution. If the goal is 60FPS at 720P, or 60FPS at 1440P, the compute requirements of the game won't change much, you'll have to pick a CPU that can do the desired FPS in the game and conditions intended regardless of what GPU is selected.
Any CPU can be an appropriate match for any GPU if the CPU has been selected to fulfill the compute workload presented by the game and FPS desired, and the GPU has been selected to fulfill the visual quality and FPS desired. If the goal is to play on a 4K monitor at 30FPS, then you'll need a flagship GPU for that. Fulfilling the compute side of the 30FPS goal requires nothing more than a $75 CPU for 99% of games out there. A 750k makes a good match to an R9 290 for such a goal. Conversely, if the goal is to play a compute intensive multiplayer game at a competitive 144FPS, then an overclocked i5/i7 is the only CPU worth consideration regardless of which GPU is chosen. The render workload per frame is adjustable, so any competent GPU could hit the 144FPS goal with proper settings in most games, the GPU "size" has nothing to do with the CPU selected, it has to do with the desired visual quality.
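The selection rule argued above — pick the CPU from the frame-rate target and the GPU from the resolution/quality target, independently — can be sketched as a tiny heuristic. The tier names and cutoffs below are hypothetical, chosen only to mirror the comment's examples (a 750K-class chip suffices for 30 FPS, an overclocked i5/i7 for 144 FPS):

```python
# Sketch of the "match CPU to compute, GPU to render" rule.
# Tiers and thresholds are illustrative assumptions, not benchmark data.

def pick_cpu(target_fps: int) -> str:
    """CPU choice depends only on the desired frame rate (compute workload)."""
    if target_fps <= 30:
        return "budget quad (750K class)"
    if target_fps <= 60:
        return "midrange (FX-6300 / i3 class)"
    return "overclocked i5/i7 class"

def pick_gpu(resolution: str) -> str:
    """GPU choice depends only on the desired resolution/quality (render workload)."""
    tiers = {"720p": "R7 250X class", "1080p": "R9 270 class", "1440p": "R9 290 class"}
    return tiers.get(resolution, "flagship class")

# The comment's 4K-at-30-FPS example: big GPU paired with a small CPU.
print(pick_cpu(30), "+", pick_gpu("4k"))
```

The point of keeping the two functions independent is exactly the comment's thesis: neither choice takes the other as an input.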
Reply to mdocod
martel80
September 24, 2014 12:32:31 AM
husker said:
If you presuppose that your sample is tainted, why bother doing the testing and the article in the first place? Perhaps this is a case where you should purchase the product off the shelf in order to better serve your readers.
I agree. Couldn't they negotiate the following agreement with all the companies they do reviews for?
1) Buy a review sample at retail.
2) Benchmark it.
3) Send it back to the company to have ~80% of the purchase price reimbursed (you can't expect the company to pay for retail margins), or have it reimbursed without sending it back.
Shaco01
September 24, 2014 2:00:01 AM
Shaco01
September 24, 2014 2:04:07 AM
Is it me, or is Tom's Hardware full of Intel fanboys? They did it again in this article.
"Gee, AMD might have sent us rigged hardware to review" and "Gee, the CPU performs as well as overpriced i7 CPUs from Intel in most relevant scenarios, but gee, it's AMD so it's bad, 'cause we've been saying this in every CPU article we write."
"Jee, AMD might have sent us rigged hardware to review" and " Jee, the CPU performs as good as overpriced i7 CPU's from Intel in most relevant scenarios, but jeee, it's AMD so it's bad cuz we've been saying this for every CPU article we write".
-
Reply to Shaco01
m
0
l
Thor God Of Thunder
September 24, 2014 2:32:38 AM
I have an i5-3570K that I bought in Sept. 2013. It runs at 4.5 GHz at 1.26 V, and it holds that speed at as low as 1.22 V. I paid $180 for it and $30 for a Xigmatek Gaia 1283 cooler. At the time, my only other option was to go Haswell, but it was new and the OCs were not looking good at 4.2-4.5 GHz. I didn't worry about socket 1155 giving way to 1150, as I knew DDR4 was around the corner with new chipsets.
I am not an Intel fanboy; I just buy what performs best. I have a desk drawer full of old AMD CPUs from before and after Barton, with dual-core AMD 64 X2 3800+ chips and a couple of others.
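As a rough sanity check on overclocks like this, CMOS dynamic power scales approximately with frequency times voltage squared (leakage ignored). The stock baseline below (3.4 GHz at 1.05 V) is an assumed figure purely for illustration, not the chip's actual stock settings:

```python
def relative_dynamic_power(f_ghz: float, v: float, f0_ghz: float, v0: float) -> float:
    """Rough CMOS dynamic-power scaling: P is proportional to f * V^2.

    Ignores static/leakage power, so this is only a ballpark estimate.
    """
    return (f_ghz / f0_ghz) * (v / v0) ** 2

# 4.5 GHz @ 1.26 V vs an ASSUMED 3.4 GHz @ 1.05 V stock baseline:
ratio = relative_dynamic_power(4.5, 1.26, 3.4, 1.05)
print(f"~{ratio:.2f}x dynamic power")  # ~1.91x
```

Under those assumed numbers, a roughly 32% frequency bump at 20% more voltage nearly doubles dynamic power, which is why modest-voltage overclocks like the one above are the sweet spot.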
saturn85
September 24, 2014 4:20:22 AM
http://www.guru3d.com/articles_pages/amd_fx_8370_and_83...
The review from Guru3D shows that the FX-8370 and FX-8370E have a massive memory bandwidth gain compared to the FX-8350.
Memory read performance:
FX-8370 @ 1600 MHz scores 23428 MB/s.
FX-8370E @ 1600 MHz scores 23304 MB/s.
while
FX-8350 @ 1600 MHz scores 14161 MB/s.
Can the Tom's Hardware team confirm this? Has AMD improved the CPU memory controller in these chips?
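For what it's worth, the quoted figures can be checked with quick arithmetic; the claimed gain works out to roughly 65%:

```python
# Quick check on the quoted AIDA64 memory-read figures (MB/s).
fx8370, fx8370e, fx8350 = 23428, 23304, 14161

gain = fx8370 / fx8350 - 1
print(f"FX-8370 vs FX-8350: +{gain:.0%}")  # +65%
```

A jump that large between same-family chips is suspicious on its face, which supports the benchmarking-methodology explanation offered in the reply below rather than a real controller change.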
silverblue
September 24, 2014 7:23:48 AM
I don't think so; I think AIDA64 was changed about a year back to reflect memory bandwidth from each module (or in Intel's case, physical core) rather than each thread. As such, because the FX-8350 results may not have been updated, it would show a large jump when in fact none exists.
http://www.bitsandchips.it/7-software/2934-aida64-i-ben...
Drejeck
September 24, 2014 7:34:06 AM
I really can't blame AMD for introducing it at $200 when there are so many out there who still insist (in defiance of all evidence) that the FX-8xxx's are comparable to Core i5's. May as well get that $200 for a few weeks or months while they can, but price drops are inevitable later on.
If those people want to pay $200 for FX-8370E's, I don't mind. Their sacrifice helps keep AMD around.
Reply to oxiide
Well, considering that Intel spends more money on R&D per year than AMD makes, and given Intel's tick-tock system, it will be a while before AMD is in direct competition unless they design something truly revolutionary. Intel is set on a path and is buying up any company that develops something new. So AMD needs to strike while people are complacent.
Reply to thequn
The FX-8320E for $140 seems like a great deal to me (at least, that was the price recently on Amazon). It may not have the raw execution resources per core to compete in real-time workloads, but it will still make a fantastic workstation chip at that price point. Keep in mind that benchmarks of real-time workloads and benchmarks of application workloads can't be held to the same standard, as they affect us differently. Real-time workload performance is something we actually notice in terms of user experience (gaming), while application performance is harder to pin down in terms of how it "feels." Whether a task finishes sooner or later is actually not as relevant as whether the machine stays responsive to our real-time input while working on multiple things at the same time. In this regard, a $140 FX-8320E is better suited to a typical multi-tasking workstation environment than a competing i3-4350, especially when you consider that the FX-8320E supports ECC memory (on many inexpensive motherboards) and IOMMUs.
Reply to mdocod
mdocod said:
.........
.... In this regard, a $140 FX-8320E is better suited to a typical multi-tasking workstation environment than a competing i3-4350.
A real-world benchmark is any in which the results represent performance that is practical in some way; that includes games and applications. The opposite is a synthetic benchmark, like Cinebench, 3DMark/PCMark, Fritz, etc.
A $140 FX-8320E might represent better performance per dollar, but professional computation is not really a "value-oriented" market. Companies that need performance workstations are happy to invest in $2000 Xeons because they still easily make their money back. Even individuals who need beefy workstations for their livelihood are going to get a better return from a Core i7 rather than faffing about with 8320Es.
Reply to oxiide
oxiide said:
A real-world benchmark is any in which the results represent performance that is practical in some way; that includes games and applications. The opposite is a synthetic benchmark, like Cinebench, 3DMark/PCMark, Fritz, etc. A $140 FX-8320E might represent better performance per dollar, but professional computation is not really a "value-oriented" market. Companies that need performance workstations are happy to invest in $2000 Xeons because they still easily make their money back. Even individuals who need beefy workstations for their livelihood are going to get a better return from a Core i7 rather than faffing about with 8320Es.
Considering how many of those $2000 Xeons are slower in real-world single-client workloads than a $250 E3 Xeon, any company throwing that sort of money at the problem without an assessment of the workloads generated by the user isn't going to last long.
Reply to mdocod