After introducing the flagship FirePro W9100, AMD now has a FirePro W8100 in its portfolio. Somewhat lower specs (8 GB of memory instead of 16, a slower core clock, and fewer shader units) should position it in the workstation world much like the Radeon R9 290 is positioned in gaming.
We already covered the Hawaii GPU's debut in the workstation space (AMD FirePro W9100 Review: Hawaii Puts On Its Suit And Tie). That story began with a question: did AMD throw caution to the wind and build a professional-class card for maximum rendering horsepower, rather than targeting a sweet spot? The company's approach changes with the FirePro W8100, so now we have to ask whether the newer board is fast enough to warrant its asking price.
Where is AMD trying to go with the W8100? In the presentation slides for the W9100, the company set that card up as a competitor to Nvidia's Quadro K5000, and was pleasantly surprised when it succeeded on all fronts. Based on that outcome, AMD became a bit more confident and is now positioning the FirePro W8100 as the card to pit against the Quadro K5000, while the W9100 shoots even higher.
What about pricing, you ask? The W8100 hasn't shown up for sale yet, but it's expected by the end of July at a price point of $2500 (compared to the K5000's $1800). Given those figures, AMD needs its card to perform significantly better to justify the premium.
| Products | AMD FirePro W8100 | AMD FirePro W9100 | Nvidia Quadro K5000 | Nvidia Quadro K6000 |
|---|---|---|---|---|
| Pricing | $2500 (expected) | — | $1800 | — |
| Compute Units | 2560 Stream processors | 2816 Stream processors | 1536 CUDA cores | 2880 CUDA cores |
| Core Clock | 824 MHz | 933 MHz | 706 MHz | 902 MHz |
| FP32 Performance (SP) | 4.2 TFLOPS | 5.2 TFLOPS | 2.2 TFLOPS | 5.2 TFLOPS |
| FP64 Performance (DP) | 2.1 TFLOPS | 2.6 TFLOPS | 0.1 TFLOPS | 1.7 TFLOPS |
| Memory Size | 8 GB | 16 GB | 4 GB | 12 GB |
| Memory Bus | 512-bit | 512-bit | 256-bit | 384-bit |
| Memory Bandwidth | 320 GB/s | 320 GB/s | 173 GB/s | 288 GB/s |
| ECC | Yes | Yes | No | Yes |
| PCIe Bandwidth | 32 GB/s | 32 GB/s | 16 GB/s | 32 GB/s |
| 4K Displays @ 30 Hz | 6 | 6 | 2 | 2 |
| 4K Displays @ 60 Hz | 3 | 3 | 2 | 2 |
| Power Consumption (Measured) | 188 W 3D, 188 W GPGPU | 245 W 3D, 260 W GPGPU | 126 W 3D, 145 W GPGPU | 187 W 3D, 202 W GPGPU |
Spoiler alert! In the table above, we see that the FirePro W8100's measured power consumption is approximately 28 percent lower than the W9100's. Under a compute-oriented load, it draws noticeably less power than Nvidia's Quadro K6000 for the first time, and it is about on par with that card in 3D tasks. The power savings roughly track the W8100's performance deficit when both are expressed as percentages. Of course, that doesn't mean our real-world benchmarks will yield the same findings, so the test results should be interesting.
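As a sanity check, the table's peak-throughput figures follow directly from shader count and clock. Here is a minimal sketch in Python; note that the FP64:FP32 ratios and the GDDR5 data rate are our assumptions based on the respective GPU architectures, not values taken from the table:

```python
def fp32_tflops(shaders, clock_mhz):
    """Peak single precision: 2 FLOPs (one fused multiply-add) per shader per clock."""
    return 2 * shaders * clock_mhz * 1e6 / 1e12

def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Peak memory bandwidth: bus width in bytes times effective GDDR5 data rate."""
    return bus_bits / 8 * data_rate_gtps

w8100_sp = fp32_tflops(2560, 824)       # ~4.2 TFLOPS, matching the table
w8100_dp = w8100_sp / 2                 # Hawaii runs FP64 at 1/2 FP32 -> ~2.1 TFLOPS
k5000_dp = fp32_tflops(1536, 706) / 24  # GK104 runs FP64 at 1/24 -> ~0.1 TFLOPS
k6000_dp = fp32_tflops(2880, 902) / 3   # GK110 runs FP64 at 1/3 -> ~1.7 TFLOPS

w8100_bw = bandwidth_gbs(512, 5.0)      # assuming 5 GT/s GDDR5 -> 320 GB/s

# The "28 percent" saving quoted above comes from the GPGPU power row:
gpgpu_saving = (260 - 188) / 260        # ~0.28
```

The same arithmetic reproduces the rest of the table's FLOPS and bandwidth columns, which is why the W8100's lower clock and narrower shader array translate so directly into its power advantage.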
Quo Vadis, AMD FirePro W8100?
When you have performance to offer, new opportunities present themselves. AMD identifies CAD and engineering, media and entertainment, medicine, and finance as some of the FirePro family's more traditional strengths. But with its big Hawaii GPU and the GCN architecture's alacrity in compute-intensive tasks, AMD wants to lock down its share of the virtualization, cloud gaming, and signage segments as well.
The ambition makes sense. Workstation-oriented apps benefit more and more from the performance of modern GPUs, after all. Nowadays, you can even run multiple CAD and CAE workflows at the same time; cranking along on the next version of a drawing while the previous one renders isn't a pipe dream, it's actually doable. And the sky's the limit with a design equally adept at 3D and general-purpose tasks.

AMD is already a seasoned vet when it comes to 3D. Now GPGPU is where it's trying to lead development. In order to better facilitate that initiative, the company is throwing its support behind the OpenCL standard as an alternative to Stream and CUDA. As we've seen in several different applications already, when there's a computationally difficult job that can be parallelized, the potential performance gains are well worth optimizing for.
There's also a notable trend toward the adoption of 4K (3840x2160) in the workplace. Those higher resolutions give engineers and artists a lot more room to work with. And while more detail obviously benefits 3D applications, even 2D tasks like programming are greatly enhanced by the extra screen space and pixel density of a 4K display.
Similarly, professional media-oriented titles see a lot of benefit as it becomes possible to edit high-res video in real time at full resolution. A workstation board like the W8100 should speed up the processing of video and photo filters, along with accelerating encoding/decoding. The professional graphics card market is clearly changing, and the lines between various segments are getting blurrier, even as the workloads and data sets are more specific than ever. CAD, CAE, M&E, oil and gas...the W8100 is AMD’s most recent effort to grab a larger share of all of them by further diversifying its portfolio of FirePro products.
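The scale of those 4K workloads is easy to put in numbers. A quick back-of-the-envelope calculation (the 30 fps, 8-bit RGB uncompressed stream is purely an illustrative assumption):

```python
uhd_pixels = 3840 * 2160                # 8,294,400 pixels per 4K frame
fhd_pixels = 1920 * 1080                # 2,073,600 pixels per 1080p frame
pixel_ratio = uhd_pixels / fhd_pixels   # exactly 4x the working area

# Uncompressed 4K video at an assumed 30 fps, 3 bytes (8-bit RGB) per pixel:
stream_gb_per_s = uhd_pixels * 3 * 30 / 1e9   # roughly 0.75 GB/s to filter in real time
```

Moving three-quarters of a gigabyte per second through filters and encoders is exactly the kind of parallel workload where a wide-bus workstation card earns its keep.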
AMD says that the FirePro W8100 is supposed to have a great price/performance ratio, and, in light of the card’s price, it could be onto something special. Is it the real deal though, faced with a less expensive Quadro K5000 as competition?
- Introducing AMD's FirePro W8100 Workstation Graphics Card
- Dimensions, Weight, Features and Pictures
- How We Test AMD's FirePro W8100
- OpenCL: Compute, Cryptography, and Bandwidth
- OpenCL: Financial Mathematics and Scientific Computations
- 2D Performance: GDI and GDI+
- SPECviewperf 12: CATIA, Creo and Maya 2013
- SPECviewperf 12: Showcase, Siemens NX and SolidWorks
- SPECviewperf 12: Synthetic Simulations
- OpenCL: 4K Video Post-Processing
- OpenCL: Rendering Performance
- DirectX 11 Gaming: 1920x1080
- DirectX 11 Gaming: 3840x2160
- How We Test Power Consumption
- Power Consumption: Detailed Results
- Heat and Noise
- A Jack Of All Trades For A Good Price

Wait a minute. If this kind of cooling is better than the one used on the R9 290, and they've had this technology since the HD 5800 series, then why in the hell didn't they use it on the R9 290 series instead of that crap cooler?
That's almost an understatement. The K5000 is almost consistently 50% slower than the W8100, with differences closer to 25% in a few cases. For $700 more, the W8100 looks like a great buy.
Well, this doesn't prove that the cooler they used is superior. It might have a higher TDP rating, but that doesn't mean it's better than a lower-rated one. We know how these ratings work; the number is by no means absolute. And we have seen in the past (especially with CPU coolers) higher-TDP-rated coolers lose against lower-rated ones for a lot of reasons (better quality, better tech, better materials, heatpipe placement, etc.).
I think the real reason might come from your review.
I don't believe in coincidence; they decided to use it on a more expensive professional GPU, with great success.
How do we know that the cooler used on the W8100 wasn't passed over for the R9 290(X) because of its higher cost?
ps: Am I asking too much if I ask any reviewer at Tom's to test this cooler on an R9 290? (if it's compatible, of course...)
The focus on the FirePro W8100 and Quadro K5000 as direct competitors is a bit misleading, and it distracts attention from the FirePro W8100's impressive features.
The W8100 does outperform the Quadro K5000 in some important ways, but to be in marketing competition, the performance should be in the same general league. The W8100 is 56% more expensive; the price difference of $900 is more than enough to buy a K4000 (about $750).
On a marketing basis, a $60,000 car that is 50% faster is not a direct competitor to a $38,000 one. The use and the expectations of performance and quality are different. The logic is to say, "If you're thinking of buying a Quadro K5000, you should know that for 56% more you can have 25-50% higher performance in several important, but not all, categories." These purchases are most often budget-driven (how many have unlimited funds?), and the buyer of a $1,600 card will be a different person from someone with a $2,500 budget. The buyer's quest is more often based on how much performance is expected combined with how much is possible within the budget.
These cards may have the same applications, but for the W8100 to be a better value than a K5000, it should have a consistent 56% performance advantage. A better comparison would be, for example, the W7000 and the Quadro K4000. Both are about $750, but the W7000 is 256-bit with 4GB, 154GB/s of bandwidth, and 1280 stream processors against the K4000's 192-bit, 3GB, 134GB/s, and 768 CUDA cores. On Passmark Performance Test, a W7000 3D score near (but not at) the top is about 4300, with 2D at about 1000, while the K4000 scores near the top at about 3000 in 3D and 1100 in 2D. The news for AMD is even better when considering that for a $1,600 Quadro K5000 (double the W7000's cost, but also 4GB and 256-bit), the near-top 3D scores are about 4300 and 2D about 900. For me, a better marketing strategy would be to compare the K5000 to the W7000 and the W8100 to a mythological "K5500" that would cost $2,800 (midway between 4 and 12GB, and between $1,600 and $5,000).
This means that the person looking for the best performance for $750 - and who uses the applications the W-series is good at - has an easy choice in the W7000.
Still, the features - especially the 512-bit bus and 8GB - plus the overall performance make the W8100 one to consider at the upper end of workstation cards. This should be a very good animation/film editing card. The comments about AMD being more forward-looking than NVIDIA may be correct, though the comments about the quality of Quadro drivers also seem true. This furthers the trend of GPUs concentrating on certain functions (the W8100 in OpenCL, for example), forcing buyers to consider GPUs one by one according to the applications used. More and more, with complex 3D modeling and animation software, the specific software drives graphics card choices, and except for the very top of the lines, the cards seem to be less all-rounders than before - not good at everything.
BambiBoom
Typo: "... our processor runs at a base close rate ... "
I assume that should be, 'clock rate'.
Btw, how come the test suite has changed so that there is no longer any
app being used such as AE for which NVIDIA cards can be strong because
of CUDA support?
Ian.
A down-vote eh? I guess the proverbial NVIDIA-haters still lurk, unwilling to
present any rationale as usual.
And falchard is right, Viewperf tests showed enormous differences between
pro & gamer cards in previous years, but it seems vendors are deliberately
blurring the tech now, optimising for consumer APIs (ie. not OGL), which
means pro tests often run well on gamer cards. In which case where is their
rationale for the cost difference? Apart from support and supposedly better
drivers, basic performance used to be a major factor of choosing a pro card
and a sensible justification for the extra cost, but this appears to be not the
case anymore; check Viewperf11 scores for any gamer vs. pro card, the only
test where a gamer card isn't massively slower is ENSIGHT-04. For MAYA-03,
a Quadro 4000 is 3X faster than a GTX 580; for PROE-05, a Q4K is 10X faster;
for TCVIS-02, a Q4K is 30X faster.
Today though, with Viewperf12, a 580 is faster than a K5000 for MAYA-04,
about the same for CREO-01, about the same for SHOWCASE-01 and
not that much slower for SW-03. Only for CATIA-04 and SNX-02 does the
expected difference persist.
Meanwhile we get OpenCL touted everywhere, even though there are plenty
of apps which can exploit CUDA, but little attempt to properly compare the
two when the option to use the latter is also available, eg. 3DS Max, Maya,
Cinema4D, AE, LW, SI, etc.
Ian.
PS. nebun, the core structure on these cards is completely different. The number
of cores is a totally useless measure; it tells one nothing. One can't even compare
between different cards from the same vendor, eg. a GTX 770 has way more cores
than a GTX 580, but a 580 hammers the 770 for CUDA. Indeed, a 580 beats all
the 600 series cards for CUDA despite having far fewer cores (it's because the newer
cards use a much lower core clock, less bandwidth per core, etc.)
It's nice to see that AMD is starting to close the gap with its products. They seriously need to consider updating their cooling solutions and improving power efficiency. I would be interested to see if these workstation cards throttle down as often as their desktop counterparts. In my experience, most of the current Hawaii chips are running higher voltages than needed, and they could save both power and heat by running them down a bit. It should allow the boards to stay stable and compete better in many workloads.
It is known that AMD cards use more than double the power of Nvidia cards while idling in multi-monitor scenarios. Seeing how this is a professional GPU, chances are it will be used in a multi-monitor and not a single-monitor environment. I'd like to know if the workstation-class cards address this problem better than their gaming cousins.
https://blogs.adobe.com/premierepro/2011/02/cuda-mercury-playback-engine-and-adobe-premiere-pro.html
Cuda has been in both for years, NV made MPE with Adobe. This hasn't changed. OpenCL has been added, but Cuda is still there.
http://www.dslrfilmnoob.com/2014/04/26/opencl-vs-cuda-adobe-premiere-cc-rendering-test/
April 26th, 2014 test, CUDA vs. OpenCL, GTX 670 vs. 290X: ~tie. So I'm pretty sure a 780 Ti would smoke the 290X, but it does show, as he said, that they've been improving OpenCL. However, a 670 would be handily trounced by NV's 780 Ti here, so the same applies to the 290X.
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga
GTX 670 specs: 1344 CUDA cores, 3.5B transistors, 256-bit bus, 6GHz memory, etc.
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review
780 Ti specs: 2880 CUDA cores, 7.1B transistors, 384-bit bus, 7GHz memory, etc. Not even in the same league for CUDA testing.
Is this why Tom's Hardware avoids doing CUDA vs. OpenCL and acts as if the choice doesn't exist, even while tossing out a comment in this very article saying CUDA is good for this stuff (then why not show it)?
https://www.youtube.com/watch?v=XTIqzzTNag0&html5=1
Not my native language, but at 1:45 into the vid you can see him making the selection of CUDA, OpenCL, or SOFTWARE. How difficult is it for you guys to click a circle? I could dig further to get an English vid, but you get the point, and that took 15 seconds to find..LOL.
Fake articles need to stop, just like that Qcom S805 preview crap with no K1, when we already know it is TROUNCED by the K1, per Anandtech, SlashGear, Hexus, PCPer (both Xiaomi MiPad and Shield Tablet reviews; the MiPad review was July 21st), etc., who all show the same easily compared numbers with all the usual benchmark suspects.
Wait a minute. If this kind of cooling is better than the one used on the R9 290, and they've had this technology since the HD 5800 series, then why in the hell didn't they use it on the R9 290 series instead of that crap cooler?
There are two possible reasons for using a different cooler on the Radeons. The most obvious is cost: the Hawaii GPU costs a lot to manufacture compared to previous generations, almost as much as a GK110 in the Titan. Less obvious may be that it was meant as a show of confidence in the card manufacturers and their own cooler designs.