
Workstation Graphics: 14 FirePro And Quadro Cards


We put 14 professional and seven gaming graphics cards from two generations through a number of workstation, general-purpose computing, and synthetic applications. By the end of our nearly 70 charts, you should know which board is right for your workload.

A few months back, we benchmarked the current crop of workstation graphics cards in some of the latest titles, just for kicks (How Well Do Workstation Graphics Cards Play Games?). At the same time, we were in the process of putting the latest FirePro and Quadro products through our professional graphics suite, along with a number of desktop-oriented gaming boards. Well, after literally several hundred hours of benchmarking, we have the data to go along with that follow-up story (and the results that go into our Workstation Graphics 2013 Charts).

Competing Graphics Cards Overview

Our field of contenders includes all of the heavy hitters. We have Nvidia's flagship Quadro 6000, as well as AMD's FirePro W9000, though our focus is more on the sub-$1000 category, since that's more in line with practical budgets, even in the professional space.

A lot of readers requested that we also include desktop-oriented cards to see how they compare in workstation-class applications, so we added seven of those, too. It's actually interesting to track their performance in workloads like rendering, 2D drawing, and CAD with DirectX graphics output.

Here's a list of all of the cards we benchmarked:


Category                            Nvidia                                AMD
Workstation (Current Generation)    Quadro K5000, Quadro K4000            FirePro W9000, W8000, W7000, W5000
Workstation (Previous Generation)   Quadro 6000, 5000, 4000, 2000         FirePro V7900, V5900, V4900, V3900
Gaming (Current Generation)         GeForce GTX 690, GTX Titan, GTX 680   Radeon HD 7990, HD 7970 GHz Edition
Gaming (Previous Generation)        GeForce GTX 580                       Radeon HD 6970

What We Couldn’t And Wouldn’t Include

AMD wasn't able to send over a FirePro W600 for our comparison, though, interestingly, the company was willing to send over a FirePro S10000. That's a shame, because we wanted to dedicate some analysis to the W600; Nvidia doesn't offer anything even remotely like it. A single-slot graphics card that can drive six monitors or projectors at the same time, and can even output six different audio streams, would have been worth the effort, we think. Meanwhile, the FirePro S10000 mentioned above, as well as Nvidia's Tesla cards, are just too big for this story, though we do have a piece in the works covering Tesla. We also didn't include Nvidia's smaller Quadro 400 or 600, since they would have taken forever in some of our benchmarks and, due to their very limited performance, wouldn't have generated usable results in others.

Comments
  • 4 Hide
    DelightfulDucklings , July 7, 2013 10:01 PM
    That is a lot of tests
  • 0 Hide
    Amdlova , July 7, 2013 10:04 PM
    How does Tesla compare with Titan?
  • 2 Hide
    FloKid , July 7, 2013 10:52 PM
    So what are the workstation cards for anyways? Seems like the gaming cards beat them pretty bad at most tests.
  • 0 Hide
    Cryio , July 8, 2013 12:04 AM
    Man. If the GeForce 8 made a killing back in the day, the HD 7000 series shows no sign of stopping, whether it's gaming or workstation.
  • 10 Hide
    bambiboom , July 8, 2013 1:36 AM
    Gentlemen?,

    A valiant effort, but in my view, a very important aspect of the comparisons has been neglected, namely, image quality.

    It is useful to make quantitative comparisons of workstation cards performing the same tasks, but when gaming / consumer cards are also compared only in terms of speed, the results are not necessarily reflective of these cards' use in content creation. Yes, speed is critical in navigating 3D models and shifting polygons, but the end result of those models is likely to be renderings or animations, in which the final quality (refinement of detail and subtlety) is more critical than in games.

    A fundamental aspect that bears on the results in this comparison is that a test platform built around an i7-3770K is not indicative of the workstation platforms for which the workstation cards were designed and their drivers optimized. There are a number of very good reasons for Xeons and Opterons and, especially, for the existence of dual-CPU systems with lots of threads. There are other aspects of these components that bear on the results, e.g. the memory bandwidth of the i7-3770K is only about half that of a Xeon E5-1660. Note, too, that there are good reasons why Xeons have locked multipliers and cannot be overclocked: speed takes lower priority than precision and extreme stability. Also important in this comparison is ECC RAM, present in both the system and workstation GPU memory, which was treated a bit lightly, but which is essential for precision, especially in simulations and tasks like financial analysis. ECC also affects system speed through its error-correcting duties and parity checks, and therefore runs slower than non-ECC. Again, to be truly indicative of workstation cards, it would be more useful to use a workstation to make the comparisons.
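
To make the ECC point above concrete, here is a toy sketch in Python, not the actual SECDED scheme used by DRAM or GPU memory, of how an error-correcting code spends extra bits and extra work to locate and repair a single flipped bit (a Hamming(7,4) code protecting four data bits):

```python
# Toy illustration only: real ECC memory uses a wider SECDED code, but the
# idea is the same. Three parity bits are stored alongside four data bits,
# and the parity "syndrome" pinpoints any single bit flip so it can be fixed.
def encode(d):                       # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:                            # syndrome is the 1-based error position
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = encode([1, 0, 1, 1])
word[5] ^= 1                         # simulate a single-bit memory upset
print(correct(word))                 # -> [1, 0, 1, 1], the original data
```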

    An aspect of this report that was not sufficiently clarified is that the rendering-based applications are entirely reflective of CPU performance. Rendering is one of the few tasks that can use all the available system threads, and anyone who renders images from 3D models, and especially animations, will today have a dual-CPU six- or eight-core Xeon system. Here, comparisons involving rendering applications were made on a four-core machine, under conditions in which the number of cores / threads matters significantly, and I believe that some of the dramatic differences in Maya performance in these tests may have been related to the platform used. I have a previous-generation dual four-core system, yielding eight cores at 3.16GHz (Xeon X5460), and during rendering all eight cores go from 58C to 93C and the RAM (DDR2-667 ECC) from 68C to 85C in about ten minutes.
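
As a rough illustration of that scaling argument, here is a minimal sketch (not from the article; the tile count and the fake per-tile work are made up) of how an offline renderer splits a frame into independent tiles and keeps every core busy:

```python
import math
import multiprocessing as mp
import time

def shade_tile(tile_index, samples=100_000):
    # Stand-in for per-tile ray tracing: pure CPU-bound arithmetic.
    acc = 0.0
    for i in range(samples):
        acc += math.sin(tile_index + i * 1e-6)
    return acc

if __name__ == "__main__":
    tiles = list(range(256))                      # e.g. a 16x16 grid of tiles
    for workers in (1, mp.cpu_count()):
        start = time.perf_counter()
        with mp.Pool(processes=workers) as pool:   # spread tiles across cores
            pool.map(shade_tile, tiles)
        print(f"{workers:2d} worker(s): {time.perf_counter() - start:.2f} s")
```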

    It is also possible that the significant variation in rendering performance may be due to system throttling, and to GPU drivers that finish every frame under error-correcting RAM. In this kind of task, image quality depends on precise polygon calculation and particle placement, such that there are no artifacts and that shadows and color gradients are accurate and refined. Gaming cards emphasize frame rates and are optimized to finish frames more "casually" to achieve higher frame rates. This is why a GTX can't really be used for Solidworks modeling either: tasks like structural, thermal, and gas-flow simulations must have error-correcting memory, and Solidworks can use as much as 128x anti-aliasing where a GTX will produce 16x. When a GTX is pushed in this way, especially on a consumer platform, it performs poorly. Again, the image precision and quality aspect was lost in favor of a comparison of speed only.

    The introduction of tests involving single and double precision, and the comments regarding the fundamental differences of priority in the drivers, were useful and, in my view, might have been more extensive, as this gets closer to the heart of the differences between consumer and workstation cards.
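
For readers who have not seen the single- versus double-precision gap in practice, here is a small numerical sketch (assuming NumPy; not taken from the article's tests) showing how sequential float32 accumulation drifts while a float64 accumulator stays essentially exact:

```python
import numpy as np

n = 10_000_000
vals = np.full(n, 0.1, dtype=np.float32)

# cumsum accumulates sequentially, so rounding error piles up in float32;
# promoting the accumulator to float64 removes almost all of it.
sum32 = np.cumsum(vals)[-1]
sum64 = np.cumsum(vals.astype(np.float64))[-1]

print(f"exact          : {n * 0.1:,.1f}")
print(f"float32 result : {sum32:,.1f}")
print(f"float64 result : {sum64:,.1f}")
```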

    Making quantitative comparisons of image quality is contradictory by definition, but in my view, quality is fundamental to an understanding of these graphics cards. It would also help explain to content consumers the most important reason content creators are willing to spend $3,500 on a Quadro 6000 when an $800 GTX will do some things faster. Yes, AutoCAD 2D is purposely made to run on almost any system, but when the going gets tough***, the tough get a dual Xeon, a pile of ECC, and a Quadro / FirePro!

    ***(Everything else!)

    Cheers, BambiBoom

  • 1 Hide
    falchard , July 8, 2013 2:05 AM
    Actually, most of these tests don't hit on why you get a workstation card. In a CAD environment, the goal is to get the largest number of polygons on screen in real time. SPECviewperf is the benchmark suite that tests this, and there you can see the difference the card makes.
    In the tests I found the CUDA numbers disappointing, but you would get a Tesla card for CUDA, not a workstation card.
    The OpenCL numbers paint a different picture: there is almost no difference between the consumer and workstation cards. I was actually expecting the workstation cards to perform better, but once again I think that's the territory of FireStream and Tesla cards.
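
Related to the OpenCL point above, a quick way to see what the compute comparison is even working with is to enumerate the OpenCL devices and check whether each one exposes double precision. This is a minimal sketch assuming the pyopencl package and a working OpenCL runtime are installed:

```python
import pyopencl as cl

# List every OpenCL device, its memory, and whether it advertises the
# cl_khr_fp64 extension (i.e., hardware double-precision support).
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        fp64 = "yes" if "cl_khr_fp64" in dev.extensions else "no"
        print(f"{platform.name.strip()} / {dev.name.strip()}: "
              f"{dev.global_mem_size // 2**20} MiB, fp64={fp64}")
```
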
  • 1 Hide
    rmpumper , July 8, 2013 3:37 AM
    Quote:
    So what are the workstation cards for anyways? Seems like the gaming cards beat them pretty bad at most tests.


    People buy workstation cards for better viewport performance and better image quality, and as you can see from the SPECviewperf numbers, gaming GPUs are completely useless for that.
  • 0 Hide
    catmull-rom , July 8, 2013 5:20 AM
    I don't really get why pro cards are recommended so easily. I know this site wants manufacturers to keep sending it cards, but the data just doesn't support such a simple conclusion.

    I totally get that in some work areas you want ECC, you want certified drivers, you want as much stability and security and / or extra performance in specific areas as you can get. Compared to the work, the hardware cost is of little importance, so I totally agree: get a pro workstation with a pro card. You want to be on the safest side while doing big engineering projects, parts for planes, scientific and / or financial calculations, etc.

    That being said, and especially in the content creation / entertainment / media sector, you really need to think about whether a pro card is useful and worth it. Most 3D apps work great on game cards, and as you can see, as far as rendering is concerned, game cards are your best choice for speed if you can live with the limitations. For a lot of CAD work you can also get away fine with a game card.

    So it's not just AutoCAD or Inventor that don't need a pro card. Most people will be just fine with game cards in 3ds Max and the like, Rhino, and Solidworks.

    I don't get why there are no Solidworks scores with game cards in this article. Game cards mostly work fine, and pro cards offer little extra feature-wise in this app. The driver issue really seems like a bad excuse not to have some game-card scores in there.

    Also, I have never really looked much at SPECviewperf. It seems to heavily favor pro cards, while it doesn't tell you that most apps will work fine with game cards.
  • 0 Hide
    vhjmd , July 8, 2013 5:34 AM
    Wonderful article. You should make an update with Intel HD 3000 and HD 4000, because at least Siemens NX now officially supports those graphics, since they have enough OpenGL performance.
  • 4 Hide
    mapesdhs , July 8, 2013 5:45 AM
    With the pro cards at last not hindered by slower-clocked workstation
    CPUs, we can finally see these cards show their true potential. You're
    getting results that more closely match my own this time, confirming what
    I suspected, that workstation CPUs' low clock rates hold back the
    Viewperf 11 tests significantly in some cases. Many of them seem very
    sensitive to absolute clock rate, especially ProE.

    And interesting to compare btw given that your test system has a 4.5GHz
    3770K. Mine has a 5GHz 2700K; for the Lightwave test with a Quadro 4000,
    I get 93.21, some 10% faster than with the 3770K. I'm intrigued that you
    get such a high score for the Maya test though, mine is much lower
    (54.13); driver differences perhaps? By contrast, my tcvis/snx scores are
    almost identical.

    I mentioned ProE (I get 16.63 for a Quadro 4K + 2700K/5.0); Igor, can you
    confirm whether or not the ProE test is single-threaded? Someone told me
    ProE is single-threaded, but I've not checked yet.


    FloKid, I don't know how you could miss the numbers but in some cases
    the gamer cards are an order of magnitude slower than the pro cards,
    especially in the Viewperf tests. As rmpumper says, pro cards often give
    massively better viewport performance.


    bambiboom, although you're right about image quality, you're wrong about
    performance with workstation CPUs - many pro apps benefit much more from
    the absolute higher speed of a single CPU with fewer threads, rather than just
    lots of threads. I have a dual-X5570 Dell T7500 and it's often smoked for
    pro apps by my 5GHz 2700K (even more so by my 3930K); compare to my
    Viewperf results as linked above. Mind you, as I'm sure you'd be the
    first to point out, this doesn't take into account real-world situations
    where one might also be dealing with large data sets, lots of I/O and
    other preprocessing in a pro app such as proprietary database traversal,
    etc., in which case yes indeed a lots-of-threads workstation matters, as
    might ECC RAM and other issues. It varies. You're definitely right though
    about image precision, RAM reliability, etc.


    falchard, the problem with Tesla cards is cost. I know someone who'd
    love to put three Teslas in his system, but he can't afford to. Thus, in
    the meantime, three GTX 580s is a good compromise (his primary card
    is a Quadro 4K).


    catmull-rom, if I can quote, you said, "... if you can live with the
    limitations.", but therein lies the issue: the limitation is with
    problems such as rendering artifacts which are normally deemed
    unacceptable (potentially disastrous for some types of task such as
    medical imaging, financial transaction processing and GIS). Also, to
    understand Viewperf and other pro apps, you need to understand viewport
    performance, and the big differences in driver support that exist between
    gamer and pro cards. Pro & gamer cards are optimised for different types
    of 3D primitive/function, e.g. pro apps often use a lot of antialiased
    lines (games don't), while gamer cards use a lot of 2-sided textures (pro
    apps don't). This is reflected in the drivers, which is why (for example)
    a line test in Maya can be 10X faster on a pro card, while a game test
    like 3DMark06 can be 10X faster on a gamer card.

    Also, as Teddy Gage pointed out on the creativecow site recently, pro
    cards have more reliable drivers (very important indeed), greater viewport
    accuracy, better binned chips (better fault testing), run cooler, are smaller,
    use less power and come with better customer support.

    For comparing the two types of card, speed is just one of a great many
    factors to consider, and in many cases is not the most important factor.
    Saving several hundred $ by buying a gamer card is pointless if the app
    crashes because of a memory error during a 12-hour render. The time lost
    could be catastrophic if it means one misses a submission deadline; that's
    just not viable for the pro users I know.

    Ian.
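
On the question raised above of whether the Viewperf ProE subtest is single-threaded, one rough, hedged way to check (using the third-party psutil package; not something the article did) is to sample per-core load while the test is running and see how many cores are actually saturated:

```python
import psutil

# Run this alongside the benchmark: it averages per-core utilization over
# roughly 10 seconds and reports which cores stayed busy.
samples = [psutil.cpu_percent(interval=1, percpu=True) for _ in range(10)]
avg = [sum(core) / len(samples) for core in zip(*samples)]

print("average per-core load:", [f"{load:.0f}%" for load in avg])
print("cores above 80% load :",
      [i for i, load in enumerate(avg) if load > 80] or "none")
```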

  • -8 Hide
    cityuser , July 8, 2013 5:47 AM
    Maybe THG has its usual favorable results for Nvidia cards, so AMD doesn't waste time sending the cards you want.
    I mean, whatever AMD does, the results always end up looking bad when "generated" by THG.
  • 4 Hide
    mapesdhs , July 8, 2013 6:04 AM
    Quote:
    Maybe THG has its usual favorable results for Nvidia cards ...


    A strange thing to say given how well AMD's cards clearly do in many of the tests. Can you please
    explain in exactly what manner toms is being biased? Are you saying they're rigging the tests in
    some way? If so, how?

    Build the system, install the OS/drivers/apps, run the tests. If the data ends up looking better for
    NVIDIA in some cases, that's a problem for AMD, not THG.

    Ian.

  • 3 Hide
    bambiboom , July 8, 2013 7:50 AM
    Quote:
    With the pro cards at last not hindered by slower-clocked workstation
    CPUs, we can finally see these cards show their true potential. [...]

    mapesdhs,

    Many excellent points.

    Yes, I agree completely that pure clock speed is useful and desirable in workstations; my point was that if I were predominantly rendering, I would rather have more cores / threads than a high clock speed. But yes, I'd love a couple of twelve-core Xeons at 4.5GHz. These may be coming, too, as the next generation of 14nm E7 (2015) is said to be 12-15 cores, use DDR4, and be quite fast, though I've not heard any specific numbers. Intel seems to start development at the lower speeds first.

    Your comments are also very welcome, as you mention some of the important experiential qualities that come into play when using workstation applications. One of the problems in this kind of discussion is that those with gaming-oriented systems have not used 3D CAD and rendering applications at the level where the workstation cards become not only useful, but mandatory. Especially important are the viewports, artifacts, and reliability.

    After using a Dell Precision T5400 with the original Quadro FX 580 (512MB), I soon realized that 3D CAD, for which I bought that system, would need more 3D capability and memory. I fell for the idea that, as I was primarily a designer and not working at an extraordinarily high level in 3D CAD, a GeForce GTX should be adequate, possibly faster at that level than a Quadro, and far less expensive. I bought a GTX 285 (1GB) because it was more or less a 1GB version of the 4GB Quadro FX 5800: same GPU, same 512-bit bus, 240 shaders, only less memory, $350 instead of $3,200, and I could always add a second one in SLI if needed.

    The GTX 285 seemed, ostensibly, to have all the right hardware, and in Sketchup the 3D navigation at first seemed blazingly fast. But after the Sketchup model became larger, the navigation developed a quirk: it would spin in any direction, but if I stopped moving for only a second it would freeze, such that most often I'd have to close the program. I stumbled and stuttered around in monochrome, including as little visible geometry as possible, but it was no good; if I for one second included another large component it would freeze.

    The model eventually became 125MB, and when I added textures and tried shadows, it produced impossible artifacts: a rain of short black lines from any polygon, shadows that became solid planes at bizarre angles, and, sometimes, textures that would drop out. Extracting renderings from the model (the whole point) was hopeless, as the rendering application would import for about 25 minutes and then crash Sketchup. I was never able to make a single rendering of a model larger than 9 or 10MB with the GTX 285.

    Then I began learning Solidworks in preparation for a 6,000-part assembly (great first project?), and the system would not open viewports, and the limited anti-aliasing made curves so crude that I couldn't make accurate solid intersections.

    In short, the situation was impossible, and I realized how extremely expensive my cost savings had been. I went back to the idea of my favourite Quadro, the FX 5800, and bought an FX 4800: same GPU, but 384-bit instead of 512-bit, 192 CUDA cores instead of 240, and 1.5GB in place of 4GB. Perfect renderings, working viewports, and 128x anti-aliasing instead of 16x. The navigation in my large Sketchup models is not blazingly fast, but it doesn't freeze in Solidworks; in short, all problems solved. Eventually, I added a second Xeon X5460 and went from 12GB to 15GB to have more cores / threads for rendering, and all is well, though this system gets very hot during rendering (it's the DDR2).

    Sorry for the long, historical ramble, but I think these experiential episodes are exactly the kind of information that, as you mentioned, is among the most important aspects of evaluating workstation graphics cards, and that gets lost in a speed-only focus.

    When Quadro K5000's are sold used for $1,000,...

    Cheers, BambiBoom

    "No matter your wealth, power,or friends, the cheapest things in life are free."

    [ Dell Precision T5400 > 2X Xeon X5460 quad core @3.16GHz > 16 GB ECC 667 > Quadro FX 4800 (1.5GB) > WD RE4 / Segt Brcda 500GB > Windows 7 Ultimate 64-bit > AutoCad 2007, Revit 2011, Solidworks 2010, Sketchup 8 Pro, Corel Technical Designer X-5, Adobe CS4 MC, WordP Office X4, MS Office2007]







  • 0 Hide
    happyballz , July 8, 2013 8:09 AM
    This just shows that the software itself has very crappy support for gaming-oriented cards. Most high-end gaming cards can do just as good a job, if not better.
  • 1 Hide
    ekho , July 8, 2013 8:14 AM
    Thanks for the article
  • 0 Hide
    tourist , July 8, 2013 9:51 AM
    How do you think the FirePro APUs would stack up in comparison?
  • 1 Hide
    FormatC , July 8, 2013 10:08 AM
    Ok, a lot of stuff and questions...

    I have a dual-Opteron workstation (4284) with 32 GB of ECC here too, but with it I have the same problem AMD has with their test bench and a current Xeon: these CPUs limit the newer, more powerful pro cards in many cases. I can't show you the real difference in what these cards can do if I use those workstations. When we benchmark gaming cards with 5 GHz CPUs, nobody complains, yet how many readers actually game at such clocks? It's the same idea here: we want to show the performance of the cards alone (without limitations), not of a complete workstation.

    Yes, the older Pro/E is mostly single-threaded.

    AMD was not able to send us workstation APUs because the big OEMs (Dell, HP) are not interested in building such systems. It is better business to sell a system with an expensive CPU AND an expensive, separate graphics card.

    ECC on graphics cards is important for some things, but how many cards actually support it? The new K5000 can't do it. Here, only the older big Quadros and the W8000/W9000 from AMD have it.
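
As a practical footnote to the ECC point above, a quick, hedged way to check which installed Nvidia boards actually expose ECC is to query nvidia-smi for the current ECC mode (a sketch for illustration, not part of the article's test procedure; boards without ECC simply report the field as unsupported):

```python
import subprocess

# Ask nvidia-smi for each GPU's name and current ECC mode in CSV form.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,ecc.mode.current", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)   # e.g. "Quadro 6000, Enabled"
```
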
  • 0 Hide
    allanitomwesh , July 8, 2013 10:15 AM
    Go go 7970.
  • 1 Hide
    tourist , July 8, 2013 10:28 AM
    Quote:
    AMD was not able to send us workstation APUs because the big OEMs (Dell, HP)
    are not interested in building such systems. [...]

    I thought I read that Dell was planning an A300 workstation? You're right about the $$$$; a low-end workstation APU using shared ECC memory would not be a cash cow for them.
  • 0 Hide
    FormatC , July 8, 2013 10:31 AM
    I've tried to get something going in this direction. No response from Dell and no comment from AMD.
    This would only be good for buyers, not for OEMs like Dell :D