EVGA GTX 560 Ti SLI vs Quadro 2000 or 4000

Status
Not open for further replies.

jframiro

Distinguished
May 12, 2011
3
0
18,510
I am a complete noob, so please bear with me. I am an architecture student who uses Maya, AutoCAD, and Rhino mostly. I recently bought a PC from CyberPower and love it, or at least I did until I started working with high-res poly models in Maya. I currently have an i7-2600K with an EVGA GTX 560 Ti. I need better modeling power. Is it better to SLI with another GTX 560 Ti or to go with a Quadro 2000 or 4000? I am trying to be cost-efficient, so I know the 4000 is best, but I need the reasoning. Also, is it possible to run a GTX 560 Ti alongside a Quadro 2000 or 4000? If so, does it help? Thank you for reading. Please help!!
 
@jframiro, the nVidia Quadro GPU series is designed for exactly this kind of work: 3D applications, animation, rendering, accelerating CAD and Maya, and so on.

If you notice, Maya, Mudbox, and AutoCAD never list regular GPUs (or list them last) in their system requirements; they always list the nVidia Quadro and ATI FirePro GPUs.

So you're definitely better off with one of the nVidia Quadros. If you're a beginner, stick with the Quadro 2000; if you're a professional, go with the Quadro 4000/5000 series, but they are pretty expensive.
 
^^^
Great advice, ilysami.

Workstation cards are much better at those types of apps.
Not only are the drivers better optimized, but even the architecture of the card
is better designed for that work, especially high-res polys.

Think of a gaming GPU like a Ferrari;
think of a workstation card like an off-road 4x4.

You wouldn't use the Ferrari to go up a mountain.

A gaming GPU is designed for FPS
and will sacrifice accuracy for FPS;
it doesn't need to have every frame perfect,
just put them up there fast.


This is a good read on workstation vs gaming cards:

http://cadworkstationguide.com/Workstation-Graphics-Cards-Vs-Gaming-Graphics-Cards.html
 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310
Again, the GTX 580 and 480 outperform the Quadro even when used as workstation graphics. The GTX 580 beat the Quadro by about 2 seconds. Not bad for a card with a price tag of $1,200 less.
[attached image: MPEGains.png]
 
gnomio, read the link I posted.

Only amateurs use gaming cards for workstation app purposes.

For a home user, using a gaming card is alright,
but gaming card architecture and drivers are different from workstation cards'.

It might render faster but not as accurately,
and one little mistake, especially in AutoCAD,
can destroy a project.

 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310

Not amateurs, King. Professionals with common sense.
http://ppbm5.com/DB-PPBM5-1.php

Those are all professional workstations, King.

Quadro cards are consumer cards that are just underclocked for precision.
 
@gnomio, even though I don't understand everything in your links, I see that you want to compare workstation GPUs with desktop GPUs, so have a look at this:
[attached image: IMG0029279.png]

The GTX 580 is a modified GTX 480, and even the Quadro FX 580 beats it. Can you see the difference between Quadro and GTX now?
 
quote

CAD applications typically cost between $1,000 and in excess of $10,000 per license. CAD applications need workstation graphics cards that can manipulate complex geometry that could run to a billion triangles or more. They need to be able to deal with real-world, real-size geometry that could include bridges, skyscrapers, or a jumbo jet, for example.

They need to be able to produce geometry that can be measured to many decimal places in anything from microns to miles. Getting it wrong could result in product recalls or even outright failure. At a higher level, some CAD applications require workstation graphics cards that utilise GPU computing, where the workstation graphics card actually performs more calculations than the workstation's own processor(s).

Workstation graphics cards and their specialised graphics software drivers are typically designed and configured to deliver up to 5x faster performance, computational data integrity and accuracy, and up to 8x faster computational simulation for a broad spectrum of design, animation, and video software applications. Depending on the particular software requirements, workstation graphics cards are typically priced at circa $150-$5,000.

CAD graphics cards can seem expensive, but in the main, most CAD workstations and applications would only require one in the lower quartile of the potential price spread. That said, hopefully you can now understand that the question of workstation graphics cards vs gaming graphics cards for your CAD workstation should never be asked. If you have spent thousands on CAD software for your business, make sure it runs on a decent CAD workstation with a recommended workstation graphics card.



from the link I posted


 
gnomio, all of the benchmarks you are posting are for Adobe Premiere. The Mercury Playback Engine is simply CUDA-enhanced, so whichever card has the most raw power will do better. But Premiere isn't exactly what I would consider a workstation app, and it certainly isn't a relevant comparison to Maya and other CAD applications.

Tom's did a review a few years ago, and if you compare the specs, the GTX 280 should be faster in every way than the FX 4800, but the FX 4800 kicks its ass in modeling applications like Maya and SolidWorks:
http://www.tomshardware.com/reviews/quadro-fx-4800,2258-10.html

Singular benchmarks never prove anything, so stop cherry-picking benchmarks to show only what you want.
 
+1 to hunter.

I can tell you from first-hand experience that a Quadro will work better for modeling than the GTX it's based on (even though the Quadro has fewer CUDA cores and lower clocks). You can't compare this with video editing benchmarks; that's a whole different ball game. Also, when rendering 3D, most renderers are purely CPU-based, so the GPU is irrelevant there. The AutoCAD, Maya, Max, and Rhino software renderers, as well as mental ray, are all CPU-only. He said "modeling power," meaning viewport performance, so why bring up rendering performance?

This argument always comes down to price/performance, but before I continue with that argument: jframiro, you need to check your GPU and CPU usage. There have been instances where, for some reason, the GPU stays in its low-power state (2D mode), giving bad performance. I've had this happen in Maya especially, and just used MSI Afterburner to force it to full speed.

King smp, can you post that other link, the older one? Your new link is similar, but if you read every page, some of the stuff he says is over-exaggerated or wrong, so I see it as unreliable. "Each core can process thousands of threads at the same time"? Where can I get in on this?
 
Thinking about that statement, I see your point, k1114.
True enough, when dealing with GPGPU processing you are dealing with many
"cores" (stream processors),
so in a parallel-coded operation like encoding, the GPGPU can handle
many threads (frames) at one time,
but the statement you quoted is misleading.
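As a rough illustration of that parallel-frames point (my own toy sketch, not how any real encoder or GPU scheduler is written), encoding is embarrassingly parallel, so independent frames can be handed to multiple workers at once:

```python
# Toy sketch: frames are independent, so per-frame work can run in
# parallel across workers. "encode_frame" is a placeholder stand-in
# for real encoding work, not an actual codec.
from multiprocessing import Pool

def encode_frame(frame):
    # Placeholder "encoding": just sum the pixel values of a fake frame.
    return sum(frame)

if __name__ == "__main__":
    frames = [[i, i + 1, i + 2] for i in range(8)]  # 8 fake frames
    with Pool(4) as pool:                            # 4 parallel workers
        results = pool.map(encode_frame, frames)     # one frame per task
    print(results)
```

The same shape of work is what lets a GPU's many stream processors chew through many frames concurrently, though each individual GPU "core" is far simpler than a CPU core.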

Looking at this quote:

These questions are understandable given that GPUs like the ATI Radeon HD 4870 and the ATI FirePro v8750 appear to have the same GPU (RV770) and hardware configuration, but Alexis explained that there are several significant, but unapparent hardware-level differences.

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

Alexis continued by explaining that workstation hardware offers features that can’t be benchmarked, but really matter to users and cannot be had on desktop hardware. Such features include 30-bit color depth, framelock/genlock functionality, and hardware stereoscopic output.


source - http://tech.icrontic.com/article/the-real-difference-between-workstation-and-desktop-gpus/

It gives a more detailed explanation of the differences between workstation and gaming cards.
 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310

Let me say this one more time: THERE IS NO DIFFERENCE BETWEEN THE QUADRO AND THE CONSUMER CARDS.

There is no difference between consumer and "professional" cards except for the BIOS and drivers. That's it.
Quadros and Tesla cards are clocked lower for the stability and long-term use needed in industry; there are different standards to be fulfilled.

In the end, a GF100 is a GF100. The Quadro 4000 is a GTX 460 and will always be one. It won't, through some miracle, beat a 580, because it doesn't have the VRAM or the cores to do it. It doesn't matter what the application is. With hardware MPE it will use its cores, and the card with the more, faster cores will win. It doesn't matter what software you use. You can't add shaders, triangles per second, or bandwidth with drivers; that's already on the card. To compare them for you nicely: the Quadro 4000 can draw around 1.5 to 2 triangles per cycle. The GTX 580 can draw 2 to 3 triangles per cycle.

The Quadro isn't some other architecture. It's a GF100 or GF101. That's it.
With Maya there are no performance gains using the Quadros that justify the price difference, and they share the same chips as their GeForce counterparts; you just pay a premium for even crappier drivers.

Quadro cards are released clocked at 600 MHz, where GeForce cards are clocked at 700 to 800 MHz. There's your difference. And the 4000 doesn't even have ECC RAM like the GTX 5xx series does!!!!

The on-board memory is critical, but the number of CUDA cores and their speed is also critical. The more CUDA cores (stream processors) your GPU has, the better, and the faster they are, the faster your scenes will render. It doesn't matter what program; they all work on the same concept.

The Quadro comes with lots of RAM, something nVidia is not keen to give its consumer cards. The 580 has 3 GB of onboard ECC RAM and 512 CUDA cores!!!!
The Quadro 5000 has what, 352 CUDA cores? It has less bandwidth, it has fewer CUDA cores, it's slower. The 5000 has ECC, not the 4000, whose specs are even poorer.
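For what it's worth, those triangles-per-cycle claims can be turned into a back-of-envelope throughput estimate. The per-cycle rates below are the figures claimed in the post above, and the clock speeds are the approximate stock clocks mentioned in this thread, so treat this as a sketch rather than official specs:

```python
# Back-of-envelope triangle setup rate: triangles/clock x core clock (MHz)
# gives millions of triangles per second. Per-clock rates are the figures
# claimed in the post above; clocks are approximate stock clocks.
def mtris_per_sec(tris_per_clock, core_clock_mhz):
    return tris_per_clock * core_clock_mhz

quadro_4000 = mtris_per_sec(1.5, 600)  # ~900 million triangles/s
gtx_580 = mtris_per_sec(2.0, 772)      # ~1544 million triangles/s
print(quadro_4000, gtx_580)
```

Even if the arithmetic checks out, raw setup rate says nothing about driver accuracy or viewport optimization, which is the other side of this argument.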
 
It's amazing what improved optimizations in drivers and the BIOS can do. Have you not read any of the links that other people have posted? Did you look at the chart in the article I linked to, which compared the GTX 280 to the technically weaker FX 4800, where the GTX 280 gets its ass handed to it in workstation applications?


If you don't think driver optimization adds anything, then I'm sorry, but you are delusional. If you can provide links to benchmarks that prove ANYTHING you are saying, I will accept that I'm wrong, but you have yet to post a link to back up what you are saying, merely links to benchmarks for a totally different type of application, because Adobe Premiere has about as much in common with Maya as a pony has with a Ford Mustang.
 
QUOTE

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

source - AMD Senior Marketing Manager Alexis Mather

 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310

No, King.
G100 is GF100.
Please read this. This is how a GPU works, Fermi GF100 to be precise:
http://www.beyond3d.com/content/reviews/55

This is how Cypress works. It's a bit technical, but I know you've got the head for it:
http://www.beyond3d.com/content/reviews/53

Gtx 580
http://www.nvidia.com/object/product-geforce-gtx-580-us.html

Quadro 4000
http://www.nvidia.com/object/product-quadro-4000-us.html

There is no contest, IMO. The Quadro might be a great CAD solution, but the GTX 580 trumps it by a factor of two in raw encoding power.
 


So you say a Quadro is better at CAD but a GTX is better at encoding? What is this thread about again? No one is disputing its advantage in raw encoding power.

They are based on the same architecture, but they are not the same. How does an i7-2600K differ from an i5-2500K? Is there some simple switch to turn on HT? The same concept applies to GTX vs Quadro: there are minute differences that make them different. If what you say is correct, then why is it not possible to flash a GTX into a Quadro? I'm not talking about driver soft-mods.
 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310

No, I'm saying it might be a great CAD card, but it doesn't touch a GTX 580. Remember, the GTX 580 3GB is limited by VRAM like the others but has double the processing power. It's so simple: 1 GB more VRAM and double the processing power. What more proof do you need?

But people who use them should look after their cooling, as I imagine they can run hot with the higher clocks.

The thread is about GeForce vs Quadro cards. The Quadro 4000 is 2x more expensive than a GTX 580 3GB version.
Which is the better bang for the buck?

“Using a Quadro FX 4800 with 1500 MB of dedicated video memory:

1000 x 1000 = 425 MB

2000 x 2000 = 555 MB

3000 x 3000 = 750 MB

5000 x 5000 = 1125 MB

6000 x 6000 = 1200 MB (artifacts start to appear as memory limits are reached)

7000 x 7000 = render fails”
http://area.autodesk.com/blogs/shane/the_iray_faq/comments

VRAM is very important with these cards, as you can see above.
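To put some arithmetic behind that (my own back-of-envelope sketch; iray's actual memory use also covers scene geometry, textures, and acceleration structures, which is why the figures quoted above are far higher), the output buffer alone grows quadratically with resolution:

```python
# Back-of-envelope VRAM estimate for an N x N render target alone.
# Assumes a 4-channel 32-bit float (RGBA) buffer = 16 bytes/pixel;
# a real renderer also keeps geometry, textures, and acceleration
# structures in VRAM, so actual usage is much higher.
def framebuffer_mb(width, height, bytes_per_pixel=16):
    return width * height * bytes_per_pixel / (1024 ** 2)

for n in (1000, 3000, 6000):
    print(n, round(framebuffer_mb(n, n), 1), "MB")
```

Doubling the resolution quadruples the buffer, which is why render memory runs out so quickly at the high end of those quoted figures.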
 
http://download.autodesk.com/us/qualcharts/2011/maya2011_qualifiedgraphics_win.pdf

Autodesk puts out hardware specs for Maya; here is a direct quote from their section on consumer graphics cards:
"Important: Although Autodesk tested the NVIDIA GeForce and ATI Radeon consumer
graphics cards, it is Autodesk, NVIDIA, and AMD policy to only recommend and support
the professional NVIDIA Quadro, ATI FirePro, and ATI FireGL graphics family cards.
See the NVIDIA Quadro vs. GeForce GPUs White Paper [PDF]. "

If you follow the link to the white paper, it goes over all the differences between the Quadro and consumer graphics cards, and there are quite a few. Now can we stop trying to claim that the cards are identical when they clearly are not, dadiggle?
 
Quote from gnomio:
The thread is about GeForce vs Quadro cards. The Quadro 4000 is 2x more expensive than a GTX 580 3GB version.
Which is the better bang for the buck?


When it comes to professional workstations running apps that cost thousands if not tens of thousands of dollars,
workstations that can easily cost two thousand just in Xeons/Opterons,

and projects that are worth million-dollar accounts,

why would you worry about "bang for buck"?

Saving $500 on a card
could easily cost you a 100K project.
Do you want to be the IT guy who recommended a desktop GPU to save a few hundred bucks, and then it fails and somebody has to be held accountable?
 

gnomio

Distinguished
Jul 6, 2011
1,120
0
19,310

Yeah, yeah, but that still doesn't make the Quadro a better card. It's GF104 and GF106. They use the same methods.
In fact, we struggled with many potential theories, until a fortuitous encounter with a Quadro made the truth painfully obvious: product differentiation, a somewhat en vogue term over at NVIDIA, it seems. The Quadro, in spite of being pretty much the same hardware (this is a signal to all those that believe there's magical hardware in the Quadro because it's more expensive – engage rant mode!), is quite happy doing full speed setup on the untessellated plebs
We can only imagine that this seemed like a good idea to someone. Sure, there's a finite probability that traditional Quadro customers, who are quite corporate and quite fond of extensive support amongst other things, would suddenly turn into full blown hardware ricers, give up all perks that come with the high Quadro price, buy cheap consumer hardware and use that instead.

http://www.beyond3d.com/content/reviews/55/10

While you're there, read how a GPU really works.

I don't know who you're referring to, but it's the second time someone has mentioned that word, or whatever it is. Check my IP or whatever you do. I can't really help it if my ISP uses one IP for all its customers.
If you use the HTTPS, HTTP, or FTP connection model and your network provider only uses a single official IP address for all its customers (like, for example, MTN), you should configure a "keepalive interval" of maybe 3,000 milliseconds instead of the default 20,000 milliseconds. This will ensure your underlying TCP streams will not be re-used for other users after being idle for a few seconds, and if it happens anyway, both server and client will notice more quickly. Do this if you see a lot of "can't read from server connection" debug messages in the message log.
http://www.your-freedom.net/index.php?id=mobile

We have to pay for IPs:
http://mybroadband.co.za/vb/showthread.php/304569-Inbound-Connections-Unrestricted-APN-%28Now-Testing!%29

So please keep your allegations in your pocket.


 
What it really comes down to

is your usage scenario.
In an enterprise environment, where possibly millions are at stake,
using AutoCAD, Maya, etc.,
you have to go with the workstation card.
Whether or not there are architectural differences can be debated,
but the tech support and driver support for workstation cards
are way beyond what gaming GPUs offer,
and in a mission-critical, financially dependent project, the option of
gaming GPUs doesn't make sense.

Now, in a small business or home usage scenario, going with a gaming GPU
does make a lot of sense.

I can use myself as a perfect example.
I have a small one-man computer repair business;
I've been doing it for years now.
Now I want to expand my revenue potential by branching
into video work,
all small-time stuff:
taking people's home movies, either VCR-to-DVD conversion or digital media,
and editing/adding audio soundtracks (music), titles, wipes, fades,
etc.,
then rendering/encoding to produce a DVD.
I decided to go with CyberLink PowerDirector 9:
first, it uses GPU acceleration with ATI Stream;
plus it has a great GUI and is easy to learn and produces good video;
and the cost of the software is much more reasonable than
Adobe's.

So, going with cost vs performance vs end results,

I went with an HD 5670 for now.
It has 400 stream processors (?) and gives decent results:
a 10-minute video rendered and burnt to DVD in about 15 minutes.
This is with a C2D at 3 GHz and an HD 5670,
minimalist, cost-effective hardware for a minor task.

For small to medium businesses, the desktop GPU is a good choice;
for enterprise-level applications, the workstation card is the right choice.

To quote the OP:

I am a complete noob, so please bear with me. I am an architecture student who uses Maya, AutoCAD, and Rhino mostly. I recently bought a PC from CyberPower and love it, or at least I did until I started working with high-res poly models in Maya. I currently have an i7-2600K with an EVGA GTX 560 Ti. I need better modeling power. Is it better to SLI with another GTX 560 Ti or to go with a Quadro 2000 or 4000? I am trying to be cost-efficient, so I know the 4000 is best, but I need the reasoning.


So, within the parameters of the OP's question:
as a student, the gaming GPU is a good choice.
I wouldn't do SLI, since I don't believe those programs scale well;
selling the 560 Ti and going with a 570/580 would work out most cost-effective.

Or, if the OP is going to use the rig professionally, then go with the 4000,
but only if the work and profit can justify it.


SO,

we are all right,

and there is no reason to argue.

gnomio has a point:
in a smaller business, a gaming GPU makes a lot of sense.

Others are right that in a mission-critical corporate environment,
the workstation card makes sense;
nobody wants to be the head of IT who cheaped out with a gaming card
when a million-dollar project went bad.

So we are all right.

Now can we all just get along?
 