
EVGA GTX 560 Ti SLI vs Quadro 2000 or 4000

November 12, 2011 2:54:33 AM

I am a complete noob, so please bear with me. I am an architecture student who mostly uses Maya, AutoCAD, and Rhino. I recently bought a PC from CyberPower and loved it, until I started working with high-res poly models in Maya. I currently have an i7-2600K with an EVGA GTX 560 Ti. I need better modeling power. Is it better to add a second GTX 560 Ti in SLI, or to go with a Quadro 2000 or 4000? I am trying to be cost efficient, so I know the 4000 is best, but I need the reasoning. Also, is it possible to run a GTX 560 Ti alongside a Quadro 2000 or 4000, and if so, does that make things better? Thank you for reading. Please help!
November 12, 2011 2:59:28 AM

560 Ti SLI, but for now overclock that 560 Ti and you should see a significant increase in performance.
November 12, 2011 3:12:50 AM

@jframiro, the NVIDIA Quadro series is designed for exactly this kind of work: 3D applications, animation, rendering, accelerating CAD and Maya, and so on.

If you look at the system requirements for Maya, Mudbox, or CAD packages, you will notice they never list regular GPUs (or list them last); they always list the NVIDIA Quadro and ATI FirePro GPUs.

So you are definitely better off with one of the NVIDIA Quadros. If you're a beginner, stick with the Quadro 2000; if you're a professional, go with the Quadro FX 4000/5000 series, but those are pretty expensive.
November 12, 2011 3:38:08 AM

^^^
Great advice, ilysami.

Workstation cards are much better at those types of apps. It's not only the better-optimized drivers; even the architecture of the card is better designed for that work, especially high-res polys.

Think of a gaming GPU like a Ferrari and a workstation card like an off-road 4x4: you wouldn't use the Ferrari to go up a mountain.

A gaming GPU is designed for FPS and will sacrifice accuracy for frame rate. It doesn't need to have every frame perfect, it just has to put them up there fast.

This is a good read on workstation vs gaming cards:

http://cadworkstationguide.com/Workstation-Graphics-Car...
November 12, 2011 4:00:19 AM

Again, the GTX 580 and 480 outperform the Quadro even when used for workstation graphics. The GTX 580 beat the Quadro by about 2 seconds. Not bad for a card with a price tag about $1200 USD lower.
November 12, 2011 4:43:01 AM

gnomio, read the link I posted.

Only amateurs use gaming cards for workstation apps.

For a home user a gaming card is alright, but gaming card architecture and drivers are different from workstation cards.

It might render faster, but not as accurately, and one little mistake, especially in AutoCAD, can destroy a project.

November 12, 2011 9:04:03 AM

king smp said:
gnomio, read the link I posted.

Only amateurs use gaming cards for workstation apps.

For a home user a gaming card is alright, but gaming card architecture and drivers are different from workstation cards.

It might render faster, but not as accurately, and one little mistake, especially in AutoCAD, can destroy a project.

Not amateurs, king. Professionals with common sense.
http://ppbm5.com/DB-PPBM5-1.php

Those are all professional workstations, King.

Quadro cards are consumer cards that are just underclocked for precision.
November 12, 2011 11:55:40 AM

@gnomio, even though I don't understand anything from your links, I can see that you want to compare workstation GPUs with desktop GPUs, so have a look at this:

The GTX 580 is a modified GTX 480, and even the Quadro FX 580 beats it. Can you see the difference between Quadro and GTX now?
November 12, 2011 1:56:26 PM

Quote:

CAD applications typically cost from $1,000 to in excess of $10,000 per license. CAD applications need workstation graphics cards that can manipulate complex geometry that could be in excess of a billion triangles. They need to be able to deal with real-world, real-size geometry that could include bridges, skyscrapers or a jumbo jet, for example.

They need to be able to produce geometry that can be measured to many decimal places in anything from microns to miles. Getting it wrong could result in product recalls or even failure, for example. At a higher level, some CAD applications require workstation graphics cards where the graphics card utilises GPU computing, that is, when the workstation graphics card actually performs more calculations than the workstation's own processor(s).

Workstation graphics cards and their specialised graphics software drivers are typically designed and configured to deliver anything up to 5x faster performance, computational data integrity and accuracy, and up to 8x faster computational simulation for a broad spectrum of design, animation and video software applications. Typically, depending on the particular software requirements, workstation graphics cards are priced at circa $150-$5,000.

CAD graphics cards can seem expensive, but in the main most CAD workstations and applications would only require one in the lower quartile of the potential price spread. That said, hopefully you can now understand that the question of workstation graphics cards vs gaming graphics cards for your CAD workstation should never be asked. If you have spent thousands on CAD software for your business, make sure it runs on a decent CAD workstation with a recommended workstation graphics card.

(from the link I posted)


November 12, 2011 2:38:44 PM

gnomio, all of the benchmarks you are posting are for Adobe Premiere. The Mercury Playback Engine is simply CUDA-accelerated, so whichever card has the most raw power will do better, but Premiere isn't exactly what I would consider a workstation app, and it certainly isn't a relevant comparison to Maya and other CAD applications.

Tom's did a review a few years ago, and if you compare the specs, the GTX 280 should be faster in every way than the FX 4800, but the FX 4800 kicks its ass in modeling applications like Maya and SolidWorks:
http://www.tomshardware.com/reviews/quadro-fx-4800,2258...

Singular benchmarks never prove anything, so stop picking and choosing your benchmarks to show only what you want.
November 12, 2011 4:54:31 PM

+1 to hunter

I can tell you from first-hand experience that a Quadro will work better for modeling than the GTX it's based off of (even though the Quadro has fewer CUDA cores and lower clocks). You can't compare this with video editing benchmarks; that's a whole different ball game. Also, when rendering 3D, most renderers are purely CPU-based, so the GPU is irrelevant there. The CAD, Maya, Max and Rhino software renderers, as well as mental ray, are all CPU-only. He said "modeling power", meaning viewport performance, so why bring up rendering performance?

This argument always comes down to price/performance, but before I continue with that argument, jframiro, you need to check your GPU and CPU usage. There have been instances where, for some reason, the GPU would stay in a low-power state (2D mode), giving bad performance. I've had this happen in Maya especially and just used MSI Afterburner to force it to full speed.

King smp, can you post that other link, the older one? Your new link is similar, but if you read every page, some of the stuff he says is over-exaggerated and wrong, so I see it as unreliable. "Each core can process thousands of threads at the same time." Where can I get in on this?
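
If anyone wants to check for the low-power-state issue described above, here is a minimal sketch of a clock/utilization logger. It assumes a recent NVIDIA driver that ships the nvidia-smi command-line tool with its --query-gpu fields; on a 2011-era setup you would watch the same readings in GPU-Z or MSI Afterburner instead.

Code:
# Minimal sketch: poll nvidia-smi while Maya is open and log GPU clocks and
# utilization, so you can see whether the card is stuck at low "2D mode"
# clocks under load. Assumes nvidia-smi (from NVIDIA's driver) is on PATH.
import subprocess
import time

QUERY = "utilization.gpu,clocks.sm,clocks.mem,power.draw"

def sample():
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        text=True,
    )
    return out.strip()

if __name__ == "__main__":
    print(f"time\t{QUERY}")
    for _ in range(60):  # about one minute of one-second samples
        print(f"{time.strftime('%H:%M:%S')}\t{sample()}")
        time.sleep(1.0)
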
November 12, 2011 5:25:51 PM

Thinking about that statement, I see your point, k1114. True enough, when dealing with GPGPU processing you are dealing with many "cores" (stream processors), so in a parallel-coded operation like encoding, the GPGPU can handle many threads (frames) at one time. But the statement you quoted is misleading.

Look at this:

Quote:

These questions are understandable given that GPUs like the ATI Radeon HD 4870 and the ATI FirePro V8750 appear to have the same GPU (RV770) and hardware configuration, but Alexis explained that there are several significant, but unapparent, hardware-level differences.

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer-grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

Alexis continued by explaining that workstation hardware offers features that can't be benchmarked but really matter to users and cannot be had on desktop hardware. Such features include 30-bit color depth, framelock/genlock functionality, and hardware stereoscopic output.

source - http://tech.icrontic.com/article/the-real-difference-be...

It gives a more detailed explanation of the differences between workstation and gaming cards.
November 12, 2011 7:54:11 PM

hunter315 said:
gnomio, all of the benchmarks you are posting are for Adobe Premiere. The Mercury Playback Engine is simply CUDA-accelerated, so whichever card has the most raw power will do better, but Premiere isn't exactly what I would consider a workstation app, and it certainly isn't a relevant comparison to Maya and other CAD applications.

Tom's did a review a few years ago, and if you compare the specs, the GTX 280 should be faster in every way than the FX 4800, but the FX 4800 kicks its ass in modeling applications like Maya and SolidWorks:
http://www.tomshardware.com/reviews/quadro-fx-4800,2258...

Singular benchmarks never prove anything, so stop picking and choosing your benchmarks to show only what you want.

Let me say this one more time: THERE IS NO DIFFERENCE BETWEEN THE QUADRO AND THE CONSUMER CARDS.

There is no difference between consumer and "professional" cards except for the BIOS and drivers. That's it. Quadros and Tesla cards are clocked lower for the stability and long-term use that are needed in the industry; different standards have to be fulfilled.

In the end a GF100 is a GF100. The Quadro 4000 is a GTX 460 and will always be one. It won't through some miracle beat a 580, because it doesn't have the VRAM or the cores to do it. It doesn't matter what application: with hardware MPE it will use its cores, and the one with more, faster cores will win. It doesn't matter what software you use. You can't add shaders, triangles per second or bandwidth with drivers; that's already on the card. To compare it for you nicely: the Quadro 4000 can draw around 1.5 to 2 triangles per cycle, while the GTX 580 can draw 2 to 3 triangles per cycle.

The Quadro isn't some other architecture. It's a GF100 or GF101. That's it.
With Maya there are no performance gains from the Quadros that justify the price difference, and they share the same chips as their GeForce counterparts - you just pay a premium for even crappier drivers.

Quadro cards are released clocked at around 600 MHz where GeForce cards are clocked at 700 to 800 MHz. There's your difference. And the 4000 doesn't even have ECC RAM like the GTX 5xx series does!

The on-board memory is critical, but the number of CUDA cores and their speed is also critical. The more CUDA cores (stream processors) your GPU has, the better, and the faster they are, the faster your scenes will render. It doesn't matter what program; they all work on the same concept.

The Quadros come with lots of RAM, something NVIDIA is not keen to give to its consumer cards. The 580 has 3 GB of ECC RAM on board and 512 CUDA cores!
The Quadro 5000 has what, 352 CUDA cores? It has less bandwidth, fewer CUDA cores, and it's slower. The 5000 has ECC, not the 4000, whose specs are even poorer.
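
Taking the figures in the post above at face value (they are the poster's numbers, not official NVIDIA specs), the back-of-the-envelope peak geometry rates work out roughly as below; the 750 MHz GeForce clock is simply the mid-point of the quoted 700-800 MHz range.

Code:
# Rough peak-triangle-rate arithmetic using only the figures quoted in the
# post above; treat the output as an illustration of the reasoning, not a
# benchmark.
def peak_triangle_rate(tris_per_clock, clock_mhz):
    # Peak triangles per second = triangles per cycle x clock frequency.
    return tris_per_clock * clock_mhz * 1e6

quadro_4000 = peak_triangle_rate(1.5, 600)  # low end of quoted 1.5-2/cycle at 600 MHz
gtx_580 = peak_triangle_rate(2.0, 750)      # low end of quoted 2-3/cycle at ~750 MHz

print(f"Quadro 4000: ~{quadro_4000 / 1e9:.2f} billion triangles/s")
print(f"GTX 580:     ~{gtx_580 / 1e9:.2f} billion triangles/s")
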
November 12, 2011 8:05:09 PM

It's amazing what improved optimizations in the drivers and BIOS can do. Have you not read any of the links that other people have posted? Did you look at the chart in the article I linked to, which compared the GTX 280 to the technically weaker FX 4800, where the GTX 280 gets its ass handed to it in workstation applications?


If you don't think that driver optimization adds anything, then I'm sorry, but you are delusional. If you can provide links to benchmarks that prove ANYTHING you are saying, then I will accept that I'm wrong, but you have yet to post a link to back up what you are saying, merely links to benchmarks for a totally different type of application, because Adobe Premiere has about as much in common with Maya as a pony has with a Ford Mustang.
November 12, 2011 8:08:36 PM

QUOTE

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

source - AMD Senior Marketing Manager Alexis Mather

November 12, 2011 8:21:54 PM

king smp said:
QUOTE

First and foremost, workstation GPUs are different from desktop GPUs at the ASIC and board level. If you were to place a workstation ASIC (the actual GPU chip) in the equivalent consumer grade board, the card would exhibit different behavior. In other words, the GPU dies are not simply interchangeable.

source - AMD Senior Marketing Manager Alexis Mather

No, king. GF100 is GF100.
Please read this. This is how a GPU works, Fermi GF100 to be precise:
http://www.beyond3d.com/content/reviews/55

This is how Cypress works. It's a bit technical, but I know you have the head for it:
http://www.beyond3d.com/content/reviews/53

GTX 580:
http://www.nvidia.com/object/product-geforce-gtx-580-us...

Quadro 4000:
http://www.nvidia.com/object/product-quadro-4000-us.htm...

There is no contest IMO. The Quadro might be a great CAD solution, but the GTX 580 trumps it by a factor of two in raw encoding power.
November 12, 2011 9:28:10 PM

gnomio said:
The Quadro might be a great CAD solution, but the GTX 580 trumps it by a factor of two in raw encoding power.


So you're saying a Quadro is better at CAD but a GTX is better at encoding? What is this thread about again? No one is disputing its advantage in raw encoding power.

They are based on the same architecture, but they are not the same. How does an i7-2600K differ from an i5-2500K? Is there some simple switch to turn on HT? The same concept applies to GTX versus Quadro: there are minute differences that make them different. If what you say is correct, then why is it not possible to flash a GTX into a Quadro? I'm not talking about driver softmods.
November 12, 2011 9:46:13 PM

k1114 said:
So you're saying a Quadro is better at CAD but a GTX is better at encoding? What is this thread about again? No one is disputing its advantage in raw encoding power.

They are based on the same architecture, but they are not the same. How does an i7-2600K differ from an i5-2500K? Is there some simple switch to turn on HT? The same concept applies to GTX versus Quadro: there are minute differences that make them different. If what you say is correct, then why is it not possible to flash a GTX into a Quadro? I'm not talking about driver softmods.

No, I'm saying it might be a great CAD card, but it doesn't touch a GTX 580. Remember, the GTX 580 3GB is limited by VRAM like the others, and it has double the processing power. It's that simple: 1 GB more VRAM and double the processing power. What more proof do you need?

But people who use them don't look after their cooling, and I imagine they can run hot with the higher clocks.

The thread is about GeForce vs Quadro cards. The Quadro 4000 is twice as expensive as a GTX 580 3GB version.
Which is the better bang for the buck?

Quote:
"Using a Quadro FX 4800 with 1500MB of dedicated video memory:

1000 x 1000 = 425 MB

2000 x 2000 = 555 MB

3000 x 3000 = 750 MB

5000 x 5000 = 1125 MB

6000 x 6000 = 1200 MB, artifacts start to appear as memory limits are reached

7000 x 7000 = render fails"

http://area.autodesk.com/blogs/shane/the_iray_faq/comme...

VRAM is very important with these cards, as you can see above.
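
As a rough illustration of how those quoted numbers scale, here is a small sketch that fits a simple fixed-overhead-plus-per-megapixel model to them and extrapolates to the 7000 x 7000 render that failed. The linear model is my own reading for illustration, not anything from the iray FAQ; real iray memory use depends heavily on scene geometry and textures.

Code:
# Fit "memory = base + MB_per_megapixel * megapixels" to the iray figures
# quoted above, then extrapolate to the 7000x7000 render that failed.
samples = [  # (render width == height, reported MB)
    (1000, 425), (2000, 555), (3000, 750), (5000, 1125), (6000, 1200),
]

xs = [(w * w) / 1e6 for w, _ in samples]  # megapixels
ys = [mb for _, mb in samples]
n = len(samples)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
base = mean_y - slope * mean_x

print(f"~{base:.0f} MB fixed overhead + ~{slope:.1f} MB per megapixel")
predicted = base + slope * (7000 * 7000) / 1e6
print(f"Predicted for 7000x7000: ~{predicted:.0f} MB "
      f"(vs 1500 MB on the FX 4800, hence the failed render)")
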
November 12, 2011 10:04:18 PM

http://download.autodesk.com/us/qualcharts/2011/maya201...

Autodesk puts out hardware specs for Maya; here is a direct quote from their section on consumer graphics cards:
"Important: Although Autodesk tested the NVIDIA GeForce and ATI Radeon consumer
graphics cards, it is Autodesk, NVIDIA, and AMD policy to only recommend and support
the professional NVIDIA Quadro, ATI FirePro, and ATI FireGL graphics family cards.
See the NVIDIA Quadro vs. GeForce GPUs White Paper [PDF]."

If you follow the link to the white paper, it goes over all the differences between the Quadro and consumer graphics cards; there are quite a few. Now can we stop trying to claim that the cards are identical when they clearly are not, dadiggle?
November 12, 2011 10:18:26 PM

Quote from gnomio:
The thread is about GeForce vs Quadro cards. The Quadro 4000 is twice as expensive as a GTX 580 3GB version.
Which is the better bang for the buck?


When it comes to a professional workstation running apps that cost thousands if not tens of thousands of dollars, where the workstations can easily cost two thousand just in Xeons/Opterons, and where the projects you are doing are worth million-dollar accounts, why would you worry about "bang for buck"?

Saving $500 on a card could easily cost you a 100K project. Do you want to be the IT guy who recommended a desktop GPU to save a few hundred bucks when it fails and somebody has to be held accountable?
November 12, 2011 11:54:58 PM

@hunter315
I know gnomio; he is always arguing with no clue, and all his claims are based on personal analysis.
November 13, 2011 12:46:03 AM

hunter315 said:
http://download.autodesk.com/us/qualcharts/2011/maya201...

Autodesk puts out hardware specs for Maya; here is a direct quote from their section on consumer graphics cards:
"Important: Although Autodesk tested the NVIDIA GeForce and ATI Radeon consumer
graphics cards, it is Autodesk, NVIDIA, and AMD policy to only recommend and support
the professional NVIDIA Quadro, ATI FirePro, and ATI FireGL graphics family cards.
See the NVIDIA Quadro vs. GeForce GPUs White Paper [PDF]."

If you follow the link to the white paper, it goes over all the differences between the Quadro and consumer graphics cards; there are quite a few. Now can we stop trying to claim that the cards are identical when they clearly are not, dadiggle?

Yeah, yeah, but that still doesn't make the Quadro a better card. It's GF104 and GF106. They use the same methods.
Quote:
In fact, we struggled with many potential theories, until a fortuitous encounter with a Quadro made the truth painfully obvious: product differentiation, a somewhat en vogue term over at NVIDIA, it seems. The Quadro, in spite of being pretty much the same hardware (this is a signal to all those that believe there's magical hardware in the Quadro because it's more expensive – engage rant mode!), is quite happy doing full speed setup on the untessellated plebs.
We can only imagine that this seemed like a good idea to someone. Sure, there's a finite probability that traditional Quadro customers, who are quite corporate and quite fond of extensive support amongst other things, would suddenly turn into full blown hardware ricers, give up all perks that come with the high Quadro price, buy cheap consumer hardware and use that instead.


http://www.beyond3d.com/content/reviews/55/10

While you are there, read how a GPU really works.

I don't know who you are referring to, but this is the second time someone has mentioned that name, or whatever it is. Check my IP or whatever you do. I can't really help it if my ISP uses one IP for all their customers.
"If you use the HTTPS, HTTP or FTP connection model and your network provider only uses a single official IP address for all its customers (like for example MTN) you should configure a "keepalive interval" of maybe 3000 milliseconds instead of using the default 20,000 milliseconds. This will ensure your underlying TCP streams will not be re-used for other users after being idle for a few seconds, and if it happens anyway both server and client notice more quickly. Do this if you see a lot of "can't read from server connection" debug messages in the message log."
http://www.your-freedom.net/index.php?id=mobile

We have to pay for IPs:
http://mybroadband.co.za/vb/showthread.php/304569-Inbou...!%29

So please keep your allegations in your pocket.
November 13, 2011 12:54:24 AM

And that post relates to this discussion how? Do you have anything else to add that might actually be helpful for the OP, or are we done here?
November 13, 2011 1:11:59 AM

What it really comes down to is your usage scenario.

In an enterprise environment where possibly millions are at stake, using AutoCAD, Maya, etc., you have to go with the workstation card. Whether or not there are architectural differences can be debated, but the tech support and driver support for workstation cards is way beyond what gaming GPUs offer, and in a mission-critical, financially dependent project the gaming GPU option doesn't make sense.

In a small business or home usage scenario, going with a gaming GPU does make a lot of sense.

I can use myself as an example. I have a small one-man computer repair business and have been doing it for years. Now I want to expand my revenue potential by moving into video work, all small-time stuff: taking people's home movies (VCR-to-DVD conversion, digital media, etc.), editing and adding audio soundtracks (music), titles, wipes and fades, and then rendering/encoding to produce a DVD. I decided to go with CyberLink PowerDirector 9: first, it uses GPU acceleration with ATI Stream; it also has a great GUI, is easy to learn and produces good video, and the cost of the software is much more reasonable than Adobe.

So, going by cost vs performance vs end results, I went with an HD 5670 for now. It has 400 stream processors (?) and gives decent results: a 10-minute video rendered and burnt to DVD in about 15 minutes, and that is with a 3 GHz Core 2 Duo and the HD 5670. That is minimalist, cost-effective hardware for a minor task.

For small to medium businesses the desktop GPU is a good choice; for enterprise-level applications the workstation card is the right choice.

To quote the OP:

I am a complete noob, so please bear with me. I am an architecture student who mostly uses Maya, AutoCAD, and Rhino. I recently bought a PC from CyberPower and loved it, until I started working with high-res poly models in Maya. I currently have an i7-2600K with an EVGA GTX 560 Ti. I need better modeling power. Is it better to add a second GTX 560 Ti in SLI, or to go with a Quadro 2000 or 4000? I am trying to be cost efficient, so I know the 4000 is best, but I need the reasoning.


So within the parameters of the OP's question, as a student the gaming GPU is a good choice. I wouldn't do SLI, since I don't believe those programs scale well; selling the 560 Ti and going with a 570/580 would work out most cost effective.

Or, if the OP is going to use the rig professionally, then go with the 4000, but only if the work and profit can justify it.


SO

We are all right. There is no reason to argue.

gnomio has a point: in a smaller business, a gaming GPU makes a lot of sense.

The others are right that in a mission-critical corporate environment the workstation card makes sense; nobody wants to be the head of IT who cheaped out with a gaming card when a million-dollar project went bad.


So we are all right.


Now can we all just get along?
November 13, 2011 1:32:22 AM

My point really is price vs performance.

This card:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

gives you the same performance as this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

And if you're really serious about it, then you will get a 980X to go with that Quadro. That's when time really means money.

But I mean, paying $700 for a card that does the job slower than a $500 card? It's a no-brainer. The OP is already using a 560 Ti and is looking for more performance from the GPU. The 560 Ti's processor is weak compared to a 580 and has one third the RAM on it.
November 13, 2011 1:38:58 AM

True, and going by the OP's original post, you are actually right for his circumstances. As a college student doing assignments and studying, the gaming GPU makes a lot of sense.

Myself and others got caught up in the enterprise way of thinking. In a corporate million-dollar project, cost vs performance and "bang for buck" don't make sense; in that situation, trying to save a penny is costing you a dollar.

So, actually going by the OP's question, I would say going with a 580 and selling the 560 Ti, or using it in a secondary system, makes a lot of sense.

I am not sure how 560 Tis in SLI would scale in Maya, AutoCAD, etc. I would have to research it, but it is hard to find data/benches on that.
November 13, 2011 1:50:16 AM

From Googling, it seems that SLI'd cards don't help.

Really, taking a gaming GPU and modding it with a workstation BIOS and using the workstation drivers is a viable option for somebody technically experienced. You really need the drivers optimized for OpenGL for Maya, AutoCAD, etc.

A major reason workstation cards are expensive is the amount of time put into the drivers being specifically optimized for those workstation apps. Just like a new game needs the drivers optimized, so do workstation apps, which gaming drivers just don't do.
November 13, 2011 1:57:33 AM

king smp said:
From Googling, it seems that SLI'd cards don't help.

Really, taking a gaming GPU and modding it with a workstation BIOS and using the workstation drivers is a viable option for somebody technically experienced. You really need the drivers optimized for OpenGL for Maya, AutoCAD, etc.

A major reason workstation cards are expensive is the amount of time put into the drivers being specifically optimized for those workstation apps. Just like a new game needs the drivers optimized, so do workstation apps, which gaming drivers just don't do.

Read RivaTuner's help file, king:
Quote:
When I install the NVStrap driver and select "Quadro" graphics adapter identification mode, my system starts responding slowly after a few minutes of work, then it completely hangs. Any clues?
A: The symptoms you are describing are the results of NVIDIA's protection against the NVStrap's PCI DeviceID override, which was introduced in the Detonator 30.82. When the driver detects that the PCI DeviceID was changed via the NVStrap, it iteratively increases an internal delay counter and purposely spends time in internal wait loops, emulating a progressing system slowdown and finally a system hang. You must use RivaTuner's NVStrapAntiprotection patch script in order to use the NVStrap driver with the latest drivers. GeForce FX and newer display adapter owners may also use the "Use ROM straps for PCI DeviceID programming" option, which allows working around this protection without patching the Detonator/ForceWare driver.


NVIDIA basically locks things away in their drivers. You have to pay $700 for an underclocked GTX 460 to get them unlocked.
November 13, 2011 2:00:20 AM

Also, NVIDIA and ATI only certify workstation drivers and cards, and the same goes for the software devs.

In an enterprise-level, large-IT-department situation, you CAN'T use non-certified equipment. Well, you can, but if something fails you have minimal support and it is your butt on the line. When all equipment and software are certified, you aren't liable in the same way: they can't blame you if it fails, and you have the support from the hardware vendors.

For small single-owner businesses and home users, that doesn't apply.
November 13, 2011 2:51:27 AM

king smp said:
Also, NVIDIA and ATI only certify workstation drivers and cards, and the same goes for the software devs.

In an enterprise-level, large-IT-department situation, you CAN'T use non-certified equipment. Well, you can, but if something fails you have minimal support and it is your butt on the line. When all equipment and software are certified, you aren't liable in the same way: they can't blame you if it fails, and you have the support from the hardware vendors.

For small single-owner businesses and home users, that doesn't apply.

I'm not going to tell a big business to get a 580. I don't even think they would bother to come and ask here, lol.
Nor will they bother with cooling for the card, unlike a single rig at home.
November 13, 2011 3:03:41 AM

gnomio said:
Read RivaTuner's help file, king:
Quote:
When I install the NVStrap driver and select "Quadro" graphics adapter identification mode, my system starts responding slowly after a few minutes of work, then it completely hangs. Any clues?
A: The symptoms you are describing are the results of NVIDIA's protection against the NVStrap's PCI DeviceID override, which was introduced in the Detonator 30.82. When the driver detects that the PCI DeviceID was changed via the NVStrap, it iteratively increases an internal delay counter and purposely spends time in internal wait loops, emulating a progressing system slowdown and finally a system hang. You must use RivaTuner's NVStrapAntiprotection patch script in order to use the NVStrap driver with the latest drivers. GeForce FX and newer display adapter owners may also use the "Use ROM straps for PCI DeviceID programming" option, which allows working around this protection without patching the Detonator/ForceWare driver.


NVIDIA basically locks things away in their drivers. You have to pay $700 for an underclocked GTX 460 to get them unlocked.



Good info; I didn't know that, but it agrees with what I said. It is possible for an advanced user to mod a gaming card into a workstation card, but it is also very risky, not something you would want for software reliability, and you also wouldn't want the CEO finding out about it.

It is a fact that:

1) Workstation drivers are optimized for the workstation apps.

Ever have a new game run badly until NVIDIA/ATI release driver patches? Gaming GPUs are optimized for games, not Maya, AutoCAD, etc.; workstation GPUs are optimized for those apps.


2) Workstation drivers are certified for use with those apps.

In an IT department in a corporation you MUST have that certification. Realize that this software designs bridges, buildings, equipment, etc.; not only money but possibly people's lives are at stake. When they come head-hunting to cover their butts over a mistake, you are wide open to liability if your equipment/software was not certified and you approved it: possible lawsuits and damages.

Scenario: you have a 100K budget for 10 workstations (not your money). You will be doing projects that are mission critical, where huge sums of money and people's lives are at stake. Do you try to save a few hundred dollars and risk liability?

For a home user or small business owner it is very cost-productive to use a gaming GPU, unless your project involves a high risk factor.

This topic is very situation-based. Thinking for a home user or small business is different from the enterprise level, where redundant RAID arrays and ECC RAM are the standard; there is a reason workstation mobos are different from gaming mobos too.

Purely on cost vs performance for non-mission-critical work, YOU ARE RIGHT.

But at the enterprise business level you are wrong.

There is no simpler way for me to put that.

I really have no more to say on this subject. If you can't see the facts right there in front of you, then I am wasting my time.

Thank you.
November 13, 2011 3:06:50 AM

gnomio said:
I'm not going to tell a big business to get a 580. I don't even think they would bother to come and ask here, lol.
Nor will they bother with cooling for the card, unlike a single rig at home.



You posted too fast for me, LOL.

Never mind my last post.

As for the question the OP asked: as a college student doing projects, a gaming card makes a lot of sense :) 

To make sure it will help, since sometimes the gaming cards don't load up properly with some workstation apps, I would definitely run a system monitor to watch the GPU utilization in any app used. Sometimes, if the work is 2D based, the card won't be used. But in situations where the card is being taken advantage of, a gaming card is the way to go for a college student.

Man, my hands are cramped, LOL.
November 13, 2011 6:03:07 AM

The OP has a 560 Ti. He/she can tell us his/her experience with the card and whether there have been any problems with it, apart from it being a bit slow and low on RAM.
November 14, 2011 2:48:02 AM

I just stumbled upon this thread while looking at the 5010M and the GTX 580M, and I wondered: if the Quadro cards are really overpriced because of the reputation they have in, for instance, architectural design circles, professional support solutions, etc., then let's say I am an architect in a small office, which I am. Is there any reason why I should get a single Quadro instead of, for example, GTX 580s in SLI?
I mean, who doesn't like bang for their buck?
Gnomio?
November 14, 2011 5:32:21 AM

etb said:
I just stumbled upon this thread while looking at the 5010M and the GTX 580M, and I wondered: if the Quadro cards are really overpriced because of the reputation they have in, for instance, architectural design circles, professional support solutions, etc., then let's say I am an architect in a small office, which I am. Is there any reason why I should get a single Quadro instead of, for example, GTX 580s in SLI?
I mean, who doesn't like bang for their buck?
Gnomio?

Higher memory bandwidth, higher calculations per polygon, etc.
November 14, 2011 6:40:48 AM

etb said:
I just stumbled upon this thread while looking at the 5010M and the GTX 580M, and I wondered: if the Quadro cards are really overpriced because of the reputation they have in, for instance, architectural design circles, professional support solutions, etc., then let's say I am an architect in a small office, which I am. Is there any reason why I should get a single Quadro instead of, for example, GTX 580s in SLI?
I mean, who doesn't like bang for their buck?
Gnomio?

Because with GeForce cards we accept the odd crash and so on; in industry, with businesses, that is unacceptable. For a home user doing it part time, or as a hobby, or like in the OP's case, a GeForce card is the better choice. But professionally those systems run 24/7, which means your GPU is going to cook. The Quadros are clocked lower, and they used to sport more RAM; they still do. The top-line Quadro sports 6 GB of VRAM, and you won't find that on a consumer card.
SLI is a gaming feature. CAD programs don't make use of it; they don't even support it.
November 14, 2011 11:27:25 AM

^+1 agree
November 14, 2011 3:38:33 PM

Thanks for all the info, guys. Most helpful when trying to decide which way to go, a GTX 580 or a Quadro. It's for my home PC, on which I will be teaching myself Maya, CS5, and CATIA. I have already taught myself how to use AutoCAD, and my 5850 worked fine for that, but it just bought the farm. I also game and use my PC in place of a cable box. So it seems the 580 may be the better route.

Would I benefit from running an entry-level Quadro as well?


Never mind, dumb question.
November 14, 2011 6:32:13 PM

Well, especially since you game too, go with the 580. In a home environment there's really no need for a Quadro unless a consumer card can't handle the project. That pretty much sums up the whole thread.
November 14, 2011 8:16:27 PM

^-1 disagree

1) Stability.

2) Liability issues from not using a certified product, which can cause you to lose your job.

3) Performance: some workstation apps will not use a gaming GPU properly, because the gaming driver won't recognize them and raise the clocks. You can work around it by manually forcing clocks, but that is not optimal in an enterprise environment.

4) Loss of support from software developers and hardware vendors.

To sum up:

A home or small business owner/operator on a non-mission-critical project can possibly use a desktop card.

A major corporation or institution must use a workstation card.

That sums up the whole thread :) 
June 19, 2012 11:26:03 PM

I'm a gaming architect and just wanted to add my two cents to the mix, based on my own experience as a gamer and an architect, not as a computer guru, because I'm not one.

I've used CADD, 3D Studio Max, Photoshop and Illustrator, including for my thesis project, on two computers: a dual-core Athlon 64 Alienware with some model of GeForce, and a Dell Centrino laptop (this was 2005-2006). I tended to take the laptop to the studio and work on the modeling there, and then go home and continue the modeling, or do the rendering, on my desktop. Both worked alright; my setup was at least 2-4 times faster than the terrible computers the school lab offered. AutoCAD is a dinosaur, and I'm sure its code is about as efficient as a Model T, but it is still just a bunch of lines in 2D. I had an 800 MHz Dell, I think with a Voodoo GPU, back some time ago, and I was drafting CADD with that back then.

The point being that any computer you buy today should be blowing away the spec requirements for any type of 2D CADD drafting. For 3D BIM modeling like Revit, I was doing well enough at my last job with a no-frills AMD computer that was a year or so old, and I know that thing didn't have a $1,000+ graphics card in it; I wish I'd bothered to see what it did have.

I've heard a lot of talk about 'accurate' computing, and frankly I don't know what these arguments are about. There are vertices in space that you manipulate. AutoCAD can get stupid with accuracy, down to the billionth of an inch if you want it to. That's why, when drafting, architects turn 'snaps' on and set the accuracy of the drawing, usually to about 1/4, 1/8, or 1/16 of an inch. Some carpenter is going to whip out his measuring tape, and if it isn't a tick mark on his tape, I guarantee he isn't worrying about 0.01 inches, let alone 0.00000000000000000000001 inches. And I don't care what kind of GPU you have: if you set a line to be 2'-0" long, the GPU won't somehow make computational errors such that the line stops being 2'-0" long. Ditto with BIM modeling in Revit or a more contemporary architectural program.
As for rendering, you set up the parameters and you click the render button. Am I supposed to believe that the reflection of the 'glass' material looks better with a corporate-line GPU than a gamer GPU? If so, then I can't tell.
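
(Editor's aside: to put a rough number on that precision point, here is a quick sketch, my own illustration rather than anything from the post, comparing a 1/16" drafting snap with the spacing of 64-bit floating-point coordinates at building scale. Storing coordinates in feet is just an assumed convention.)

Code:
# Quick illustration: how coarse is a 1/16" drafting snap compared with the
# precision of a 64-bit float coordinate at building scale?
# Requires Python 3.9+ for math.ulp.
import math

snap = 1.0 / 16.0 / 12.0          # 1/16 of an inch, expressed in feet
coordinate = 1000.0               # a point roughly 1000 ft from the drawing origin

spacing = math.ulp(coordinate)    # gap to the next representable double
print(f"Snap tolerance:        {snap:.3e} ft")
print(f"Float spacing at 1000: {spacing:.3e} ft")
print(f"The snap is ~{snap / spacing:.1e} times coarser than the float grid")
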

If you've got a good enough graphics card to do the 'current' games, I suspect you'll be just fine for the practice of architecture and its drafting/modeling requirements. If you've got a 580 and it's 'not enough', I'm just wondering what else could be going wrong. Are you being RAM limited? I've worked on files regularly approaching 1 GB in size. The projects aren't any bigger, but somehow the files these CAD programs save just get bigger and bigger.

If I were you, I'd look at beefing up RAM as an option; I admit I don't remember what you said your RAM was. Storage access is also extremely important, as you know if you've had to do a lot of drafting, file copying, saving and loading; maybe a better hard drive setup would help there. Also try all the usual system-cleaning solutions, like a reformat, if you've got a system that SHOULD handle some CADD work but doesn't seem up to snuff.

Cheers,
-A



June 19, 2012 11:42:55 PM

This topic has been closed by Hunter315