BurnerBen said:
Mostly I'll be doing scientific computing in Matlab and R but going down to C for implementing some custom numerical methods (typically building libraries called from Matlab).
I do heavy empirical econometric analysis that, for instance, includes evaluating and testing potentially thousands of model specifications. I also do simulations that run approximately 4-10 hours for each iteration on my current system (a Core 2 Q6600 quad-core Dell XPS with 4GB RAM) but could, in principle, be completely parallelized.
I'll be looking to do high-dimensional optimization that involves things like Bayesian analysis, MCMC sampling and integration, and inverting 1000x1000 matrices. Neuroimaging is a future direction that I'll be going in, in about a year or so, so being able to handle FSL will be a critical expandability requirement.
OH ... you DID clarify ... So ... first: have you gone to the support sites (Matlab, R, FSL) and looked at the sys-reqs???
Now, you may tell me that it is not the app but HOW you use it (what sorts of formulae, modules, and ... ahem ... data-sets you are running).
Obviously, if your models/simulations have been taking that long to run ... sure ... you need gobs more power. BUT ... what resources does your code exploit?
It would help YOU greatly if you could estimate just what your ratio of CPU-to-GPU dependency is (duh) ... i.e., how much of each run is spent in kernels a GPU could actually take over.
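A quick way to act on that ratio, once you have it, is Amdahl's law: if a fraction p of each run sits in GPU-friendly kernels (dense linear algebra, element-wise math) and the GPU speeds those up s-fold, the overall win is 1/((1-p)+p/s). A minimal C sketch ... the 0.7 and the speedup values are placeholder numbers, NOT measurements; pull your real p from a profiler first:

```c
#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction p of the runtime
 * is accelerated by a factor s and the rest stays serial. */
static double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void)
{
    double p = 0.7;                        /* placeholder GPU-friendly fraction */
    double s[] = { 5.0, 10.0, 50.0, 1e9 }; /* hypothetical kernel speedups */
    for (int i = 0; i < 4; i++)
        printf("kernel speedup %10.0fx -> overall %.2fx\n", s[i], amdahl(p, s[i]));
    return 0;
}
```

Even at p = 0.7 with an infinitely fast GPU, you cap out around 3.3x overall ... which is exactly why you measure p *BEFORE* buying Teslas.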
Just another thought ... you may wish to go to THE CLOUD, because many scientific and financial sims are going there now, where the speed and the power know no limits ... I'm just sayin'.
Sure ... 12 cores and 24GB ... but ask China, whose Tesla-packed supercomputers now top the charts ... You can get as much CUDA (read: TESLA) as you want (or need).
http://www.nvidia.com/object/tesla_computing_solutions....
But it is farcical to imagine that any of US could properly scale your load!
Seriously, my advice is to (1) find out the ratio of CPU-to-GPU resource dependence, THEN (2) see if your tools are adapted to TESLA solutions, because that is a massively scalable architecture ... or (3) explore CLOUD computing apps, resources, and services ... *BEFORE* you drop a wad on a blind stab ...
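On point (2): your 1000x1000 inversions are exactly the kind of dense kernel TESLA eats, so baseline them on the CPU first and see how much there is to accelerate. A C sketch using LAPACKE ... assuming liblapacke is installed on your box; and, in practice, you'd usually solve the linear system instead of forming the explicit inverse:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <lapacke.h>

#define N 1000

int main(void)
{
    double *A = malloc(sizeof *A * N * N);
    lapack_int *ipiv = malloc(sizeof *ipiv * N);

    /* Diagonally dominant test matrix, so the factorization is well-behaved. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i * N + j] = (i == j) ? N : 1.0 / (1 + i + j);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* LU factorization, then inversion from the factors. */
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, N, N, A, N, ipiv);
    if (info == 0)
        info = LAPACKE_dgetri(LAPACK_ROW_MAJOR, N, A, N, ipiv);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("invert %dx%d: info=%d, %.3f s\n", N, N, (int)info,
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    free(A);
    free(ipiv);
    return 0;
}
```

If that number is a tiny slice of your 4-10 hour iteration, a GPU buys you little; if it dominates, TESLA (or a GPU-backed cloud instance) starts making real sense.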
... But, sure ... a dual-socket 12-core (24-thread) system with a couple of mammoth Quadros (or more) will surely take much less time than your old platform. That much is a no-brainer and, hey, if you have the monies? ... Why not? ... (But GTX GPUs are very fast, much cheaper, and they have gobs of CUDA cores, so ... shrug ... unless there is some PARTICULAR app requirement that stresses Quadro, specifically, as a req, just go with GTX.) Noise?! ... It is not likely to be quiet ... yes ... there are many ways to mitigate noise (PSUs, cases, facilities, even foam baffles) ... But ... the core architecture is the FIRST thing you should be nailing down ... not noise.
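And on the CPU side: since each of your simulation iterations is independent, a 12-core box pays off with almost zero code surgery. A minimal C sketch with OpenMP (compile with gcc -fopenmp) ... run_iteration and NITER stand in for your actual model, they are NOT real names from your code:

```c
#include <stdio.h>
#include <omp.h>

#define NITER 24  /* placeholder: number of independent simulation runs */

/* Stand-in for one long simulation run; the seed keeps it reproducible. */
static double run_iteration(unsigned seed)
{
    double acc = 0.0;
    for (unsigned i = 1; i <= 1000000u * (seed % 7 + 1); i++)
        acc += 1.0 / i;
    return acc;
}

int main(void)
{
    double results[NITER];
    double t0 = omp_get_wtime();

    /* Embarrassingly parallel: one pragma spreads the independent runs
     * across every core (and socket) the machine has. */
    #pragma omp parallel for schedule(dynamic)
    for (int k = 0; k < NITER; k++)
        results[k] = run_iteration((unsigned)k + 1);

    printf("ran %d iterations (last result %.6f) in %.2f s on up to %d threads\n",
           NITER, results[NITER - 1], omp_get_wtime() - t0, omp_get_max_threads());
    return 0;
}
```

The schedule(dynamic) bit matters when iterations take uneven time ... a 4-hour run and a 10-hour run should not be welded to the same core.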
CLOUD ?? ... TESLA? ... CUDA? (Quadro? or GTX?) ... and fast SSDs and gobs of DRAM, in any case (other than CLOUD).
Sounds like you are using standard libraries and languages and writing the code yourself, so ... YOU must find out what level of scalability is required, and what sort of ongoing costs are justified.
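And since that C code is called from Matlab, the MEX gateway is the natural choke-point to hang timers on, so you can split wall time between the linear algebra and everything else. A bare-bones gateway sketch ... mysolver and the one-array interface are placeholders, not your actual entry points:

```c
#include "mex.h"

/* Placeholder numerical kernel: scales every element in place. */
static void mysolver(double *x, mwSize n)
{
    for (mwSize i = 0; i < n; i++)
        x[i] *= 2.0;
}

/* MEX gateway: y = mysolver_mex(x) for a real double array x. */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]) || mxIsComplex(prhs[0]))
        mexErrMsgIdAndTxt("mysolver:input", "expected one real double array");

    /* Duplicate the input so Matlab's copy stays untouched, run the kernel
     * on the duplicate, and hand it back as the output. */
    plhs[0] = mxDuplicateArray(prhs[0]);
    mysolver(mxGetPr(plhs[0]), mxGetNumberOfElements(plhs[0]));
}
```

Wrap the kernel call in timers and you have your CPU-vs-GPU ratio from real runs, not guesswork.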
In short, you are smarter than me (and US) about your tools and your own code, models, and data sets ... and you have the best visibility on what your needs will be in a year ... If the ROI justifies it, I would say "TESLA or CLOUD-BASED solutions would be most granular and allow project evolution to be smoother (less platform disruption) looking forward."
Hmmm?
=burrrp=
(BTW ... YES ... I AM AN ASS)