Help Choosing My High-End Workstation


BurnerBen

Distinguished
Jun 6, 2011
Hey now,

I'm looking to order a custom workstation and am currently choosing between three builds. These are currently configured through ThinkMate, but comparable systems are available from AVA Direct and other vendors. The decision seems to come down to whether I should build a quiet system with no GPU, a system that's expandable to support multiple GPUs, or something in between. I'd really appreciate:

1. Any thoughts on custom workstation vendors (e.g., ThinkMate vs. AVA Direct vs. Boxx vs. whoever else). I haven't seen many reviews of these services and am curious about users' experiences.

2. Any thoughts on the workstation specs themselves. Are there any changes I should make? Any thoughts on which of the workstations might best suit my needs?

3. Any thoughts on whether "some hot new thing" will render these specifications obsolete within 3 years.

Thanks!
ben

Approximate Purchase Date: next week
Budget Range: $8,000-12,000
System Usage from Most to Least Important: Data Analysis, Numerical Methods, Financial Analysis, Simulation
Parts Not Required: keyboard, mouse, monitor, speakers, OS
Preferred Website(s) for Parts: ThinkMate or AVA Direct
Country of Origin: United States
Parts Preferences: Leaning towards Xeon CPU & Nvidia CUDA GPU
Overclocking: Probably Not
SLI or Crossfire: Maybe?


Quiet Workstation with No GPU:
Processor: 2 x Six-Core Intel® Xeon® X5690 3.47GHz 6.4GT/s QPI 12MB L3 Cache (130W)
Motherboard: Supermicro X8DAi - EATX - Intel® 5520 Chipset
Memory: 48GB (6 x 8GB) PC3-10600 1333MHz DDR3 ECC Registered
Boot Hard Drive: 250GB Intel® 510 Series 2.5" SATA 6.0Gbps Solid State Drive (Multi Cell) (MLC) (34nm)
Secondary Hard Drive: 2.0TB SATA 6.0Gbps 7200RPM - 3.5" - Seagate Constellation™ ES.1
Video Card: AMD Radeon HD 6750 1GB GDDR5 (2xDVI, 1xHDMI) (Fanless)
Price: $7,500



Single GPU Workstation:
Processor: 2 x Six-Core Intel® Xeon® X5650 2.66GHz 6.4GT/s QPI 12MB L3 Cache (95W)
Motherboard: Supermicro X8DAi - EATX - Intel® 5520 Chipset
Memory: 96GB (12 x 8GB) PC3-10600 1333MHz DDR3 ECC Registered
Boot Hard Drive: 250GB Intel® 510 Series 2.5" SATA 6.0Gbps Solid State Drive (Multi Cell) (MLC) (34nm)
Data Hard Drive: 2.0TB SATA 6.0Gbps 7200RPM - 3.5" - Seagate Constellation™ ES.1
Video Card: PNY NVIDIA Quadro 4000 2.0GB GDDR5 (1xDVI-DL, 2xDP, 1xST)
GPU: NVIDIA "Fermi" Tesla C2050 Computing Processor - 3GB GDDR5 - 448 Cores
Price: ~$10,500



GPU-HPC Workstation:
Barebone: Supermicro SuperServer 7046GT-TRF - 6(+2) x SATA - 12x DDR3 - 1400W
Processor: 2 x Six-Core Intel® Xeon® X5690 3.47GHz 6.4GT/s QPI 12MB L3 Cache (130W)
Memory: 96GB (12 x 8GB) PC3-10600 1333MHz DDR3 ECC Registered
Boot Hard Drive: 250GB Intel® 510 Series 2.5" SATA 6.0Gbps Solid State Drive (Multi Cell) (MLC) (34nm)
Secondary Hard Drive: 2.0TB SATA 6.0Gbps 7200RPM - 3.5" - Seagate Constellation™ ES.1
Video Card: PNY NVIDIA Quadro 4000 2.0GB GDDR5 (1xDVI-DL, 2xDP, 1xST)
GPU: NVIDIA "Fermi" Tesla C2050 Computing Processor - 3GB GDDR5 - 448 Cores
(Expandable to 4x GPU)
Price: ~$12,000
 

BurnerBen

Distinguished
Jun 6, 2011
Mostly I'll be doing scientific computing in Matlab and R, but going down to C to implement some custom numerical methods (typically building libraries that get called from Matlab).
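
To give a sense of what I mean, a stripped-down version of one of those C libraries would look something like the MEX gateway below. This is only a sketch: demean() is a hypothetical stand-in for my actual routines, and it gets built from the Matlab prompt with "mex demean.c" and called as y = demean(x).

/* demean.c - hypothetical example of a C routine exposed to Matlab via MEX.
 * Sketch only: demean() stands in for a real custom numerical method.
 * Build from the Matlab prompt with:  mex demean.c
 * Call from Matlab as:               y = demean(x);
 */
#include "mex.h"

/* Plain C numerical routine: subtract the mean from a vector. */
static void demean(const double *x, double *y, mwSize n)
{
    double sum = 0.0;
    mwSize i;
    for (i = 0; i < n; i++) sum += x[i];
    for (i = 0; i < n; i++) y[i] = x[i] - sum / (double)n;
}

/* Gateway function Matlab calls when you invoke demean(x). */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    mwSize n;

    if (nrhs != 1 || !mxIsDouble(prhs[0]) || mxIsComplex(prhs[0]))
        mexErrMsgTxt("demean: expected one real double input.");

    n = mxGetNumberOfElements(prhs[0]);
    plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);
    demean(mxGetPr(prhs[0]), mxGetPr(plhs[0]), n);
}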

I do heavy empirical econometric analysis that, for instance, involves evaluating and testing potentially thousands of model specifications. I also run simulations that take approximately 4-10 hours per iteration on my current system (a Core 2 Q6600 quad-core Dell XPS with 4GB of RAM) but could, in principle, be completely parallelized.
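
For what it's worth, the structure of those simulations is basically the loop below: a rough C/OpenMP sketch, where run_one_replication() is just a hypothetical stand-in for one expensive, independent draw. The point is that the iterations don't depend on each other, so they scale with core count (or with cluster/cloud nodes).

/* sims.c - sketch of an embarrassingly parallel simulation driver.
 * run_one_replication() is a hypothetical stand-in for one expensive,
 * independent simulation draw.
 * Build (gcc):  gcc -O2 -fopenmp sims.c -o sims
 */
#include <stdio.h>
#include <omp.h>

/* Placeholder work so the timer has something to measure. */
static double run_one_replication(int rep)
{
    double acc = 0.0;
    long k;
    for (k = 1; k <= 5000000L; k++)
        acc += 1.0 / (double)(k + rep);
    return acc;
}

int main(void)
{
    enum { N_REPS = 200 };
    double results[N_REPS];
    double t0 = omp_get_wtime();
    int i;

    /* Iterations are independent, so they can run on as many cores as exist. */
    #pragma omp parallel for schedule(dynamic)
    for (i = 0; i < N_REPS; i++)
        results[i] = run_one_replication(i);

    printf("%d replications in %.2f s (first result: %.6f)\n",
           N_REPS, omp_get_wtime() - t0, results[0]);
    return 0;
}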

I'll be looking to do high-dimensional optimization that involves things like Bayesian analysis, MCMC sampling and integration, and inverting 1000x1000 matrices. Neuroimaging is a direction I'll be moving into in about a year, so being able to handle FSL is a critical expandability requirement.
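
To be concrete about the linear algebra: a 1000x1000 double matrix is only about 8 MB, so the bottleneck is the O(n^3) flops rather than memory. The C sketch below shows the kind of call I mean, assuming the LAPACKE interface (shipped with OpenBLAS, MKL, etc.) is available; within Matlab this is just inv() or, better, backslash.

/* inv1000.c - sketch: LU-factor and invert a 1000x1000 matrix with LAPACK.
 * Assumes the LAPACKE C interface is installed (OpenBLAS, MKL, etc.).
 * Storage is small (1000*1000*8 bytes ~ 8 MB); the cost is the O(n^3)
 * flops, which is where core count and BLAS quality matter.
 * Build (one possibility):  gcc -O2 inv1000.c -llapacke -lblas -lm
 */
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

int main(void)
{
    const lapack_int n = 1000;
    double *a = malloc((size_t)n * n * sizeof *a);
    lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
    lapack_int i, j, info;

    if (!a || !ipiv) return 1;

    /* Diagonally dominant test matrix, so it is safely invertible. */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i * n + j] = (i == j) ? (double)n : 1.0;

    /* LU factorize, then form the explicit inverse.  (For solving linear
     * systems, a direct solve via dgesv is usually preferable to inverting.) */
    info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, n, n, a, n, ipiv);
    if (info == 0)
        info = LAPACKE_dgetri(LAPACK_ROW_MAJOR, n, a, n, ipiv);

    printf("inversion %s (info = %d)\n",
           info == 0 ? "succeeded" : "failed", (int)info);
    free(a);
    free(ipiv);
    return info == 0 ? 0 : 1;
}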
 

Alvin Smith

Distinguished
Naw ...
I'll bet this guy is an Arab Prince who has gone abroad to get his MBA ...
STEP (1) List the exact apps you expect to be using, and volume and forms of output.
STEP (2) Sounds like you should have four 24" monitors hanging off of 2 nVidia GPUs.
Note (1) If you intend to do any live trading, your connection and ISP/proximity count.
Note (2) A big, powerful, expensive system is likely not needed ... just a snappy system with multiple displays, unless you intend to model and simulate the whole of all global financial markets.
OBSERVATION: (1) You do not seem to have a clue what apps you will be running, or even in what context (but I am all ears).
SPECULATION: A six-core AM3+ with a couple of GTX 560 Ti GPUs and 16GB DDR3 may be more than you even need ... including 4 large monitors, you would be spending less than ~$5,300 ... Perhaps MUCH less. Really depends on the monitors.
ADVICE: Start with the applications (the main ones) you will most often be running. ... Then go (or let me go) to the application support sites and pull the recommended and preferred system requirements for those apps ... and add a bit of headroom, just for good measure.

Most of the heavy lifting, when it comes to "global market modelling and sims", is performed on mainframe computers ... rows of them ... just because of the sheer volume of data.

Most of the tech-edge that high-power traders enjoy has to do with how many milliseconds it takes for their transactions to execute ... again, ... more about connection speed and ISP trunk priority (ISP proximity, to the electronic markets ... and bandwidth).

Of course ... we have no clue what you will be doing ... your declaration is profoundly vague ... and YOU are ready to just drop (in excess) of $10K ?!

QUESTION: Be honest .... What make/model car do you drive and who purchased it ?

We need more detail, if you really want any sort of cogent recommendation(s).

=Later=

 

Alvin Smith

Distinguished



OH .... You DID clarify ... So ... First, Have you gone to the support sites and looked at the sys-reqs ???

Now, you may tell me that it is not the app but HOW you use it (what sorts of formulae and modules and ... ahem ... data-sets ... you are running).

Obviously, if your models/simulations have been taking that long to run ... sure ... you need gobs more power. BUT ... What resources does your code exploit?

It would help YOU greatly if you could estimate just what the ratio of CPU-vs.-GPU dependency is (duh) ...
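
If you want to put a number on that, instrument the hot path and bucket the wall time into "dense linear algebra a GPU could absorb" vs. "everything else" ... something like the C sketch below. The two *_step() functions are hypothetical stand-ins, NOT your code; the only output that matters is the fraction p.

/* profile_split.c - sketch: split wall time into GPU-offloadable work vs.
 * everything else.  offloadable_step() and serial_step() are hypothetical
 * stand-ins for pieces of the real Matlab/C pipeline.
 * Build (gcc):  gcc -O2 profile_split.c -o profile_split   (add -lrt on old glibc)
 */
#include <stdio.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

/* Stand-ins that just burn CPU so the timers have something to measure. */
static double offloadable_step(void) { double s = 0; long k; for (k = 1; k <= 40000000L; k++) s += 1.0 / (double)k; return s; }
static double serial_step(void)      { double s = 0; long k; for (k = 1; k <= 10000000L; k++) s += 1.0 / (double)k; return s; }

int main(void)
{
    double t_gpuable = 0.0, t_other = 0.0, sink = 0.0;
    int iter;

    for (iter = 0; iter < 5; iter++) {
        double t0, t1, t2;
        t0 = now_sec();
        sink += offloadable_step();          /* dense linear algebra, etc. */
        t1 = now_sec();
        sink += serial_step();               /* bookkeeping, I/O, branchy code */
        t2 = now_sec();
        t_gpuable += t1 - t0;
        t_other   += t2 - t1;
    }

    printf("offloadable fraction p = %.2f (sink = %g)\n",
           t_gpuable / (t_gpuable + t_other), sink);
    return 0;
}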

Just another thought ... You may wish to go to THE CLOUD, because many scientific and financial sims are going there now, where the speed and the power know no limits ... I'm just sayin'.

Sure ... 12 cores and 24GB ... but, ask China ... You can get as much CUDA (read: TESLA) as you want (or need).

http://www.nvidia.com/object/tesla_computing_solutions.html

But it is farcical to imagine that any of US could properly scale your load!

Seriously, my advice is to (1) find out the ratio of CPU to GPU resource dependence, and THEN ... see if your tools are adapted to TESLA solutions, because that is an enormously scalable architecture ... or explore CLOUD COMPUTING APPS, RESOURCES AND SERVICES ... *BEFORE* you drop a wad on a blind stab ...
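
And once you have that ratio, plain old Amdahl's law tells you the ceiling on what any Tesla (or cloud GPU) purchase can give you. Rough back-of-envelope arithmetic with assumed numbers, nothing more:

/* amdahl.c - back-of-envelope: how the CPU/GPU split caps the payoff from an
 * accelerator.  Pure arithmetic with assumed numbers, not a benchmark.
 * Build (gcc):  gcc -O2 amdahl.c -o amdahl
 */
#include <stdio.h>

/* Amdahl's law: overall speedup when a fraction p of runtime is sped up by
 * a factor s and the remaining (1 - p) is left untouched. */
static double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void)
{
    const double gpu_speedup = 10.0;   /* assumed kernel-level speedup, illustrative only */
    const double fractions[] = { 0.25, 0.50, 0.75, 0.90, 0.99 };
    int i;

    printf("offloadable fraction -> overall speedup (assuming %.0fx on the offloaded part)\n",
           gpu_speedup);
    for (i = 0; i < (int)(sizeof fractions / sizeof fractions[0]); i++)
        printf("  p = %.2f  ->  %.2fx\n", fractions[i], amdahl(fractions[i], gpu_speedup));
    return 0;
}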

... But, sure ... A dual-socket 12-core (24-thread) system with a couple of mammoth Quadros (or more) will surely take much less time than your old platform. That much is a no-brainer and, hey, if you have the monies? ... Why not? ... (but GTX GPUs are very fast, much cheaper, and they have gobs of CUDA cores, so ... shrug ... unless there is some PARTICULAR app requirement that stresses Quadro specifically, just go with GTX) ... Noise?! ... It is not likely to be quiet ... yes ... there are many ways to mitigate noise (PSUs, cases, facilities, even foam baffles) ... But ... The core architecture is the FIRST thing you should be nailing down ... not noise.

CLOUD ?? ... TESLA? ... CUDA? (Quadro? or GTX?) ... and fast SSDs and gobs of DRAM, in any case (other than CLOUD).

Sounds like you are using standard libraries and languages and writing the code, yourself, so ... YOU must find out what level of scalability is required, and what sort of ongoing costs are justified.

In short, you are smarter than me (and US) about your tools and your own code, models and data sets ... and you have the best visibility on what your needs will be in a year ... If the ROI justifies it, I would say TESLA or CLOUD-BASED solutions would be the most granular and would allow project evolution to be smoother (less platform disruption) going forward.

Hmmm?

=burrrp=
(BTW ... YES ... I AM AN ASS)
 

BurnerBen

Distinguished
Jun 6, 2011
Just wanted to say thanks for the feedback. It helped me focus on the application and usage side (which I understand better) rather than get caught up in the details of hardware (where I'm lost).

FWIW, I ended up going with the quiet workstation. I found some YouTube videos comparing sound levels of these systems at load, and the big GPU-based workstations just sounded too loud for me to concentrate while working. I do a lot of theory in addition to programming and need to be able to think while working out the math.

This will give me a good compute platform for the next 1-2 years and a fine development environment for the life of the system. For scalability, I'll likely follow your advice and turn to a cloud-based solution, using either Penguin or EC2 when I need bursts of compute power, both now and down the road if I find the workstation lagging. Interestingly, Penguin is now offering cloud systems with Tesla GPUs, which could grow into a very useful resource.

Thanks again for taking the time to share your thoughts, they were quite helpful.
 