Server vs multiple desktops

scolty

Honorable
I currently have an i7-3820 and I'm doing some data mining/number crunching. Unfortunately it's taking a long time to run on my PC, so I'm looking at getting either a server or a couple of desktops and splitting the workload between them. The question is: does the performance boost from a server offset its cost, compared to two or more lower-spec desktops whose combined price is the same as the server's?

USAFRet

Titan
Moderator
That depends entirely on how the application is written.

Can it take advantage of more than one machine at a time? Is it efficient in use of RAM? Cores?
Is the code well written?

The only way to know is to either try it, or find someone with that specific tool who has done it both ways.

scolty

Honorable


Yeah, I was planning to use WCF to communicate between them. Efficient? Probably not. I've already looked into GPUs and coprocessors, but warp divergence is a major issue with GPUs, so unfortunately they aren't an option. The calculations usually take a couple of days to run on my current system. I believe they are CPU-bound, as all the data is loaded into RAM (a couple of gigs) and the cores seem to be running at 100%.
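For what it's worth, this is roughly what I had in mind for the worker side. It's only a minimal sketch: the IWorkerNode contract, the port, and Crunch (a stand-in for the real calculation) are all made up for illustration.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IWorkerNode
{
    // Ship a chunk of the data set to this node and get the results back.
    [OperationContract]
    double[] ProcessChunk(double[] chunk);
}

public class WorkerNode : IWorkerNode
{
    public double[] ProcessChunk(double[] chunk)
    {
        var result = new double[chunk.Length];
        for (int i = 0; i < chunk.Length; i++)
            result[i] = Crunch(chunk[i]);
        return result;
    }

    private static double Crunch(double x)
    {
        return Math.Sqrt(x) * x; // stand-in for the real number crunching
    }
}

class WorkerHost
{
    static void Main()
    {
        // The default WCF quotas are far too small for multi-megabyte chunks.
        var binding = new NetTcpBinding();
        binding.MaxReceivedMessageSize = 256 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = int.MaxValue;

        using (var host = new ServiceHost(typeof(WorkerNode),
            new Uri("net.tcp://localhost:9000/worker")))
        {
            host.AddServiceEndpoint(typeof(IWorkerNode), binding, "");
            host.Open();
            Console.WriteLine("Worker listening; press Enter to stop.");
            Console.ReadLine();
        }
    }
}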


Traciatim

Distinguished
It really depends on the data.

You can buy an E5-2687W for about two grand just for the CPU, and then you'd still need a server-grade motherboard and memory . . . or, for the same total cost, you could buy four i7-4770Ks, each with a cheap-ish overclocking motherboard and memory.

Does your workload benefit from Hyper-Threading? If not, maybe the i5-4670K (or even an AMD FX-8350) is the better choice, and the savings buy you one additional machine.

Could your code be ported to OpenCL or CUDA and run on a video card? Maybe you'd be better served by an i3 (or Pentium) with a 7970 or a 780 in each machine . . . or just one machine with four of them.

The answer depends on so many factors that there's no single right response.

scolty

Honorable


I've had a look into GPU processing, but due to the nature of the analysis I can't get around the warp divergence issue. The only advantage of a server I can see (I'm possibly missing others) is that servers support ECC memory.

I haven't tested Hyper-Threading, but I'm assuming it won't help, since the work appears to be CPU-bound and the extra context switching would probably reduce performance.

Yeah, I appreciate it's a very open-ended question; I was just looking for some general pointers and to make sure I hadn't missed anything.

Traciatim

Distinguished


Since you already have an HT-enabled processor, why not run a test with it enabled and then disabled? The price difference between a 4670K and a 4770K is about a hundred bucks (roughly 40%) . . . do you get more than a 40% improvement with HT on vs. off?
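If you'd rather not keep rebooting into the BIOS, here's a rough software-only approximation, assuming your code is .NET: cap the degree of parallelism at the physical core count and compare it against all logical cores. It isn't identical to disabling HT (Windows still sees 8 logical cores and schedules as it likes), but it gives a quick first read. CrunchChunk is just a stand-in for your real workload.

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class HtTest
{
    // Stand-in for one unit of the real number-crunching work.
    static double CrunchChunk(int seed)
    {
        double acc = seed;
        for (int i = 1; i < 5000000; i++)
            acc += Math.Sqrt(acc + i) % 3.0;
        return acc;
    }

    // Time 'chunks' units of work using at most 'threads' worker threads.
    static TimeSpan Run(int threads, int chunks)
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = threads };
        var sw = Stopwatch.StartNew();
        Parallel.For(0, chunks, options, i => CrunchChunk(i));
        sw.Stop();
        return sw.Elapsed;
    }

    static void Main()
    {
        int logical = Environment.ProcessorCount; // 8 on an i7-3820 with HT on
        int physical = logical / 2;               // assumes HT is enabled (2 threads/core)

        Run(logical, 8); // warm-up

        Console.WriteLine("{0} threads: {1}", physical, Run(physical, 64));
        Console.WriteLine("{0} threads: {1}", logical, Run(logical, 64));
    }
}

If the all-logical-cores run isn't at least ~40% faster than the physical-cores run, the 4670K is the better buy.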

If HT doesn't help, you're doing the work entirely on CPUs, and it's easy to break the job into chunks and spread them across machines, then the best choice seems to be a few 4670Ks with the bejeezbus overclocked out of them.
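The fan-out side doesn't need much code either. A minimal coordinator sketch, reusing the hypothetical IWorkerNode contract from the worker sketch earlier in the thread (machine names and port are placeholders):

using System;
using System.ServiceModel;
using System.Threading.Tasks;

// Same illustrative contract as the worker sketch above.
[ServiceContract]
public interface IWorkerNode
{
    [OperationContract]
    double[] ProcessChunk(double[] chunk);
}

class Coordinator
{
    static void Main()
    {
        // Placeholder worker addresses; substitute your actual LAN machines.
        string[] nodes =
        {
            "net.tcp://machine1:9000/worker",
            "net.tcp://machine2:9000/worker"
        };

        // Stand-in data set; in practice this is the couple of gigs already in RAM.
        double[] data = new double[1000000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        // Raise the default quotas to match the worker's binding.
        var binding = new NetTcpBinding();
        binding.MaxReceivedMessageSize = 256 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = int.MaxValue;

        // One contiguous chunk per machine, fanned out in parallel.
        int chunkSize = (data.Length + nodes.Length - 1) / nodes.Length;
        var tasks = new Task<double[]>[nodes.Length];
        for (int n = 0; n < nodes.Length; n++)
        {
            int start = n * chunkSize;
            int len = Math.Min(chunkSize, data.Length - start);
            var chunk = new double[len];
            Array.Copy(data, start, chunk, 0, len);

            var factory = new ChannelFactory<IWorkerNode>(
                binding, new EndpointAddress(nodes[n]));
            IWorkerNode proxy = factory.CreateChannel();
            tasks[n] = Task.Run(() => proxy.ProcessChunk(chunk));
        }

        Task.WaitAll(tasks);
        Console.WriteLine("All chunks processed.");
    }
}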