I am a long-time tech enthusiast who hasn't bothered to keep up with hardware advances in a while (it got a lot harder around 1995 or so, and I'm lazy). So I have a decent idea of what the different technologies accomplish and why, but I know very few specifics (for example, when and where SSD vs. SAS vs. SATA is best).
I am in a position to build a set of servers with a single goal: ridiculous performance. Because it's relevant, the use case is high-frequency trading. That should give you an idea of the kind of budget I have to work with and what the potential rewards are. I don't know what flavor of *nix we'll run yet, but it will be 64-bit with a realtime kernel.
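To be concrete about the realtime kernel part: the plan is to pin the hot threads to cores and run them under SCHED_FIFO. A minimal Linux sketch of the sort of thing I mean (the core number and priority are placeholders, and setting SCHED_FIFO needs root or CAP_SYS_NICE):

    // Pin the hot thread to a core and request realtime scheduling.
    // Linux-specific; compile with g++, which defines _GNU_SOURCE.
    #include <sched.h>
    #include <cstdio>

    int main()
    {
        // Pin to core 1 so the scheduler never migrates this thread.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1, &set);                 // core number is a placeholder
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            std::perror("sched_setaffinity");

        // FIFO realtime priority; 80 is an arbitrary placeholder.
        sched_param sp;
        sp.sched_priority = 80;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            std::perror("sched_setscheduler");

        // ... latency-critical work goes here ...
        return 0;
    }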
I will source W5590 chips if I can get them, so my first question is: which mobo? It seems that SuperMicro and Intel would be the safe bets. I need the X58 chipset, obviously, but beyond that, my biggest concern seems to be PCI-E 2.0 capability; we can easily end up dealing with data loads that push 10GbE close to its max, and latency is the biggest issue we face. I'll probably be looking at 2-4 10GbE ports per box. (No interest in InfiniBand; we'll accept a small performance hit to avoid vendor lock-in.)
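To illustrate how latency-obsessed we are: the first thing we do to every TCP socket is disable Nagle's algorithm, roughly like the sketch below (error handling trimmed). Nothing exotic; it just trades bandwidth efficiency for getting small messages onto the wire immediately.

    // Disable Nagle's algorithm so small writes go out immediately
    // instead of being coalesced with later ones.
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <cstdio>

    int make_low_latency(int fd)
    {
        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) != 0) {
            std::perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }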
Beyond that, if these boxes won't be used for massive storage loads, how can I minimize the effect of disk activity? Is it just a matter of speed, in which case the answer is SSD?
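For context on the software side: my instinct is to keep all writes off the latency-critical path regardless of how fast the disks are, e.g. by handing log records to a background writer thread. A rough C++ sketch of the shape I mean (unbounded queue and no shutdown handling, just to show the idea):

    // A background logger so the hot path never blocks on disk I/O.
    #include <condition_variable>
    #include <fstream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    class AsyncLogger {
    public:
        explicit AsyncLogger(const std::string& path)
            : out_(path.c_str()), writer_(&AsyncLogger::drain, this) {}

        // Called from the hot path: just enqueue, never touch the disk.
        void log(std::string line) {
            {
                std::lock_guard<std::mutex> lk(mu_);
                q_.push(std::move(line));
            }
            cv_.notify_one();
        }

    private:
        void drain() {
            for (;;) {
                std::unique_lock<std::mutex> lk(mu_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                std::string line = std::move(q_.front());
                q_.pop();
                lk.unlock();
                out_ << line << '\n';   // the only place disk I/O happens
            }
        }

        std::ofstream out_;
        std::mutex mu_;
        std::condition_variable cv_;
        std::queue<std::string> q_;
        std::thread writer_;
    };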
For those who know all the things I have questions about: I'm well aware that I'm in way over my head. But I have to start somewhere.
Seems like those cases are fine. I would recommend a 3U or 4U chassis just for space reasons, and I don't think you would need a ton of these servers to do what you need (though I have been wrong before); that is a lot of processing power in one computer.
I would not build them myself if price is no object; your time is better spent on programming!
I would agree that dual-socket Nehalem is the way to go. My app just went live at a large investment bank on HP DL380 G6 boxes with dual X5570s and 72 GB of RAM, and those machines are absolutely fantastic (other vendors make similar boxes). I run two disks in RAID1 for the OS and then three disks in RAID5 for local storage, even though the real data is on a SAN. They come standard with two gigabit Ethernet ports, but you can add additional controllers. The coming 6-core 32nm CPUs would probably be better, but they are not out yet.
I am assuming (hoping) that you are using C++, given the performance requirements. In that case, make absolutely sure that you get the Intel compiler; the optimizations it can perform for Nehalem make a real difference compared to GCC (my app is a Monte Carlo based risk app).
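To make that concrete: the payoff shows up in tight numeric loops like the toy one below, which the compiler can turn into packed SSE on Nehalem. The flags are from memory, so check them against your compiler versions, but it is along the lines of icc -O3 -xSSE4.2 -ipo versus g++ -O3 -march=native:

    // Toy Monte Carlo pi estimate: the kind of tight numeric loop
    // where auto-vectorization makes a measurable difference.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main()
    {
        const int n = 1 << 24;
        std::vector<double> x(n), y(n);
        // RNG calls hoisted out of the hot loop so it can vectorize.
        for (int i = 0; i < n; ++i) {
            x[i] = rand() / (double)RAND_MAX;
            y[i] = rand() / (double)RAND_MAX;
        }

        // This is the loop the vectorizer actually speeds up.
        int inside = 0;
        for (int i = 0; i < n; ++i)
            inside += (x[i] * x[i] + y[i] * y[i] <= 1.0);

        std::printf("pi ~= %f\n", 4.0 * inside / n);
        return 0;
    }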
I was pleasantly surprised at how well that RAID array performs, given that we cut corners by using 10k RPM SAS drives. I would think you would be OK without SSDs. What would the application use the disks for?