I am a long-time tech enthusiast who hasn't bothered to keep up with hardware advances in a while (it got a lot harder around 1995 or so, and I'm lazy). So I have a decent idea of what the different technologies accomplish and why, but I know very few specifics (for example, when and where SSD vs. SAS vs. SATA is best).
I am in a position to build a set of servers with a single goal: ridiculous performance. Because it's relevant, the use case is high frequency trading. That should give you an idea of the kind of budget I can have and what the potential rewards are. I don't know what flavor of *nix yet, but it will be 64-bit with a realtime kernel.
I will source W5590 chips if I can get them, so my first question is: which mobo? It seems that SuperMicro and Intel would be the safe bets. I need the X58 chipset obviously, but beyond that, it seems my biggest concern will be PCI-E 2.0 capability; we can easily end up dealing with data loads that push 10GbE close to max, and latency is the biggest issue we face. I'll probably be looking at 2-4 10GbE ports per box. (No interest in InfiniBand; we'll accept a small performance hit to avoid vendor lock-in.)
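To sanity-check my own reasoning on why PCI-E 2.0 matters here, this is the back-of-envelope math I'm working from (nominal figures only: 10GbE line rate of 1.25 GB/s per port, and roughly 500 MB/s of usable bandwidth per PCIe 2.0 lane, ignoring protocol overhead):

```python
# Back-of-envelope bandwidth check using nominal figures; real-world
# throughput will be somewhat lower due to protocol overhead.
GBE10_BYTES_PER_SEC = 10e9 / 8        # 10GbE line rate: 1.25 GB/s per port
PCIE2_LANE_BYTES_PER_SEC = 500e6      # PCIe 2.0: ~500 MB/s usable per lane

ports = 4                             # worst case: four 10GbE ports per box
aggregate = ports * GBE10_BYTES_PER_SEC   # 5 GB/s total ingress/egress

lanes_needed = aggregate / PCIE2_LANE_BYTES_PER_SEC
print(lanes_needed)                   # 10 lanes, so an x16 slot (or two x8) covers it
```

So even the four-port case fits comfortably in the lane budget of an X58 board, assuming the slots actually run at their advertised widths.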
Beyond that, if these boxes won't be used for massive storage loads, how can I minimize the effect of disk activity? Is it just a matter of speed, and so the answer is SSD?
For those who know all the things I have questions about: I'm well aware that I'm in way over my head, but I have to start somewhere.
Thanks in advance for any help.