Microsoft is pushing into the High Performance Computing (HPC) market. This week the company reached a milestone with the release of the second beta of Windows HPC Server 2008 R2.
In a post on the TechNet blog, Ryan Waite, Product Unit Manager of Microsoft's High Performance Computing Group, listed the following new features and developments:
· Scalability and performance. We’ve continued to improve scalability, regularly testing on the 1,000-node cluster in Microsoft Research—we plan to pursue Top500 runs that prove much greater scalability. We also know customers want to make use of spare processing cycles as part of their overall HPC infrastructures. Windows HPC Server 2008 R2 Beta 2 now integrates with workstations running Windows 7, enabling organizations to use them as cluster compute nodes.
· Simplified parallelism. HPC starts with parallel code, so my team is particularly excited about next week’s Visual Studio 2010 launch. Windows HPC Server 2008 R2 empowers parallel development, providing a platform for traditional (batch-based) and service-oriented (interactive) HPC applications. And Visual Studio 2010 helps developers create, debug, and trace HPC applications using already-familiar tools.
· Excel integration and ease of use. Whenever we speak with scientists, engineers, and analysts about their HPC needs, we hear how they rely on Microsoft Excel for computations and how they’d love to scale those computations to run in parallel on a cluster. We’ve responded with HPC Services for Excel 2010. Top systems integrators and consultants like Wipro, Infusion and Grid Dynamics are now ready to help customers deploy and take advantage of HPC Services for Excel 2010.
· Interoperability options. We’ve heard from customers that “rip and replace” often isn’t a viable option for building out their clusters. So we have started collaborating with industry-leading HPC management companies like Adaptive Computing, Clustercorp and Platform Computing to enable hybrid options where Windows HPC Server and Linux work together. Whether it’s a dual-boot or dynamic cluster, hybrid options help organizations get more out of HPC investments and provide broader access to HPC resources.
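The "simplified parallelism" point is about the batch-style, embarrassingly parallel workloads these clusters typically run: many independent units of work fanned out to workers, then a reduce step. As a minimal, Windows-HPC-agnostic sketch (using Python's standard-library multiprocessing in place of MPI or the HPC job scheduler; the `simulate` function is a made-up stand-in workload):

```python
# Illustrative only: the fan-out / reduce pattern of a batch HPC job,
# shown with Python's stdlib multiprocessing rather than a real cluster API.
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for one independent unit of work (e.g. one Monte Carlo run).
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % (2**31)  # simple LCG churn
    return x % 100

if __name__ == "__main__":
    with Pool(2) as pool:                # workers stand in for compute nodes
        results = pool.map(simulate, range(8))   # fan out independent tasks
    print(sum(results) / len(results))   # reduce step back on the head node
```

On a real cluster the `pool.map` call would be replaced by job submission to the scheduler, but the decomposition into independent tasks is the same.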
Linux currently dominates the HPC/Supercomputing market
To start, I have to disclaim that I currently harbor one of man's greatest known reserves of HPC ignorance...
MS has done a pretty good job with the Server 2008 series of stripping it down to a very small, feature-light footprint. I'd bet they have taken this a step further for this particular application.
There are some resources out there; here are some undergrad projects that cover the basics of HPC:
I like the idea of integrating with Windows 7 to use spare client CPU cycles. When you look at a typical office, imagine 100 clients giving spare cycles during the night or even during the regular workday.
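A quick back-of-envelope estimate of what cycle harvesting could yield (every number here is an illustrative assumption, not a measurement):

```python
# Back-of-envelope: aggregate capacity from harvested workstation cycles.
# All figures are assumptions chosen for illustration.
workstations = 100        # idle office PCs enrolled as compute nodes
cores_per_ws = 4
idle_hours_per_day = 14   # nights plus slack during the workday
utilization = 0.5         # assume only half of idle time is actually usable

core_hours = workstations * cores_per_ws * idle_hours_per_day * utilization
print(core_hours)  # 2800.0 core-hours per day
```

Even with conservative utilization, that is a meaningful chunk of free compute for batch jobs that can tolerate nodes coming and going.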
Really? Most of the clusters I have worked with do have their own local drive. The one thing I always wondered about is the price. A $200/node OS is a lot more than the free Rocks using CentOS, especially when you have 1,000 nodes!
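The licensing arithmetic behind that objection is straightforward (the $200/node figure is the one quoted in the thread, not an official price):

```python
# Per-node OS licensing at cluster scale, using the thread's rough figure.
nodes = 1000
license_per_node = 200          # USD per node, as cited in the comment above

print(nodes * license_per_node)  # 200000 -> $200,000 vs. $0 for Rocks/CentOS
```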
Large clusters would have a hard drive failing every 5 minutes. Check out my link to the LANL summer institutes above for more details on diskless nodes and management. We used CentOS too, and you can push that out to nodes without disks (after removing unnecessary features). Storage arrays are more expensive, but they are more failure-tolerant than the cluster. You lose money when your cluster goes down, so the storage array is cheaper in the long run.
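The reason disk failures dominate at scale is that the aggregate failure rate grows linearly with drive count. A rough sketch (the MTBF and cluster size are illustrative assumptions, not data from any specific cluster):

```python
# Why per-node disks hurt at scale: aggregate failure rate ~ number of drives.
# Both numbers below are illustrative assumptions.
drive_mtbf_hours = 1_000_000   # optimistic spec-sheet MTBF for one drive
drives = 10_000                # one local disk per node on a 10k-node cluster

cluster_mtbf_hours = drive_mtbf_hours / drives
print(cluster_mtbf_hours)  # 100.0 -> roughly one failed drive every ~4 days
```

Real-world failure rates are typically worse than spec-sheet MTBF, which is exactly why diskless nodes plus a fault-tolerant storage array are attractive.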