
ASRock Brings Supercomputing to X58 Mobos

Source: Tom's Hardware US | 19 comments

ASRock said yesterday that it will unveil its personal supercomputer system at CeBIT '09, built around its ASRock X58 SuperComputer motherboard.

ASRock says its X58 SuperComputer motherboard was born of the world's need for something practical and cost-effective: "High specification and exquisite are not adequate anymore these days." The company may be right in many respects, especially with consumers pinching fannies as well as pennies during hard economic times. While the global economy continues its plunge down the galactic toilet, manufacturers looking to caress enthusiasts with more bang for the buck are a welcome sight indeed.

Although the term supercomputer conjures images of HAL and R2-D2, ASRock says its X58 SuperComputer is not just another X58 chipset motherboard. In fact, it's the only X58 motherboard worldwide that can be set up as an Nvidia Tesla Personal Supercomputer. Instead of housing gigantic, expensive workstations that would make any hardware enthusiast claustrophobic, the ASRock X58 SuperComputer brings all that monstrous processing power to consumers on a practical, affordable budget. Thus, the "personal" supercomputer is born.

“The new X58 Supercomputer motherboard from ASRock is an excellent platform upon which to build an Nvidia Tesla Personal Supercomputer," said Andrew Walsh, general manager, personal supercomputing at NVIDIA. "Supporting up to 4 Nvidia GPUs, each delivering 1 teraflop of processing power, desktop systems built around the X58 Supercomputer motherboard will accelerate the pace of discovery for scientists and engineers around the world."

According to ASRock, the X58 motherboard is "exclusively designed" with four PCI-E x16 slots with double-wide spacing, making it very flexible for installing any Nvidia SLI or ATI CrossFire graphics combination. Supposedly, the computing power of a personal supercomputer equals the performance of 250 workstations--although what kind of workstations ASRock is referring to is beyond us. Finally, ASRock's product blew the socks off Richard Chou, General Manager of Computer Product Business at Leadtek Research Inc.

“We are thrilled to collaborate with ASRock to showcase at CeBIT 2009," he said. “As a leading manufacturer of professional graphics card solutions, this is a great opportunity to work with ASRock, a leading brand in motherboard manufacturing and design. We are confident this cooperation will not only demonstrate the personal supercomputer capabilities of Tesla and the X58 motherboard, but also bring workstation users a more valuable workstation experience."

ASRock said that three Tesla C1060 cards and an additional Quadro card can fit within the motherboard's four double-wide-spaced PCI-E x16 slots. LL Hsu, Chief Operating Officer of ASRock Inc., said the motherboard now brings supercomputing power to the Intel Core i7 platform, and is a good choice for gamers who want to stuff their high-end rigs with SLI or CrossFireX technology and still be able to pick up a Big Mac meal at McDonald's. Is this motherboard the ideal component for taking over the world? That remains to be seen, but we're betting the frame rates will melt your eyeballs.

Just as a taste, here are a few specs to make you salivate in utter hardware bliss:

  • Intel Socket 1366 Core i7 Processor Extreme Edition / Core i7 Processor Supports Intel Dynamic Speed Technology
  • System Bus up to 6400 MT/s; Intel® QuickPath Interconnect
  • ASRock DuraCap (2.5 x longer life time), 100% Japan-made high-quality Conductive Polymer Capacitors
  • Intel X58 + ICH10R Chipsets
  • Supports Triple Channel DDR3 2000(OC)/1866(OC)/1600(OC)/1333(OC)/1066 (6 x DIMM slots), non-ECC, un-buffered memory, Max. capacity up to 24GB
  • Supports DDR3 ECC, buffered memory with Intel® Workstation 1S Xeon processors 3500 series
  • Supports Intel Extreme Memory Profile (XMP)
  • 4 x PCI Express 2.0 x16 slots (blue @ x8 / x16 mode, orange @ x8 / N/A mode) (Double-wide slot spacing between each PCI-E slot)
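The x8/x16 slot modes in that last bullet come down to PCIe lane budgeting. As a rough sketch, assuming the commonly cited figure of 36 PCIe 2.0 lanes from the X58 IOH (the exact slot wiring is board-specific):

```python
# PCIe lane-budget sketch for four "x16" slots on an X58 board.
# Assumes the commonly cited 36 PCIe 2.0 lanes from the X58 IOH.
available = 36

# Four true x16 slots would need 64 lanes -- more than the chipset offers:
assert 4 * 16 > available

# Dropping each slot to x8 fits within the budget, with lanes to spare:
assert 4 * 8 <= available

# Rough per-slot bandwidth at PCIe 2.0's ~500 MB/s per lane per direction:
per_slot_gb = 8 * 0.5
print(f"each x8 slot: ~{per_slot_gb:.1f} GB/s per direction")
```

This is why four-card configurations on single-socket boards of this era ran the slots at x8 rather than full x16.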

Consumers can visit ASRock and its X58 SuperComputer at CeBIT 2009, Hall 21, Stand C40.

  • A Stoner, February 25, 2009 4:08 PM
    Wonder why they cannot get more PCIe pathways for full X16 on all 4 slots, if it is supposed to be a super computer, you would think it would need more bandwidth.
  • falchard, February 25, 2009 4:42 PM
    Why is a budget-division subsidiary of ASUS providing better motherboards?
  • leo2kp, February 25, 2009 4:43 PM
    Chipset limitations I think...
  • dacman61, February 25, 2009 4:44 PM
    Call me crazy, but what is so special about this board? It has all of the same X58 Chipset Specs as most of the other models out there today. The X58 chip basically determines the features of a motherboard regardless of the manufacturer. Maybe just a few different slot options and how they chop up the different x1 PCI-Express lanes and such.

    Also, when are motherboard manufacturers going to stop putting in old PCI Slots? It's exactly like when they kept around those really old ISA slots back in the day.
  • Anonymous, February 25, 2009 4:50 PM
    A Stoner wrote: Wonder why they cannot get more PCIe pathways for full X16 on all 4 slots, if it is supposed to be a super computer, you would think it would need more bandwidth.

    Because there's just no chipset out there that supports this!
    The speed is subject to the North/southbridge of a board.

    I'm interested in just how much this setup will cost!
    Seems to support up to 6x4GB DDR3!

    So what else can you do with it, besides playing crysis and running the 'folding@home' project?
  • Anonymous, February 25, 2009 4:51 PM
    Old news are old. Tesla never needed any SLI enabled to work on X38. You could already build this last year with almost any X38 motherboard.
  • bustapr, February 25, 2009 4:53 PM
    Who would actually use 4 pci-e2.0 slots and for what. All I know is that for gaming people prefer putting 2 cards on sli and not 3. Plus a super computer would benefit more from more pci slots instead of an extra pci-e.
  • Anonymous, February 25, 2009 4:58 PM
    dacman61 wrote: Call me crazy, but what is so special about this board? It has all of the same X58 chipset specs as most of the other models out there today. The X58 chip basically determines the features of a motherboard regardless of the manufacturer. Maybe just a few different slot options and how they chop up the different x1 PCI-Express lanes and such. Also, when are motherboard manufacturers going to stop putting in old PCI slots? It's exactly like when they kept around those really old ISA slots back in the day.

    Because lots of users still use extension board audiocards like Soundblaster Audigy, network cards, etc... Most of them still use PCI.

    I wondered if someone knew whether the PCI slots on a board use different lanes compared to PCIe slots? I know one thing: PCIe reduces speed depending on the number of slots that are used.
  • Anonymous, February 25, 2009 5:03 PM
    Anonymous wrote: Old news are old. Tesla never needed any SLI enabled to work on X38. You could already build this last year with almost any X38 motherboard.

    Perhaps the extra memory bandwidth of this board (2000 OC, compared to 1600 on the X38) pushed the machine past 1 teraflop, which technically put it into the supercomputer category?
  • Anonymous, February 25, 2009 5:06 PM
    bustapr wrote: Who would actually use 4 pci-e2.0 slots and for what. All I know is that for gaming people prefer putting 2 cards on sli and not 3. Plus a super computer would benefit more from more pci slots instead of an extra pci-e.

    Could be on a dual/quadcore system where the memory controller used part of the FSB speed.
    There is a slight chance that the improved design of the corei7 frees up some bandwidth, and that using 3 or 4 cards could improve performance, something we could not see with tom's benchmark results on a Core2Quad Extreme.
  • dacman61, February 25, 2009 5:22 PM
    ProDigit80 wrote: Because lots of users still use extension board audiocards like Soundblaster Audigy, network cards, etc... Most of them still use PCI. I wondered if someone knew whether the PCI slots on a board use different lanes compared to PCIe slots? I know one thing: PCIe reduces speed depending on the number of slots that are used.

    No kidding! But I think it's time to move on.
  • kittle, February 25, 2009 6:38 PM
    "supercomputer" with only 1 CPU slot?

    maybe I'm old-fashioned, but don't all supercomputers nowadays have more than 1 CPU socket in them?
  • gwolfman, February 25, 2009 7:23 PM
    dacman61 wrote: Call me crazy, but what is so special about this board? It has all of the same X58 chipset specs as most of the other models out there today. The X58 chip basically determines the features of a motherboard regardless of the manufacturer. Maybe just a few different slot options and how they chop up the different x1 PCI-Express lanes and such. Also, when are motherboard manufacturers going to stop putting in old PCI slots? It's exactly like when they kept around those really old ISA slots back in the day.

    It can use Xeon CPUs and ECC RAM, so it's truer to the definition of a workstation/supercomputer.
  • gwolfman, February 25, 2009 7:23 PM
    kittle"supercomputer" with only 1 CPU slot?mabye im old-fashioned, but dont all supercomputers nowdays have more than 1 cpu socket in them?

    It will primarily use nVidia's CUDA for processing, not the CPU.
  • hellwig, February 25, 2009 7:46 PM
    This is NOT a gaming motherboard (at least, not as ASRock is marketing it).

    Do people know what Tesla is? The "supercomputer" part comes from the fact that you can pack a bunch of Nvidia GPUs in this sucker, and use them to do the computation (not the CPU). The frequency (600+MHz) and sheer number of computational cores (128-256) mean a single GPU can perform many more floating-point operations per second than even a modern 6-core Xeon. Throw 4 such GPUs in a single case, and you have one powerful floating point machine. I don't think any motherboard by any manufacturer supports enough CPU sockets to get that kind of performance.
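    hellwig's point can be put in back-of-envelope numbers. The figures below are era-appropriate assumptions (a GTX 280-class GPU versus a Core i7-965-class quad-core), not measured results; the per-cycle FLOP counts refer to single-precision throughput:

```python
# Peak theoretical single-precision FLOPS: cores * clock * FLOPs per core per cycle.
# All figures are assumptions for illustration, not measured performance.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

# GTX 280-class GPU: 240 stream processors at ~1.296 GHz shader clock,
# 3 SP FLOPs/cycle (dual-issue MAD + MUL) -> ~933 GFLOPS.
gpu = peak_gflops(240, 1.296, 3)

# Core i7-965-class CPU: 4 cores at 3.2 GHz,
# 8 SP FLOPs/cycle via 4-wide SSE (add + multiply) -> ~102 GFLOPS.
cpu = peak_gflops(4, 3.2, 8)

print(f"one GPU: ~{gpu:.0f} GFLOPS, one CPU: ~{cpu:.0f} GFLOPS")
print(f"four GPUs: ~{4 * gpu / 1000:.2f} TFLOPS peak")
```

    On these assumptions a single GPU out-paces the CPU by roughly 9x on paper, and four of them approach 3.7 TFLOPS, which is the whole premise of the "personal supercomputer" pitch.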
  • Shadow703793, February 25, 2009 7:51 PM
    The GPU architecture is going to make this NOT a true "supercomputer". It's more like an HPC machine.

    Quote:

    The first is that of memory latency; GPUs operate with a very high degree of latency on the memory; since they're handling relatively linear tasks, and when dealing with textures and shaders, always call up very large, sequential blocks of memory at a time, having a CAS latency of 30+, 40+, or more clock cycles doesn't really matter, since the GPU will know much farther in advance what it'll be needing next around 99% of the time. The same benefit can be applied to decoding media; being a streaming application, latency doesn't hurt it. However, when it comes to scientific applications, that really can be harmful, as in those cases the predominant bottleneck invariably winds up being data and instruction latency, something that's also hurt heavily by how GPUs have an extremely skewed processing unit-to-cache ratio, a ratio that's vastly different than what's found in general-purpose CPUs.

    The second reason that occurred to me is the lack of a standard multi-GPU architecture that would be able to support a large quantity of GPUs even just for mathematic operations; the current limit for ANY design appears to be 4 GPUs, from either nVidia or ATi/AMD. So, while yes, while in theory you could produce the same floating-point capacity using only 1/7.5th the number of RV770s compared to what Sequoia uses (i.e, 13.3% the number) as of yet, there is no way to actually build that assembly, so in practice, it's a moot point.

    The final reason is actually that of power and heat; GPUs may have a very high degree of performance-per-watt efficiency when it comes to math, but they STILL have a very high TDP per chip. The cost of the actual chips are usually one of the minor parts of a supercomputer, as a lot more care has to be given to providing enough power to stably run thousands upon thousands of nodes, with not just multiple CPUs per node, but all the other components as well, all of which must be powered and cooled. With GPUs, you're going to have your heat production focused on a far smaller number of chips, so you'll need to actually have more intensive cooling, and likely greater spacing between GPUs, since you can't just blow hot air out the back of the case, since there will be more nodes in every direction. There's a good chance that one would actually have to construct a LARGER facility to house an equally-powerful supercomputer built from GPUs than one built from multi-core general-purpose CPUs.

    nottheking
    http://www.tomshardware.com/news/IBM-Sequoia-Supercomputer,6955.html
  • Tindytim, February 26, 2009 5:41 AM
    bustapr wrote: Who would actually use 4 pci-e2.0 slots and for what. All I know is that for gaming people prefer putting 2 cards on sli and not 3. Plus a super computer would benefit more from more pci slots instead of an extra pci-e.

    If you really had money to burn, you could do a Tri-SLI setup, then get a fourth card for physics.

    However, it is for number crunching.
  • random1283, February 26, 2009 6:59 AM

    Hmm, sorry if you read my other post, but the ASUS P6T WS has 6 x PCIe and a high-end onboard RAID controller, and these boards are mainly for CUDA and such.
  • nottheking, February 26, 2009 8:19 AM
    Well, it looks like some of my words made it here before I did... So I guess there might not really be all that much for me to say.

    At any rate, if I remember correctly, the GTX 280, which the Tesla C1060 card appears to be effectively the same as, has a peak theoretical floating-point capability of 936 gigaFLOPS. As I mentioned prior (in what Shadow quoted), the limit for a single system is 4 GPUs, for a total of about 3.744 TFLOPS, so it's no surprise that the bulk of units based on these Tesla chips, it seems, come with four of them, with none having any more. Most of the units I see are packed into tower-style PC cases and simply equip 4 Tesla cards. The exception is nVidia's own Tesla S1070, which comes as a 1U rackmount piece, which suggests it mounts the GPUs in a different PCB format.

    I'll admit, this sort of board release really isn't all that unique; as the third-party Tesla units have shown, you can already buy an OEM version of what you could assemble using this motherboard. It's a fairly familiar case of a previously workstation-only part making its way to the home market; I'm reminded of AMD's Quad FX and Intel's Skulltrail platforms, which weren't exactly novel coming from the workstation market, which has had dual-CPU motherboards for decades.

    No, this really isn't a "supercomputer," and nVidia appears to make no implication that such a platform is (though they have backed the "personal supercomputers" made by third parties). Again, I don't believe there really is any existing architecture that would allow you to link these units up as individual nodes in a single supercomputer, so this would be chiefly for HPC applications, which, granted, is way above the level of what home users (and even the vast bulk of enthusiasts) are doing. The only thing that slightly irks me is that nVidia has opted, on their own page for Tesla, to call it "the world's first Teraflop processor," a claim it really can't back up; if you go by outright chip, RV770 hit shelves well before GTX 200, and I believe even the FireStream 9250 version did as well, giving it a far more defendable claim to being the "first teraflop processor."