
Asus Supercomputer Motherboard Revealed

May 6, 2009 12:19:10 AM

No dual CPU sockets? It does not seem very super to me.
May 6, 2009 12:22:09 AM

Go read up about CUDA. You will probably get it then.
May 6, 2009 12:25:56 AM

fooldog01: Go read up about CUDA. You will probably get it then.

Everyone will use some sort of "CUDA," so CUDA itself is not game-changing, since all the others will support something similar. Time is better spent looking at cost/performance and other features.
May 6, 2009 12:26:05 AM

no super ML caps. what a shame.
May 6, 2009 12:46:17 AM

crisisavatar: Everyone will use some sort of "CUDA," so CUDA itself is not game-changing, since all the others will support something similar. Time is better spent looking at cost/performance and other features.


I was just explaining why this thing is "super" and doesn't have dual sockets.
May 6, 2009 12:52:31 AM

Is it wrong to get aroused by looking at that board?
May 6, 2009 1:01:17 AM

Better put this thing in a deep freezer when it is running.
May 6, 2009 1:30:19 AM

3-way SLI at x16 seems pretty super, and 7 PCIe x16 slots, lol. The board is crowded as shit either way.

But it's super because you shove 12 gigs in it and 4 Nvidia Quadro cards, and then you've got some serious GPGPU power in there and one hell of an electric bill.
May 6, 2009 2:12:37 AM

crisisavatar: Everyone will use some sort of "CUDA," so CUDA itself is not game-changing, since all the others will support something similar. Time is better spent looking at cost/performance and other features.

Yup, all the other companies are going for OpenCL; Nvidia just wanted to pretend they had something special by rushing their closed format out the door.

This is a niche product being marketed on a gimmick.
May 6, 2009 2:42:17 AM

With an nForce 200 chipset, I can finally install that rotisserie feature in my tower and have a roast fully cooked in under an hour.
May 6, 2009 3:04:48 AM

I wonder how long it will take to get driver support for a quad x2 video card set-up?

May 6, 2009 3:08:12 AM

This thing would be much more powerful with a set of AMD cards in it.
a b V Motherboard
May 6, 2009 3:16:54 AM

I am wondering when Asus will support my P5N-D board with Windows 7 drivers. My CD will not work on 7... I can't find anything... Is it too early?
May 6, 2009 3:17:03 AM

fooldog01: I was just explaining why this thing is "super" and doesn't have dual sockets.


The whole point of GPGPU is to not use CPUs to do work...
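(For the curious, here's a minimal sketch of what "letting the GPU do the work" looks like in CUDA; the kernel, array size, and scale factor are illustrative, not anything specific to this board.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread scales one element; the CPU only launches and waits.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                        // 1M floats, illustrative
    float *d;
    cudaMalloc(&d, n * sizeof(float));            // buffer lives on the GPU
    cudaMemset(d, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // all the work happens GPU-side
    cudaDeviceSynchronize();                      // CPU just waits for the result
    cudaFree(d);
    printf("kernel done\n");
    return 0;
}
```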
May 6, 2009 3:45:58 AM

Screw CUDA, I'll put 7 48-core Larrabees in it!
May 6, 2009 4:05:49 AM

Hey why aren't the comments in the same areas as their authors (not separated by that little line)? This 'style' as it were is more confuss-ed.
May 6, 2009 4:19:50 AM

christop: I am wondering when Asus will support my P5N-D board with Windows 7 drivers. My CD will not work on 7... I can't find anything... Is it too early?

Use the XP or Vista installers from their website; the CD is useless until 7 support arrives.
May 6, 2009 4:21:01 AM

I thought that with new technology like OpenCL and cloud computing, PCs would be replaced by server racks.
May 6, 2009 4:50:30 AM

I swear I'll kill a baby kitten the next time I hear someone talk about clouds and computers like it's remotely new or novel just because a new name got tagged to it.

OK...maybe not...but I'll deny it treats for 3 days!

In any case, a few people nailed it. The point of this rig is to put as much GPU power on a single board as possible (within reason) to use as a "super workstation".

Niche? Sure, but the idea's solid either way for the very few apps out there that will currently take full advantage of it.
May 6, 2009 9:30:12 AM

I always thought mainboards could use more PCIe slots...
If you need several video cards and some 8x PCIe SAS RAID controllers, this one can do the job.
Having just one CPU socket is not fine with me, after all it's Intel's Core i7, and up to 24GB. Do you really need twice that?
May 6, 2009 9:32:16 AM

I meant "Having just one CPU socket is fine with me"...

I know, I should have checked twice... knowing there's no EDIT button...
May 6, 2009 10:58:02 AM

Hmm... 7x 4850s and some custom drivers to make 'em work in 7x CrossFire; then I'm sure even the fastest desktop CPU would need a water chiller to satiate the cards... and you'd probably need a Peltier built into the PSU to keep it from overheating too!
On a more serious note, I wonder why they didn't just add 5 PCIe slots (4 for graphics, 1 for RAID) and make room for another CPU socket.
May 6, 2009 1:43:32 PM

And they are going to charge how much for this board? $500-$600?

I can find better things to put my money into.
May 6, 2009 1:44:01 PM

Quote:
four CUDA cards into the board (one of which should be a Quadro graphics card) to achieve nearly four teraflops of performance.

Isn't 4 teraflops achievable with 2 of the 4870X2s?
May 6, 2009 2:23:16 PM

I am not an Electrical/Computer Engineer, but if scaling seems to be the path of the uber-workstation, and SLI/CrossFire is a viable solution anywhere in the computing world, why not have scalable motherboards in a folding, double-sided, server-style BTX motherboard/chassis configuration? Alternate the PCI slots, with a motherboard-to-motherboard SLI connection. The rig would be relatively easy to service, and the airflow would be unidirectional.
May 6, 2009 2:32:07 PM

jacobdrj: I am not an Electrical/Computer Engineer, but if scaling seems to be the path of the uber-workstation, and SLI/CrossFire is a viable solution anywhere in the computing world, why not have scalable motherboards in a folding, double-sided, server-style BTX motherboard/chassis configuration? Alternate the PCI slots, with a motherboard-to-motherboard SLI connection. The rig would be relatively easy to service, and the airflow would be unidirectional.

You know, I wish I'd thought of that. For a workstation in a suitable environment, that is a fantastic idea. Asus? MSI?
May 6, 2009 2:38:25 PM

Ask IBM to make it. They're already making water-cooled servers, and I bet you'd need watercooling for such a big heater.
May 6, 2009 2:42:40 PM

Can I do 3-way SLI and 3-way CrossFire simultaneously with this board?
May 6, 2009 3:53:39 PM

I'd like to see a block diagram of this beast.
7 x16 PCIe slots are wonderful and all, but let's do the bandwidth math.
The best official X58 chipset port breakdown scenario is 4 ports with x8 worth of lanes available,
plus another port at x4, and two ports at x1; that's 7 possible ports in total.
Which tells me that all they likely did was take the 4 x8s and add the nForce switches to them simply to allow SLI support (which the board already had since nVidia caved, but nVidia insists it is somehow "better" to have these useless switches than not, which electrically makes no sense),
and then pop x16 slots onto x4 and x1 electrical links.
Big deal. Maybe there is some BIOS magic in there to make certain card combinations easier, but since PCIe is auto-negotiating, you can do the same thing at home with a Dremel tool.

You can add all the switches and ports you want, but you are still limited to 42 lanes of PCIe, in very specific configurations.
Even if all 7 slots are somehow electrically x16 through the use of those n200s, you are still shoving all that bandwidth through at most two x16 links at the northbridge (think Skulltrail).
Not all that compelling.
This thing is a waste of silicon to begin with, and certain to be an overpriced one at that.
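(Putting rough numbers on that lane math: a host-only sketch, assuming the port breakdown above and PCIe 2.0's ~500 MB/s per lane per direction.)

```cuda
#include <cstdio>

// 4 ports at x8, one at x4, two at x1, per the breakdown above.
int main() {
    const int lanes = 4 * 8 + 1 * 4 + 2 * 1;  // = 38 lanes total
    const double gbps_per_lane = 0.5;          // PCIe 2.0: ~500 MB/s per lane per direction
    printf("%d lanes -> ~%.0f GB/s aggregate per direction\n",
           lanes, lanes * gbps_per_lane);      // ~19 GB/s shared by all 7 slots
    return 0;
}
```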
May 6, 2009 4:01:09 PM

Also, the biggest reason for no dual socket is that this isn't a board for server CPUs; you would want, again, Skulltrail for that, or any server motherboard that suits your budget.
Multi-CPU setups are server-processor territory only.
Even Skulltrail just used a rebranded server chipset and Xeon CPUs... that's why it used FB-DIMMs.
May 6, 2009 4:02:41 PM

3 4870X2s and a 4890 in a case with 8 expansion slots would be sick on this. Of course, you'd need a small fusion reactor to power it all, but that's like 8.6 teraflops. Now imagine 7 single-slot watercooled 4870X2s. Then you're talking 16.8 teraflops; of course, at this point we're pretty much talking government funding only.
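(Those tallies line up with the peak single-precision ratings: ~2.4 TFLOPS per HD 4870X2 and ~1.36 TFLOPS for an HD 4890. A host-only sketch of the arithmetic:)

```cuda
#include <cstdio>

int main() {
    const double tf_4870x2 = 2.4;   // peak single-precision TFLOPS, HD 4870X2
    const double tf_4890   = 1.36;  // peak single-precision TFLOPS, HD 4890
    printf("3x 4870X2 + 4890: %.2f TFLOPS\n", 3 * tf_4870x2 + tf_4890);  // ~8.56
    printf("7x 4870X2:        %.2f TFLOPS\n", 7 * tf_4870x2);            // 16.8
    return 0;
}
```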
May 6, 2009 4:07:54 PM

scook9: Now imagine 7 single-slot watercooled 4870X2s. Then you're talking 16.8 teraflops; of course, at this point we're pretty much talking government funding only.

I doubt passive cooling would be enough. You'd want to at least go with phase cooling, if not a multilevel cascade system.
May 6, 2009 4:24:25 PM

Tindytim: I doubt passive cooling would be enough. You'd want to at least go with phase cooling, if not a multilevel cascade system.

Water cooling is not passive. Maybe you meant standard cooling... but water cooling is hardly standard either.
May 6, 2009 5:34:48 PM

scook9: 3 4870X2s and a 4890 in a case with 8 expansion slots would be sick on this. Of course, you'd need a small fusion reactor to power it all, but that's like 8.6 teraflops. Now imagine 7 single-slot watercooled 4870X2s. Then you're talking 16.8 teraflops; of course, at this point we're pretty much talking government funding only.


Keep in mind, again, the relative bandwidths going into, and out of, these cards.
The best-case scenario is 4 cards with x8 PCIe, 1 more with x4, and two more with x1.
Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes (I mistakenly said 42 before; I have no idea where I got that number from).

Keep that in mind when doing your GPGPU calculations: the cards may be able to process really fast, but will you have the bandwidth to feed them at that performance level?
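(One way to frame that feeding problem: a host-only back-of-envelope sketch, assuming a 2.4 TFLOPS card sitting on a PCIe 2.0 x8 link at ~4 GB/s per direction.)

```cuda
#include <cstdio>

int main() {
    const double peak_flops = 2.4e12;  // assumed peak FLOP/s of one 4870X2-class card
    const double link_bytes = 4.0e9;   // PCIe 2.0 x8: ~4 GB/s per direction
    printf("~%.0f FLOPs needed per byte moved to stay busy\n",
           peak_flops / link_bytes);   // ~600: fine for dense math, starved when streaming
    return 0;
}
```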
May 6, 2009 8:11:37 PM

ViPr: I thought that with new technology like OpenCL and cloud computing, PCs would be replaced by server racks.


I've never heard that CUDA (or these kinds of boards) can help with cloud computing; maybe it's good for animation rendering, but cloud computing?
May 6, 2009 8:15:17 PM

Bleah... where's the dual-socket Core i7 board?
Supercomputer maybe... but not everything runs on CUDA.
May 6, 2009 8:21:20 PM

kittle: Bleah... where's the dual-socket Core i7 board? Supercomputer maybe... but not everything runs on CUDA.

Systems like this are built for a particular purpose. They're not built to run everything.
May 6, 2009 8:30:12 PM

Miribus: Keep in mind, again, the relative bandwidths going into, and out of, these cards. The best-case scenario is 4 cards with x8 PCIe, 1 more with x4, and two more with x1. Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes. Keep that in mind when doing your GPGPU calculations: the cards may be able to process really fast, but will you have the bandwidth to feed them at that performance level?



You are getting that all wrong. Asus themselves claim four true PCIe x16 slots. The four blue PCIe slots are channeled for x16. You will have true x16 triple SLI with this board.

3 x PCIe 2.0 x16 (@ x16 or x8)
3 x PCIe 2.0 x16 (@ x8)
1 x PCIe 2.0 x16 (@ x16)
May 6, 2009 8:41:59 PM

Miribus: The best-case scenario is 4 cards with x8 PCIe, 1 more with x4, and two more with x1. Despite the 7 PCIe x16 slots, you are not getting 112 PCIe lanes back to the chipset/processor; you are still only getting 38 lanes (I mistakenly said 42 before; I have no idea where I got that number from).

Where the hell are you getting your numbers? All of the information I've gotten says the nForce 200 chipset has 62 PCI-e lanes, 32 of which are PCI-e 2.0. Now, this board has 2 nForce chipsets, giving it a total of 124 PCI-e lanes, 64 of which are PCI-e 2.0. Meaning you could stick 4 dual-slot Tesla cards on this mobo with each getting x16 2.0 bandwidth and still have plenty of bandwidth left over.

sandcomp: I've never heard that CUDA (or these kinds of boards) can help with cloud computing; maybe it's good for animation rendering, but cloud computing?

Look at GPGPU technology:
http://en.wikipedia.org/wiki/GPGPU
May 6, 2009 9:18:48 PM

All of this talk about cooling.

Why doesn't someone take one of those smaller beer refrigerators (like the ones we all had in college) or an old basement freezer, and port it through with HDMI, USB, and eSATA cables that are sealed in place with RTV?

You can attach all your Blu-ray drives, extra hard drives, etc. outside the fridge box.

Then take your PC case (sans optical drives and other extraneous airflow blockers, but leave in all the existing heat sinks and fans), install one of the new PCIe 2.0-based SSDs, strip off the doors and panels, hard-mount the whole rig in the middle of the fridge/freezer, and attach the cables. Turn the temperature control to Arctic and RTV the door shut. After the RTV cures, plug in the fridge.

Once it has gotten down to freezing, boot up, put your OS and stuff on the SSD, and away you go. I gotta think this would run way cooler for a lot less cash than most of the exotic and expensive cooling rigs folks have been trying to shoehorn into their cases.
May 6, 2009 9:29:38 PM

wayneepalmer: All of this talk about cooling. Why doesn't someone take one of those smaller beer refrigerators (like the ones we all had in college) or an old basement freezer, and port it through with HDMI, USB, and eSATA cables that are sealed in place with RTV?

Wouldn't work; refrigerators don't have the cooling capacity to keep up.
http://www.ocforums.com/showthread.php?t=373263
Anonymous
May 6, 2009 9:34:52 PM

Yo, people: it is a niche board. It is primarily intended for people who run GPGPU code for number crunching, not you amateurs.

"Now imagine 7 single slot watercooled 4870x2s. Then you talking 16.8 teraflops", etc etc etc... very nice, you can add. No, unless you know how to program the thing to extract 16.8TF from it, maybe be hush.
May 6, 2009 10:03:20 PM

It's not a supercomputer if it is DOA. Asus sucks; I've had to RMA numerous Asus motherboards, and their RMA process stinks as well. They'll "repair" a board and send it back, still DOA. Never Asus again for me; their 3-year warranty is rotten.
May 6, 2009 11:03:44 PM

I wouldn't necessarily call this thing super; extreme-performance desktop, maybe. Unless it had dual sockets, then we're really talking.

Anyway, it's about time they got rid of all the old legacy devices and support, and I mean all of it (i.e. PATA, PCI, DVI, RCA, x86).
May 7, 2009 1:39:00 AM

bill gates is your daddy: You are getting that all wrong. Asus themselves claim four true PCIe x16 slots. The four blue PCIe slots are channeled for x16. You will have true x16 triple SLI with this board. 3 x PCIe 2.0 x16 (@ x16 or x8), 3 x PCIe 2.0 x16 (@ x8), 1 x PCIe 2.0 x16 (@ x16)


Let me clarify, as I misunderstood the way the n200 in particular works.
The n200 is a pretty heavily optimized PCIe switch, so it's a fair step up from the n100 I had confused it with. While it is true that there are X lanes going to those slots, they all branch at some point from the X58 chipset, which has limited bandwidth, so communication with the processor will still be bottlenecked. It's like having a 12-port GbE switch with a 1 GbE uplink to a server.
For CUDA, or however you are using the GPGPU, it's still interesting if you keep all of the computing away from the CPU until it's done.
I was mistaken: the n200s are quite a bit more than just PCIe switches in GPGPU apps, because the link to the CPU is irrelevant as long as what your cards have produced is small enough to hand back to the CPU for whatever portion it needs to do.
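(To put numbers on that switch-uplink analogy: a host-only sketch, assuming four cards hang behind switching that shares a single PCIe 2.0 x16 uplink at ~8 GB/s per direction; the card count and topology are illustrative, not the board's actual layout.)

```cuda
#include <cstdio>

int main() {
    const double uplink_gbps = 8.0;  // PCIe 2.0 x16 uplink: ~8 GB/s per direction
    const int cards = 4;             // assumed: four cards behind the switch
    printf("each card averages ~%.1f GB/s when all four burst at once\n",
           uplink_gbps / cards);     // x4-equivalent under full contention
    return 0;
}
```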

For graphics though, it doesn't seem worthwhile over any other 3-way board.
As for using both SLI and CrossFire simultaneously, as someone asked: that's a driver nightmare. Maybe with some really well done abstraction layer, especially if one were limited to a virtual environment... I don't know; programming isn't my thing.

Tindytim: Where the hell are you getting your numbers? All of the information I've gotten says the nForce 200 chipset has 62 PCI-e lanes, 32 of which are PCI-e 2.0. Now, this board has 2 nForce chipsets, giving it a total of 124 PCI-e lanes, 64 of which are PCI-e 2.0. Meaning you could stick 4 dual-slot Tesla cards on this mobo with each getting x16 2.0 bandwidth and still have plenty of bandwidth left over.


I've never read 62 lanes for the n200, but I did read such a thing for the 780i with the n200.

Did they ever optimize SETI@home for stream processors? I'd definitely fire up a few Quadros for the cause.
May 7, 2009 6:21:44 AM

*chokes*
May 7, 2009 12:56:09 PM

Hoohoo,

What about a full size deep freezer or refrigerator?
May 7, 2009 1:55:01 PM

wayneepalmer: Hoohoo, what about a full size deep freezer or refrigerator?

They are not made to constantly cool something that generates heat; the things that go into a refrigerator do not generate heat of their own. A phase-change or cascade cooling system would make more sense and take less work.
May 7, 2009 1:59:59 PM

I seriously think that Tom's needs to sweet-talk someone at Asus, get their little hands on one of these, load it up with 4x HD 4890s, and see what happens.

If they don't want to do it, then please give me the hardware and I will gladly review it... you won't ever get the hardware back, but that is a moot point.