Multiple Machines Slaved to One Monitor Surface? | Synergy-like

commissar_mo

Distinguished
Jan 23, 2011
96
0
18,630
I have a suspicion this isn't even remotely possible... but I'm interested in this concept, so I'd love enlightenment from the knowledgeable elite that frequent this forum...

I'm interested in running multiple monitors, let's say 6, but having a second machine power 3 of them. Now with software like Synergy or Input Director, all you really get is seamless network-based sharing of peripherals.

What I would like to do is to run a single monitor surface over multiple machines. i.e. it would be like slaving the second machine's graphics cards to power extra displays, but the data for said displays would be fed to the master machine.

How, even in speculative theory, could this be accomplished? Any ideas?
 

frombehind

Honorable
Feb 18, 2012
351
0
10,810
What you are looking at touches on a concept known as distributed computing... This means splitting a large workload seamlessly among multiple machines. Current implementations of this are very limited (mostly scientific calculations) and very costly to operate.

Trouble is... among current Windows applications (with the exception of graphics editing software... Adobe, AutoCAD...), everything else is single-threaded - it can't even take advantage of a multi-core processor.

So splitting a video game like that is currently impossible. Plus, no version of Windows is built for distributed computing, and I haven't heard of many games written for Unix... or Sun. That's what most distributed computing runs under.
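The core idea here - splitting one workload across machines and recombining the results - can be sketched in a few lines of Python. This is purely illustrative (the 4-way split and function names are made up); real distributed systems like MPI handle the networking, scheduling, and fault tolerance that this glosses over:

```python
# Toy map/reduce sketch of distributed computing: each "machine" sums
# its own slice of the data independently, then a master node combines
# the partial results. No actual networking happens here.

def worker_sum(chunk):
    """Work one machine could do on its own, with no communication."""
    return sum(chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]       # deal the data to 4 "machines"
total = sum(worker_sum(c) for c in chunks)    # master reduces the partial sums
print(total == sum(data))  # True: the split doesn't change the answer
```

The catch, as noted above, is that this only works when the work items are independent - a game's frames depend on each other and on player input, which is exactly why they don't split this way.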
 

COLGeek

Cybernaut
Moderator

Why not simplify your config and have one system drive all 6 displays, and then remote into your other systems as needed/desired?
 

commissar_mo

Distinguished
Jan 23, 2011
96
0
18,630
Well I was really just interested in the concept. I have an Eyefinity 6 card that broke, and now I'm limited to 3 monitors since I only have 1 PCIe x16 slot open on 1 card.

Essentially, correct me if I'm wrong, the recent multi-GPU bonanza (SLI/CrossFire) IS 'distributed computing' on a very local level, since it basically farms out the rendering to multiple processors in parallel and then unites the outputs for display.
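The way SLI/CrossFire "farm out the renders" in alternate-frame rendering (AFR) mode can be sketched as a simple round-robin schedule. This is a conceptual sketch only - the function is hypothetical, and real multi-GPU drivers also handle synchronization and frame pacing:

```python
# Sketch of alternate-frame rendering (AFR): frame N is assigned to
# GPU (N mod G), and the finished frames are recombined in display
# order. Purely illustrative of the scheduling idea.

def assign_frames(num_frames, num_gpus):
    schedule = {gpu: [] for gpu in range(num_gpus)}
    for frame in range(num_frames):
        schedule[frame % num_gpus].append(frame)
    return schedule

print(assign_frames(8, 2))  # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```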

I'm interested (again... theoretically) in something like a render farm running in real time.

It would essentially be like having, say, 10-way SLI. I imagine the largest issue (and the main difference from current multi-GPU setups) would be bandwidth and connectivity - i.e. communication between memory/CPU and the GPUs.

Instead of being directly on the motherboard, you would need some very high-speed link to a central board where the outputs of the GPU farm are meshed together.
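The bandwidth concern is easy to put numbers on. As a back-of-envelope check (assuming uncompressed 1920x1080 at 60 Hz and 32 bits per pixel, figures chosen purely for illustration):

```python
# Raw pixel bandwidth for ONE uncompressed 1080p display at 60 Hz.
width, height, bits_per_pixel, refresh_hz = 1920, 1080, 32, 60
bits_per_second = width * height * bits_per_pixel * refresh_hz
print(bits_per_second / 1e9)  # ~3.98 Gbit/s for a single display
```

So even one display's worth of raw output is roughly 4x what a gigabit Ethernet link can carry, before counting any of the CPU-to-GPU traffic needed for the rendering itself.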

Then again, GPUs don't scale anywhere near linearly for rendering, so it's likely an entirely new architecture would be required for this kind of supercomputer-style gedankenexperiment.

 

COLGeek

Cybernaut
Moderator
Setting up a render farm (like a Beowulf cluster) is different from a shared display config. You could build a clustered system and then use the shared displays. Of course, just getting a new GPU that can handle all of your displays would be the easiest thing to do.

You are describing something similar to an HPC environment. I am not sure how much feedback you'll get here at Tom's on HPC-like subjects.

Still, this is an interesting subject you have posed to us.