Building a Graphics Card

bellasys

Honorable
Jun 28, 2012
I am interested in maximizing my 3D rendering scope (for Maya 2012), and I've taken this far enough to want to play a little, perhaps even exploring a design for a "frankenstein" graphics card built around an old CPU of 1 GHz or so. I know this is an odd topic, but I am beginning to imagine how one could take a low-wattage CPU of modest ability and use it to support a hot new system.

For example, I was willing to explore parallel processing, which I've done natively with custom Python and MEL scripts, as well as with Maya's mental ray, which supports up to 16 satellite CPUs. But I don't really have that many, and I'm wondering if I can pack everything I need into one box... so I'm thinking out on a limb that it would be interesting to repurpose an old server CPU, or perhaps an old laptop CPU (lower wattage).
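For a sense of the kind of parallel pattern I mean, here is only a toy sketch (not my actual scripts): independent per-frame jobs fanned out across local cores with Python's standard library, where render_frame is just a made-up stand-in for real per-frame work.

# Toy sketch: fan independent per-frame jobs across local CPU cores.
# "render_frame" is a stand-in, not an actual renderer call.
from multiprocessing import Pool

def render_frame(frame):
    # placeholder for real work (e.g. kicking off a batch render for one frame)
    return "frame %d done" % frame

if __name__ == "__main__":
    pool = Pool(processes=4)                 # one worker per spare core
    results = pool.map(render_frame, range(1, 17))
    pool.close()
    pool.join()
    print("\n".join(results))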

First, before anyone smacks their foreheads: I am thinking way out of the box, and I'm interested in the journey as much as the result. I'm not really focused on the practical, although in the end I do want some results worth my time. I'm interested in learning in general, and in what could possibly be accomplished using old CPUs.

I haven't really thought it all through yet, but I see many significant obstacles, particularly considering the role of a CPU versus a discrete graphics chip. I get all that.

High-level thoughts? Anyone heard of this outside a laboratory? I've got a few skills, but probably not enough to manufacture my own circuit board...
 
I do a decent amount of work with Maya and Python scripts myself... I've had a similar thought in the past; essentially I/you would be building a cheap server/supercomputer. But I decided against it. Here's my reasoning:

The task at hand is to repurpose old CPUs for parallel compute, which is exactly what GPUs excel at. So we ask the question "why?" The reasoning is that GPU cores are made to handle many simple calculations in parallel (picture a room of, say, 100 average joes doing simple math), whereas CPU cores are made to handle complex tasks and calculations (think a few professors working on some abstract math). So while the professors, i.e. the CPUs, can handle a more complex task, that doesn't mean they can do (1+4+8+15+49+32+984+15+78)*5 and so on significantly faster than one average joe. In the end, the 100 average joes will win out. Hence why we use GPUs for parallel compute.
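To make the "average joes" picture concrete, here is a rough sketch in Python (standard library only; the workload and chunk size are arbitrary). The job is nothing but trivial additions, and a small pool of simple workers gets through it by splitting the pile into chunks; a GPU just takes the same idea to the extreme with thousands of much simpler workers instead of four.

# Rough illustration of "many simple workers": split a big pile of trivial
# additions into chunks and let a pool of workers sum them in parallel.
from multiprocessing import Pool

def sum_chunk(chunk):
    # each worker does nothing but simple additions -- the "average joe" job
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(8 * 1000 * 1000))               # arbitrary workload
    chunk_size = 1000 * 1000
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]
    pool = Pool(processes=4)                             # four CPU "joes"
    total = sum(pool.map(sum_chunk, chunks))
    pool.close()
    pool.join()
    print(total)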

So the first problem at hand is that I only have about 4 old CPUs... if you had 20-50 old ones, then maybe it would be worthwhile. Secondly, I don't have a board capable of wiring the CPUs up for parallel compute. If you can figure out a way around that, then kudos to you, and I'd love to hear about it.
 
Maya's default renderer and mental ray do not use a GPU; they are CPU renderers. Besides, it would be a much easier process to network CPUs together than to change them into something they aren't, if that's even possible at all. I know the thread is about trying to keep it in one box, but a render farm is much simpler.
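For the render-farm route, even a crude dispatcher gets you most of the way. This is only a rough sketch: the hostnames, scene path, and renderer flags are placeholders (check Render -help on your own install), and mental ray satellite or a proper queue manager does this far more gracefully.

# Crude dispatcher sketch: split a frame range across a few machines and
# launch Maya's command-line renderer on each over ssh. All names, paths,
# and flags below are placeholders for illustration only.
import subprocess

HOSTS = ["render01", "render02", "render03"]     # hypothetical render machines
SCENE = "/projects/shared/shot010.mb"            # hypothetical shared scene path
START, END = 1, 300

chunk = (END - START + 1) // len(HOSTS)
procs = []
for i, host in enumerate(HOSTS):
    s = START + i * chunk
    e = END if i == len(HOSTS) - 1 else s + chunk - 1
    cmd = ["ssh", host, "Render", "-r", "mr",
           "-s", str(s), "-e", str(e), SCENE]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()                                     # wait for every chunk to finish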

Anyway, here are my thoughts. No matter how I think about it, it's still connecting a computer to a computer, even if you are using PCIe (I'm guessing you want to use the PCIe slot). A graphics card essentially has a motherboard, RAM, and the GPU itself, so one built around a CPU will still involve a motherboard and RAM, just like a separate PC. In which case, it might as well be a separate machine. There are PC cases that can fit multiple PCs in one case, or you could make a custom one to fit your needs. Even if it were small like a graphics card, you're really just making an adapter to another PC, but going through the trouble of programming a driver to see it through PCIe. These are just my thoughts for now; maybe I'm missing something.
 

mayankleoboy1

Distinguished
Aug 11, 2010
1. Running a modern x86 CPU needs very complex supporting hardware. I don't think you'd find that in electronics hobby shops, even at unreasonable prices.

2. A modern GPU is processor + RAM + power circuitry + scheduler + on-the-fly compiler on one PCB. Plus it needs more hardware to run modern software, and a driver to expose that functionality to the main machine.

3. Adding 4-5 old-ish CPUs won't get you much performance gain compared to even a mid-range GPU. Assume you have 8 old CPUs running at 2 GHz (you will probably have less) that you combine. Now, I use an i7-3960X. Your hybrid processor will definitely be less powerful than the i7, and the i7 will get beaten by an HD 7970 in rendering (see the rough numbers sketched below).

4. If you want to maximize your Maya 2012 power, you might try switching to a renderer that uses the GPU, like V-Ray. Then buy a great compute-loving GPU like the HD 7970.
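To put very rough numbers on point 3, here is a back-of-envelope comparison. These are peak theoretical single-precision figures I'm assuming for illustration only; real rendering throughput is far lower and depends heavily on the renderer.

# Back-of-envelope peak single-precision throughput, in GFLOPS.
# All figures are rough assumptions for illustration, not benchmarks.
def gflops(chips, cores_per_chip, ghz, flops_per_cycle_per_core):
    return chips * cores_per_chip * ghz * flops_per_cycle_per_core

old_cluster = gflops(8, 1, 2.0, 4)    # eight old ~2 GHz single-core CPUs, SSE-era
i7_3960x    = gflops(1, 6, 3.3, 16)   # 6 cores, 8-wide AVX add + mul per cycle
hd_7970     = 3789                    # AMD's quoted peak, roughly 3.79 TFLOPS

print("old CPUs combined: %5.0f GFLOPS" % old_cluster)   # ~64
print("i7-3960X:          %5.0f GFLOPS" % i7_3960x)      # ~317
print("HD 7970:           %5.0f GFLOPS" % hd_7970)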
 

bellasys

Honorable
Jun 28, 2012
Thanks, yes, you illustrate the point really well.

I have 24 server cores in 12 machines, and I was dreaming about repurposing one of them for a project, but you're absolutely right. I really don't want to fund the power for these, or the noise...

I had been looking at some old SIGGRAPH mags and stumbled upon an early diagram of parallel memory circuits and how these would evolve (Sep '94) into graphics architecture as we know it today. Essentially it's a chip-driven pipeline... My mind was lazily abstracting what-ifs, and I wondered how much building custom hardware pipelines out of Y2K-era server tech could bump the margins on that early, simple, and effective model of parallel throughput...

One thing all the feedback has helped me figure out is just how much work it would be for very little result.

Instead of this, I think I'll focus on another exciting prospect, which is building specialized storage using CORFU. I will post a thread about that. Thanks for all the feedback here; everyone had some great points.

Although I like yours best, vmem.