Is HYDRA for real? / nV 280GTX & ATi4870 in One System?

jeb1517

Distinguished
Apr 15, 2007
259
0
18,780
What do you guys think about this new technology? If it works like they say it does, it will be a serious slap in the face of both AMD and Nvidia.
 

hypocrisyforever

Distinguished
Mar 30, 2008
155
4
18,715
Yeah...it's way too early to say. For instance, PhysX was a good idea...that went 100% down the drain. You know how Nvidia and ATI claim giant gains from CrossFire/SLI...and you really don't get them. I think this will probably turn out that way...but again, who knows at this point.
 

halister_one

Distinguished
Aug 20, 2008
100
0
18,680
Technology advances at an incredible rate, so something like this is bound to happen sooner or later. The question is, will it be available to consumers?
 

iluvgillgill

Splendid
Jan 1, 2007
3,732
0
22,790
It will be a breakthrough if it actually works. And if it does, then it's a fight to see who (Nvidia or AMD) will buy the company out, use its 100% working technology, and from there possibly dominate the market.
 
It certainly looks real and I have been hoping for something like this.

As for slapping someone in the face, I agree that it's a slap in the face for nVidia, but not AMD/ATI. nVidia is the only one restricting their multi-card platform to their own chipsets. ATI (and by association AMD) knows that the enthusiast platform is Intel. ATI made the move to allow CrossFire to run on Intel chipsets before AMD bought them, and to AMD's credit, they've done nothing to disturb that relationship. So as I see it, ATI loses nothing with this new technology: they still get to sell two or more cards while not losing out on a chipset sale, as they weren't selling those anyway. If anything, it may simplify their driver development. nVidia, on the other hand, will lose the only selling point for their chipsets. I am betting that anyone wanting SLI with an Intel CPU would prefer it if it worked on an Intel chipset. I know I would. nVidia's track record for Intel chipsets isn't that great. For some reason they've never gotten a SATA driver working on the first try. That, and you could heat your house with their chipsets.

One could also presume that even though Lucid Logix was financed by Intel, there's no reason this controller couldn't be used on an AMD platform. So nVidia chipsets for AMD systems would become redundant as well.

I think this is a very promising idea. A fresh approach couldn't hurt; you never know, this could prove to be more efficient than either ATI's or nVidia's method.

Lastly, you can bet this plays right into Intel's hands (Larrabee). ATI allows their cards to work on Intel boards, but their drivers only support their own cards. With Larrabee coming, this technology would give Intel a multi-card platform without any R&D on their part. No need to reverse engineer SLI or CrossFire; just use a third-party hardware/software solution.
 

radnor

Distinguished
Apr 9, 2008
1,021
0
19,290


I agree with everything I didn't quote :bounce: I guess at this point it is a correct analysis. About Larrabee, I still think it will flunk big time; they will only sell cards to benchmarkers, and the rest will rot. But yes, Intel funding this one is about having all those (I mean us) enthusiasts on Intel CPUs. No more CF or SLI silliness. Just the chip offloading and up we go!!

I still want to see this in practice, though; it sounds too good to be true.
 
Hydra is an early prototype. There are lots of positives mentioned, but of course little about the negatives. From a traditional perspective, I would wonder about the buffers and how the chip-level communication happens. I think some tasks would be hard to divide with the hardware implementation they have. It would be easier in a DX9 situation than in a DX10 implementation, where things start getting much more complicated. Deferred rendering, tone mapping, specular lighting, shader AA and AA buffers: all of these I see as major issues.

The demos and info also make me wonder how the tasks are assigned. There's no clear split point, so a division of labour like "A renders the beams, B renders the wall and floor" would mean those items need to be clearly defined. It sounds like the role of Lucid's software and hardware is to act as a pre-GPU assembler/scheduler; however, without shared resource pools that makes some tasks very difficult, and for GPU 1 and GPU 2 to communicate would be very bandwidth intensive (edit: especially the add-in version). It would also require a lot of tweaking to make the assembler efficient for new games, so once again you would need 'Lucid Optimized' titles, like 'Xfire/SLI-ready', to get the full benefit.
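To picture what that scheduler has to do, here's a minimal sketch in C++. Everything here is hypothetical (the class and struct names are made up for illustration); it just shows why independent draw calls balance easily while anything that samples another GPU's render target forces the work, or the data, across the bus:

```cpp
// Hypothetical sketch of a Lucid-style pre-GPU scheduler: intercept draw
// calls, assign each to a GPU, composite later. All names are made up.
#include <cstdint>
#include <utility>
#include <vector>

struct DrawCall {
    uint32_t id;
    std::vector<uint32_t> readsRenderTargets;  // render targets it samples
};

struct Gpu {
    int index;
    void submit(const DrawCall&) { /* queue work on this GPU */ }
};

class HydraLikeScheduler {
public:
    explicit HydraLikeScheduler(std::vector<Gpu*> gpus) : gpus_(std::move(gpus)) {}

    void dispatch(const DrawCall& dc) {
        // The hard part: if a draw call samples a render target produced on
        // another GPU (deferred shading, tone mapping, shader AA), either the
        // buffer crosses PCIe or the work gets pinned to the data's owner.
        if (!dc.readsRenderTargets.empty()) {
            gpus_[ownerOf(dc.readsRenderTargets[0])]->submit(dc);
            return;
        }
        // Independent work can be load-balanced (round-robin here for
        // simplicity; a real chip would weigh estimated cost per GPU).
        gpus_[next_++ % gpus_.size()]->submit(dc);
    }

private:
    int ownerOf(uint32_t rt) const { return static_cast<int>(rt % gpus_.size()); }
    std::vector<Gpu*> gpus_;
    size_t next_ = 0;
};

int main() {
    Gpu a{0}, b{1};
    HydraLikeScheduler sched({&a, &b});
    sched.dispatch({1, {}});   // independent: load-balanced
    sched.dispatch({2, {7}});  // dependent: pinned to the data's owner
}
```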
They also mention having different generations of cards sharing the work, with a GF6800 and a GF9800 doing a task together. However, those cards do things like AF differently, let alone the DX generation differences. For the X1K -> HD series you have many more differences, and a few different similarities. Then mixing AMD and nV, you could only barely do that in the last generation; this generation would be even trickier unless you change techniques so the two become GPGPUs, IMO. They say DX10 and DX11 should be easier than DX9, but I think the exact opposite from a hardware standpoint, and even from an API standpoint the features in DX10, let alone DX10.1, pose a much greater problem for such a method without a drastic change to what they are doing.
Now raytracing, however, I can see being much easier. But if you simply turn the GPUs into raytracing co-processors in OpenGL/CL or DX11, then you wouldn't really need the Lucid solution anyway, and performance should scale very linearly. All you would need is a CPU (or CPU/GPU) with slave GPUs acting as SPUs, plus something to assemble and write the data to the output buffer, taking the role of the traditional ROP.
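To show why raytracing splits so cleanly, here's a toy tile splitter (assumed resolution and tile size, not any real API): no tile reads another tile's output, so each GPU can trace its share independently and scaling stays close to linear:

```cpp
// Toy illustration of embarrassingly parallel raytracing work division.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Tile { int x0, y0, x1, y1; };

std::vector<std::vector<Tile>> splitFrame(int width, int height,
                                          int tile, int gpuCount) {
    std::vector<std::vector<Tile>> perGpu(gpuCount);
    int n = 0;
    for (int y = 0; y < height; y += tile)
        for (int x = 0; x < width; x += tile)
            // No tile depends on another tile's output, so assignment order
            // is irrelevant and scaling is close to linear in GPU count.
            perGpu[n++ % gpuCount].push_back(
                {x, y, std::min(x + tile, width), std::min(y + tile, height)});
    return perGpu;
}

int main() {
    // Assumed 1920x1200 frame, 256px tiles, 3 slave GPUs.
    auto perGpu = splitFrame(1920, 1200, 256, 3);
    for (size_t g = 0; g < perGpu.size(); ++g)
        std::printf("GPU %zu traces %zu tiles\n", g, perGpu[g].size());
}
```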

http://www.pcper.com/article.php?aid=607
http://www.pcper.com/article.php?aid=607&type=expert

Sounds great, but I'm very skeptical, especially since the person providing the details at IDF sounds more like a PR guy than a technical person, making difficult tasks sound like a simple division of labour, like the part where they say: "Maybe 5 tasks to 1 or something like that; the results are then combined by the HYDRA chip and sent to a single GPU for output." Very loosey-goosey, and a lot like the promise of SuperTiling before they actually tried to implement it in games more complex than the very closed environment of professional flight sims.

Right now, I'm very skeptical, but it is interesting if they ever provide more details on how to do the complex stuff.

Oh Jebus, it's not only going to be offered as a MoBo solution but also as an add-in card (so it would not be limited to just Intel, etc.):
http://www.lucidlogix.com/technology/technologies.html
IMO this would add even more latency and bandwidth concerns, since it would have to use the chipset's PCIe lanes four ways, plus whatever CPU communication is required. That doesn't sound good at all.
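Some rough numbers on that, assuming a 1920x1200 32-bit frame at 60fps and a 4-way setup where three slave GPUs ship their partial frames to the one doing final output (all figures are my assumptions, not Lucid's):

```cpp
// Back-of-envelope for the add-in card's PCIe compositing traffic.
#include <cstdio>

int main() {
    const double w = 1920, h = 1200, bytesPerPixel = 4, fps = 60;
    const int slaveGpus = 3;  // 4-way setup, one GPU does final output
    double perStream = w * h * bytesPerPixel * fps;  // bytes/sec per slave
    double total = perStream * slaveGpus;
    std::printf("%.2f GB/s of composite traffic alone\n", total / 1e9);
    // ~1.66 GB/s before textures, geometry, and inter-GPU copies: a big
    // slice of a PCIe 1.1 x16 link's ~4 GB/s, and that's the best case.
    return 0;
}
```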
 


Well, I think Larrabee (a shrunken version of it) has a lot of potential for laptops, but we'll wait and see how that turns out.
I'm optimistic and could see myself getting one if it pans out the way I hope; otherwise it will be a tough sell. Still, as long as they can support DX10-11 and price it attractively enough, they'll sell a ton. Even if it fails, it'll likely do brisk sales in the first few weeks while people figure out the potential. After that, though, IMO it'll come down to features and performance per price, just like all the rest.
 
Oh yeah, a ton of latency.

Wasn't sure about the memory component, especially since they have statements that conflict with their own process map.

But to me the issue would be the dependent situations. SFR often doesn't work because of this, which is why you must use AFR; splitting the workload further into subcomponents just seems to amplify that problem.
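A toy way to see it (made-up pass counts, purely illustrative): AFR keeps every pass of a frame on one GPU, so nothing crosses the bus mid-frame, while an SFR-style split pays a cross-GPU copy for roughly every full-screen pass:

```cpp
// Toy model of why dependencies kill SFR but not AFR: count how many
// render-pass inputs land on a different GPU under each scheme.
#include <cstdio>

int main() {
    const int passes = 5;    // e.g. gbuffer, shadows, lighting, bloom, tonemap
    const int gpuCount = 2;

    // AFR: the whole frame, all passes, lives on one GPU.
    int afrCopies = 0;

    // SFR: every full-screen pass after the first reads pixels that the
    // other GPU produced for its half of the screen, so one copy per pass.
    int sfrCopies = (passes - 1) * (gpuCount - 1);

    std::printf("AFR cross-GPU copies per frame: %d\n", afrCopies);
    std::printf("SFR cross-GPU copies per frame: %d\n", sfrCopies);
    return 0;
}
```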

Anywhoo, I'm going on lunch; I'll think it over there, but it's looking to be very difficult and very slow, IMO.
 

jeb1517

Distinguished
Apr 15, 2007
259
0
18,780


Well I meant it would be a slap in the face to both because Lucid Logix will have done something that neither AMD nor Nvidia could achieve with their own hardware. But like you, I am very skeptical. IDF seems to be more PR and less tech.
 

jcorqian

Distinguished
May 7, 2008
143
0
18,680
The ExtremeTech article seemed pretty optimistic at least. They also did mention that cards from the same generation should be used, i.e. DirectX 10 cards. Combining a GTX 260 and an 8800 GT should work well, at least according to the article.
 

Pazuzoo

Distinguished
Jul 18, 2008
19
0
18,510
I don't know about pairing different cards; driver nightmares if you ask me.

Very interesting read though, can't wait to see how it develops.
 
I sincerely hope they can get it to work. I was just reading about it earlier, here:
http://www.dailytech.com/Chipmaker+...ender+CrossFire+SLI+Obsolete/article12719.htm

They will need to work with Microsoft and find a way to have both AMD and nVidia drivers installed at the same time; that is currently impossible, according to the DailyTech article.
If Microsoft doesn't want to fix it, or can't, then these guys might have to develop a driver to replace both.

 


I think a lot of people are confusing 'impossible' with 'impractical' and 'unlikely'.

If Microsoft doesn't want to fix it, or can't, then these guys might have to develop a driver to replace both.

Exactly, although who needs Microsoft? :D (Yeah, we wish! :fou: )
The thing DailyTech forgets (they do that a lot) is that if Lucid is doing the front-end interface anyway, then they wouldn't need to hack the OS so much as the drivers. And if they got permission from AMD and nV (not likely :pfff: ), they could write a unified driver. So it's not impossible, just a lot of work... which is the whole idea: a lot of tricky work to make it work.
Even without permission, AMD's open driver program on Linux could be used to make a unified driver (when and if nV opened their drivers), though right now you'd be stuck with R500 support since they haven't opened R600+ yet. That would likely be an easier platform too, especially with the openness of OpenGL, but it's still not a marketing coup to say "WooHoo, 20% faster multi-GPU gaming... on Linux games :sleep: ".
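For what it's worth, here's roughly how I picture that front-end interface in C++. Every interface here is hypothetical; the point is just that the game talks to one driver-like layer, which forwards commands to vendor-specific backends underneath, sidestepping the two-driver restriction:

```cpp
// Hand-wavy sketch of a unified front end over vendor backends.
#include <memory>
#include <utility>
#include <vector>

struct Cmd { /* an API-level command: draw, copy, present... */ };

class VendorBackend {                 // one per installed vendor driver
public:
    virtual ~VendorBackend() = default;
    virtual void execute(const Cmd& c) = 0;
};

class UnifiedFrontEnd {               // what the game actually talks to
public:
    void addBackend(std::unique_ptr<VendorBackend> b) {
        backends_.push_back(std::move(b));
    }
    void submit(const Cmd& c) {
        // The scheduler decides which vendor's GPU runs this command; the
        // game never sees two drivers, which is the whole trick.
        backends_[pick_++ % backends_.size()]->execute(c);
    }
private:
    std::vector<std::unique_ptr<VendorBackend>> backends_;
    size_t pick_ = 0;
};

class StubBackend : public VendorBackend {
public:
    void execute(const Cmd&) override { /* hand off to the real driver */ }
};

int main() {
    UnifiedFrontEnd fe;
    fe.addBackend(std::make_unique<StubBackend>());  // e.g. the ATI path
    fe.addBackend(std::make_unique<StubBackend>());  // e.g. the nV path
    fe.submit(Cmd{});
}
```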

I still want to see more than what is essentially a PR "hey, this idea is neat and this is what we think" look at the technology.
Too much of this reminds me of the promise of SuperTiling, which was supposed to scale great with many more GPUs (E&S had support for 32 in their SIM systems) and support all 3D applications;
Will Harris, bit-tech UK:
"ATI, whilst later to market than its rival, appears to have put some serious thought into remedying the deficiencies within SLI."
"Undoubtedly the biggest selling point for Crossfire is the universal game support offered by super-tiling, which could prove outrageously popular with a wide range of gamers."


But in real life it was buggy as heck, and everyone defaults to AFR because it's easier to blend all the features in a single frame and render every other frame than to try to use scissor and supertiling to match dependent components/targets.
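For reference, supertiling's checkerboard ownership looks like this (assumed grid size, illustrative only); pretty in theory, but any pass that samples a neighbouring tile now needs the other GPU's output, which is exactly the dependency problem above:

```cpp
// Print a supertiling checkerboard: each digit is the GPU owning that tile.
#include <cstdio>

int main() {
    const int tilesX = 8, tilesY = 4, gpuCount = 2;  // assumed sizes
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx)
            // Alternate ownership like a checkerboard; any pass sampling a
            // neighbouring tile now depends on the other GPU's output.
            std::printf("%d", (tx + ty) % gpuCount);
        std::printf("\n");
    }
    return 0;
}
```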