Is HYDRA for real? / nV 280GTX & ATi4870 in One System?

August 20, 2008 4:12:32 PM

What do you guys think about this new technology? If it works like they say it does, it will be a serious slap in the face of both AMD and Nvidia.
August 20, 2008 4:32:10 PM

Post some links or info about it please, I don't know what you're talking about >_>
August 20, 2008 4:37:17 PM

Yeah, it's way too early to say. For instance, PhysX was a good idea that went 100% down the drain. You know how Nvidia and ATI claim giant gains from CrossFire/SLI, and you really don't get them. I think this will probably turn out that way, but again, who knows at this point.
August 20, 2008 4:42:28 PM

Technology advances at an incredible rate, so something like this is bound to happen sooner or later. The question is, will it be available to consumers?
August 20, 2008 4:57:17 PM

It will be a breakthrough if it actually works. And if it does, then it's a fight to see who (Nvidia or AMD) will buy the company out, use its 100% working technology, and from there possibly dominate the market.
August 20, 2008 5:10:02 PM

It certainly looks real and I have been hoping for something like this.

As for slapping someone in the face, I agree that it's a slap in the face for nVidia, but not AMD/ATI. nVidia is the only one restricting their multicard platform to their chipsets. ATI (and by association AMD) knows that the enthusiast platform is Intel. ATI made the move to allow CrossFire to run on Intel chipsets before AMD bought them, and to AMD's credit, they've done nothing to disturb that relationship. So as I see it, ATI loses nothing by this new technology: they will still get to sell two or more cards while not losing out on a chipset sale, as they weren't selling those anyway. If anything, it may simplify their driver development.

nVidia, on the other hand, will lose the only selling point for their chipsets. I am betting that anyone wanting SLI with an Intel CPU would prefer it if it worked on an Intel chipset. I know I would. nVidia's track record for Intel chipsets isn't that great; for some reason they have never gotten a SATA driver right on the first try. That, and you could heat your house with their chipsets.

Also, even though Lucid Logix was financed by Intel, I don't see why this controller couldn't be used on an AMD platform. So one could assume that nVidia chipsets for AMD systems would become redundant as well.

I think this is a very promising idea. A fresh idea couldn't hurt; you never know, this could prove to be more efficient than either ATI's or nVidia's approach.

Lastly, you can bet this plays right into Intel's hand (Larrabee). ATI allows their own cards to work on Intel boards, but their drivers only support their own cards. With Larrabee coming, this technology will give Intel a multicard platform without any R&D on their part. No need to reverse engineer SLI or CrossFire; just use a third-party hardware/software solution.
August 20, 2008 5:31:33 PM

techgeek said:

Lastly, you can bet this plays right into Intel's hand (Larrabee). ATI allows their own cards to work on Intel boards, but their drivers only support their own cards. With Larrabee coming, this technology will give Intel a multicard platform without any R&D on their part. No need to reverse engineer SLI or CrossFire; just use a third-party hardware/software solution.


I agree with everything I didn't quote :bounce:  I guess at this point it is a correct analysis. About Larrabee, I still think it will flunk big time; they will only sell the cards to benchmarkers, the rest will rot. But yes, Intel funding this one is about getting all those enthusiasts (I mean us) onto Intel CPUs. No more CF or SLI silliness. Just the chip offloading the work and up we go!!

I still want to see this in practice, though; it seems too good to be true.
August 20, 2008 5:46:38 PM

Nvidia has licensed SLI on the 5 series chipset, in case you didn't already know.
August 20, 2008 6:03:56 PM

Hydra is an early prototype. There are lots of positives mentioned, but of course little about the negatives. From a traditional perspective, I would wonder about the buffers and how the chip-level communication happens. I think some tasks would be hard to divide with the hardware implementation they have. It would be easier in a DX9 situation than in a DX10 implementation, where things start getting much more complicated. Deferred rendering, tone mapping, specular lighting, shader AA & AA buffers: all of these I see as major issues.

The demos and info also make me wonder how the tasks are assigned. There's no clear split point, so a division of labour where A renders the beams and B renders the wall and floor would mean those items need to be clearly defined. It sounds like the role of Lucid's software and hardware is to act as a pre-GPU assembler/scheduler; however, without shared resource pools, that makes some tasks very difficult, and for GPU 1 and GPU 2 to communicate would be very bandwidth intensive (edit: especially the add-in version). It would also require a lot of tweaking to make the assembler efficient for new games, so once again you would need 'Lucid Optimized' titles, like 'Xfire/SLI-ready', to get the full benefit.
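
To make the scheduling problem concrete, here's a toy sketch of what a pre-GPU assembler/scheduler might look like (pure speculation on my part, not Lucid's actual design; the draw calls, costs, and names are all made up):

    # Toy model of a pre-GPU scheduler: hand each draw call to the
    # least-loaded GPU, but pin dependent calls to the same GPU so no
    # cross-GPU copy is needed. Everything here is invented.

    def schedule(draw_calls, num_gpus=2):
        load = [0.0] * num_gpus              # estimated busy time per GPU
        owner = {}                           # draw-call id -> GPU index
        plan = [[] for _ in range(num_gpus)]
        for call in draw_calls:              # dicts: id, cost, depends_on
            dep = call["depends_on"]
            if dep is not None:
                gpu = owner[dep]             # dependency pins the call
            else:
                gpu = load.index(min(load))  # otherwise balance the load
            owner[call["id"]] = gpu
            load[gpu] += call["cost"]
            plan[gpu].append(call["id"])
        return plan, load

    calls = [
        {"id": "shadow_map", "cost": 3.0, "depends_on": None},
        {"id": "wall",       "cost": 2.0, "depends_on": None},
        {"id": "beams",      "cost": 2.0, "depends_on": None},
        {"id": "lighting",   "cost": 4.0, "depends_on": "shadow_map"},
    ]
    plan, load = schedule(calls)
    print(plan, load)  # [['shadow_map', 'lighting'], ['wall', 'beams']] [7.0, 4.0]

Notice how the one dependency chains two expensive passes onto the same GPU and unbalances the whole frame; that's exactly the problem I mean.
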
Also, they mention having different generations of cards doing the work, with a GF6800 and a GF9800 handling a task together; however, they do things like AF differently, let alone the DX generation differences. For the X1K -> HD series you have many more differences, and only a few similarities. As for mixing AMD & nV, you could only barely do that in the last generation, and this generation would be even trickier unless you change techniques so that the two become GPGPUs, IMO. They say DX10 and DX11 should be easier than DX9, but I think the exact opposite: from a hardware standpoint, and even from an API standpoint, the features in DX10, let alone DX10.1, pose a much greater problem for such a method without a drastic change to what they are doing.
Now ray tracing, however, I can see being much easier. But if you simply turn the GPUs into ray-tracing co-processors in OpenGL/CL or DX11, then you really wouldn't need the Lucid solution anyway, and performance should scale very linearly. All you would need is a CPU (or CPU/GPU), slave GPUs acting as SPUs, and then something to assemble and write the data to the output buffer, taking the role of the traditional ROP.
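
Back-of-envelope on the ray-tracing case, since rays split per pixel with almost no shared state. The timings here are invented, just to show the shape of the scaling:

    # Ray tracing divides per pixel, so N slave GPUs scale almost
    # linearly; only the final assembly (the traditional ROP role)
    # stays serial. All numbers are invented for illustration.

    def frame_time_ms(gpus, trace_ms=100.0, assemble_ms=2.0):
        return trace_ms / gpus + assemble_ms

    for n in (1, 2, 4, 8):
        t = frame_time_ms(n)
        print(f"{n} GPU(s): {t:.1f} ms/frame, {1000 / t:.0f} fps")
    # 1 -> 102.0 ms, 2 -> 52.0 ms, 4 -> 27.0 ms, 8 -> 14.5 ms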

http://www.pcper.com/article.php?aid=607
http://www.pcper.com/article.php?aid=607&type=expert

Sounds great, but I'm very skeptical, especially since the person providing the details at IDF sounds more like a PR guy than a technical person, making difficult tasks sound like a simple division of labour, like the part where they say: "Maybe 5 tasks to 1 or something like that; the results are then combined by the HYDRA chip and sent to a single GPU for output." Very loosey-goosey, and a lot like the promise of SuperTiling before they actually tried to implement it in games more complex than the very closed environment of professional flight sims.

Right now I'm very skeptical, but it will be interesting if they ever provide more details on how they do the complex stuff.

Oh Jebus, it's not only going to be offered as a MoBo solution but also as an add-in card (so it would not be limited to just Intel, etc.):
http://www.lucidlogix.com/technology/technologies.html
IMO this would add even more latency & bandwidth concerns, since it would have to share the chipset PCIe lanes 4 ways, plus whatever CPU communication is required. That doesn't sound good at all.
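
Some rough math on why the shared lanes worry me, assuming PCIe 2.0 at ~500 MB/s per lane per direction and an x16 link split four ways (my assumptions, nothing Lucid has published):

    # Rough feasibility math for the add-in-card version: four GPUs
    # sharing the chipset's x16 link, so x4 (~2 GB/s) each.

    lanes_total = 16
    gpus = 4
    mb_per_lane = 500.0                            # PCIe 2.0, one direction
    bw_per_gpu = lanes_total / gpus * mb_per_lane  # 2000 MB/s per GPU

    frame_mb = 1920 * 1200 * 4 / 1e6               # 32-bit colour buffer, ~9.2 MB
    copy_ms = frame_mb / bw_per_gpu * 1000
    print(f"{copy_ms:.1f} ms just to move one finished frame")  # ~4.6 ms
    # At 60 fps the whole budget is 16.7 ms per frame, so shuffling
    # partial results between GPUs every frame eats a big slice of it.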
August 20, 2008 6:11:18 PM

radnor said:
About Larrabee, I still think it will flunk big time; they will only sell the cards to benchmarkers, the rest will rot.


Well, I think Larrabee (a shrunken version of it) has a lot of potential for laptops, but we'll wait and see how that turns out.
I'm optimistic and could see myself getting one if it pans out the way I hope; otherwise it will be a tough sell. Still, as long as they can support DX10-11 and price it attractively enough, they'll sell a ton. Even if it fails, it'll likely do brisk sales in the first few weeks while people figure out the potential. After that, though, IMO it'll come down to features and performance per price, just like all the rest.
August 20, 2008 6:27:48 PM

They do cross-reference the memory of each GPU, but aren't we still talking lag here? There has to be a certain amount of latency, or added latency, regardless of their claims.
August 20, 2008 6:33:13 PM

Oh yeah, a ton of latency.

Wasn't sure about the memory component, especially since their statements conflict with their own process map.

But to me the issue would be the dependent situations. SFR often doesn't work because of this, which is why you must use AFR; now splitting the workload further into subcomponents just seems to amplify that problem.
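
A toy timing model of why one dependent pass kills the SFR split (all timings invented):

    # Two GPUs split a frame top/bottom, but both halves sample a
    # reflection texture that has to be rendered first and copied
    # to the second GPU. The shared pass can't be split.

    reflect_ms = 4.0   # render-to-texture pass needed by both halves
    half_ms = 5.0      # each GPU's half of the main pass
    copy_ms = 2.0      # shipping the reflection texture to GPU 2

    single_gpu = reflect_ms + 2 * half_ms        # 14 ms
    sfr_2gpu = reflect_ms + copy_ms + half_ms    # 11 ms
    print(f"speedup: {single_gpu / sfr_2gpu:.2f}x")  # ~1.27x, nowhere near 2x
    # AFR dodges this by giving each GPU a whole frame, which is
    # why everyone falls back to it.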

Anywhoo, I'm going on lunch, I'll think it over there, but it's looking very difficult and very slow IMO.
August 20, 2008 6:38:47 PM

Mine too. It may be good with comparable RAM; using an older card with a newer card, it would show some gains. Otherwise, it's too slow.
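
Quick math on the mixed-card case, taking the "maybe 5 tasks to 1" figure quoted earlier at face value (my arithmetic, not Lucid's):

    # If the fast card has 5x the throughput of the slow one and the
    # chip splits work 5:1, the ceiling is modest even before any
    # scheduling or copy overhead. Rates are relative throughput.

    fast, slow = 5.0, 1.0
    print(f"ideal gain over the fast card alone: {(fast + slow) / fast:.0%}")  # 120%
    # So a 5:1 pairing buys at most +20%; a closer 5:4 pairing would
    # buy up to +80%:
    print(f"5:4 pairing: {(5.0 + 4.0) / 5.0:.0%}")  # 180%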
August 20, 2008 8:11:41 PM

techgeek said:
As for slapping someone in the face, I agree that it's a slap in the face for nVidia, but not AMD/ATI.


Well, I meant it would be a slap in the face to both, because Lucid Logix will have done something that neither AMD nor Nvidia could achieve with their own hardware. But like you, I am very skeptical. IDF seems to be more PR and less tech.
August 21, 2008 12:23:56 AM

The ExtremeTech article seemed pretty optimistic at least. They also did mention that cards from the same generation should be used, i.e. DirectX 10 cards. Combining a GTX 260 and an 8800 GT should work well, according to the article.
August 21, 2008 4:59:52 PM

So, if you put both of these cards into a system, do you think games could run faster than with just one?
I guess you can't call it CF or SLI. It would be other?

Looks like this may be very possible in the not-too-distant future...
http://anandtech.com/tradeshows/showdoc.aspx?i=3379
August 21, 2008 5:17:17 PM

I don't know about pairing different cards; driver nightmares, if you ask me.

Very interesting read though, can't wait to see how it develops.
August 21, 2008 5:38:34 PM

I sincerely hope they can get it to work. I was just reading about it earlier, here:
http://www.dailytech.com/Chipmaker+Hydras+Stunning+Work+May+Render+CrossFire+SLI+Obsolete/article12719.htm

They will need to work with Microsoft and find a way to have both AMD and nVidia drivers installed at the same time. That is currently impossible, according to the DailyTech article.
If Microsoft doesn't want to fix it, or can't, then these guys might have to develop a driver to replace both.

August 21, 2008 5:44:39 PM

I read this quite a while ago, and I don't think many people will be up for it... sounds like the scaling will be terrible.
August 21, 2008 5:44:45 PM

The following topic has been merged with this one by TheGreatGrapeApe:
  • NVIDIA 280GTX & ATI4870 in One System?!?
  • The title of this topic has been edited by TheGreatGrapeApe
August 21, 2008 5:47:42 PM

^

I wonder where the thread went! When I clicked on it, it said it doesn't exist! lol
August 21, 2008 5:48:25 PM

But I think that in the future, once it has successfully gone into mass production, someone is going to crack it so mixed brands can work together.
August 21, 2008 6:09:24 PM

aevm said:

They will need to work with Microsoft and find a way to have both AMD and nVidia drivers installed at the same time. That is currently impossible, according to the DailyTech article.


I think a lot of people are confusing 'impossible' with impractical and unlikely.

Quote:
If Microsoft doesn't want to fix it, or can't, then these guys might have to develop a driver to replace both.


Exactly, although, who needs Microsoft? :D  (Yeah, we wish! :fou:  )
The thing DailyTech forgets (they do that a lot) is that if Lucid is doing the front-end interface anyway, then they wouldn't need to hack the OS so much as the drivers, and if they got permission from AMD and nV (not likely :pfff:  ), they could write a unified driver. So it's not impossible, just a lot of work... which is what the whole idea is: a lot of tricky work to make it work.
Even without permission, you could use AMD's open driver program on Linux to make a unified driver (when and if nV opened their drivers too); right now, though, you'd be stuck with R500 support, since they haven't opened R600+ yet. That would likely be an easier platform anyway, especially with the openness of OGL, but it's still not a marketing coup to say (WooHoo, 20% faster multi-GPU gaming... on LINUX games :sleep:  ).

I still want to see more than what is essentially a PR "hey, this idea is neat and this is what we think" look at the technology.
Too much of this reminds me of the promise of SuperTiling, which was supposed to scale great with many more GPUs (E&S had support for 32 in their SIM systems) and support all 3D applications;
Wil Harris, Bit-Tech UK:
"ATI, whilst later to market than its rival, appears to have put some serious thought into remedying the deficiencies within SLI."
"Undoubtedly the biggest selling point for Crossfire is the universal game support offered by super-tiling, which could prove outrageously popular with a wide range of gamers."


But in real life it was buggy as heck, and everyone defaults to AFR, because it's easier to blend all the features in a single frame and render every other frame than to try to use scissor and supertiling to match dependent components/targets.
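
For reference, the tiling itself is the trivial part; the pain is every pass that reads across tile borders. A minimal sketch of the checkerboard assignment (tile size made up):

    # Supertiling deals fixed-size screen tiles to the GPUs in a
    # checkerboard, which balances load well. But any pass that
    # samples a neighbouring tile (blur, tone map, AA resolve)
    # crosses a GPU boundary, which is exactly where it got buggy.

    TILE = 64  # pixels per tile edge; made-up figure

    def tile_owner(x, y, num_gpus=2):
        col, row = x // TILE, y // TILE
        return (col + row) % num_gpus  # alternate like a checkerboard

    print(tile_owner(0, 0), tile_owner(64, 0), tile_owner(64, 64))  # 0 1 0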