
MSI Big Bang Fuzion: Pulling The Covers Off Of Lucid's Hydra Tech

Hydra 200: An Evolved ASIC

When Lucid first began showing off what it was working on, the company was using Hydra 100-series hardware—manufactured on a 130nm process, limited to PCI Express 1.1 signaling rates, and rated for 3.5W power consumption.

The LT24102—Lucid’s highest-end ASIC (in a family of three Hydra 200-series SoCs)—is a second-gen part compatible with PCI Express 2.0, manufactured at 65nm, and rated for up to 5.5W. The ASIC’s 48 PCIe lanes allow it one x16 upstream port and two x16 downstream ports, one x16 and two x8s, or a quartet of x8 connections. An embedded 300 MHz RISC processor with 64KB instruction cache and 32KB data cache exercises control over the device’s switch port.
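
To put those lane splits in perspective, here's a quick back-of-the-envelope sketch (illustrative only, not anything from Lucid) that checks each advertised downstream port configuration against the chip's 48-lane budget, assuming a single x16 upstream link in every case:

```python
# Illustrative only: check the LT24102's advertised port splits against its
# 48-lane budget, assuming one x16 upstream link in each configuration.

TOTAL_LANES = 48
UPSTREAM_LANES = 16

downstream_configs = {
    "two x16":         [16, 16],
    "one x16, two x8": [16, 8, 8],
    "four x8":         [8, 8, 8, 8],
}

for name, ports in downstream_configs.items():
    used = UPSTREAM_LANES + sum(ports)
    status = "fits" if used <= TOTAL_LANES else "over budget"
    print(f"{name:>16}: {used}/{TOTAL_LANES} lanes ({status})")
```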

Hardware, Meet Software

Before you’re able to utilize the Hydra engine you have to install Lucid’s driver software, which is currently evolving in a notable way. It used to be that every time ATI or Nvidia updated their own drivers, Lucid would have to qualify them and iron out any new glitches introduced by either vendor. Clearly, this would have been an ongoing (and compounding) support nightmare for Lucid’s engineers.

Enabling Hydra is ridiculously easy through the driver's control panel.

But with the company’s most recent driver drop, version 1.4, a reshuffling of where the Hydra engine exists in software means it’s no longer necessary to sweat the Catalyst or GeForce version you’re using.

That’s not to say the game, API, and operating system compatibility stories have simplified at all:

  • You’re still limited to DirectX 9 and DirectX 10.
  • You’re still limited to Windows Vista (32- and 64-bit) and Windows 7 (32- and 64-bit). Moreover, X-mode (running an ATI and an Nvidia card in the same machine) is limited to Windows 7, which gives you the ability to install multiple graphics drivers concurrently.
  • You’re still subject to Lucid’s own game testing. According to the company, many titles work right out of the box. Others require specific optimization in its driver. This is perhaps the biggest challenge facing Lucid in making Hydra a transparent technology for gamers to enjoy. Not only do the hardware vendors have to work out the kinks when a new title is launched, but then Lucid has to do the same thing.


Depending on the graphics card configuration you’re running, there are different lists of games qualified through QA. For example, in driver 1.4.1, Lucid presents a list of 42 different games validated on all five of its available hardware combinations. An additional 22 are supported in N- and A-modes (but not X-mode). Nine others work in N-mode, and five work in A-mode. One of the things we’ll be testing today is Hydra’s compatibility. We’ve recently upgraded our benchmark suite with newer games, so it’ll be a challenge for Lucid, to be sure.

What if your favorite new game isn't one of the ones qualified to run acceptably? Does that mean you're out of luck? Not necessarily. You can manually add the game to the driver control panel, which will turn Hydra on for that title. Here's the breakdown (also sketched in code after the list):

  1. If Hydra is disabled on the Fuzion board (through the control panel or system tray icon), any game you play will run on a single GPU.
  2. If Hydra is enabled and the game is not on the control panel’s list of validated/manually-added titles, it’ll run on a single GPU.
  3. If Hydra is enabled and the game is on the list, it’ll run on multiple GPUs and (hopefully) realize a speed-up. If you added the game manually, it could encounter problems given that it wasn’t validated.
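
The rules above amount to a short routing decision. Here's a minimal sketch of that logic, our own paraphrase written for illustration rather than anything taken from Lucid's driver:

```python
# Our own paraphrase of the rules above, for illustration -- not Lucid's driver code.

def hydra_routing(hydra_enabled: bool, validated: bool, manually_added: bool) -> str:
    """Return how a title is expected to run on the Fuzion board."""
    if not hydra_enabled:
        return "single GPU (Hydra disabled)"
    if not (validated or manually_added):
        return "single GPU (title not on the control panel's list)"
    if manually_added and not validated:
        return "multiple GPUs (unvalidated -- may encounter problems)"
    return "multiple GPUs (validated -- expect a speed-up)"

# Example: a game you add yourself does run on multiple GPUs, but at your own risk.
print(hydra_routing(hydra_enabled=True, validated=False, manually_added=True))
```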

The X-Factor

Perhaps Hydra’s sexiest selling point is the ability to augment your once-fastest Nvidia-based graphics card with something faster from ATI. Sure beats hawking that $500 GeForce GTX 285 on eBay for $250 used, right? Well, there are a few things you’ll need to keep in mind before assuming Radeons and GeForces get along.

Most important is the obvious: you’re using dissimilar architectures from competitors who use differentiation to sell more GPUs. Mixing them gets you the lowest common denominator. Lucid seems to be full of smart engineers, but they’re not magicians. They can’t make a Radeon HD 5870 accelerate PhysX or a GeForce GTX 260 support DirectX 11. Instead, you have to give up both. You’ll see in the benchmarks that we weren’t able to achieve PhysX acceleration as long as an ATI card was installed, and we weren’t able to run the latest S.T.A.L.K.E.R.: Call of Pripyat test with DX11 lighting while an Nvidia board was present.
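
If it helps to picture that lowest common denominator, think of it as a plain intersection of feature sets. The sketch below is purely illustrative; the per-card feature lists are assumptions drawn from the examples in this paragraph, not complete specifications:

```python
# Purely illustrative: mixing vendors in X-mode leaves only the features both cards share.
# Feature lists are assumptions based on the examples above, not complete specs.

radeon_hd_5870 = {"DirectX 9", "DirectX 10", "DirectX 11"}
geforce_gtx_260 = {"DirectX 9", "DirectX 10", "PhysX"}

usable_together = radeon_hd_5870 & geforce_gtx_260
print(sorted(usable_together))  # ['DirectX 10', 'DirectX 9'] -- DX11 and PhysX both drop out
```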

There’s another caveat here that might temper your enthusiasm a bit: Lucid recommends mixing non-identical cards with performance profiles as close as possible in order to maximize scaling. Match too fast a board with something too slow and you’ll see minimal gain, if any. That might be a tough pill to swallow for upgraders who aren’t necessarily looking to jump sideways from a GeForce GTX 260 to, say, a Radeon HD 4890.
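
A rough way to see why is to assume, purely for illustration, that the best Hydra can do is split work in proportion to each card's throughput. Then the pair's ceiling is roughly the sum of both cards, and the gain over the faster card alone shrinks as the gap widens. The frame rates below are made-up numbers, not measurements:

```python
# Idealized model with made-up numbers, not measured data: assume the pair's
# ceiling is the sum of both cards' throughput, then compare against running
# the faster card by itself.

def best_case_speedup(fast_fps: float, slow_fps: float) -> float:
    """Upper-bound speed-up of a mixed pair versus the faster card alone."""
    return (fast_fps + slow_fps) / fast_fps

print(round(best_case_speedup(60, 55), 2))  # 1.92 -- closely matched cards
print(round(best_case_speedup(60, 20), 2))  # 1.33 -- mismatched cards leave little headroom
```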


Comments
  • Maziar, January 7, 2010 6:19 AM
    Nice article. It's very useful for users who are upgrading, because current SLI/CF requires two identical cards, but with Lucid you can use different cards as well. It still needs more optimization and has a long way to go, but it looks very promising.
  • Von_Matrices, January 7, 2010 6:26 AM
    I'm highly doubtful of the Steam hardware survey. I think it is underestimating the number of multi-GPU systems. I, for one, am running 4850 CrossFire, and Steam has never detected a multi-GPU system when I was asked to take the hardware survey. The 90% Nvidia SLI figure also seems a little too high to me.
  • Bluescreendeath, January 7, 2010 6:27 AM
    The CPU scores for the 3DMark Vantage tests are way off. You need to turn off PhysX when benchmarking the CPU or it will skew the results...
  • shubham1401, January 7, 2010 6:28 AM
    Nice concept...
    A long way to go though.
  • Bluescreendeath, January 7, 2010 6:34 AM
    http://www.tomshardware.com/reviews/msi-fuzion-lucidlogix-hydra,2526-6.html
  • Bluescreendeath, January 7, 2010 6:35 AM
    So far, the best scaling has been in Crysis. The 5870/GTX285 combo benchmarks looked very promising.
  • cangelini, January 7, 2010 6:46 AM
    Bluescreendeath: "The CPU scores for the 3DMark Vantage tests are way off. You need to turn off PhysX when benchmarking the CPU or it will skew the results..."

    It's explained in the analysis ;-)
  • kravmaga, January 7, 2010 6:57 AM
    "But when you spend $350 on a motherboard, you're using graphics cards that cost more than that. If you're not, you aren't doing it right"

    Quoted from the last page; I disagree with that statement.
    There are plenty of people in situations where using this board is a better investment in performance per dollar. This is all the more relevant because this technology will undoubtedly find its way into cheaper boards and budget-oriented setups, where it will make all the sense in the world to bench it using mid-range value parts.

    I, for one, would have liked to see what using GTX 260s and 5770s would look like in this same setup. As is, this review leaves many questions unanswered.
  • SpadeM, January 7, 2010 7:15 AM
    Well, the review does give an answer, in the form of: it's better to run an ATI card for rendering and an Nvidia card for physics and CUDA (if you're into transcoding/accelerating with CoreAVC, etc.) with Windows 7 installed.
    Or at least that is the conclusion I'm comfortable with at the moment.
  • HalfHuman, January 7, 2010 7:18 AM
    I also don't agree that a person who buys this board will necessarily go for the highest-priced video cards. Maybe some will, but not all; there will be more who try to keep using their older cards.

    I also understand why you paired the 5870 with Nvidia's greatest. There is a catch, however... the Lucid guys have not had the chance to play with the 5xxx series much, so you may be evaluating something that is not quite ripe. I guess the 4xxx series would have been a better choice to see how well the technology works. Couple that with games that are not yet certified for Lucid, and with how much complexity this technology has to overcome... I think this is a magnificent accomplishment on the Lucid team's part.

    I also think that for this technology to become viable it will have to come down in price and show up in much cheaper boards. For the moment, the "experimenting phase" is being done at the expensive end of the spectrum. I saw some early comparisons and the scaling was beautiful. I know that the system was put together by Lucid... but that is fine, since it was only a demo to show that it works. Judging by how fast these guys are evolving, I guess they will go mainstream this year.
  • cangelini, January 7, 2010 7:20 AM
    Ah, but if it doesn't offer a better investment in performance per dollar, as is the case now, that statement stands up =)
  • Andraxxus, January 7, 2010 8:33 AM
    I hope that the guys at Lucid will have a chance to continue with this wonderful technology. Not long ago, mixing ATI with Nvidia was unthinkable, and many people asked on forums if they could CrossFire or SLI mixed boards. So I think this is something that should have the support of the people who buy GPUs, so that we can end this proprietary-technology farce (see PhysX). I'm not saying that PhysX is bad, but the restrictions are bad.
    Well, in the end I just hope that they won't be bought by a rich so-called "competitor" that will can the product so it can keep sucking money from buyers just for minor improvements or rebranding.
  • juanc, January 7, 2010 9:50 AM
    I think this will really pay off if they develop a driver that can "get the most out of each card" by rendering with each card's best features: for example, render the 3D scene with the GeForce and apply the AA and the colouring with the ATI. Balance using what's best on each card.

    Then I'll get one middle-of-the-pack ATI card and one middle-of-the-pack Nvidia card, run what runs best on each, or combine the best features of each card together.
  • Yuka, January 7, 2010 11:52 AM
    Nice review, Mr. Chris, sharp as usual.

    I agree with zipzoomflyhigh, but this chip has a lot of potential. It needs some polishing, or help from ATI and nVidia, to get better. If they can somehow make it work (ATI and nVidia supporting Hydra), it would boost their sales by not being "platform bound" and by leasing their multi-GPU tech to third parties. I can dream a little, right? lol.

    Anyway, very good news, and hopefully neither nVidia nor ATI will bully this tech.

    Cheers!
  • socrates047, January 7, 2010 12:18 PM
    Nvidia has 'x', AMD/ATI has 'y', and Intel brings 'z'.
    Hydra produces 'xyz'.
    That's all value to me... I don't know about you guys.
  • thackstonns, January 7, 2010 1:08 PM
    Here is why I like this technology: I can keep my 4870 and upgrade to a multi-card system without having to buy two more graphics cards. So I could add a 5870 and, instead of moving the 4870 to a different computer, keep it. Here is where I have a problem, though: physics will suck because of Nvidia's restrictions. I'll have a hell of a system that runs Crysis and looks good, but since the rest of the games are console ports, I'll be wasting money to play crap-quality games.
  • noob2222, January 7, 2010 1:45 PM
    Nice read, but I question the actual usable titles with this Hydra. Testing with games that aren't supported doesn't show what the board can do; it only shows what it can't.
    Using five or six titles that aren't officially supported makes this board and technology appear to be an epic failure. It would be nice to know what it does when a game is actually supposed to work, or what happens when the drivers allow these games to work in the future.
  • TeraMedia, January 7, 2010 1:47 PM
    The problem I have with the product is that they are essentially replacing the GPU obsolescence schedule with the chipset obsolescence schedule. And their platform choice makes this particularly bad because while AMD makes an effort to keep their sockets backward-compatible, Intel seems to do the opposite. In fact, Intel now seems hell-bent on segmenting the platform space as much as possible while constraining the product lifecycle as much as possible. Want to reuse your C2Q or upgrade to a 6-core (gulftown, is it?) CPU on this mobo? Good luck with that. With socket 1156, Intel has effectively forced you to buy a new mid-range CPU and constrained you to the mid-range market. If past behavior is any predictor of future behavior, I fully expect the next major generation of Intel CPUs (e.g. 3+ yrs out) not to be compatible with 1156. How long do you think Intel will make advancements on 1156-compatible CPUs?

    So, yes, you can mix GPUs from different generations and even from different vendors. But by the time it even makes sense to do that twice, you'll need to upgrade your whole MB to keep a balanced CPU-GPU system. If the X-mode, A-mode and N-mode scaling were more seamless and effective on the latest HW, and the cost were more in-line with other 1156-socket MBs, I could see this MB making some sense. But given that you need to spend an extra $150+ for this Mobo, I'd rather put that $150 towards the second card or an upgraded card with a longer life span before obsolescence.
  • memeroot, January 7, 2010 2:09 PM
    Big fan of the concept, and $150 isn't too much for something a bit fun...
    However, it needs to be X58, and what is the overclocking ability of the board?
    Also, does it have the same audio advantages?
  • xer0, January 7, 2010 2:12 PM
    So what happens when Nvidia (which already has with PhysX) or ATI decides to make drivers (or even firmware) that look for the competitor's cards (or lower-end, same-manufacturer cards) and say, "Sorry, we're being douchebags and turning off functionality and performance features"?