ATI + Nvidia: The Future is Now!

January 7, 2010 6:23:09 AM

I can't wait to get my hands on one of these Lucid Hydra chips. Combining ATI and Nvidia cards in SLifired sounds sweet!


http://www.tomshardware.com/reviews/msi-fuzion-lucidlog...


Of course, it still needs to work out some kinks, such as the poor Stalker benchmarks and issues with 2xGTX285s in Crysis. But the 5870/GTX285 Crysis benchmark looked promising. :D 

January 7, 2010 7:49:58 AM

Bluescreendeath said:
I can't wait to get my hands on one of these Lucid Hydra chips. Combining ATI and Nvidia cards in SLifired sounds sweet!


http://www.tomshardware.com/reviews/msi-fuzion-lucidlog...


Of course, it still needs to work out some kinks, such as the poor Stalker benchmarks and issues with 2xGTX285s in Crysis. But the 5870/GTX285 Crysis benchmark looked promising. :D 


In my opinion the Hydra is a complete load of ****. X-Fire has better results that will only get better with good drivers. I don't see any benefits for people who already have X-fire configurations or plan to get one. If you have an unused card then go ahead, use Hydra on a crappy MSI mobo.

My .02
January 7, 2010 8:38:29 AM

Quote:
If you have an unused card then go ahead, use Hydra on a crappy MSI mobo



MSI is a good brand. I'd prefer them now over Asus.

January 7, 2010 8:47:56 AM

Lucid has its advantages, and the most important thing is that it allows users to use different cards with each other, but it has a long way in front of it and lots of things to do to become better.
January 7, 2010 11:26:31 AM

wh3resmycar said:
msi is a good brand. i'd prefer them now over asus.


Probably, I haven't tried it myself. I base my opinion on overclocking benchmarks, limitations in OC, forums, and word of mouth. Personally I like Gigabyte. Anyhow, I asked what the benefits are, but I see SLI and X-Fire being better between same-model cards.
January 7, 2010 2:31:23 PM

Finished reading the articles on Anand and Guru3d about Hydra. This is a big sack of fail. X-Fire has better scaling all over the place.
January 7, 2010 3:37:36 PM

It's clear the support simply isn't around yet. Maybe in a year or two, but right now, Hydra isn't ready.
January 7, 2010 7:55:12 PM

hallowed_dragon said:
Finished reading the articles on Anand and Guru3d about Hydra. This is a big sack of fail. X-Fire has better scaling all over the place.



Patience.


You will learn patience.



A completely new approach to multi-GPU rendering, they demonstrate some improvements, they demonstrate improvements in multi-vendor setups...


Yet you diss them for not having a faultless driver setup.... c'mon, get real.
January 7, 2010 7:59:06 PM

They need a more powerful processor to run the algorithms, a 600MHz chip? I was thinking they could use something along the lines of a 1GHz Snapdragon.
January 7, 2010 8:03:42 PM

gamerk316 said:
It's clear the support simply isn't around yet. Maybe in a year or two, but right now, Hydra isn't ready.

Agreed, I like the idea and think it has potential but it just needs a bit more R & D.
January 8, 2010 8:09:48 AM

Amiga500 said:
Patience.


You will learn patience.



A completely new approach to multi-GPU rendering, they demonstrate some improvements, they demonstrate improvements in multi-vendor setups...


Yet you diss them for not having a faultless driver setup.... c'mon, get real.


Ok, patience. First this is sloppy R&D. The easiest approach would be to first design the Hydra to surpass the X-Fire and SLI approaches providing better scalability. They failed hard. After that they began trying to get both company cards to work together. They failed again. I just don't understand why they chose to take that road. I would have provided a solution for the X-Fire and SLI scalability. After that worked about 85% I would have started to mix cards, not all at the same time. That is like shooting for 2 rabbits at once.
January 8, 2010 10:53:13 AM

Mousemonkey said:
Agreed, I like the idea and think it has potential but it just needs a bit more R & D.


Yeah, I'm not opposed to it, but neither the support nor the performance is there at this point. And BTW, I REALLY wanted to see an OpenGL-based game (even an old one), just to see if Hydra would even WORK (this will become an issue as Doom4/id's new engine comes out later this year).
January 8, 2010 11:41:19 AM

hallowed_dragon said:
Ok, patience. First this is sloppy R&D. The easiest approach would be to first design the Hydra to surpass the X-Fire and SLI approaches providing better scalability. They failed hard. After that they began trying to get both company cards to work together. They failed again. I just don't understand why they chose to take that road. I would have provided a solution for the X-Fire and SLI scalability. After that worked about 85% I would have started to mix cards, not all at the same time. That is like shooting for 2 rabbits at once.


SLI and Crossfire have taken a long time to get where they have. Give Hydra some time.
January 8, 2010 11:51:48 AM

This Hydra really sucks... 5870 + 285 < 5870 alone, lol.
January 8, 2010 12:19:10 PM

Even nVidia and ATI have both said that what LucidLogix is trying to accomplish with their Hydra chip is EXTREMELY difficult. Give them time to perfect it; the idea shows a LOT of promise.

The fact they have it working at all is amazing in itself, it will be even more amazing when they get it working optimally.
January 8, 2010 12:41:31 PM

hallowed_dragon said:
Ok, patience. First this is sloppy R&D.


You don't have the slightest notion about R&D. If you did, you would know it is always a work in progress.


hallowed_dragon said:

The easiest approach would be to first design the Hydra to surpass the X-Fire and SLI approaches providing better scalability. They failed hard.


And of course, that would be the easiest way for them to sell it... by taking on ATi and Nvidia head on. :lol: 



hallowed_dragon said:
I would have provided a solution for the X-Fire and SLI scalability.


And you would have gone bankrupt in the process.

Well done.
January 8, 2010 12:59:27 PM

Amiga500 said:
You don't have the slightest notion about R&D. If you did, you would know it is always a work in progress.




And of course, that would be the easiest way for them to sell it... by taking on ATi and Nvidia head on. :lol: 





And you would have gone bankrupt in the process.

Well done.


Actually I have first-hand experience with R&D, seeing that I am a developer mainly working on complex medical software. And I can tell you that what they did is sloppy. If something like this got to our QA, heads would fall hard.
Amiga500, yes I would take nVidia and ATI head on, but so did Hydra, yet they failed because they tried to do everything on a mediocre scale. Instead of tackling just a part of the problem they went out and gave a shot at everything. If they had specialized in resolving some of the scalability of SLI and X-fire, I bet they would have had more success.
January 8, 2010 1:14:46 PM

hallowed_dragon said:
Actually I have first-hand experience with R&D, seeing that I am a developer mainly working on complex medical software. And I can tell you that what they did is sloppy. If something like this got to our QA, heads would fall hard.
Amiga500, yes I would take nVidia and ATI head on, but so did Hydra, yet they failed because they tried to do everything on a mediocre scale. Instead of tackling just a part of the problem they went out and gave a shot at everything. If they had specialized in resolving some of the scalability of SLI and X-fire, I bet they would have had more success.


No, they wouldn't have. It was a two-pronged problem.

X-fire and SLI already HAVE excellent scalability lately, upwards of 80-90% in some games, so having an extra 3% scalability or even 5-10% wouldn't matter all that much alone, they also had to tackle the problem of mixing and matching cards.

Mixing and matching in itself requires both solutions, it isn't enough to be able to use any combination of cards together, you have to be able to do so while maintaining scalability, you can't say, "Oh I can use an HD5970 and a GT295 together now but they only scale at 10%! Whoopie!" It HAS to be able to do both. Scalability needs to at least match SLI and X-fire and it STILL needs to be able to mix and match cards.

Stop oversimplifying a complex problem.
January 8, 2010 1:22:41 PM

RealityRush said:
No, they wouldn't have. It was a two-pronged problem.

X-fire and SLI already HAVE excellent scalability lately, upwards of 80-90% in some games, so having an extra 3% scalability or even 5-10% wouldn't matter all that much alone, they also had to tackle the problem of mixing and matching cards.

Mixing and matching in itself requires both solutions, it isn't enough to be able to use any combination of cards together, you have to be able to do so while maintaining scalability, you can't say, "Oh I can use an HD5970 and a GT295 together now but they only scale at 10%! Whoopie!" It HAS to be able to do both. Scalability needs to at least match SLI and X-fire and it STILL needs to be able to mix and match cards.

Stop oversimplifying a complex problem.


I don't oversimplify. I stated that they went and tried to tackle X-fire/SLI scalability and the mixing and matching of cards, the second problem being the hardest. This is like trying to get to the top of Mount Everest after one day of physical training. They should have approached the X-fire/SLI scalability and made it work regardless of games. X-fire/SLI scales so well in specific games only, where ATI/nVidia provided help to the developers or hardcoded configurations in the drivers. It is that simple. ATI/nVidia are not incredible in this approach, but Hydra and the way they constructed their software could have made a huge difference if they had approached the problem only from this side. Imagine 100% scalability with each card added. That would have been awesome, and I bet much easier to sort out than trying to get different models of ATI and nVidia cards to work together.
January 8, 2010 1:30:00 PM

For example: an approach that would have hidden the number of cards from the system would have been incredible. The Hydra chip would sit between the CPU and the cards and transform/transfer the information from the CPU to the GPUs so that each GPU would render a part of the image instead of the full image: each card rendering a 960x1200 image instead of 1920x1200, or 640x1200, etc. That would be really fast. Now I have oversimplified, but something like this would make me shell out some big $$ on multiple cards + a mobo with a Hydra chip.
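The splitting idea can be sketched in a few lines. This is a hypothetical illustration, not anything Lucid actually does: divide the frame width into strips sized in proportion to each card's relative performance, so a faster card gets a wider strip. The weights are made-up numbers, not real benchmarks.

```python
# Hypothetical sketch of the split-frame idea: divide a 1920-wide frame
# into horizontal strips proportional to each GPU's relative performance,
# so each card renders only its own strip. Weights are illustrative.

def split_frame(width, weights):
    """Return a (start_x, strip_width) pair per GPU, proportional to its weight."""
    total = sum(weights)
    strips, start = [], 0
    for w in weights[:-1]:
        strip = round(width * w / total)
        strips.append((start, strip))
        start += strip
    strips.append((start, width - start))  # last strip absorbs rounding error
    return strips

# Example: a faster card paired with a slower one (made-up 1.5:1 ratio).
print(split_frame(1920, [1.5, 1.0]))  # -> [(0, 1152), (1152, 768)]
```

The hard part, of course, is not the arithmetic but making both drivers accept and composite their partial frames, which is exactly what the thread is arguing about.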
January 8, 2010 2:56:42 PM

You have unrealistic expectations.

Like saying the Wright Flyer was rubbish because it couldn't go faster than a train.
January 8, 2010 3:03:44 PM

djcoolmasterx said:
You have unrealistic expectations.

Like saying the Wright Flyer was rubbish because it couldn't go faster than a train.


Best description so far :p 
January 8, 2010 3:08:46 PM

djcoolmasterx said:
You have unrealistic expectations.

Like saying the Wright Flyer was rubbish because it couldn't go faster than a train.


Wrong comparison. Hydra is only new in the way that it joins nVidia and ATI on the same board. That is it. In every other aspect it has X-fire and SLI to compete with, and it fails.
I expected Hydra to have good scaling when it came to same-model GPUs and medium scaling when it came to different models. It failed.
If you really want to be successful you bring to the table something better than the competitor. Hydra didn't.
January 8, 2010 3:16:31 PM

The Wright Flyer wasn't faster than a train at the time, so why bother? It was slower, but it could fly. It didn't do flight particularly well, but it was a start; it paved the way for new and better things.

As technology advanced, the plane surpassed the train.

Don't expect everything new to be better than what already exists; it just needs time to mature.
January 8, 2010 3:21:25 PM

djcoolmasterx said:
The Wright Flyer wasn't faster than a train at the time, so why bother? It was slower, but it could fly. It didn't do flight particularly well, but it was a start; it paved the way for new and better things.

As technology advanced, the plane surpassed the train.

Don't expect everything new to be better than what already exists; it just needs time to mature.


That was actually right... back in the 19th and at the beginning of the 20th century. Now we have to start having better expectations. Progress has to be very fast.
January 8, 2010 3:29:13 PM

The fact that such an ambitious project is actually functioning on some level, in a relatively short period of time, is proof to me that it should be given the room to grow and mature. I won't go out and buy one tomorrow, but that doesn't mean that I won't be paying attention to how it develops.

Wouldn't hiding the GPUs behind the Hydra require very complex and specific driver sets? As in, every title that used Hydra would require its own Hydra-specific drivers, which themselves would have to include a subset for every conceivable combination of hardware? It is a very intense theoretical pickle that I can't even begin to formulate a solution for, or else I am sure I would be applying to work for one of these corps.
January 8, 2010 3:31:04 PM

Another thing that made me less than optimistic about Hydra. In all the reviews I saw that the scaling differs depending on hardware and GAMES. Wtf. The only way to get perfect scaling and a true universal experience is to make the GPUs invisible for the CPU and OS. Get in between and make scalability independent from which model, game, OS, CPU, pudding you have. That is the way it should be. That would be innovative. All I see now are a lot of hardcoded configurations. The idea of Hydra is partially good, but I believe that the developers and the producers of the chip were under some contract that forced them to release this product too early.
January 8, 2010 3:32:48 PM

hallowed_dragon said:
That was actually right... back in the 19th and at the beginning of the 20th century. Now we have to start having better expectations. Progress has to be very fast.


Wow, that was kind of a pathetic retort. Progress that is fast for the sake of it will never achieve its potential. If progress is accelerated by high consumer demand, or advancing technology in other areas, it will grow based on meeting the needs of the demand, or the abilities of the tech. Creating a demand for faster development to CREATE these situations is a recipe for disaster. Let it be, and it shall be, if it is meant to be. If not, oh well, not like the CE world would be shocked by an upstart technology not making it to mass market proliferation.
January 8, 2010 3:35:58 PM

JofaMang said:
The fact that such an ambitious project is actually functioning on some level in a relatively short period of time, is proof to me that it shoulg be given the room to grow and mature. I won't go out and buy one tomorrow, but that doesn't mean that I won't be paying attention to how it develops.

Wouldn't hiding the GPUs behind the Hydra require very complex and specific driver sets? As in, every title that used Hydra would require its own Hydra-specific drivers, which themselves would have to include a subset for every conceivable combination of hardware? It is a very intense theoretical pickle that I can't even begin to formulate a solution for, or else I am sure I would be applying to work for one of these corps.


Not quite. The GPUs should only receive data as if the CPU sent it, not the Hydra chip. So the GPU would render the image as if the monitor is only 960x1200. That could be resolved by software independent of the GPU model.
I have another idea, but don't have enough time to detail it.
January 8, 2010 3:39:53 PM

JofaMang said:
Wow, that was kind of a pathetic retort. Progress that is fast for the sake of it will never achieve its potential. If progress is accelerated by high consumer demand, or advancing technology in other areas, it will grow based on meeting the needs of the demand, or the abilities of the tech. Creating a demand for faster development to CREATE these situations is a recipe for disaster. Let it be, and it shall be, if it is meant to be. If not, oh well, not like the CE world would be shocked by an upstart technology not making it to mass market proliferation.


Ok, maybe I was a little hard on the subject, but as I said in another post, I think the product was rushed to show people something working. For me what they showed dulled my enthusiasm. For others no. I said in this thread what I expected, even from an unfinished product. I may have too high expectations, but that is just me and nothing can change that.
January 8, 2010 4:02:03 PM

hallowed dragon, just give it time. Lucid just came out. When Xfire and SLi first came out, the scaling was so bad that nobody recommended it. I'm sure that within 1-2 years, it'll match the level of regular Xfire/SLi.
January 8, 2010 5:18:43 PM

Bluescreendeath said:
hallowed dragon, just give it time. Lucid just came out. When Xfire and SLi first came out, the scaling was so bad that nobody recommended it. I'm sure that within 1-2 years, it'll match the level of regular Xfire/SLi.


...If there is marketable demand for the tech, and if platform evolution doesn't eclipse its purpose. I, for one, was pretty impressed with intel's crossfire/sli-capable chipsets when they first came out, and recognised how they made certain choices (and the negatives associated with those choices) obsolete within their own market. Will Hydra move beyond a niche into mainstream desirability, as Xfire/Sli has (to the point that differentiation doesn't have to be a defining platform choice, as in x58/p55)?

Is hydra an answer to a question that has only been asked by those that don't understand what they are asking for?
January 8, 2010 6:12:11 PM

hallowed_dragon said:
Actually I have first-hand experience with R&D, seeing that I am a developer mainly working on complex medical software.


Balls you are. I think it's much more likely you're some kid who saw some software code a while back and thinks the rest of the world operates around the same principles.

Or else you are one helluva naive software engineer.



hallowed_dragon said:

And I can tell you that what they did is sloppy. If something like this got to our QA, heads would fall hard.


Really...

So when is the last time you tried to write new code for two new architectures and make them talk to each other simultaneously?

Never. Because the medical area is very low risk.

Do you know what the average coding rate for aerospace safety critical software is? About 1 line... PER DAY



hallowed_dragon said:

Amiga500, yes I would take nVidia and ATI head on, but so did Hydra, yet they failed because they tried to do everything on a mediocre scale.


Hydra did not take them head on.

In fact in concentrating on mixed GPUs, Hydra specifically avoided taking them head-on.


With regards to mediocre scale - you realise Lucid have no money coming in the door from Hydra? That is a lot of investment for no return. You can't keep doing that until you have perfection.



hallowed_dragon said:

Instead of tackling just a part of the problem they went out and gave a shot at everything. If they specialized in resolving some of the scalability of SLI and X-fire I bet they would have had more success.


As I said earlier, you'd have bankrupted yourself long before reaching sufficient performance levels for commercial success.

This approach gives them a massive selling point - "your old GPU is still of use when you upgrade if you get a mobo with a lucid chip in it". Rather than - "buy hydra, we can give you 95% the performance of a xfire/sli board if you buy two identical gpus"...

(oh, and ignore the fact that xfire & sli boards are cheaper... cos.... errr.... cos.... uhmmm....)
January 8, 2010 6:14:18 PM

hallowed_dragon said:
I don't oversimplify.


For some supposed software engineer - you sure are demonstrating absolutely no knowledge of software whatsoever.


You cannot have unmatched scaling without scaling of any description. They need good algorithms for splitting the work between ANY 2 GPUs, regardless of whether they are heterogeneous or homogeneous.


January 8, 2010 7:59:21 PM

hallowed_dragon said:
For example: an approach that would have hidden the number of cards to the system would have been incredible. The Hydra chip would get between the CPU and cards and transform/transfer the information from the CPU to the GPU so that each GPU would render a part of the image instead of the full image. Each card rendering 960x1200 images instead of 1920x1200, or 640x1200 images, etc. That would be really fast. Now I oversimplified, but something like this would make me shell out some big $$ in multiple cards + a mobo with a Hydra chip.


Wrong, sigh, you are oversimplifying.

In each render, there are many interdependencies. A truck on a 5870's render side might depend on the rays from a fire on the GTX 280's side.
Then the 5870 would either request the required info (the request travels a PCI-E lane, Hydra has to process it, it travels to the GTX 280, the 280 gives the info, and it travels back through Hydra again) or send back its half of the screen; when output, it will obviously look wrong, as the 5870's side is black due to the only light source (the fire) being on the GTX's side.

Hydra could avoid this by sending both cards the entire frame but having them render only 1 side, however this would probably result in horrible scaling, 50% would be hopeful.
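That "50% would be hopeful" worry can be put into a rough Amdahl-style model (a sketch with made-up numbers, not a claim about the real chip): if some fraction of each frame's work has to be duplicated on both GPUs, the speedup over a single card is capped accordingly.

```python
# Rough Amdahl-style model of the split-frame scaling worry above.
# 'shared' is the fraction of per-frame work (full-frame geometry,
# interdependent lighting, Hydra overhead) duplicated on both GPUs.
# It is an illustrative knob, not a measured number for Hydra.

def two_gpu_speedup(shared):
    """Speedup vs. one card when 'shared' work runs on both GPUs
    and the remaining work is split evenly between the two."""
    return 1.0 / (shared + (1.0 - shared) / 2.0)

for shared in (0.0, 0.3, 0.5):
    print(f"{shared:.0%} duplicated -> {two_gpu_speedup(shared):.2f}x")
```

With half the per-frame work duplicated the model gives about 1.33x, i.e. roughly the sub-50% gain from the second card feared above; only at zero duplication does it reach the ideal 2x.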
January 10, 2010 7:55:43 PM

Amiga500 said:
For some supposed software engineer - you sure are demonstrating absolutely no knowledge of software whatsoever.

You cannot have unmatched scaling without scaling of any description. They need good algorithms for splitting the work between ANY 2 GPUs, regardless of whether they are heterogeneous or homogeneous.


Ok. I expected that from an internet avatar. I don't care what you or anybody here has to say about my programming experience. I have worked in software development beginning with HR business software, and now I work for a diagnostics company. My experience is in ASP.NET (mainly working with C#), Windows services, Windows applications (C# + VB.NET) and database programming (mainly MS SQL Server, but also MySQL).
If you or anyone here think for one moment that something that is only partially working gets past QA in a professional environment, then you are dreaming.
Algorithms can be written for any possible hardware. It depends on the professionalism, experience and project management of the people working on the project. I don't know what the hardware for Hydra is, or what changes it implies in software, nor should I care because I don't work on the project, but those people did work on it, and this feels like they didn't have time to finish it.
I have expressed my opinion on the matter and you come and insult my programming experience. Good for you. Have a cookie. As I said, I am disappointed by what they have shown us, but that doesn't require you or anyone else to come down on me like I was some kind of blasphemer.
January 10, 2010 8:00:40 PM

sabot00 said:
Wrong, sigh, you are oversimplifying.


Yes... I kinda said that in my post. Of course there will be a need for a lot of data and image processing. But that is not impossible. Your guess is as good as mine about the 50% scaling. Someone with more knowledge in this area can shed some light on the matter.
January 11, 2010 4:54:11 AM

Bluescreendeath said:
hallowed dragon, just give it time. Lucid just came out. When Xfire and SLi first came out, the scaling was so bad that nobody recommended it. I'm sure that within 1-2 years, it'll match the level of regular Xfire/SLi.


Lucid did not 'just come out'. The Hydra 100 launched a year and a half ago, to a lot of bluster about 'above linear' improvements in performance and promises of no limits on graphics cards, chipsets or games, and they've only delivered part of what they promised.

It's a nice idea but FAR from major review news worthy, and not worth recommending to anyone to buy except for as a science experiment.

For those of you comparing it to the Wright Brothers, you're smoking something. It's not as ground-breaking, and its future is nowhere near as promising as flight was, since the future is not with this solution; it's a short-term fix, and if anything it's like a glider promising sustained flight. I wouldn't recommend it to people yet; it has promise, but they better get a move on or else development will pass them by.

January 11, 2010 10:07:31 AM

In an argument, you use extreme examples to argue your case more clearly. At no point did I say that Hydra is as ground-breaking as flight; that would be stupid.
January 11, 2010 10:38:47 AM

TheGreatGrapeApe said:
Lucid did not 'just come out'. The Hydra 100 launched a year and a half ago, to a lot of bluster about 'above linear' improvements in performance and promises of no limits on graphics cards, chipsets or games, and they've only delivered part of what they promised.

It's a nice idea but FAR from major review news worthy, and not worth recommending to anyone to buy except for as a science experiment.

For those of you comparing it to the Wright Brothers, you're smoking something. It's not as ground-breaking, and its future is nowhere near as promising as flight was, since the future is not with this solution; it's a short-term fix, and if anything it's like a glider promising sustained flight. I wouldn't recommend it to people yet; it has promise, but they better get a move on or else development will pass them by.


TGGA, I value your opinion quite highly so maybe you can explain this to me. I was under the impression that Hydra's solution, dividing work between the cards so one processes a Tree and the other a rock for example, would be the future of multi GPU scaling. You say that it is a short term fix, but it seems to be the exact right thing to do to me. Is there something better around the corner?

As for Lucid, if they were smart they would patent this if they haven't already, and put their efforts into wow-ing the market, and not worrying about mixing cards from ATI and nVidia together, so that when both ATI and nVidia decide to use the same approach with their respectively far greater budgets, staff, and experience Lucid can make a dime off of royalties and then focus on mixing the cards.
January 11, 2010 10:44:51 AM

Wouldn't ATI or nVidia complain about this?
Also are these boards available in the market?
January 11, 2010 2:40:22 PM

djcoolmasterx said:
In an argument, you use extreme examples to argue your case more clearly.


What, like you did with H_D? :pfff: 

This is no Wright Flyer, so don't evoke something that made great changes when this is a short-term fix. This is more like a cool glider just as the Wrights are developing their first planes, which within years outpaced all others (gliders, trains & automobiles with John Candy). It's a nice sidetrack, not the future, which I will go into more in my reply to AMW.

January 11, 2010 3:03:15 PM

AMW1011 said:
TGGA, I value your opinion quite highly so maybe you can explain this to me. I was under the impression that Hydra's solution, dividing work between the cards so one processes a Tree and the other a rock for example, would be the future of multi GPU scaling. You say that it is a short term fix, but it seems to be the exact right thing to do to me. Is there something better around the corner?


The way workloads are handled is moving away from the traditional rasterized solution and toward a more computational model, where scaling is more linear and won't really differentiate between solutions as much. This is something all three of the big players saw coming long ago, and it's what developers like Mark Rein are now promoting as the way forward. Hydra is likely a 3-4 year solution, and while it's interesting and we had some hope (although a lot of doubt about its promises), I think it's a little late in the game at a time when all three players are moving away from a simple DX graphics focus.

And versus a high-end, vendor-agnostic intel X58 mobo, is it as compelling as when you had to choose one mobo or another? Get a high-end X58; then if the company A GPU is mediocre and you wanna upgrade, many times it would still make sense to simply sell the card and get a new one, or even sell one or two and get two new ones. Until Lucid shows unbalanced solutions working (like 9800GTX + 5850, etc.) I certainly wouldn't recommend anyone pay a premium for Lucid if they can get a more reliable intel solution. If the premium were $10 it would be more attractive as a 'flexibility option', but at ~$100+ it's hard to recommend.

Larrabee and Fermi delays have both pushed out the timeline, but it's still not that far away. With the prospect of GPU ray-casting as GPUs and CPUs get more powerful, and with both AMD and intel looking to faster interconnects, the market for Lucid is limited to the near term, similar to Ageia, which took too long to bring out their PPU before a more generalized concept took over. Same thing here: it's not happening fast enough to stay ahead of change. Not a bad idea, just late to the game and too slow to implement.

Quote:
As for Lucid, if they were smart they would patent this if they haven't already, and put their efforts into wow-ing the market, and not worrying about mixing cards from ATI and nVidia together, so that when both ATI and nVidia decide to use the same approach with their respectively far greater budgets, staff, and experience Lucid can make a dime off of royalties and then focus on mixing the cards.


Remember, Lucid is not alone; there are pretty deep pockets behind them, as they are backed by intel Capital. If it were beneficial, there would simply be an acquisition and unlimited resources. And for AMD and nV, despite all their comments, without a true competitor in intel there's little incentive for them to work together, even though it might increase overall sales; it's more of a prisoner's dilemma of working together vs $crewing the other guy.