techgeek :
Heck with the ribbon cable we are getting right to my 3dfx analogy.
Which should be a Matrox analogy IMO. :lol:
I also wasn't saying that communicating over the PCI-E bus was a barrier (better than having yet another proprietary solution); it was more of a technical query as to how they plan to implement it. The PCI-E bus has plenty of headroom left, we are only now getting cards that would have saturated the AGP 8X bus.
Ok, but here's an issue with PCIe bus communication: it takes a lot of energy, so you are just adding to the power budget. Mobile solutions are so worried about lane power draw, even with low traffic, that they run their graphics with reduced lanes (1-2) when doing low-demand work. So using the PCIe lanes would be one way to do it, but it wouldn't maximize the power savings IMO.
I have no problem with attempting to reduce power consumption, but do it on one piece of silicon; don't convince users they need two.
What if it can't be done? What if the required power circuitry means you cannot get proper power states out of a single solution, because architecture demands like silicon layers and wire density/length keep you from ever getting a low- and high-power solution out of one chip without driving development cost through the roof?
They may not be trying to convince users of anything other than: this is the way we can do it, get it to you now, and not cost twice as much or sacrifice performance or power savings. What's the point of a single solution if the cost is more, the performance is 90%, and the power saving is only 1/2 instead of 1/20 of the idle state? Sure it's more elegant, but this isn't a laptop or cell phone where elegant and clean is more important than effective.
Maybe N01sFanboy has a point, maybe they should consider lowering the 2D clocks even further.
How much lower do you have to go on those cards to take that 70W down to 35W let alone 7-10W?
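To put the wattage question in perspective, here's a back-of-envelope sketch. All the wattages are just the illustrative figures from this thread (the ~70W idle card, a hoped-for halving via downclocking, a ~7-10W separate chip), and the 20-hours-idle-per-day assumption is mine, not a measurement:

```python
# Back-of-envelope idle-power comparison. All wattages are illustrative
# figures from the discussion, not measurements.

def kwh_per_year(watts, hours_per_day=20):
    """Energy used per year at a given draw, assuming mostly-idle/2D use."""
    return watts * hours_per_day * 365 / 1000

scenarios = {
    "single chip at 2D clocks": 70.0,   # the ~70W idle figure quoted above
    "single chip, clocks halved": 35.0, # the hoped-for 1/2 savings
    "separate low-power chip": 7.0,     # the ~7-10W target
}

for label, watts in scenarios.items():
    print(f"{label}: {watts:.0f} W idle -> {kwh_per_year(watts):.0f} kWh/yr")
```

Even under those rough assumptions, halving the clocks saves far less over a year than dropping to a dedicated low-power chip would.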
Better yet, instead of racing to beat the competition in performance, take one product cycle where you don't introduce any new features or speed improvements and just give the same performance with lower power requirements.
Then your competition builds the faster mousetrap and you go out of business, because 5% of the market wanted their 'Green Gamer' and the other 95% wanted their ultra-fast card. But you died with that loyal 5% saying "damn fine company, those guys, trying to save me a little power; oh well, guess I'll buy company B's solution now."
Seriously, why does it need to be one piece of silicon? I still don't get it, any more than it needs to be one memory module at higher speed, or one ultra-fast core in a CPU.
This usually gets done to some extent through process reduction (i.e. 90nm to 65nm), but instead of just relying on that, really try to make your present design more energy efficient.
They are more energy efficient. The power consumption figures look high, but I doubt they're any worse in performance/W than any previous generation, and if anything they're likely lower in power consumption once the performance is taken into account.
When it comes right down to it, ATI is the one that really has to consider their power usage, since it's really only them struggling with excessive power requirements. That's not to say that nVidia doesn't have some work to do on the power front, they're just not as bad as ATI.
Then you didn't look at the graph above. Neither is clean in the idle/2D realm we're talking about, which is the important area. The GTX uses a bit more under 2D, and the GTS is only slightly better than the XT. And 3D almost doesn't matter: ask a gamer whether they care more about +10fps at max settings or -10W when gaming, and I think you'll get a near-unanimous +10fps. Doing both is nice if you can, but I don't know many people who truly buy their high-end cards or SLi rig based on their power bill; that's what the mid-range is for, and there it's the same story.
I don't understand the resistance to a two-part solution if it's effective in giving us the best of both worlds. Sure, I'd prefer a single elegant solution, but I'm sure so would the Mfrs (cheaper/easier for them), which to me is the biggest reason to think there is some barrier to a single solution that isn't doable right now for whatever reason. Do you think ATi, Intel, or nV want to waste more transistors if they don't have to?
As you see in a lower-end card like the HD2400, they don't need them, so it's obviously a problem pretty strictly related to the higher transistor and PCB wire budget of the top cards, where regardless of speed the transistors still play the largest role: roughly 4X as many transistors in the G80/R600 equals about 4X the idle/2D figures.
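That "4X the transistors, roughly 4X the idle draw" claim can be sketched as a toy linear-scaling model. The transistor counts and the base wattage below are assumed ballpark values for illustration, not datasheet numbers:

```python
# Toy model: if idle/2D draw is leakage-dominated, it scales roughly linearly
# with transistor count. Base wattage and transistor counts are assumed
# ballpark values, not datasheet figures.

def scaled_idle(base_idle_w, base_transistors_m, target_transistors_m):
    """Naive linear scaling of idle power with transistor count (in millions)."""
    return base_idle_w * target_transistors_m / base_transistors_m

low_end_idle_w = 10.0      # assumed idle draw of an HD2400-class part
low_end_transistors = 180  # roughly 180M transistors (approximate)
big_transistors = 700      # G80/R600 class, roughly 4x as many (approximate)

predicted = scaled_idle(low_end_idle_w, low_end_transistors, big_transistors)
print(f"predicted big-chip idle: {predicted:.0f} W")  # ~39 W under this naive model
```

Real chips aren't purely leakage-limited (clocks, voltage, and memory matter too), so treat this strictly as an argument for why the big chips can't just downclock their way to small-chip idle numbers.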