ATI takes a step back in time

Wow, I just realized something while reading about ATI's Hybrid CrossFire. First off, an explanation of what it is for those who don't already know: Hybrid CrossFire uses the IGP (graphics on the motherboard) while in 2D and shuts off the discrete graphics card to save power. When a 3D app is launched, it re-enables the discrete card.

Now here is the epiphany I had: isn't this going back to having a dedicated 3D card, like we had back in the 3dfx days? Of course, in those days it wasn't a matter of saving power; it was just the way it was done. One card for 2D and another for 3D.

This seems like a lot of wasted transistors in the discrete card, since it will likely still have all the 2D and video decoding circuitry, just sitting unused. Thought I would throw that out there and see what others have to say.
 

michiganteddybear

While I haven't read about that, it sounds like ATI is trying to grab some of the integrated video market with a lower-cost solution.

In reality, today's high-powered cards turn off much if not all of the 3D engine when it's not needed. And since you can still use parts of the 3D engine to do some 2D work, there is no real need to have a separate 2D engine.

The reason it was done that way back in the old days was that they didn't have the manufacturing tech to put that many transistors onto one piece of silicon efficiently.
 

No1sFanboy

Sounds a lot like Hybrid SLI. http://www.xbitlabs.com/news/video/display/20070625083756.html Regardless of which company does it, I believe it to be a great idea. Some GPUs are now the biggest power draw in a system, but until a game is launched it is all waste. They do use less power under 2D, but it is still far more power than necessary to display a web page or spreadsheet. The ATI 2900XT has already addressed this to a small degree by clocking down in 2D.

I liken a GPU or CPU running at full speed under light load to a car engine that always revs high rather than being able to throttle back. As the hardware becomes more capable, I'd like to see more done to shut that power down when it's not needed.
I've probably said this too many times already, but here I go again. I currently run my 8800 GTX at 130 core / 260 memory in 2D and save 20 watts measured at the plug. I also enable C1E, with full stability, on my overclocked CPU and save another 10 watts. Basically, I'm saving enough power while typing this to run two of my CFL light bulbs (yes, I know about the Hg issues).
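For the curious, here's the arithmetic behind that claim as a quick sanity check (the per-bulb CFL wattage is an assumed typical value, not something I measured):

# Quick sanity check on the savings claim; the CFL wattage is an assumed typical value.
gpu_2d_savings_w = 20   # measured at the plug with the 8800 GTX downclocked to 130/260
c1e_savings_w = 10      # from enabling C1E on the overclocked CPU
cfl_bulb_w = 13         # assumed draw of one typical CFL bulb

total_savings_w = gpu_2d_savings_w + c1e_savings_w
print(total_savings_w, "W saved, roughly", total_savings_w // cfl_bulb_w, "CFL bulbs")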

 
I totally understand why it was done in the 3dfx days; I actually owned a Voodoo card. It wasn't really that they couldn't do the integration. 3dfx was really the first to have a real 3D accelerator coupled with a 3D API, Glide (there were a couple before, but they were more like decelerators), and since it was a new market it was easier to add a card to a system that already had a VGA adapter. nVidia, I believe, started the integration of 2D and 3D on a single die with the Riva 128 and the TNT series.

As for turning off most of the 3D side while it's not used, I do agree with you. That said, though, virtually every IGP uses much less power than a discrete card does even with the 3D portion disabled, mostly by virtue of not having dedicated VRAM.

As for why ATI is doing this, that is the question. It doesn't make sense to me. Also, with Vista and Microsoft's push for Aero, the whole 2D side should go away (if Microsoft has its way). So in light of that, this initiative by ATI is even more questionable. Seems like marketing fluff. Or maybe you're right, maybe ATI is trying to convince the enthusiast that they need an IGP as well as a discrete card to help save the planet by conserving energy.
 

jelly1228

What about AMD's Fusion chip that they are building? The chip is meant to get rid of the IGP and use the chip and system RAM for 2D and light 3D apps. When you need power for heavy 3D apps, you use the GPU rather than the Fusion chip. You've got to think AMD is thinking of the future and not the now. 2009 is not that far away!
 

rodney_ws

I agree with techgeek 100%... this feature will not be needed in the VERY near future, as most modern OSes utilize 3D desktop environments (Vista, Ubuntu 7.10, etc.). The one time I got to use a Vista laptop, the Aero interface did not run smoothly at all on that particular laptop's Nvidia 6150 integrated GPU. I can't imagine what it'd have been like on something along the lines of a GMA 950.

Nice try ATI!
 
I remember we had a thread along similar lines not too long ago about why all mobos don't come with at least rudimentary graphics capabilities. This sounds a lot like what we were talking about at the time; however, back then most were saying the GPU companies would never go for it.
We were talking about maybe them getting a "Green" grant from the government for research.
I just wanted my computer to be able to tell me when the GPU had died, instead of all the testing and guesswork.
Mactronix
 

No1sFanboy



3D OS graphics is a terrible reason to buy or run more graphics power. I'm now up to two Vista machines, including my notebook, and I don't even notice Aero. The novelty of Flip 3D wore off in about 5 seconds. And by the way, my disposable notebook with some crap integrated graphics (I'd have to look at the sticker) seems to run Aero just fine.
 
Here's another question I just thought of that I didn't see explained in what I read about Hybrid CrossFire. In a single-monitor setup, which connector do you use to connect your monitor? If it's the IGP's, then when you are playing games the framebuffer on your discrete card needs to be piped over to the IGP's framebuffer for display. Now, there should be plenty of bandwidth available with PCI-E (2.0?) to handle the added traffic, though the potential for bottlenecks is there. By my quick calculation, at 1600x1200 with 8 bits per color channel (24-bit color), maintaining 60 FPS means 345.6MB/s (base 10, not base 2) being piped across the PCI-E bus. It should be easily doable as long as the bus isn't busy when the framebuffer needs to be transferred. Of course, the higher the FPS and resolution, the more bandwidth required.
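For anyone who wants to check that number, here's the back-of-the-envelope calculation I used (plain Python, purely illustrative):

# Framebuffer transfer bandwidth estimate for piping rendered frames over PCI-E.
width, height = 1600, 1200   # display resolution
bytes_per_pixel = 3          # 8 bits per color channel, i.e. 24-bit color
fps = 60                     # target frame rate

bytes_per_frame = width * height * bytes_per_pixel
bandwidth_mb_s = bytes_per_frame * fps / 1e6   # base 10 megabytes, as quoted above
print(round(bandwidth_mb_s, 1), "MB/s")        # prints 345.6 MB/s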

Another thing: if you are watching video, do you get the choice of which device it defaults to? In most cases it's likely that your discrete adapter will have superior video decoding as well as lower CPU utilization, so they need to make sure that you have the option to pick your discrete adapter over the IGP.
 
I really don't understand the negatives that others seem to see in this type of solution. We've talked about it basically since the Multi-View proposal with the 9100 IGP.

It's not like this is aimed at the GF8300 or HD2400 so much as at their big brothers.

Right here would be a perfect reason to offer the option of disabling the high-end graphics power draw;
[Chart: 2900xt_power.gif, HD 2900 XT power consumption]


70W at idle and 75/80W under peak 2D!?

Looking at the GTS you get a feel for the differences, and in this case you don't have to waste a card, only use as much as you need at the time. Do you really need more than a 5W GF7100/X1250 IGP for anything other than gaming or HD playback?

Now, if you could have a 150W gaming monster one second, and then a 2W VIA-style IGP solution when file sharing or doing non-monitor-related CPU-intensive tasks like ripping, editing, etc., why not?

I definitely think people asking for a 'Green' SLi/Xfire rig while gaming are idjits; however, if we can 'green-ify' such a rig when the monster graphics are not needed, and it doesn't require a huge overhaul or much expense, then I see that as a worthwhile effort. And if people are serious about being as green as they can be without impacting their enjoyment, or just wanna save electric-bill money (as is so often used as the excuse for buying card A over card B), I'm sure a lot of people will go for this idea.

Just my 2 amps worth.
 


Well, it depends on the setup, but it doesn't have to share framebuffers. You could easily make it a smart internal KVM-style solution, depending on how you set it up, where they use their own resources and simply share the output device; whether that uses a ribbon cable like a low-rise card or another method almost doesn't matter. Heck, make a single-slot display jumper card that both plug into. And with DisplayPort the options become much nicer still. The major hurdle becomes getting graphics and mobo companies working together, instead of working within their own confined product lines. I expect to see a lot of proprietary solutions personally, but it doesn't have to be that way.

However, I don't see the issue as a barrier to implementation; it's simply a question of being smarter about the implementation than just slapping two separate parts together.

I think you're over-analyzing it a bit and while that's great for product development, I wouldn't worry about it until it's a reality.

However, I'm sure either company would love any well-thought-out and constructive input, because it's people like us who think of the little things/scenarios that the group-thinkers on a project tend to forget.
 
3D OS graphics is a terrible reason to buy or run more graphics power. I'm now up to two Vista machines, including my notebook, and I don't even notice Aero. The novelty of Flip 3D wore off in about 5 seconds. And by the way, my disposable notebook with some crap integrated graphics (I'd have to look at the sticker) seems to run Aero just fine.

I don't think anyone was saying that one should buy or run more graphics power for a 3D OS. I think what was being said (at least by me) is that the 2D portion of GPUs will be phased out as 3D desktops become prevalent. Vista handles nearly all visual output through DirectX; the dedicated 2D path is effectively disappearing. So with this in mind, having an IGP and a discrete graphics card seems like a waste. With this initiative, under Windows XP not only are all the 2D transistors in the discrete adapter being wasted, but all the 3D transistors in the IGP are being wasted as well. Talk about conservation and the environment: now you have two ICs heading to the landfill instead of just one. Makes sense to me.

As for 2D clocks in ATI products, those have been around in more than the 2900XT; if I'm correct, they were introduced with the X1800 series. I know they were definitely implemented in the X1900 series because I had one. Just to let you know, the 2D clocks do make a huge difference in power draw, but it still doesn't approach the lower power of an IGP.
 


Once again you're WAY over-analyzing this.
The transistor budget for 2D is a foregone portion of the discrete card, the 3D is a foregone portion of the IGP's budget, and neither is exclusive. They don't need to be mutually exclusive: you can run 3D on the IGP and 2D on the discrete card, but like changing your monitor settings from DVI > VGA > HDMI > TV, you might be able to pick (or possibly have a profile in the driver auto-select) when either is needed. There's no hard limitation to that, and they don't need to be any more wasted than any other part of the PC that gets old and replaced. How is this different from the memory modules on your RAM sticks that go unused when you only use 70% of your memory, or portions of the CPU or instruction set that go unused? The only difference is that they function on their own, just like the crap integrated audio, or the additional network controller, etc.

The way I look at it, the transistor budget for both parts as individual ICs is no different from the integrated audio or other components I don't need on the board, so it's not that huge a deal to me. And the cost difference is so insignificant that Intel's IGP vs non-IGP solutions are often the same price, with AMD's and nV's only a half-dozen to a dozen dollars apart (keeping other features similar).

As for 2D clocks in ATI products, those have been around in more than the 2900XT; if I'm correct, they were introduced with the X1800 series. I know they were definitely implemented in the X1900 series because I had one. Just to let you know, the 2D clocks do make a huge difference in power draw, but it still doesn't approach the lower power of an IGP.

It's been around since the X800 series at least, if not before that in the mobile segment; and I thought nV desktops had 2D clocks before ATi, IIRC, but it's something I just remember seeing about an FX-line card (may have been the FX5700) and the control panel.

Sure, the 2D clocks, let alone a truly idle state, have some impact, but c'mon, look at that graph. Who are we kidding? 70-85W while doing nothing is the primary reason technologies like this are being developed, IMO.
 


Heck, with the ribbon cable we are getting right back to my 3dfx analogy. I also wasn't really saying that communicating over the PCI-E bus was a barrier (better than having yet another proprietary solution); it was more of a technical query as to how they plan to implement it. The PCI-E bus has plenty of headroom left; we're only now getting cards that would have saturated the AGP 8X bus.

I have no problem with attempting to reduce power consumption, but do it on one piece of silicon; don't convince users they need two. Maybe No1sFanboy has a point, maybe they should consider lowering the 2D clocks even further. Better yet, instead of racing to beat the competition in performance, take one product cycle where you don't introduce any new features or speed improvements and just give the same performance with lower power requirements. This usually gets done to some extent through process reduction (i.e. 90nm to 65nm), but instead of just relying on that, really try to make your present design more energy efficient. Of course, that's like asking two countries at war to stop bombing each other so they can improve their bombs for the next round of bombing. When it comes right down to it, ATI is the one that really has to consider their power usage, since it's really only them struggling with excessive power requirements. That's not to say that nVidia doesn't have some work to do on the power front; they're just not as bad as ATI.
 


Which should be a Matrox analogy IMO. :lol:

I also wasn't really saying that communicating over the PCI-E bus was a barrier (better than having yet another proprietary solution); it was more of a technical query as to how they plan to implement it. The PCI-E bus has plenty of headroom left; we're only now getting cards that would have saturated the AGP 8X bus.

OK, but here's an issue with PCIe bus communication: it takes a lot of energy, so you are just adding to the power budget, at a time when mobile solutions are so worried about lane power draw even under low traffic that they run their graphics with reduced lanes (1-2) when doing low-demand work. So using the PCIe lanes would be a way to do it, but it wouldn't maximize the power savings, IMO.

I have no problem with attempting to reduce power consumption, but do it on one piece of silicon; don't convince users they need two.

What if it can't be done? What if the required power circuitry and architectural demands (silicon layers, wire density/length, etc.) keep you from ever getting a low- and high-power solution out of one chip without driving development cost through the roof?
They may not be trying to convince users of anything other than: this is the way we can do it, get it to you now, not cost twice as much, and not sacrifice performance or power savings. What's the point in a single solution if the cost is more, the performance is 90%, and the power saving is only 1/2 instead of 1/20 of the idle state? Sure it's more elegant, but this isn't a laptop or cell phone, where elegant and clean is more important than effective.

Maybe No1sFanboy has a point, maybe they should consider lowering the 2D clocks even further.

How much lower do you have to go on those cards to take that 70W down to 35W, let alone 7-10W?
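Here's a rough sketch of why lower clocks alone won't get there, treating the entire 70W as clock-scalable dynamic power, which is generous since leakage, VRAM and VRM losses don't scale with clock (the scaling factors are my assumptions):

# Optimistic scaling estimate: dynamic power is roughly proportional to f * V^2.
idle_power_w = 70.0   # reported idle/2D draw for the card in the graph above
clock_scale = 0.5     # assume the 2D clock is cut in half
volt_scale = 0.9      # assume a modest undervolt is even possible

best_case_w = idle_power_w * clock_scale * volt_scale ** 2
print(round(best_case_w, 1), "W at best, still nowhere near a 5-10W IGP")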

Better yet, instead of racing to beat the competition in performance, take one product cycle where you don't introduce any new features or speed improvements and just give the same performance with lower power requirements.

Then your competition builds the faster mousetrap and you go out of business, because 5% of the market wanted their 'Green Gamer' and the other 95% wanted their ultra-fast card. But you died with that loyal 5% saying, "Damn fine company, those guys, trying to save me a little power. Oh well, guess I'll buy company B's solution now."
Seriously, why does it need to be one piece of silicon? I still don't get it, any more than it needs to be one memory module at higher speed, or one ultra-fast core in a CPU.

This usually gets done to some extent through process reduction (i.e. 90nm to 65nm), but instead of just relying on that, really try to make your present design more energy efficient.

They are more energy efficient. The power-consumption figures look high, but I doubt they're any worse in performance-per-watt than any previous generation; if anything, they likely draw less power once you compare at the same performance level.

When it comes right down to it, ATI is the one that really has to consider their power usage, since it's really only them struggling with excessive power requirements. That's not to say that nVidia doesn't have some work to do on the power front; they're just not as bad as ATI.

Then you didn't look at the graph above. Neither is clean in the idle/2D realm we're talking about, which is the important area. The GTX uses a bit more under 2D, and the GTS is only slightly better than the XT. And 3D almost doesn't matter: ask a gamer whether they care more about +10fps at max settings or -10W when gaming, and I think you'll get a near-unanimous reply of +10fps. Doing both is nice if you can, but I don't know of many people who truly buy their high-end cards or SLi rig based on their power bill; that's what the mid-range is for, and there it's the same story:
[Chart: hd2600_power.gif, HD 2600 series power consumption]


I don't understand the resistance to a two-part solution if it's effective in giving us the best of both worlds. Sure, I'd prefer a single elegant solution, but I'm sure so would the manufacturers (cheaper/easier for them), which to me is the biggest reason to think there is some barrier to a single solution that's not doable right now for whatever reason. Do you think ATi, Intel or nV want to waste more transistors if they don't have to?

As you see with the lower-end cards like the HD2400, they don't need this, so it's obviously a problem pretty strictly related to the higher transistor and PCB wire budget of the top cards, where regardless of speed the transistors still play the largest role: roughly 4X as many in the G80/R600 equals about 4X or more the idle/2D figures.
 

thuan



Actually the funny thing is the GMA950 runs better (no choppy Flip 3D). I have two computers over here, one with an Nvidia 6100 integrated GPU running on an AMD X2 4200+ and another with a Core 2 Duo and GMA950, both with 2GB RAM. Likely the reason is that on the AMD platform, the GPU has to go through the CPU to get to memory.
 

No1sFanboy

I'm not saying that lower clocks are "the" solution, but with the current tech they're what's available.

I'm out of my depth here, but why do these solutions have to be based on an IGP on a motherboard, except to generate more chipset sales? Why can't a high-end GPU have this low-power 2D processor built onto the graphics card and then be compatible with any motherboard? As Ape has touched on, it simply appears that transistor count directly affects power consumption. For truly low power in a desktop environment, either a redundant low-power GPU or designs that are better at shutting down more of the GPU will be more effective.
 
Yeah, the only reason I think they wouldn't add the low-power 2D processor to the card is that it's an added cost only some people need, so adding it to the card means everyone pays extra, and it adds complexity to an already crowded PCB. Since some people don't care, that might not be a good risk on their part versus putting everything on the mobo, or even on a riser add-on to the mobo (like the Karajan audio riser, or like the old 2D-3D card combos), where it can be a separate feature that can be managed better and carried over from generation to generation.

Now that I think of it, the Karajan-style solution would be great: going from mobo to mobo and not relying on the parent card, you buy one for life, not just for the life of the mobo or graphics card. Build it so it's a passthrough for the parent card, but one that also has the onboard IGP feeding the same display output. Then all you need is for it to tell the mega-card and mobo to turn down/off power to the other card when not needed.

Otherwise you'd need 2 SKUs, IMO: a 'Green' mobo for those who care (or might care later), and the SLi-m-Gonna-Sex-You-Up mobo without the added circuitry for those who don't. :lol:

Anywhoo, just some more random thoughts on the possibilities.
 

rodney_ws

Don't we already have the solution to this problem? The laptop I'm on right now is chugging along at 1 GHz... is that because I'm too poor to have a decent laptop? Well, no... it's because Intel decided I only needed that much speed for tooling around on the desktop. When I fire up a game or do something crazy with the GIMP, it dynamically adjusts its speed. Is there a specific reason a GPU couldn't exhibit the same behavior?
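Conceptually it's the same on-demand governor idea the CPU already uses; here's a toy sketch of the decision logic (read_gpu_load and set_gpu_clock are hypothetical placeholders, not any real driver API):

import random
import time

def read_gpu_load():
    # Placeholder: pretend we can query GPU utilization as a percentage.
    return random.uniform(0, 100)

def set_gpu_clock(mhz):
    # Placeholder: a real driver would reprogram the core clock here.
    print("core clock ->", mhz, "MHz")

LOW_CLOCK, HIGH_CLOCK = 130, 575   # e.g. 2D vs. 3D clocks on an 8800-class card

def govern(samples=5, threshold=30):
    # Simple on-demand style governor: clock up when load is high, drop back when it isn't.
    for _ in range(samples):
        load = read_gpu_load()
        set_gpu_clock(HIGH_CLOCK if load > threshold else LOW_CLOCK)
        time.sleep(0.1)

govern()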
 

thuan

Actually, my Gigabyte ATI HD2400XT card already adjusts its core and memory clocks like you said, with BIOS version 010.055.000.001 (this number may be wrong, I'm going from memory). It goes as low as 100MHz core and 270MHz (raw, not after DDR) for memory, compared to the normal 700/700. But this caused choppy playback when playing really high-bitrate video at high fps. After updating the BIOS to 010.059.000.003 using the BIOS provided on Gigabyte's site, I don't have that problem anymore, but it lost that ability too. Now it only runs at 700/700.
 

Jakc

Isn't the main point of Hybrid CrossFire that performance improves in 3D mode? In other words, the onboard GPU being used for (for example) AA, while the discrete GPU does the rest?
As far as I understood, the power-efficiency thingy was just a bonus...
 

jt001

Actually, this is something that I'm interested in. The build that I'm looking at assembling before the end of the year will likely have a pair of 8800s. Now, the issue I have is this: I couldn't care less if my system is pulling 400+W for the half hour to hour that I game each day, but since my system is on 24/7, I can't have it pulling that much power all the time. It looks like I'm going to have to resort to turning down the clocks manually when I'm not gaming, which really isn't that big of a deal to me, but something automatic like this would be nice.
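To put rough numbers on why the 24/7 part matters more than the gaming hour (the wattages and electricity rate below are my assumptions, just for illustration):

# Back-of-the-envelope daily energy split; all figures are assumed, not measured.
gaming_watts, idle_watts = 400, 250   # assumed draw for a dual-8800 box, gaming vs. idling
gaming_hours, idle_hours = 1, 23      # roughly the usage pattern described above
rate_per_kwh = 0.10                   # assumed electricity price in $/kWh

gaming_kwh = gaming_watts * gaming_hours / 1000.0
idle_kwh = idle_watts * idle_hours / 1000.0
print("gaming:", gaming_kwh, "kWh/day   idle:", idle_kwh, "kWh/day")
print("approx monthly cost: $%.2f" % ((gaming_kwh + idle_kwh) * 30 * rate_per_kwh))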

@Jakc: It seems to me like the power of the IGP would be too insignificant to even benefit that way.
 

yipsl

Hybrid CrossFire and Hybrid SLI are something I'm looking forward to. The power savings will be great, as will a bit of a boost to the 3D performance of low-end, mainstream and performance discrete cards. I just built an X2 3800+ ASUS 690G system for my wife because Hybrid CrossFire and Phenom weren't out yet. When they come out, the 690G board and Windsor will go into an HTPC and she'll get a 780G(?) board with at least a triple-core Phenom and whatever ATI card is equivalent to an X2900 Pro with 1 gig of RAM for her modding retextures.

I'm getting an 8800GT in mid November, but when I can go hybrid SLI and Phenom, then it will be even better. I've always thought that the integrated graphics being useful only for a second monitor when not actually disabled was a waste of motherboard space and money. It's nice to see those of us who want mainstream single card systems get a boost.

That's the market I see hybrid DX10 IGPs aimed at. I don't want a 1000 watt PSU with three cards, thank you. Until Fusion arrives, this will be a good solution for the next 2 years. I just wish ATI hadn't been late, as usual lately, because my wife needed her new system last month.

My system:

Athlon X2 4600+ Windsor 65 watt
MSI K9VGM-V barebones
2 gigs Kingston DDR2-667
Antec 550 watt PSU
MSI 7600GS
Viewsonic A71f+

My wife's:

Athlon X2 3800+ Windsor
Asus 690G mobo
4 gigs Kingston DDR2-667
Antec 550 watt PSU
Viewsonic A70f+

Once we get decent GPUs on hybrid boards with Phenom triple cores, then we'll get decent LCD monitors. I can't wait, but will an X2900 Pro-class card and an 8800GT run upcoming DX10 RTS and RPGs smoothly enough? They don't seem to do well in FPS games, which "require" 50 fps.