Is a Video card "CPU aware"...?

the_vorlon

Distinguished
May 3, 2006
365
0
18,780
This may seem like a dumb question but...

Is a Video card "CPU aware"..?

With AMD + ATI => Daamit

and

nVidia + Intel => "The enemy of my enemy is my friend"

what if..

Let's say ATI/AMD had a killer video card; could they program it to throttle back, say, 20% when running on an Intel CPU?

Could nVidia make their card run 20% slower on an AMD chip?

This way, for example, AMD could make it look like their CPU was faster than the Intel part, even if it wasn't...

Of course this assumes they can make the fastest GPU....

~~~~

Unrelated....

Now that we have 150-watt+ graphics cards, do graphics cards have something conceptually like Cool'n'Quiet, where they throttle back while you're reading your email and not playing a game?
 

apt403

Distinguished
Oct 14, 2006
2,923
0
20,780
Not really. The Nvidia/Intel thing is possible, since Nvidia could tweak their drivers to throttle performance when an AMD CPU is used in conjunction with one of their cards. But I really can't see them doing it; it would just hurt sales.

It would be harder for Intel to screw with AMD's gfx card performance, but it's possible. Though, again, it would just hurt sales.
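For what it's worth, the detection side is trivial. Here's a rough C sketch of how a driver could key behaviour off the CPU vendor via CPUID; the "throttle" decision is purely hypothetical, not anything a real driver is known to do:

/* Rough sketch: detecting the CPU vendor with the x86 CPUID instruction.
 * Leaf 0 returns the vendor string in EBX/EDX/ECX. The throttling idea
 * is hypothetical, just to illustrate the thread's question. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    memcpy(vendor + 0, &ebx, 4);   /* "Genu" / "Auth" */
    memcpy(vendor + 4, &edx, 4);   /* "ineI" / "enti" */
    memcpy(vendor + 8, &ecx, 4);   /* "ntel" / "cAMD" */

    if (strcmp(vendor, "AuthenticAMD") == 0)
        printf("AMD CPU detected - a hostile driver *could* throttle here\n");
    else if (strcmp(vendor, "GenuineIntel") == 0)
        printf("Intel CPU detected\n");
    else
        printf("Vendor: %s\n", vendor);

    return 0;
}

Of course, the first site to benchmark the same card on both platforms would spot the gap immediately, which is exactly why it would hurt sales.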
 

darious00777

Distinguished
Dec 15, 2006
687
0
18,990
With all of nVidia's talk about how they're neutral between AMD and Intel, I doubt they would do it. They couldn't care less whether a system uses an AMD processor or an Intel one, as long as the graphics are provided by one of their cards or integrated graphics chips.
 

4745454b

Titan
Moderator
As others have mentioned, anything could be possible through the use of drivers. Frankly, I don't see anyone doing this, as it would be easy to catch. The backlash wouldn't be pretty.

As for the GPU version of cool and quiet, it's been in use since the days of the X1800. ATI has separate clocks for 2D and 3D. When you're just sitting in Windows playing Solitaire or surfing the web, it uses lower clock speeds on the core (possibly the memory too, I don't remember). Once you start a game, however, the drivers load the 3D clocks and the card speeds up to run the game. This is one problem with Vista: if you enable Aero, it uses DX9 to render the visual effects, forcing your GPU to stay in 3D mode all the time. I'm not sure if ATI has fixed this yet.
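Conceptually, the switching logic is about this simple. A hypothetical C sketch follows; the clock numbers are rough placeholders and set_gpu_clocks() stands in for whatever the real driver uses internally:

#include <stdio.h>

struct clock_profile { int core_mhz; int mem_mhz; };

/* Placeholder numbers, roughly X1900-class; real values vary per card. */
static const struct clock_profile CLOCKS_2D = { 500, 600 };
static const struct clock_profile CLOCKS_3D = { 650, 775 };

/* Stub standing in for the driver's real clock-programming routine. */
static void set_gpu_clocks(const struct clock_profile *p)
{
    printf("core %d MHz, mem %d MHz\n", p->core_mhz, p->mem_mhz);
}

/* Called whenever a 3D context is created or destroyed. With Vista's
 * Aero, a DX9 context is always alive, so the card never drops to 2D. */
void on_context_change(int active_3d_contexts)
{
    set_gpu_clocks(active_3d_contexts > 0 ? &CLOCKS_3D : &CLOCKS_2D);
}

int main(void)
{
    on_context_change(0);   /* desktop idle  -> 2D clocks */
    on_context_change(1);   /* game launched -> 3D clocks */
    return 0;
}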
 

epsilon84

Distinguished
Oct 24, 2006
1,689
0
19,780
Actually, the 2D/3D clocks were implemented far earlier than the X1800 cards. I'm running a fairly ancient 6600GT on my office PC and it has separate 2D and 3D clocks.
 

4745454b

Titan
Moderator
Sorry, ATI fan here; not sure how early Nvidia had them. Actually, the X800 series might have had it also, but I'm not 100% sure. (If anyone with an X800 series card wants to educate me, please do.) I know the 9800 didn't have it.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
The video card "thinks", is smart, it "knows" the CPU, it "knows" the user.
Be good to the video card or it might not like you. Or the video card can make you run 25% slower!
 

4745454b

Titan
Moderator
No real problems. I just avoid all problems with the law now... It did cause me to add the troll comment to my sig. Some people are just so stupid that you're better off not replying to them at all.

Are you sure about the MX? I wouldn't think it had separate clocks.
 

leo2kp

Distinguished
Nope, not completely sure, but I remember trying to tweak it a while back on something. It could very well have been my 5900, which I spent more time trying to make faster than actually playing on, lol.
 

apt403

Distinguished
Oct 14, 2006
2,923
0
20,780
Hell yeah. The GMA950 gfx chipset actually rivals the HD 2900 XT in terms of performance. It's just that ATi and Nvidia have struck up deals with all the leading game developers to optimize games for THEIR products. It's fairly common knowledge, but Intel isn't going to do anything since their gfx division accounts for such a small percentage of sales, and ATi and Nvidia pay Intel tons of cash under the table to keep quiet about it.


:lol: :lol: :lol: :lol: :lol:
 

4745454b

Titan
Moderator
Not quite. Intel graphics suck because they only make onboard graphics. Onboard graphics has to use system memory instead of its own. This means the GPU's memory pool is smaller than what a card carries, has a much longer path to travel (a RAM request has to go through the northbridge, AFAIK), and the memory bandwidth is lower than a mob informant in the East River. I'm kinda curious how well Intel GPUs would do if they were put on their own card with their own memory.
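Some napkin math shows how big that bandwidth gap is; the figures below are approximate spec-sheet numbers, not measurements:

/* Back-of-the-envelope memory bandwidth comparison: shared system RAM
 * (what onboard graphics gets) vs. dedicated card memory. Figures are
 * approximate spec-sheet numbers, for illustration only. */
#include <stdio.h>

/* bandwidth in GB/s = bus width (bytes) * effective transfer rate (GT/s) */
static double bandwidth_gbs(int bus_bits, double transfer_gts)
{
    return (bus_bits / 8.0) * transfer_gts;
}

int main(void)
{
    /* Dual-channel DDR2-800: 128-bit combined bus at 0.8 GT/s,
       and the CPU is competing for it too. */
    printf("Dual-channel DDR2-800: %.1f GB/s (shared with the CPU)\n",
           bandwidth_gbs(128, 0.8));

    /* 8800 GTX: 384-bit GDDR3 at 1.8 GT/s effective, all for the GPU. */
    printf("8800 GTX:              %.1f GB/s (GPU only)\n",
           bandwidth_gbs(384, 1.8));

    return 0;
}

That works out to roughly 12.8 GB/s shared against 86.4 GB/s dedicated, before you even count the latency of going through the northbridge.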
 
As for the GPU version of cool and quiet, it's been in use since the days of the X1800. ATI has separate clocks for 2D and 3D... If you enable Aero, it uses DX9 to render the visual effects, forcing your GPU to stay in 3D mode all the time.
To add to this, ATI has now implemented the 3D-to-2D switching in hardware; I can't remember if they've fixed it for Aero, though.
 

heartview

Distinguished
Jul 20, 2006
258
0
18,780
In a way, what you've said is possible, but unlikely considering the consequences of being discovered. The drivers for any device sitting on the bus (PCI Express, for example) do the translation for the device. They pass data needed by the card to the card and take information from the card to pass on to the rest of the system. This is all done through a standardized bus architecture, though, so any "corruption" like you describe would have to happen at the driver level (and therefore be easily detectable) or in the OS. The cards themselves are not really "aware" of anything on the other side of that bus unless it has been passed to them by the drivers.

Consider that you can have an entire "computer" running on a PCI card complete with CPU, video, audio, etc. But, to be honest, I don't know how broad the two-way communication can be using the bus/driver architecture. I don't really know for sure if the card can "query" the driver for information on the main system, or if the driver has to do all of the thinking. I'm sure the driver could pass ANY information to the card, but I'm not sure if the card can actually "ask" for it.
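To make the host side of that bus concrete, here's a small Linux-specific sketch that reads a card's PCI vendor/device IDs out of sysfs; the PCI address is just a typical slot for a graphics card and would need adjusting per system. Notably, there's no symmetrical mechanism for the card to ask the bus who made the CPU:

/* Read a PCI device's vendor and device IDs from sysfs (Linux).
 * That identification is all the host driver model needs; the card
 * itself learns nothing about the CPU this way. */
#include <stdio.h>

static void read_id(const char *path, const char *label)
{
    FILE *f = fopen(path, "r");
    char buf[16];

    if (f && fgets(buf, sizeof buf, f))
        printf("%s: %s", label, buf);   /* e.g. 0x10de (NVIDIA), 0x1002 (ATI) */
    if (f)
        fclose(f);
}

int main(void)
{
    /* 0000:01:00.0 is where a PCIe graphics card typically sits;
       adjust the address for your own system. */
    read_id("/sys/bus/pci/devices/0000:01:00.0/vendor", "vendor");
    read_id("/sys/bus/pci/devices/0000:01:00.0/device", "device");
    return 0;
}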
 

Heyyou27

Splendid
Jan 4, 2006
5,164
0
25,780
The GMA950 gfx chipset actually rivals the HD 2900 XT in terms of performance... ATi and Nvidia pay Intel tons of cash under the table to keep quiet about it.
:lol:
 

yakyb

Distinguished
Jun 14, 2006
531
0
18,980
No, as video card manufacturers want their video cards to run at full speed on all chipsets to maximise their market.

Having said that, I'm yet to see a 2900 XT compared on both an E6600 and a 6000+ X2.
 

dsidious

Distinguished
Dec 9, 2006
285
0
18,780
Now that we have 150-watt+ graphics cards, do graphics cards have something conceptually like Cool'n'Quiet, where they throttle back while you're reading your email and not playing a game?

Not yet, as far as I know. However, they do consume less when idle. For example, I was looking at an 8800 GTX review at AnandTech yesterday, and they showed 170 W consumption at idle (i.e., while reading e-mail, I guess) and 270 W at full load (playing Oblivion, for example). Yuck... :x
 

sandmanwn

Distinguished
Dec 1, 2006
915
0
18,990
I doubt there will be much difference. Apparently the 2900 uses far less CPU than previous models. There may no longer be a CPU bottleneck with the 2900, since the GPU and other dedicated chips on the graphics card are doing much more of the work that the CPU used to do.

I definitely wish they had put more CPU utilization numbers in the last round of benchmarks. They alluded to much lower CPU utilization but showed no numbers. Holding out good information on the 2900 for some reason or other? I guess it's more popular to find every reason to bash something right from the start.
 

dsidious

Distinguished
Dec 9, 2006
285
0
18,780
This is totally possible and has been done for years.
It's called profiling.
It's the reason Intel graphics suck.
nVidia and ATI worked with game vendors to profile their cards, so when the games launch they use the best settings for those cards.
All it takes is a device ID.

But that's a good thing, right? I mean, if I know nothing about computers and I just install Oblivion, I want it to adapt itself to whatever card it finds and do the best it can, don't I?

Anyway, I don't think nVidia would want its cards to be handicapped when working with AMD CPUs; that would just encourage AMD owners to prefer AMD video cards. If there's a conspiracy here, it's Microsoft, Intel, AMD, nVidia and a lot of others working together to make the OS more and more bloated so we have to keep buying newer and faster computers. But that's a good thing too, it keeps the economy going :p
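Just to illustrate the device-ID profiling from the quote above, here's a hypothetical C sketch; the IDs and presets are invented example entries, not anything a real game ships:

/* Hypothetical sketch of device-ID profiling: a game looks up the GPU's
 * PCI vendor/device IDs and picks a default quality preset. Table
 * entries are examples for illustration only. */
#include <stdio.h>

struct gpu_profile {
    unsigned short vendor_id;   /* 0x10de = NVIDIA, 0x1002 = ATI, 0x8086 = Intel */
    unsigned short device_id;
    const char    *preset;
};

static const struct gpu_profile profiles[] = {
    { 0x10de, 0x0191, "High" },   /* 8800 GTX-class card (example entry)   */
    { 0x1002, 0x9400, "High" },   /* HD 2900 XT-class card (example entry) */
    { 0x8086, 0x27a2, "Low"  },   /* GMA 950-class chipset (example entry) */
};

const char *pick_preset(unsigned short vendor, unsigned short device)
{
    for (size_t i = 0; i < sizeof profiles / sizeof profiles[0]; i++)
        if (profiles[i].vendor_id == vendor && profiles[i].device_id == device)
            return profiles[i].preset;
    return "Medium";            /* unknown card: play it safe */
}

int main(void)
{
    printf("Default preset: %s\n", pick_preset(0x10de, 0x0191));
    return 0;
}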
 

heartview

Distinguished
Jul 20, 2006
258
0
18,780
It's interesting that you bring that question up, Vorlon. AMD's plan for the future is to merge the CPU and GPU. This merger is being called "Fusion." Here is a link to the article that explains a bit more:

AMD The Road Ahead

Which could be a mistake for the desktop market. Many users won't care so long as they can play their games and run their favorite apps.

Intel tried something similar with "digital signal processing" years ago, and it pretty much failed outside the modem market (which is mostly irrelevant in these days of high-speed Internet access). I see Fusion meeting the same fate: a niche product that will be wholly irrelevant in years to come.

The current GPU market requires more and more memory on the card as time goes on. The main issue is memory speed. Even with a direct link to main memory, that main memory will not be fast enough for the GPUs of today, and you can forget about it being fast enough for the GPUs of tomorrow. That's a big problem that I don't see Fusion being able to fix. If the CPU and GPU continue to require separate memory pools for the sake of performance, then Fusion gains you very little from a cost perspective and almost nothing from a performance perspective. The only exception here is the low end and portables.
 

commanderspockep

Distinguished
Jun 9, 2006
200
0
18,680
If the CPU and GPU continue to require separate memory pools for the sake of performance, then Fusion gains you very little from a cost perspective and almost nothing from a performance perspective. The only exception here is the low end and portables.

I don't understand Fusion much at all at this point. However, this seems to address those problems:

The final step in the evolution of Fusion is where the CPU and GPU are truly integrated, and the GPU is accessed by user mode instructions just like the CPU. You can expect to talk to the GPU via extensions to the x86 ISA, and the GPU will have its own register file (much like FP and integer units each have their own register files). Elements of the architecture will be shared, especially things like the cache hierarchy, which will prove useful when running applications that require both CPU and GPU power.

The GPU could easily be integrated onto a single die as a separate core behind a shared L3 cache. For example, if you look at the current Barcelona architecture you have four homogenous cores behind a shared L3 cache and memory controller; simply swap one of those cores with a GPU core and you've got an idea of what one of these chips could look like. Instructions that can only be processed by the specialized core will be dispatched directly to it, while instructions better suited for other cores will be sent to them. There would have to be a bit of front end logic to manage all of this, but it's easily done.