Graphics Beginners' Guide, Part 1: Graphics Cards

Tags:
  • Memory
  • Hardware
  • Graphics Cards
  • Graphics
Last response: in Memory
July 24, 2006 11:12:56 AM

Everything you wanted to know about graphics cards (but were afraid to ask) is in this primer.

Speak out in the Tom's Hardware reader survey!


July 24, 2006 4:27:29 PM

A pretty good read, even though it was all standard info for me (But beginners or not, who can pass up another Tom's article? ;) ). I liked the animated cooling images, those were a nice touch.
July 24, 2006 4:40:40 PM

Quote:
A pretty good read, even though it was all standard info for me (But beginners or not, who can pass up another Tom's article? ;) ). I liked the animated cooling images, those were a nice touch.


Indeed
July 24, 2006 4:50:11 PM

I'm assuming there's going to be a Part 2 to this. Looking forward to that.
July 24, 2006 5:53:02 PM

Good article.
I would also like to see information about pixel shaders, vertex shaders, unified shaders, anisotropic filtering, etc. in PAINFUL detail. Antialiasing, HDR, etc. are easier to understand.
I'll search older articles on this.
July 24, 2006 8:49:00 PM

Awesome, found it very interesting! But could there be a better description of chipsets, and what is the estimated duration of their lifecycle? 8O
July 24, 2006 9:51:02 PM

Quote:
Good article.
I would also like to see information about pixel shaders, vertex shaders, unified shaders, anisotropic filtering, etc. in PAINFUL detail. Antialiasing, HDR, etc. are easier to understand.
I'll search older articles on this.


Thanks.
ALL that stuff is coming, and more. The whole article is colossal. Approaching 10,000 words.

Patrick had to split it up into many segments. It's friggin comprehensive, I promise. :) 

P.S. Those are pictures of my 'ol 9700 PRO, god bless her!
July 24, 2006 9:55:42 PM

thought I recognized that heatsink, mine was an AIW and I replaced that sink... truly a bad-a$$ card if ever there was one. The Champ.
July 24, 2006 11:39:08 PM

Hello, I read the article; even though I know my fair share when it comes to graphics cards, it's always great to have an amazing article refresh everything.

I have a question, and I'm not sure if this is the right place. If not, I'd appreciate it if you could point me to the right place for such questions.

Tom said that the VRAM of a card mostly stores textures to feed the GPU. I've always wondered what exactly the memory does; I've read in articles that having faster VRAM helps in rendering shadows and raster operations, and also that having a larger size helps when increasing the resolution.

But what exactly does memory do when it comes to helping increase performance, i.e. when the speed increases, what's being fed faster? The textures? Geometric information?

Any answer is greatly appreciated!
July 25, 2006 12:39:05 AM

Quote:

P.S. Those are pictures of my 'ol 9700 PRO, god bless her!


I'm still using mine. Runs FEAR just fine too. What a beast! :roll:
July 25, 2006 1:24:53 AM

Nice article, I'm looking forward to the next one. Keep up the good work.
July 25, 2006 1:30:46 AM

A quick overview of the power connectors and how they are different and sometimes not compatible would have been a perfect touch.
July 25, 2006 5:40:08 AM

Sweet dude, been wondering when you'd get this out.

BTW, just curious, why is this thread in the 'memory' section instead of 'graphics'?

Just curious, because I wondered why I hadn't seen it earlier.

Cheers! :trophy:
July 25, 2006 7:34:48 AM

I liked your animations, you want to do a couple for me?
July 25, 2006 9:37:20 AM

I don't think the info on HDMI is correct. As far as I know the video part of it is DVI except that it has Digital Restrictions Management. It's thus not a superior interface but rather a step backwards for the free world.
July 25, 2006 9:44:30 AM

Right, but HDCP is required for future (and some current) BRD and HD-DVD content. But don't new DVI standards also support HDCP?
July 25, 2006 1:16:50 PM

I really thought that this was a great article. I can't wait to see ( 8O ) more :D. You guys have kicked this site up another notch. THANKS!! :D
July 25, 2006 4:55:53 PM

When can we expect to see more parts of this beast? :tongue:
July 25, 2006 11:16:23 PM

Excellent to see this. Perhaps I can refer people to this article rather than writing up/Ctrl+V'ing a good-sized post when somebody needs an explanation on some site or other.

Just a few comments, though. First, I would've appreciated it if, in addition to showing what each sort of connection interface looks like on a card, you had also shown what the appropriate slot looks like on the motherboard; with a lot of people looking for new video cards, even on older systems, I've found it's important to be able to help people tell apart AGP, PCI-e, and plain ol' PCI slots. (Also, I noticed VESA wasn't mentioned among the slots; nit-picking, I know. :tongue: )

Oh, and when it refers to the auxiliary power connections for later AGP cards, I'm left a little confused by the "dual 4 pin power sockets," which, in contrast to the words that follow it, implies that lots of cards had multiple power sockets, when in fact the 6800 Ultra was the only AGP card that had more than one power connector on it.

But aside from that, great article! I'm crossing my fingers and hoping that this might be among the harbingers of a return of THG's Graphics/Display section to being something other than a poor joke.
Quote:
Hello, I read the article; even though I know my fair share when it comes to graphics cards, it's always great to have an amazing article refresh everything.

I have a question, and I'm not sure if this is the right place. If not, I'd appreciate it if you could point me to the right place for such questions.

Tom said that the VRAM of a card mostly stores textures to feed the GPU. I've always wondered what exactly the memory does; I've read in articles that having faster VRAM helps in rendering shadows and raster operations, and also that having a larger size helps when increasing the resolution.

But what exactly does memory do when it comes to helping increase performance, i.e. when the speed increases, what's being fed faster? The textures? Geometric information?

Any answer is greatly appreciated!

VRAM performs three primary functions (in terms of what it stores):
  • It acts as the PC's "frame buffer." Anything that's displayed on a screen attached to the PC (be it analog OR digital) is stored in this buffer. If multiple independent displays are used, there will be more than one frame buffer in the VRAM.
  • In more advanced 3D games (pretty much all of them today) it acts as a "rendering buffer." It's like a frame buffer, but it's here that the shaders, calculations, etc. performed by the GPU are written. Ostensibly, everything written here will find its way, in some shape or form, into the final frame buffer.
  • Lastly, VRAM is used for holding texture maps. Simply put, they're image data bits that can be used in various ways; most commonly, it's a "color map" that might look like an ordinary image, which is then applied to the 3D meshes to turn an invisible "wireframe" into the models you see. Some shaders, such as bump-mapping, normal-mapping, parallax-mapping, and specular-mapping, rely on texture maps as well, though they technically aren't images; the map for, say, bump-mapping or parallax-mapping is more of a "height map" that tells how elevated that pixel should be.

Of course, other things can be stored too; I believe most software also uses it to store geometric data that's not being calculated at the time, though the space for that would be negligible. (Geometric data actually being worked with will be stored within caches and registers within the GPU itself.)

At any rate, faster memory (faster in bandwidth, not clock speed) means that the GPU can read/write to it faster, and hence more quickly access and modify what's stored there. The result: faster calculations overall. This is especially important when, say, you're rendering (or playing) something in a rather high resolution; the higher the resolution, the more pixels there are to write to, and hence far more bandwidth used/needed.

I hope this cleared some things up.
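
If it helps to put rough numbers on it, here's a quick back-of-the-envelope sketch (Python; the resolutions, render-buffer count, and texture budget are just assumed example values, not figures from the article):

Code:
# Rough VRAM tally for the three uses above -- every figure here is an
# illustrative assumption, not a measurement.
BYTES_PER_PIXEL = 4                      # assume a 32-bit colour buffer

def buffer_bytes(width, height):
    """One full-screen buffer (frame buffer or render buffer) at this resolution."""
    return width * height * BYTES_PER_PIXEL

def vram_estimate(width, height, render_buffers=2, texture_mb=96):
    fb  = buffer_bytes(width, height)                    # final frame buffer
    rb  = render_buffers * buffer_bytes(width, height)   # assumed extra render buffers
    tex = texture_mb * 1024 * 1024                       # assumed texture budget
    return fb + rb + tex

for w, h in [(1024, 768), (1280, 1024), (1600, 1200)]:
    total = vram_estimate(w, h)
    print(f"{w}x{h}: frame buffer {buffer_bytes(w, h) / 2**20:.1f} MB, "
          f"rough total ~{total / 2**20:.0f} MB")

The point is simply that the buffers scale with resolution while the texture budget doesn't, which is why both capacity and bandwidth get squeezed harder as the resolution goes up.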

Quote:
Sweet dude, been wondering when you'd get this out.

BTW, just curious, why is this thread in the 'memory' section instead of 'graphics'?

Just curious, because I wondered why I hadn't seen it earlier.

Cheers! :trophy:

It appears that a LOT of major threads are winding up in the "Memory" section. Perhaps an internal joke on someone's capabilities?! :lol: 
July 26, 2006 4:46:17 AM

Thanks for answering that, nottheking.

Just to answer some of your questions... some things weren't delved into with maximum detail; that was a conscious effort not to saturate the reader, for whom this is a primer.

The idea was to have something to initiate a layman to the point where they would feel comfortable reading an average graphics card review, maybe even reading video card specs and drawing a few hypotheses/expectations of performance on their own.

Having said that, sometimes more info can be better, so if you guys have a lot of feedback on a particular subject, I suppose the primer might be modified in the future to accommodate that.

But I urge you to wait until all the parts are released until you make your judgements. Like I said, lots and lots more info on the way. Can't give you a date though, that's up to Patrick. :) 
July 27, 2006 12:19:56 AM

Quote:
Everything you wanted to know about graphics cards (but were afraid to ask) is in this primer.

Speak out in the Tom's Hardware reader survey!


I wanted to congratulate Tom's on its latest efforts with regards to "beginners' guides." I unashamedly and unabashedly admit to thoroughly enjoying such guides, because my PC knowledge is average at best, and it's in understanding the fundamentals that we can all better work with our beloved machines; even more terrific is being able to talk shop with a veteran who bemoans this and that on his/her PC. :lol: 

Thanks again Tom's; great work!
July 27, 2006 1:08:06 AM

I am quite surprised no one caught this:
Quote:
S-Video is an analog video standard used by the television industry. It provides a low resolution signal to televisions like single-cable composite, but the color information is separated into three channels, which represent the basic colors. It allows for a higher-quality signal than single-cable composite-but still at a low dynamic resolution. However, while S-VHS is superior to single cable composite video, it is vastly inferior to high-quality component video (Y, Pb, Pr) outputs.


I think that S-Video is divided into chroma and luma, as backed up by this page: http://en.wikipedia.org/wiki/S-Video

Quote:
Due to the separation of the video into brightness and colour components, S-Video is sometimes considered a type of component video signal, although it is also the most inferior of them, quality-wise, being far surpassed by the more complex component video schemes (like RGB). What differentiates S-Video from these higher component video schemes is that S-Video carries the colour information as one signal.


An interesting and confusing oversight; I hope that it was a mistake and the writer did not intend to convey the information as written.

Any insights?
July 28, 2006 2:43:47 PM

You are absolutely right, nubie. I'll advise to have it changed thusly:

Quote:
S-Video is an analog video standard used by the television industry. It provides a low resolution signal to televisions like single-cable composite, but the video information is separated into luminance (brightness) and chrominance (colors). It allows for a higher-quality signal than single-cable composite-but still at a low dynamic resolution. However, while S-VHS is superior to single cable composite video, it is vastly inferior to high-quality component video (Y, Pb, Pr) outputs.


Nice catch.
July 31, 2006 3:18:02 PM

A couple things about links... the Part 2 article links here, so I'm posting here. On Page 10, there are 3 links at the top about 3dfx, Crossfire and SLI. Those all point to... page 10! Whee! Seems they should either be removed or pointed elsewhere.

Also, on the same page:

Quote:
There are other factors to consider as well. While two graphics cards linked together will offer a performance boost, the result is rarely anywhere near twofold, so from a budgetary standpoint, it is important to keep in mind that paying twice the money will not yield cost effective results. A 120% - 160% performance increase is a more realistic expectation with multi-card solutions


I wish you'd get a 120-160% performance increase! Then I'd so go Crossfire. I think it should've been "A 20% - 60% performance increase" instead ;) 
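
Just to put numbers on the cost side of it (a rough sketch; the card price and frame rates below are made-up examples, not benchmarks):

Code:
# Cost-effectiveness of a second card, assuming a 20-60% scaling range.
single_cost = 300.0        # assumed price of one card
single_fps  = 50.0         # assumed frame rate with one card

for scaling in (1.2, 1.4, 1.6):                 # i.e. +20%, +40%, +60%
    dual_fps, dual_cost = single_fps * scaling, single_cost * 2
    ratio = (dual_fps / dual_cost) / (single_fps / single_cost)
    print(f"+{(scaling - 1) * 100:.0f}%: {dual_fps:.0f} fps for twice the money "
          f"-> {ratio:.0%} of the single card's fps-per-dollar")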

Overall I liked the article a lot, and learned a few small things along the way. Great job with the animations again, it really helps to see the difference when the images are lined up just right and cycle through, rather than going from one full-screen image to another and looking for the difference. This is true even more so to a person who doesn't know what aliasing is and just hasn't ever noticed it, or thought about it as a problem that could be fixed.
July 31, 2006 3:27:34 PM

Well, I suppose it's a good read for someone who doesn't already know about the technology, but since I already do, I just found myself reading the bolded topics and moving on. I did find the difference between 16x AF and none pretty funny; pretty much what I had already concluded on my own, it basically gives no improvement and in return eats CPU/GPU cycles :) 
July 31, 2006 3:55:07 PM

Haha, I was wondering the same thing; if they hadn't labeled it, I would have never known it was two different pictures. As for AA, it just makes prettier clouds... but when being shot at, who on earth is looking up at the clouds...
July 31, 2006 4:15:17 PM

This post actually comes before I've finished reading the second part of the article. Apparently, this thread should perhaps be re-named, if this is the thread to discuss all three parts on, rather than just the first.
Quote:
Thanks for answering that, nottheking.

Just to answer some of your questions... some things weren't delved into with maximum detail; that was a conscious effort not to saturate the reader, for whom this is a primer.

The idea was to have something to initiate a layman to the point where they would feel comfortable reading an average graphics card review, maybe even reading video card specs and drawing a few hypotheses/expectations of performance on their own.

Having said that, sometimes more info can be better, so if you guys have a lot of feedback on a particular subject, I suppose the primer might be modified in the future to accommodate that.

But I urge you to wait until all the parts are released until you make your judgements. Like I said, lots and lots more info on the way. Can't give you a date though, that's up to Patrick. :) 

Of course, I am well aware that a lot more is to come. (though I write this after reading Part II)

My main comments, aside from the bit on VESA (which I mentioned since the article already covered effectively superfluous interfaces), were based upon the idea that they might be rather valuable information to the neophyte, especially in showing what each sort of slot looks like on a motherboard. In the months before the release of Oblivion, I'd guess I answered a bit over 10,000 graphics-card-related questions, and in a lot of cases, things hinged on what kind of graphics card slot the gamer in question had.

Of course, I, too, recognize the necessity not to "over-saturate" the thing with information. I've found that such can be a turn-off, and defeat the whole purpose of the thing. Deep graphics knowledge is gained incrementally, as I've told many people outside of this site.

Quote:
I am quite surprised no one caught this:

<snip>

I actually did catch it, but even more embarrassing for me (I've long known that Composite is one signal, S-Video is two, and it's Component that's 3), I had managed to forget it by the time I was actually writing my post. That happens a lot. :oops: 
July 31, 2006 4:43:54 PM

Quote:

I actually did catch it, but even more embarrassing for me (I've long known that Composite is one signal, S-Video is two, and it's Component that's 3), I had managed to forget it by the time I was actually writing my post. That happens a lot. :oops: 


I admit to skimming and not really noticing. I don't expect a description of the timing either, but that's why it's here: not for us, but for people just learning about this. I didn't even notice that; I noticed other bits.

Well, the thing is the granularity you want to get to; like you mentioned, describing the connectors is a great idea, and giving a little information is also a good idea, but getting down to the pin-out level isn't necessary, so I think Cleeve's going about finding the balance between too little and too much info.
July 31, 2006 4:51:46 PM

Okay, as for the second part of the article...

Does nVidia actually state somewhere that the texture fill-rate has to do with "pixel pipelines" and actually differentiates these from TMUs? I think it may simply be the case that nVidia, having shown themselves to be TMU-centric in their designs, consider TMUs to BE the pixel pipelines, and perhaps it might just be best to trim that section to reduce possible confusion.

Also, "features" comes conveniently right after fill-rate; it might've been prudent to mention that, indeed, this was another sort of fill-rate at work, to help put all four types of unit on a level, since you already mentioned fill-rate as related to ROPs and TMUs, and then proceed to have a list that includes those, and adds in shader units.

Next, when the mention of "pixel pipelines" comes up, I think it should mention the GeForce 7 as the first to go "fragmented;" I highly doubt that, with 24 TMUs/PSUs and only 16 ROPs, it could be an architecture that's anything but. Also, tracing through history, "pipeline" count has been fairly accurate to either of two things: (when applicable) the number of ROPs, and the number of pixel shader units.

It could also be dangerous to imply that "24-pipeline" cards would be generally superior to all "16-pipeline" cards, particularly given that Radeon X1900s are generally considered "16-pipeline," rather than "48-pipeline."
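
Since fill rates and "pipeline" counts keep coming up here, the arithmetic itself is worth spelling out (a sketch; the unit counts and clock are example figures roughly in line with a GeForce 7800 GTX, quoted from memory rather than taken from the article):

Code:
# Fill rate falls straight out of unit count x core clock.
core_clock_hz = 430e6    # assumed core clock (~7800 GTX territory)
tmus = 24                # texture mapping units
rops = 16                # raster operation units (what "pipelines" often counted)

texture_fill = tmus * core_clock_hz    # texels per second
pixel_fill   = rops * core_clock_hz    # pixels per second

print(f"Texture fill rate: {texture_fill / 1e9:.2f} GTexels/s")
print(f"Pixel fill rate:   {pixel_fill / 1e9:.2f} GPixels/s")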

The bit on manufacturing processes is right on, though I might add something in parentheses to make sure the reader knows that "µ" stands for the prefix "micro-" :p 

As for the section on memory interfaces, I can't quite write it out, (partially because I'm falling asleep as I write this) but there's got to be a better way to explain how bit width is very important, perhaps involving a statement outright telling that bit width is the number of bits that the VRAM can transfer per clock cycle.

Fortunately, I'm glad to see that the article covered the all-important issue of some people being confused by seeing "half-speed" reported for their DDR VRAM. Though it might be wise to start including "GDDR4" in that list, given that it appears to be only a month or two away.
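
Since the bus width and the DDR "half-speed" confusion go hand in hand, here's the bandwidth math spelled out (a sketch with example clocks and widths, not the specs of any particular card):

Code:
# Bandwidth = (bus width in bytes) x (effective transfer rate).
# DDR/GDDR transfers twice per clock, which is why monitoring tools often
# report "half" the marketed memory speed.

def bandwidth_gb_per_s(bus_width_bits, reported_clock_mhz, ddr=True):
    transfers_per_s = reported_clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_width_bits / 8) * transfers_per_s / 1e9

print(bandwidth_gb_per_s(256, 700))   # 256-bit bus @ 700 MHz DDR -> ~44.8 GB/s
print(bandwidth_gb_per_s(128, 700))   # same clock, half the width -> ~22.4 GB/s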

As for the whole long page on Direct3D and shader models, I must congratulate the writer of the article on that; I especially liked the animated example of differences, and I'm surprised that so much detail could be retained in just 256 colors. It might've also been a good idea to mention DirectX 9.0b and the accompanying SM 2.0b, but since it doesn't really ever have much of an impact on gamers, as far as they're concerned, perhaps it was right to leave it out.

However, I must correct the article on the mention of HDR. While I'm glad to see that the writer recognized that OpenEXR was developed for movies, not games, it is not used by Oblivion, at least if the game's developers are to be believed; while functionally and statistically, it seems to be identical, Bethesda purports that it is a shader they developed in-house, and hence not found elsewhere. (though in 2005, they reported they used a SM 2.0 version, so it may be that they simply ditched it, and licensed OpenEXR, though they make no mention of it in the credits/etc.)

Oh, and on the section of "AA," an embarrassing mistake was made:
Quote:
Aliasing (abbreviated 'AA') is a term to describe jagged or blocky patterns associated with displaying digital images.
AA is for anti-aliasing, not aliasing. :p  It might also have been a good idea to give a basic description of HOW it works, (and hence why it increases the workload on the GPU) even if it was a mere sentence describing how it effectively draws each pixel multiple times, from a slightly different vantage, and blends them together.
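
For what it's worth, the "draw it several times and blend" idea can be sketched in a few lines (a toy supersampling example, nothing like how the hardware actually implements its AA modes):

Code:
# Toy supersampling: shade several sub-pixel positions and average them.
def aa_pixel(shade, x, y, grid=2):
    """shade(x, y) -> brightness 0..1; sample a grid x grid pattern inside the pixel."""
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            total += shade(x + (i + 0.5) / grid, y + (j + 0.5) / grid)
    return total / (grid * grid)

# A hard diagonal edge: everything above the line y = x is white, below is black.
edge = lambda px, py: 1.0 if py > px else 0.0
print(edge(10.0, 10.0), aa_pixel(edge, 10.0, 10.0))   # 0.0 vs 0.25: the edge pixel becomes a grey step

Each extra sample is extra shading and writing, which is exactly why enabling AA raises the GPU workload the way the article describes.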

Oh, and the bit on texture filtering, I'm afraid to say, should probably be just thrown out and re-written from scratch. It describes texture filtering as being invented to combat mip-mapping blur, when that's what anisotropic and trilinear filtering are for; each needs its own description. Perhaps some info can be shamelessly taken from This excellent THG article that I've often referenced?

Oh, and that image demonstrating AF x16, in contrast to the one demonstrating different shader models, completely sucks. Even I had a hard time discerning that it was, indeed, AF x16. (I can tell AF levels apart from a screenshot) Perhaps a different scene, or even a different game, should've been used. I might recommend Morrowind, or perhaps Oblivion, though Morrowind makes mip-mapping far more apparent. Like the article I linked above, it should show the difference between no filtering, and various types/levels of filtering.

Anyway, all in all, this is a great article. Obviously nothing I didn't already know, but I think it might prove rather helpful for countless people who are new to this sort of thing. Keep it up, and I eagerly await Part III! :D 
July 31, 2006 5:01:45 PM

Quote:

However, I must correct the article on the mention of HDR. While I'm glad to see that the writer recognized that OpenEXR was developed for movies, not games, it is not used by Oblivion, at least if the game's developers are to be believed; while functionally and statistically, it seems to be identical, Bethesda purports that it is a shader they developed in-house, and hence not found elsewhere. (though in 2005, they reported they used a SM 2.0 version, so it may be that they simply ditched it, and licensed OpenEXR, though they make no mention of it in the credits/etc.)


Well, according to the Bethesda web site, and a link I found a few months ago, they didn't even write the graphics 'engine' for their own game, but rather used a third-party 'engine'. So personally, I find it highly unlikely that they wrote any graphics routines for their own game (other than perhaps enumerating graphics devices). Then again, I'm not a game dev, so what do I know =/
July 31, 2006 5:53:38 PM

A good read for beginners like myself and a good refresher for those who have been doing this for a while.
July 31, 2006 5:55:15 PM

Simple question.

How is graphics memory used? For screen size? For texture size?
I think it's for screen size, because I can, for example, play HL2 at MAX settings when my video card has only 64 MB and the screen size is only 1 megapixel.
Am I missing something?

Thanks!
July 31, 2006 6:01:03 PM

At lower screen sizes the game may be running lower-resolution textures.

I would say most of the memory goes to textures.

A good article either way.....
July 31, 2006 6:23:11 PM

Quote:
Well, according to the Bethesda web site, and a link I found a few months ago, they didn't even write the graphics 'engine' for their own game, but rather used a third-party 'engine'. So personally, I find it highly unlikely that they wrote any graphics routines for their own game (other than perhaps enumerating graphics devices). Then again, I'm not a game dev, so what do I know =/

Yes, Bethesda did license the GameBryo engine from Emergent Technologies. However, I know that they did write most of their shaders. And they have reported using their own HDR shader, though they made no further mention of that specific question within a few months of the release.

I would know, as I paid a LOT of attention to this game since it was announced.

Quote:
Simple question.

How is graphics memory used? For screen size? For texture size?
I think it's for screen size, because I can, for example, play HL2 at MAX settings when my video card has only 64 MB and the screen size is only 1 megapixel.
Am I missing something?

Thanks!

Well, it's used for all of them, and the ratio would depend on the game in question.

If, say, you're using an emulator for an old console, such as the original Playstation or the Nintendo64, where the PSX only had 512KB of VRAM, and the N64 likely never allocated more than 2-3MB of its RDRAM to textures, (unless you had the N64 expansion pack) and you run at a 1+ megapixel resolution, you're going to be using more memory for the frame buffer than for textures.

Likewise, this can shift according to the game; the use of other rendering buffers, which work just like the final frame buffer, may change depending on the shaders used for rendering; things like cube-mapped reflections require their own buffer space for rendering their scene; hence, in a scene with that reflective water, you're rendering everything twice. Shadow maps (as opposed to shadow stencils), as used in many games, also require their own rendering space.
July 31, 2006 8:01:41 PM

When it comes to AF, looking very closely at the picture I do notice a difference. It's almost as if AF is acting as a high pass filter on the image, and then adding the high pass back into the image (aka a sharpening filter), but it also looks as if it does some gamma correction as well. To my eyes it seems it does improve the image considerably, though maybe not at a glance, but when really looking at it, it does.
See, after they do the low-pass filter (AA) to prevent aliasing (as you can have aliasing in images just like in sound), it is good to try to bring back some of the high-frequency part of the image, so I think the AF does a good job of this by applying the sharpening effect that it appears to do. At least that's my opinion.


Also, I think the reason why so much memory is needed for large-resolution images is that you have to remember that there is at least 3x the image data for each pixel: the combination for each pixel comes from a color map that typically has red, green, and blue info, and if each of those is 32-bit... then for 1900x1200 resolution you need 1900x1200x3x4 = 27,360,000 bytes = 27 megs of memory, for 1 frame... now if you need to do 30 of those a second... yeah, that's a lot of memory. Not to mention that when you do your filters on the data you need more space, as you often have to pad them out to get the filters to work right; say, an FIR (finite impulse response) filter for an image would get rid of some of the data on the edges as you go through the filtering, so you have to pad the image. On top of this, to do a filter, many times you have to multiply a matrix by some part of the image, then add it to a previously multiplied part, aka it's a sum of products (called convolution), then shift it over a pixel, and repeat until the multiplying matrix has been multiplied across the entire image. To do that, you need A LOT of memory, processing power, etc. This is why filters need so much processing power and memory... At least, that's how I would explain it...
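
Spelling that per-frame figure out (a quick sketch; the first line uses the 32-bits-per-channel assumption above, the second the more common case of 8 bits per channel packed into a 32-bit pixel, just for comparison):

Code:
width, height = 1900, 1200

# As assumed above: 3 colour channels x 32 bits (4 bytes) each.
frame_32bit_channels = width * height * 3 * 4
print(frame_32bit_channels, "bytes =", round(frame_32bit_channels / 2**20, 1), "MiB")  # 27,360,000 B ~ 26.1 MiB

# Typical desktop frame buffer: 8 bits per channel, RGBA packed into 32 bits/pixel.
frame_8bit_channels = width * height * 4
print(frame_8bit_channels, "bytes =", round(frame_8bit_channels / 2**20, 1), "MiB")    # 9,120,000 B ~ 8.7 MiB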
July 31, 2006 8:04:40 PM

Quote:
Does nVidia actually state somewhere that the texture fill-rate has to do with "pixel pipelines" and actually differentiates these from TMUs?


Good question. That puzzled me a bit too, it was edited in after I submitted the article I think... don't remember writing that.


Quote:
Next, when the mention of "pixel pipelines" comes up, I think it should mention the GeForce 7 as the first to go "fragmented;" I highly doubt that, with 24 TMUs/PSUs and only 16 ROPs, it could be an architecture that's anything but. Also, tracing through history, "pipeline" count has been fairly accurate to either of two things: (when applicable) the number of ROPs, and the number of pixel shader units.


I believe the ROPs have been detached from everything else for quite a while; I imagine this is because some pixels may have to go through multiple passes before they're rasterized? That's just a total guess though.


Quote:
It could also be dangerous to imply that "24-pipeline" cards would be generally superior to all "16-pipeline" cards, particularly given that Radeon X1900s are generally considered "16-pipeline," rather than "48-pipeline."


Well, the way the article is written it explains that the X1x00 series doesn't apply to the 'pipeline' term very well, hopefully readers will take that to heart.


Quote:
The bit on manufacturing processes is right on, though I might add something in parentheses to make sure the reader knows that "µ" stands for the prefix "micro-" :p 


If they don't know that, they probably don't know 'micro' means 'millionth' anyway, so screw 'em. :) 


Quote:
Fortunately, I'm glad to see that the article covered the all-important issue of some people being confused by seeing "half-speed" reported for their DDR VRAM. Though it might be wise to start including "GDDR4" in that list, given that it appears to be only a month or two away.


Yeah, another thing for the list of addendums.


Quote:
However, I must correct the article on the mention of HDR. While I'm glad to see that the writer recognized that OpenEXR was developed for movies, not games, it is not used by Oblivion, at least if the game's developers are to be believed;


Wow... I'll look into that.


Quote:
Oh, and on the section of "AA," an embarrassing mistake was made:
Aliasing (abbreviated 'AA') is a term to describe jagged or blocky patterns associated with displaying digital images.
AA is for anti-aliasing, not aliasing. :p 

Yep, 'Anti' definitely missing there.


Quote:
Oh, and the bit on texture filtering, I'm afraid to say, should probably be just thrown out and re-written from scratch. It describes texture filtering as being invented to combat mip-mapping blur, when that's what anisotropic and trilinear filtering are for; each needs its own description. Perhaps some info can be shamelessly taken from This excellent THG article that I've often referenced?


Hmmm. I'll have a second look at it.


Quote:
Oh, and that image demonstrating AF x16, in contrast to the one demonstrating different shader models, completely sucks.


Yeah, I've already promised grape I'd redo the AF animation. :p  That one is a victim of tight timelines but it's not so good and needs to be redone...
July 31, 2006 10:36:36 PM

Quote:
When it comes to AF, looking very closely at the picture I do notice a difference. It's almost as if AF is acting as a high pass filter on the image, and then adding the high pass back into the image (aka a sharpening filter), but it also looks as if it does some gamma correction as well. To my eyes it seems it does improve the image considerably, though maybe not at a glance, but when really looking at it, it does.
See, after they do the low-pass filter (AA) to prevent aliasing (as you can have aliasing in images just like in sound), it is good to try to bring back some of the high-frequency part of the image, so I think the AF does a good job of this by applying the sharpening effect that it appears to do. At least that's my opinion.

Well, it is an interesting analogy, though in practice, it really doesn't mean anything in 3D; AA only applies its filtering technique to the EDGES of polygons, while AF only really applies to the BODY of the polygons, where the textures are placed; if you look at the same scene with/without AA, you'd notice it has no impact on the texturing of the image.

Quote:
Also, I think the reason why so much memory is needed for large-resolution images is that you have to remember that there is at least 3x the image data for each pixel: the combination for each pixel comes from a color map that typically has red, green, and blue info, and if each of those is 32-bit... then for 1900x1200 resolution you need 1900x1200x3x4 = 27,360,000 bytes = 27 megs of memory, for 1 frame... now if you need to do 30 of those a second... yeah, that's a lot of memory. Not to mention that when you do your filters on the data you need more space, as you often have to pad them out to get the filters to work right; say, an FIR (finite impulse response) filter for an image would get rid of some of the data on the edges as you go through the filtering, so you have to pad the image. On top of this, to do a filter, many times you have to multiply a matrix by some part of the image, then add it to a previously multiplied part, aka it's a sum of products (called convolution), then shift it over a pixel, and repeat until the multiplying matrix has been multiplied across the entire image. To do that, you need A LOT of memory, processing power, etc. This is why filters need so much processing power and memory... At least, that's how I would explain it...

That is correct, until you get to the "30 a second" part. Only one frame's worth of memory will ever be used for a frame buffer at a time; after that, it's cleared, and a new frame is written to the same space. That merely has to do with the high requirements for memory bandwidth that games have. And indeed, all those extra shaders, and anything else that adds "overdraw" (where more than one thing affects a pixel), dramatically increase bandwidth usage. This is why the XBox 360 shifted to a "tile-based" GPU; it performs all the basic buffer-writing into a "tile buffer" that is located on the same die as the ROPs, and hence has close to unlimited write bandwidth; the processed frame-fragments are then transferred to the actual VRAM to compose the final frame buffer, significantly reducing memory usage.
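
To get a rough feel for why overdraw eats bandwidth so quickly, here's a sketch (the resolution, overdraw factors, and frame rate are assumed example numbers, and this counts colour writes only; depth traffic and texture reads come on top of it):

Code:
# Write-bandwidth demand ~ pixels x bytes per pixel x overdraw x frames per second.
width, height   = 1600, 1200
bytes_per_pixel = 4        # 32-bit colour
fps             = 60

for overdraw in (1, 3, 6):     # 1 = each pixel touched once; higher = shaders/blending re-touch pixels
    gb_per_s = width * height * bytes_per_pixel * overdraw * fps / 1e9
    print(f"overdraw x{overdraw}: ~{gb_per_s:.1f} GB/s of colour writes alone")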

Quote:
Good question. That puzzled me a bit too, it was edited in after I submitted the article I think... don't remember writing that.

I don't remember writing a lot of things I apparently wrote. :oops: 

Quote:
I believe the ROPs have been detached from everything else for quite a while; I imagine this is because some pixels may have to go through multiple passes before they're rasterized? That's just a total guess though.

Well, in the standard sort of rendering process, it appears that each function performed, either by a pixel shader or by texturing, is recorded to the VRAM; that's the source of the term "overdraw." So in reality, I guess neither of us quite knows.

Quote:
Wow... I'll look into that.

I think I'll try my hand at it first; (I'm not 100% certain myself on the issue, I'll admit) the developers may still be willing to speak to me, though I haven't a clue how they might take it were I to directly ask them such a question. (they might be more willing to answer a technical question PM if it came from a well-established member of the Elder Scrolls community rather than from an unknown person from the media, particularly if it was only one question)

Quote:
Yeah, I've already promised grape I'd redo the AF animation. :p  That one is a victim of tight timelines but it's not so good and needs to be redone...

It did indeed appear rushed, though I'm surprised you were so good at getting a shot that broke down into 8-bit color so easily...

Of course, the alternative would be to use an animated .PNG, and say "screw 'em" to the IE users. :p 
July 31, 2006 11:20:56 PM

Quote:
I did find the difference between 16x AF and none pretty funny; pretty much what I had already concluded on my own, it basically gives no improvement and in return eats CPU/GPU cycles :) 


I would be inclined to disagree with you here. The reason you can't see the difference in the article is because the two screenshots they gave were not good examples. Look at these images and tell me that you can't see a difference:

Bilinear


Trilinear


2xAniso


4xAniso


8xAniso


I left out 16xAniso because the gains over 8x were negligible. It is important to note, however, that these screenshots were taken on a 9800Pro graphics card, and 16xAniso on an X1800/X1900 is a big improvement (look at the THG review of the X1800 launch).

Also, these screenshots were not taken for the sole purpose of this thread. I actually took them over a year ago for a presentation about advancements in 3D rendering in one of my technology electives.

Just for the heck of it, here are some shots to show the effects of antialiasing:

No AA


4xAA



Note to any TG employees:
You are free to use these screenshots if you so desire. Let me know if you want to use any of the other shots I took as examples of pixel shaders (heat, vapors, water), sprites (flames, explosions), or bump mapping. I'd be happy to send them to you.
August 1, 2006 1:38:42 AM

Quote:
Does nVidia actually state somewhere that the texture fill-rate has to do with "pixel pipelines" and actually differentiates these from TMUs? I think it may simply be the case that nVidia, having shown themselves to be TMU-centric in their designs, consider TMUs to BE the pixel pipelines, and perhaps it might just be best to trim that section to reduce possible confusion.


Not in any papers I have from Nvidia. I have spoken to Ugesh Desi and Nick Stam on occasion about their calculations. Instead of listening to the marketing and PR machinery, I chose to go beyond that to find out. In conversations they have confirmed this is the way they count their texture fill rate. On the other side, guys like Guennadi Riguer and David Nalasco have given me the way ATI does things.


As for the deal with S-Video... that has gone to the web team and will get fixed.
August 1, 2006 1:59:07 AM

Quote:
Quote:
Well, according to the Bethesda web site, and a link I found a few months ago, they didn't even write the graphics 'engine' for their own game, but rather used a third-party 'engine'. So personally, I find it highly unlikely that they wrote any graphics routines for their own game (other than perhaps enumerating graphics devices). Then again, I'm not a game dev, so what do I know =/

Yes, Bethesda did license the GameBryo engine from Emergent Technologies. However, I know that they did write most of their shaders. And they have reported using their own HDR shader, though they made no further mention of that specific question within a few months of the release.

I would know, as I paid a LOT of attention to this game since it was announced.

Well, I wasn't doubting what you wrote, and now that I think about it, I think I've heard what you said about the shaders somewhere else also. What I was writing was more targeted at doubting what Bethesda was saying (as per what you were saying), but honestly I don't know; maybe they did write all that stuff, and that's why Oblivion is such a PoS at the moment. The game is riddled with bugs, and performance-wise it could be a lot better IMO, but then again, I also realize it's not an FPS game. Anyhow, even though I personally haven't experienced the plethora of bugs a lot of others have (although the random CTDs and corrupted saves are really annoying), I don't even play the game anymore, I just don't find it all that fun any longer.
August 1, 2006 3:22:04 PM

Quote:
I would be inclined to disagree with you here. The reason you can't see the difference in the article is because the two screenshots they gave were not good examples. Look at these images and tell me that you can't see a difference:

<snip>

I left out 16xAniso because the gains over 8x were negligible. It is important to note, however, that these screenshots were taken on a 9800Pro graphics card, and 16xAniso on an X1800/X1900 is a big improvement (look at the THG review of the X1800 launch).

Good shots you took; I hope you aced that elective.

As for the bit on x16 AF, I've found that there IS a difference, though even at obscene resolutions it's not so much noticeable as it counteracts the purpose of mip-mapping, resulting in the infamous "crackle" you get looking at distant textures, as it rapidly shifts between the multiple texels that fall within that particular pixel. Hence, I typically leave AF at x8 simply for that.

Quote:
Not in any papers I have from Nvidia. I have spoken to Ugesh Desi and Nick Stam on occasion about their calculations. Instead of listening to the marketing and PR machinery, I chose to go beyond that to find out. In conversations they have confirmed this is the way they count their texture fill rate. On the other side, guys like Guennadi Riguer and David Nalasco have given me the way ATI does things.


As for the deal with S-Video... that has gone to the web team and will get fixed.

Okay, thank you for clearing that bit up.

Quote:
Well, I wasn't doubting what you wrote, and now that I think about it, I think I've heard what you said about the shaders somewhere else also. What I was writing was more targeted at doubting what Bethesda was saying (as per what you were saying), but honestly I don't know; maybe they did write all that stuff, and that's why Oblivion is such a PoS at the moment. The game is riddled with bugs, and performance-wise it could be a lot better IMO, but then again, I also realize it's not an FPS game. Anyhow, even though I personally haven't experienced the plethora of bugs a lot of others have (although the random CTDs and corrupted saves are really annoying), I don't even play the game anymore, I just don't find it all that fun any longer.

Indeed, Oblivion's technical state is amazingly bad for a top-shelf title; I was quite shocked. Performance-wise, it's what might be expected for the obscene amount of shaders used, though I frown upon the ridiculously flimsy LOD scaling for objects, which feels more fitting for the year 2001 than 2006. Also, I hate that shadows aren't actually filtered, but simply done like F.E.A.R.'s, only with shadow maps rather than stencil volumes.

And as far as bugs... I can't believe they considered it done. Perhaps this is the part where I can repeatedly tell them, "I told you so!" :p 

I still play it, but because of the flawed state of it, it's nowhere near as fun as either Daggerfall or Morrowind were. Even though I only got a corrupted save once. (an auto-save I got entering a shop, when it CTD'd while loading the cell)
August 2, 2006 4:25:22 PM

Quote:
Also, I think the reason why so much memory is needed for large-resolution images is that you have to remember that there is at least 3x the image data for each pixel: the combination for each pixel comes from a color map that typically has red, green, and blue info, and if each of those is 32-bit... then for 1900x1200 resolution you need 1900x1200x3x4 = 27,360,000 bytes = 27 megs of memory, for 1 frame... now if you need to do 30 of those a second... yeah, that's a lot of memory. Not to mention that when you do your filters on the data you need more space, as you often have to pad them out to get the filters to work right; say, an FIR (finite impulse response) filter for an image would get rid of some of the data on the edges as you go through the filtering, so you have to pad the image. On top of this, to do a filter, many times you have to multiply a matrix by some part of the image, then add it to a previously multiplied part, aka it's a sum of products (called convolution), then shift it over a pixel, and repeat until the multiplying matrix has been multiplied across the entire image. To do that, you need A LOT of memory, processing power, etc. This is why filters need so much processing power and memory... At least, that's how I would explain it...

That is correct, until you get to the "30 a second" part.
:? I'm confused now. I had always thought that the 32-bit color we use is 8 bits per color channel, RGB, and 8 bits for the alpha channel (which wouldn't even really apply to frames ready to be displayed). So I thought it was 1900x1200 pixels at 24 bits = 3 bytes, or 1900x1200x3 = 6.52 MiB (Mebibytes, not Men in Black) = 6.84 MB. But now I hear it would be 27 MB, which is like 4 times larger? :? Can someone explain this to me?

Quote:
Just for the heck of it, here are some shots to show the effects of antialiasing:

No AA


4xAA

I'm with you in that AA makes a big difference. It makes less of a difference when you use a CRT at a high resolution, or play a game with lots of movement, but if you use an LCD (especially 19" with 1280x1024, where pixels are obviously bigger than on a 17" with 1280x1024) or experience any periods of time when you're not whirling around after being shot at, you will notice it... especially if you're used to AA.

You should note, however, that you're using the X800 generation of cards like me. Modern cards can anti-alias textures with alpha channels if I'm not mistaken. In the screenshots you showed off, the chain-link fence is aliased because they chose to make it a see-through texture rather than a bunch of T&L polygons. So people should know that even your screenshot could be improved upon significantly by smoothing out those fence lines and the diagonal supports for that power infrastructure (especially noticeable against the AA'd vertical bars, which Valve did have the sense to make out of polygons). Another notable thing: one big problem with thin lines like the chain-link fence is that, as you get far away from them, they can start to disappear and reappear seemingly at random when they're less than 1 pixel wide. This is quite visible in your AA'd screenshot, where areas of the fence seem to just disappear.
August 2, 2006 6:35:24 PM

Those are great examples, CapnBFG. Much better than my originals. Thanks for offering to let us use them.

I've turned them into animated GIFs (you might have to click 'em to see them fully sized):



August 3, 2006 12:31:31 AM

Quote:
Those are great examples, CapnBFG. Much better than my originals. Thanks for offering to let us use them.

I've turned them into animated GIFs (you might have to click 'em to see them fully sized):





This is all fine and dandy, except that AF does NOT make anywhere near that much difference on my system, but AA does, albeit not much since I play games normally at 1440x900; it is, however, noticeable. I must say though, I don't know what resolution you used for those shots, but it seems very low-res, because even with AA off, on my system the screenshots would look nowhere near that bad, even IF I were using a non-optimal resolution for my monitor.
August 3, 2006 12:52:58 AM

Ah, OK, I see, you zoomed, which pretty much defeats the purpose of the whole discussion; of course it's going to look worse.

Here's an example of what my screen looks like while playing Oblivion (sorry for the text, I was doing some testing).


This is no AA, and no AF. As you can see, unless you get RIGHT UP to the monitor, all those jaggies are pretty hard to see. As for the FPS, well this is what I have to live with when fighting a whole city at once in Oblivion ;) 
August 3, 2006 3:26:07 PM

Quote:
Ah, OK, I see, you zoomed, which pretty much defeats the purpose of the whole discussion; of course it's going to look worse.


The AF comparison is not zoomed, it's cropped... big difference.
The texture blurriness can easily be seen in the screenshot above at its regular resolution with no AF.
That's 1280x960, too... a very reasonable resolution. That anisotropic filtering comparison is exactly what you'd see in game. It surprised you that it's so dramatic, didn't it? I guess AF is a pretty nice feature after all. :wink:

I zoomed in the AA comparison, but it clearly says so on the screenshot. I personally find AA makes a big difference, even at 1600x resolution.

But of course, it's subjective. I find the Oblivion screenshot very blurry on the road texture as it goes off into the distance - AF would help that a lot, like in the animated gif. And I think some AA wouldn't hurt there either; the aliasing is pretty obvious on some things, like the edge of the road.
August 3, 2006 7:59:23 PM

I wish I could use AA and AF on my system, but 1600x1200 with an X800 XT tends to push my computer to its limits as is with modern games. Ahhh, back in the days of the 9700 Pro, I could play the newest games at 1280x1024 with 6x AA and 16x AF with wonderful framerates. The memories! :cry: 
August 3, 2006 8:18:20 PM

Quote:
Ah, OK, I see, you zoomed, which pretty much defeats the purpose of the whole discussion; of course it's going to look worse.


The AF comparison is not zoomed, it's cropped... big difference.
The texture blurriness can easily be seen in the screenshot above at its regular resolution with no AF.
That's 1280x960, too... a very reasonable resolution. That anisotropic filtering comparison is exactly what you'd see in game. It surprised you that it's so dramatic, didn't it? I guess AF is a pretty nice feature after all. :wink:

I zoomed in the AA comparison, but it clearly says so on the screenshot. I personally find AA makes a big difference, even at 1600x resolution.

But of course, it's subjective. I find the Oblivion screenshot very blurry on the road texture as it goes off into the distance - AF would help that a lot, like in the animated gif. And I think some AA wouldn't hurt there either; the aliasing is pretty obvious on some things, like the edge of the road.

Well, not pointing at anyone specifically, but there comes a point where trying to get your game looking absolutely perfect becomes an obsession. Then the question is, WHY on God's earth are you even bothering with games... because obviously, you're not enjoying it.

[EDIT]

Really, I wasn't paying so much attention to the AF as the AA, and when I mentioned 'zoomed' I was talking specifically about the AA.

The blurry roads in Oblivion have more to do with poor textures than AF; trust me, I've tweaked Oblivion to hell and back, and that screen is basically where I found my sweet spot for performance/looks. Anyhow, I've loaded third-party texture packs, and the roads DO look much better, albeit, even just creeping around in the wilderness, I get 5 FPS...



This is everything maxed out, running the first beta patch, and as you probably have already noticed, I haven't even used LOD texture packs for this screenshot. There comes a point in time, when you're a gamer, that you have to just remind yourself there is only so much your hardware can do, and just enjoy the game.
August 8, 2006 3:00:40 PM

This was a very nice read... liked it a lot, thnx.