Graphics cards always tout their performance in terms of operations per second (OPS). For example, my GeForce 4600 claims it can process 1.23 trillion OPS.
So what? I guess this is good if a video game is sending lots of graphics commands to a system... then the system can process them faster, making the game run more smoothly and at a higher resolution. Correct me if I'm straying from the path of logic. But I have yet to find a website that explains how many OPS any video game generates.
Question 1. So how does one know how many trillions of OPS are enough? Or is there infinite potential for increased performance as resolution increases? Still, how much is enough to play Unreal 2003 at, say, 1280x1024? How do you relate system OPS to a game that doesn't tell you how many OPS it demands? And if this info is available, where do I find it?
Question 2. What game engines are considered the most graphically complex? Name 3 or 4, please.
Question 3. How would you most simply compare the complexity of one game engine to another? For instance, in layman's terms, what makes Unreal II better than the latest Quake?
Question 4. Are there any other methods of measuring a system's graphics rendering capability that you would suggest other than (or more useful than) OPS?
Unfortunately, like everything in life, it's a bit more complicated than that...
First, #1 and #4:
I think bandwidth (and, by extension, fill rate) is probably more important as far as high-resolution gaming goes.
In most cases, even a lowly GeForce 2 MX can perform enough operations per second to run, say, Unreal 2003 at 640x480 with playable framerates.
With increased resolution comes increased reliance on a video card's memory subsystem. Antialiasing also requires tons of bandwidth. This is why the Radeon 9700, with its 256-bit memory bus, fares so well at high resolutions and with FSAA turned on.
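To see why, here's a back-of-envelope sketch. The per-pixel cost and overdraw factor below are illustrative assumptions, not measured figures, but they show frame-buffer traffic growing linearly with pixel count:

```python
# Rough sketch: frame-buffer bandwidth demand vs. resolution.
# Assumes 32-bit color, a depth read + depth write + color write per
# rendered pixel, and an overdraw factor -- all simplified, illustrative
# numbers, not measurements from any real card or game.

def framebuffer_gbps(width, height, fps=60, bytes_per_pixel=4,
                     overdraw=3.0):
    """Approximate color + depth frame-buffer traffic in GB/s."""
    # 3 memory accesses (color write, depth read, depth write) per pixel
    bytes_per_frame = width * height * overdraw * bytes_per_pixel * 3
    return bytes_per_frame * fps / 1e9

print(framebuffer_gbps(640, 480))    # modest demand at low resolution
print(framebuffer_gbps(1280, 1024))  # ~4.3x the pixels, ~4.3x the traffic
```

FSAA multiplies these figures again (e.g. 4x supersampling roughly quadruples them), which is why it punishes cards with narrow memory buses so badly.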
Although the 9700 PRO can perform exactly the same OPS as a Radeon 9500 PRO (they use exactly the same GPU), note that the 9700 PRO has 19.8 GB/s of memory bandwidth compared to the 9500 PRO's 8.8 GB/s.
The two cards show a sizable difference in performance even though they use the exact same graphics processor. The only difference is that the memory bus on the 9500 PRO has been crippled to 128 bits wide.
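That bandwidth gap falls straight out of the bus width. A quick sketch (the effective memory clocks below are approximations chosen to match the quoted figures, so treat them as assumptions rather than spec-sheet values):

```python
# Peak memory bandwidth = bus width (in bytes) x effective memory clock.
# The clock figures are approximate -- assumptions, not official specs.

def memory_bandwidth_gbps(bus_bits, effective_clock_mhz):
    """Peak theoretical memory bandwidth in GB/s."""
    return (bus_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# Radeon 9700 PRO: 256-bit bus, ~620 MHz effective DDR clock
print(memory_bandwidth_gbps(256, 620))  # ~19.8 GB/s
# Radeon 9500 PRO: 128-bit bus, ~550 MHz effective DDR clock
print(memory_bandwidth_gbps(128, 550))  # ~8.8 GB/s
```

Same GPU, similar clocks; halving the bus width alone cuts the peak bandwidth by more than half here.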
Remember too, a lot of the transistors in newer graphics cards are committed to DirectX hardware operations.
With all this in mind, I think that's why "OPS per second" on its own can't really be used to accurately compare graphics hardware.
Graphically complex engines, eh? Unreal 2, Aquanox, and Morrowind come to mind.
The number of polygons per scene is an indication of how complex the geometry is in a game, but it fluctuates with the number of objects/characters on screen at a given moment. Still, developers keep in-house polygon budgets for any given game.
Also, the size of the textures used makes games more intensive for graphics cards.
Finally, effects (like lights, realtime shadows, and DirectX8 water) eat up graphics card resources like crazy.
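To put rough numbers on the texture point: doubling a texture's dimensions quadruples the memory (and bandwidth) it eats. A hypothetical sketch, assuming uncompressed 32-bit textures with a full mipmap chain (real games mix formats and use compression, so this is only illustrative):

```python
# Texture memory footprint sketch. Assumes uncompressed 32-bit texels
# (4 bytes each) and a full mipmap chain, which adds roughly a third on
# top of the base level. Illustrative only -- real games vary widely.

def texture_bytes(width, height, bytes_per_texel=4, mipmapped=True):
    """Approximate memory used by one texture, in bytes."""
    base = width * height * bytes_per_texel
    # a full mipmap chain adds ~1/3 of the base level's size
    return int(base * 4 / 3) if mipmapped else base

print(texture_bytes(256, 256))  # a 256x256 texture: ~0.33 MB
print(texture_bytes(512, 512))  # 4x the texels -> roughly 4x the memory
```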
Don't forget the Doom III engine! Though it's yet to come out.
smallberries: you're better off looking at benchmarks (synthetic and in-game) than going by what video card makers' marketing departments put on the box. They're going to print the big numbers whether they matter or not.
there are lots of graphics card benchmarks on this site that will cover any card you could be considering
I have a geforce 2 mx 400 at home that will run UT2K3 nicely @ 800x600. Just turn off the shadows :smile:
It shows how much lights in games (and, subsequently, shadows) affect game performance. Just watch the merry-go-round benchmark in 3DMark2001 SE: with 1 light, most cards run fine; with 8 lights, even my 9700 gets a bit jerky...