As has been said previously, the PS3 has roughly a 7900GT and the Xbox 360 uses an R600 variant, probably somewhere around an HD 3600 or HD 2900 XT; both are DirectX 9 parts. The difference is that the PS3's Cell also has processor cores that can be used for graphics work, and the consoles only render at 720p (1280x720 at best) or sometimes lower. You can run at that resolution with, well, a 7900GT. The consoles upscale the image to 1080p, but there's a very large difference between processing graphics at 1280x720 and at 1920x1080. You can actually demonstrate this with ARMA II on the PC: its graphics settings let you output at 1080p while rendering everything internally at a lower resolution. Need I count how many pixels of difference that is?
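To actually count those pixels, here's a quick sketch of the arithmetic using the two resolutions from the post above:

```python
# Pixel counts for the two render resolutions mentioned above.
pixels_720p = 1280 * 720     # 921,600 pixels per frame
pixels_1080p = 1920 * 1080   # 2,073,600 pixels per frame

ratio = pixels_1080p / pixels_720p
print(f"720p:  {pixels_720p:,} pixels")
print(f"1080p: {pixels_1080p:,} pixels")
print(f"1080p is {ratio:.2f}x the pixels of 720p")  # 2.25x
```

So rendering natively at 1080p means pushing 2.25 times as many pixels per frame as a console rendering at 720p and upscaling.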
Now, of course, the consoles don't look terrible and are definitely comparable, but keep in mind that these things are designed from the ground up to match the development platform people make the games on. That means no driver issues or missing optimizations, no worrying about different display settings, no worrying about memory or processor speeds. If AMD or NVidia took the time to optimize each GPU they sell for your exact hardware, and each game for that exact GPU, the PC would probably beat console performance many times over, as it should. It would also help if you could strip out the operating system and run games on, well, bare metal.
As we move into DirectX 11 and adopt ATi's HD 5 series and (hopefully) NVidia's new cards, I think you'll start to see console graphics fall further behind until it's time for their revisions/replacements.
The PS3's GPU has 1.8 TeraFLOPS of processing power, btw, which is 50% more than an HD 4870.
Yeah keep dreaming.
For example, the PS3's Cell CPU runs in parallel with the 7800 GT-class GPU, making the system very powerful. The Cell CPU is more powerful than any gaming CPU on the market, while the GPU is rubbish. The Cell has 7 cores.
The Cell consists of 1 PowerPC core and 8 SPEs (one of which is disabled to help yields and one of which is dedicated to running the OS). SPEs are very simplistic and are in no way comparable to any current CPU core from AMD or Intel.
the Xbox 360 uses an R600 variant probably around an HD 3600 or HD 2900 XT; both DirectX 9.
It's more of an R500 / R600 hybrid. If anything it leans more towards an X1900XT in design.
Are you sure? I think the PS3's video card is better than a 7900GT, an X1950, or an HD 4650. After all, it never lags in MW2 or COD4.
That's because it's running at a lower resolution, and the games are specifically designed for exactly one hardware variant. Console hardware is actually pretty wimpy compared to a gaming PC. Games such as Dragon Age make this point for me: the console versions are stripped down quite a bit to run well there, and MW2 looks much better running in Eyefinity at multiples of the resolution a console could muster. We can max out graphics settings that aren't even available in the console version.
Not that I'll give Infinity Ward a dime until they include independent servers. Bastards.
The HD 4870 has 1.2 teraflops of computational power, as seen here on AMD's website.
The PS3's GPU has 1.8 teraflops of computational power (I never said it was all used effectively), as seen here along with many other sites that reviewed the PS3 when it came out.
Finally, the PS3 has 50% more processing power than an HD 4870; this is just simple math (1.8/1.2 = 1.5 = 150%, meaning a 50% increase).
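Spelling out the arithmetic behind that 50% figure, taking both manufacturers' claimed numbers at face value (whether the 1.8 figure is trustworthy is a separate argument):

```python
ps3_gpu_tflops = 1.8   # Sony's claimed figure for the PS3's GPU
hd4870_tflops = 1.2    # AMD's stated peak for the HD 4870

ratio = ps3_gpu_tflops / hd4870_tflops
increase_pct = (ratio - 1) * 100
print(f"ratio = {ratio:.2f}, i.e. a {increase_pct:.0f}% increase")
```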
Arbitrary claims of computational power stated only by the companies themselves are completely meaningless. They could say it had 5 teraflops of power; there's no way to check, since they never published how they arrived at the number.
A 7900GT with half the memory bandwidth and half the ROPs is not 50% faster than an HD 4870. Check any benchmark, even ones comparing a full 7900GT with an HD 4870.
Memory bandwidth has nothing to do with computational power, which is probably one reason the PS3 doesn't have the performance of the HD 4870.
OK, so now we can't trust any manufacturer's specifications. How do we know the HD 4870 even has 1.2 TeraFLOPS of computational power? Did you disassemble one and reverse engineer it to count the ROPs, shader processors, etc.? Just because we are told it has 800 shader processors and 16 ROPs, what if AMD is lying?
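For what it's worth, AMD's 1.2-teraflop figure does at least follow from the published specs, assuming the usual convention of counting a fused multiply-add as 2 FLOPs per stream processor per cycle:

```python
# Theoretical peak FLOPS for the HD 4870 from its published specs.
stream_processors = 800   # AMD's stated stream processor count
core_clock_hz = 750e6     # 750 MHz core clock
flops_per_cycle = 2       # one multiply-add (MAD) counts as 2 FLOPs

peak_tflops = stream_processors * core_clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} TFLOPS")  # 1.2
```

That's theoretical peak only, of course; whether a real workload can keep all 800 stream processors busy is exactly the theoretical-versus-real-world gap being argued here.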
After all, it doesnt lag ever in MW2 or COD4.
Put in a 4670 on top of an Athlon II X4 620 and run MW2 at 720p. I'll cut my dick off if it shows even a hint of lag. And yeah, turn vsync on as well (sync every frame) to match the frame-capped visuals of a console.
I mean, seriously, Nvidia wouldn't have bothered producing any new cards at all if all they needed to do to stay on top was make a 7900GT with a 512-bit bus.
And yes, you can look at a piece of silicon with a microscope and actually count the stream processors, etc. AMD's claim of 1.2 teraflops is at least plausible. A G70, with no known changes other than a reduction in memory controllers and ROPs, does not suddenly quadruple its performance once it's put in a PS3.
...And yes, you can look at a piece of silicon with a microscope and actually count the stream processors, etc....
I asked if you yourself have, since, as you said, we can't trust manufacturers' claims. Also, FLOPS are a measure of theoretical processing power, and that certainly doesn't always equate to real-world processing power (as we see with the NVidia RSX and even the ATI Xenos). And processing power of the GPU doesn't necessarily equate to graphical performance either.