None of the above. Both will be destroyed by the 540 GTX. S3 540 GTX, that is. It must win, since the other is only a measly GTX 380, which is obviously less than 540.

That argument is about as valid as declaring a winner based on the meagre specs we got in the paper. Have patience; we should know the truth in the next few months.
 

Dekasav

The big thing I don't like about that whitepaper is that it has nothing to do with gaming. So, GPGPU should (still) excel on Nvidia's architecture, but gaming might flop. Transistors were doubled and CUDA cores were doubled, but a lot of cache was added, and IIRC cache uses a lot of transistors, so what happened to everything else? Same TMUs? Same ROPs?
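Just to put rough numbers on the cache point, here's a quick back-of-envelope sketch (my arithmetic, not the whitepaper's: I'm taking its 768 KB of L2 plus 16 SMs with 64 KB of shared memory/L1 each, assuming textbook 6T SRAM cells, and ignoring tags, ECC bits and register files):

// back-of-envelope: transistor cost of Fermi's on-chip caches
// whitepaper figures: 768 KB L2, 16 SMs x 64 KB shared memory/L1
// assumption: 6 transistors per SRAM bit; tags, ECC bits, register files ignored
#include <cstdio>
int main() {
    long long cache_bytes = 768LL * 1024 + 16LL * 64 * 1024;  // L2 + per-SM storage
    long long transistors = cache_bytes * 8 * 6;              // bits * 6T per cell
    printf("~%lld M transistors, %.1f%% of the quoted 3.0 billion\n",
           transistors / 1000000, 100.0 * transistors / 3.0e9);
    return 0;
}

That comes out around 88 M transistors, only a few percent of the 3.0 billion, so the caches alone can't be where the doubled transistor budget went, which makes the silence on TMUs and ROPs even stranger.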

Just looks like they didn't expect ATI to catch up on gaming this fast, so they went full force into GPGPU, and let GRAPHICS processing lag.
 
Yeah, the only thing we can say with some degree of certainty is that GPGPU stuff should be awesome on it. They certainly innovated in that area. But the complete lack of info on the graphics side is discouraging, and the info we have gotten so far is not good.
 
It took NVIDIA a while to give us an honest response to the RV770. At first it was all about CUDA and PhysX. RV770 didn't have them, so we shouldn't be recommending it; that was NVIDIA's stance.

Today, it's much more humble.

Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.

Ujesh doesn't believe NVIDIA will make the same mistake with Fermi.

Jonah, unwilling to let Ujesh take all of the blame, admitted that engineering was partially at fault as well. GT200 was the last chip NVIDIA ever built at 65nm - there's no excuse for that. The chip needed to be at 55nm from the get-go, but NVIDIA had been extremely conservative about moving to new manufacturing processes too early.

It all dates back to NV30, the GeForce FX. It was a brand new architecture on a bleeding edge manufacturing process, 130nm at the time, which ultimately led to its delay. ATI pulled ahead with the 150nm Radeon 9700 Pro and NVIDIA vowed never to make that mistake again.

With NV30, NVIDIA was too eager to move to new processes. Jonah believes that GT200 was an example of NVIDIA swinging too far in the other direction; NVIDIA was too conservative.

The biggest lesson RV770 taught NVIDIA was to be quicker to migrate to new manufacturing processes. Not NV30 quick, but definitely not as slow as GT200. Internal policies are now in place to ensure this.

Architecturally, there aren't huge lessons to be learned from RV770. It was a good chip in NVIDIA's eyes, but NVIDIA isn't adjusting their architecture in response to it. NVIDIA will continue to build beefy GPUs and AMD appears committed to building more affordable ones. Both companies are focused on building more efficiently.

We can only hope :D
 
Sorry, but given all the questions about the pictures of the card at the demo, I couldn't help it.
So, I borrowed this:
[image: tesafilm.png]



He's an old softy for Fermi.
 
Honestly, who cares about the white papers? Many things looked good on paper (including communism), but how they really perform is another matter. So I say stop making these stupid threads and wait for the benchmarks (both synthetic and real games).
 
Just read through the white paper (well, most of it), and it seems like the 5870 really should have the upper hand in supercomputing tasks. With both cards able to do one 64-bit FP MAD per core per clock, and 1600 cores vs 512, unless the nVidia card is clocked a lot higher it seems like ATI would be the better way to go for supercomputing.
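Taking that premise at face value (one 64-bit MAD per core per clock, counted as two FLOPs), the back-of-envelope looks like this; the GT300 clock below is a pure placeholder, since nothing official has been announced:

// peak double precision throughput = cores * 2 FLOPs per MAD * clock (GHz)
// the 5870's 850 MHz is public; the GT300 clock here is only a placeholder guess
#include <cstdio>
static double peak_dp_gflops(int cores, double clock_ghz) {
    return cores * 2.0 * clock_ghz;
}
int main() {
    printf("HD 5870 on that premise:    %.0f GFLOP/s\n", peak_dp_gflops(1600, 0.85));
    printf("GT300 at a guessed 1.5 GHz: %.0f GFLOP/s\n", peak_dp_gflops(512, 1.5));
    return 0;
}

The catch is the premise itself: the whitepaper puts Fermi's double precision at half its single-precision rate (up to 256 DP FMA ops per clock across the chip), and IIRC the 5870 only does DP at a fifth of its SP rate, so the real numbers should land much closer together than the raw core counts suggest.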

With the confirmation that it will have ECC memory, I'm sure the G300 is going to be super expensive now. It seems like nVidia beat Intel to Larrabee, just from the other direction. Unfortunately, most people will never need half of what the G300 can do but will probably have to pay for it anyway. I'm really starting to dislike this concept of moving toward a system on a chip.
 

radnor



On paper, it's almost revolutionary. Debugging from Microsoft Visual Studio? You're joking, right? Once they reach full compatibility with Visual Studio and full C++ support, it will be great. Even better if they allow the same code to be run on normal GeForce cards. With the amount of cores, the IPC and many other great things, this will be a beast.
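For anyone who hasn't touched CUDA, here's a minimal sketch of what "GPU code as ordinary C++" already looks like; the kernel name and sizes are made up for illustration, and it deliberately skips the Fermi-only additions the whitepaper touts (virtual functions, new/delete and exception handling on the device), sticking to the plain template-and-pointer style you'd want to single-step from an IDE:

// minimal CUDA C++ sketch (illustrative only; names and sizes are made up)
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void scale_add(T *out, const T *in, T a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = a * in[i] + out[i];                // one MAD per element
}

int main() {
    const int n = 1 << 20;
    float *d_in = 0, *d_out = 0;
    cudaMalloc((void **)&d_in,  n * sizeof(float));
    cudaMalloc((void **)&d_out, n * sizeof(float));
    cudaMemset(d_in,  0, n * sizeof(float));        // just so the inputs are defined
    cudaMemset(d_out, 0, n * sizeof(float));
    scale_add<float><<<(n + 255) / 256, 256>>>(d_out, d_in, 2.0f, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

Nothing in there is Tesla-specific, which is exactly why the "same code on normal GeForce cards" point matters so much.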

Think on-the-fly encryption and decryption. Although this is a Tesla, the architecture seems too GP and not enough GPU. I think Larrabee has met its match here, no doubt. RV870? I still need to see it benchmarked.

Anyway, although I'm an ATI fanboy, bottoms up for Nvidia! Good work, I guess. At least on paper it's bloody revolutionary.
 
To be fully honest, I agree.
If this pans out, and nVidia is given time as things change in PC usage, including new approaches and the expansion of languages et al., the G300 could be a very forward-looking component, though it is well ahead of its time, as are the hex-cores etc.
Time will tell whether the overall body of PC computing can keep up with all these changes, and I hope it does.
As for gaming? Who knows, it could just be ho-hum.