nVidia G80 Reviews

Guest
Had this posted under pschmid's DX10 post, but it was slightly off topic. Here's a lineup of the reviews I've found so far, mainly through Dailytech:

Quality reviews:
HardOCP (BFG Tech)
Firing Squad
HotHardware
GamePyre (EVGA)
Guru 3d
Beyond3D (architecture review)
THG
Anandtech

Other reviews/previews:
PcPerspective
TBreak
TechPowerUp
AmdZone
Bjorn 3D
Elite Bastards (FoxConn)
Bit tech (Reference card)
Driverheaven(XFX)
TweakDown
Gotfrag
Motherboard.org
Phoronix
Hexus G80
Hexus (Asus)
Hexus (XFX)


Out of category...for Paul =):
http://www.theinquirer.net/default.aspx?article=35604


Look at this beast!
http://www.guru3d.com/admin/imageview.php?image=8747 (wafer.jpg)


Yield anyone?
 

Track
HOLY SHIT! This is amazing. I didn't think it would be here so soon.
Thank you!

Wait, how are they going to seriously review the card without DX10?
 

prozac26
Wait, how are they going to seriously review the card without DX10?
You can't. DX10 is not a factor now, because it's not available. You test current games.

When DX10 games come out, then you can do full DX10 tests.
 
Guest
I didn't go through all of the reviews, obviously; I just took a quick peek at each of them. I didn't see any SLI testing except for a few 3DMark06 scores. I would think the drivers aren't there yet, so I would take those numbers with a grain of salt for the moment. Also, nowhere did I see why the 8800 GTX has two SLI connectors versus one for the GTS. Pretty sure it's for a 3-card setup, but no hint yet... or maybe I skipped over it :?
 
Guest
Really interesting little part from Anandtech...

Running at up to 1.35GHz, NVIDIA had to borrow a few pages from the books of Intel in order to get this done. The SPs are fairly deeply pipelined and as you'll soon see, are only able to operate on scalar values, thus through simplifying the processors and lengthening their pipelines NVIDIA was able to hit the G80's aggressive clock targets. There was one other CPU-like trick employed to make sure that G80 could have such a shader core, and that is the use of custom logic and layout.

The reason new CPU architectures take years to design while new GPU architectures can be cranked out in a matter of 12 months is because of how they're designed. GPUs are generally designed using a hardware description language (HDL), which is sort of a high level programming language that is used to translate code into a transistor layout that you can use to build your chip. At the other end of the spectrum are CPU designs which are largely done by hand, where design is handled at the transistor level rather than at a higher level like a HDL would.

Elements of GPUs have been designed at the transistor level in the past; things like memory interfaces, analog circuits, memories, register files and TMDS drivers were done by hand using custom transistor level design. But shaders and the rest of the pipeline were designed by writing high level HDL code and relying on automated layout.

You can probably guess where we're headed with this; the major difference between G80 and NVIDIA's previous GPUs is that NVIDIA designed the shader core at the transistor level. If you've heard the rumors of NVIDIA building more than just GPUs in the future, this is the first step, although NVIDIA was quick to point out that G80 won't be the norm. NVIDIA will continue to design using HDLs where it makes sense, and in critical areas where additional performance or power sensitive circuitry is needed, we'll see transistor level layout work done by NVIDIA's engineering. It's simply not feasible for NVIDIA's current engineering staff and product cycles to work with a GPU designed completely at the transistor level. That's not to say it won't happen in the future, and if NVIDIA does eventually get into the system on a chip business with its own general purpose CPU core, it will have to happen; but it's not happening anytime soon.

The additional custom logic and layout present in G80 helped extend the design cycle to a full four years and brought costs for the chip up to $475M. Prior to G80, the previous longest design cycle was approximately 2.5 - 3 years. Although G80 did take four years to design, much of that was due to the fact that G80 was a radical re-architecting of the graphics pipeline; future GPUs derived from G80 will obviously have a shorter design cycle.
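
To make the "scalar values" point concrete: CUDA launched alongside G80, and on its scalar stream processors each thread issues plain scalar float instructions, one element per thread, rather than packing work into vec4 operands the way older vector shader ALUs encouraged. Here's a minimal, purely illustrative CUDA sketch (the kernel and all names below are my own, not from the article):

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: each thread works on plain scalar floats, which is the
// kind of instruction stream G80's scalar stream processors are built around.
__global__ void scale_and_add(const float *x, const float *y, float *out,
                              float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * x[i] + y[i];  // one scalar multiply-add per thread
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    float *hout = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device data (plain cudaMalloc/cudaMemcpy, as on G80-era CUDA)
    float *dx, *dy, *dout;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMalloc((void **)&dout, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_and_add<<<blocks, threads>>>(dx, dy, dout, 3.0f, n);

    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", hout[0]);  // expect 5.0

    cudaFree(dx); cudaFree(dy); cudaFree(dout);
    free(hx); free(hy); free(hout);
    return 0;
}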