
nVidia G80 Specs Leaked

October 2, 2006 1:15:25 PM

Quote:
Description
Earlier today some information regarding the possible specs of nVidia's upcoming G80-based boards surfaced on VR-Zone, only to be "quietly" removed later. Thankfully, X-bit labs managed to collect the information on the new cards, and we can now present it to you below.

The most important part of the leaked documentation is the revealed specs:

- Unified Shader Architecture;
- Support for FP16 HDR + MSAA;
- Support for GDDR4 memory;
- Close to 700M transistors (G71 - 278M / G70 - 302M);
- New AA mode: VCAA;
- Core clock scalable up to 1.5GHz;
- Shader performance: 2x Pixel/12x Vertex over G71;
- 8 TCPs & 128 stream processors;
- Much more efficient than traditional architecture;
- 384-bit memory interface (256-bit+128-bit);
- 768MB memory size (512MB+256MB)

The unified shader architecture suggests, as experts have been saying for a while now, that the cards will support DX10.

According to the same leaked information, the G80 will surface in November in two varieties, the GeForce 8800 GTX and GeForce 8800 GT, priced at a mere USD 649 and USD 449-499 respectively. The GTX model (higher-end) will feature a 384-bit memory interface, a hybrid water and air cooler and 7 TCPs, while the GT will have a 320-bit memory interface, a standard air cooler, and 6 TCPs.

No one has explained what TCP actually stands for, although Techreport has uncovered an nVidia patent which refers to thread control processors. According to the same patent, these TCPs will be used to set the function of the unified shader processors.
October 2, 2006 2:35:23 PM

nice work, and thanks for the info
October 2, 2006 3:15:44 PM

Shame there haven't been any prices leaked yet.
October 2, 2006 3:31:48 PM

Quote:
Shame there haven't been any prices leaked yet.


You must not have read the first post?

Quote:
According to the same leaked information, the G80 will surface in November in two varieties, the GeForce 8800 GTX and GeForce 8800 GT, priced at a mere USD 649 and USD 449-499 respectively. The GTX model (higher-end) will feature a 384-bit memory interface, a hybrid water and air cooler and 7 TCPs, while the GT will have a 320-bit memory interface, a standard air cooler, and 6 TCPs.


Sometimes I wonder.
October 2, 2006 3:40:22 PM

I wonder whether Nvidia will open up the capabilities of the GPU for floating point calculations from outside applications. I'd really like to use my spare CPU and GPU cycles for something worthwhile like Folding@home. I understand the current crop of NV cards aren't supported, but I am sure that only a minor tweak could open them up. The G80 series has some serious horsepower... so let's use it for something more than high dynamic range and obscene frame rates :)

Well I for one am looking forward to Vista, DX10 and Quad cores... I just hope it isn't all hype.

Thanks for the Specs... it got me salivating.
October 2, 2006 3:42:36 PM

Oops, sorry, I don't know how I missed that.
October 2, 2006 4:04:04 PM

So this G80 will fully support anti-aliasing and HDR together during gaming?

Also, I just read more info from Techreport: the G80 is 11 inches in length, significantly longer than the 7900 GTX, which is 8.6 inches. So make sure you have the space. I have no problem with my full tower case.
October 2, 2006 4:36:49 PM

I still don't understand the specs and why/who originally came up with that info. First big thing being the core speeds... how the hell do you manage to obtain that on a 90nm process, still? The memory config I bet is going to be interesting... going from an even 512 to 768?? Why not just 1GB?

Plus the info we see is rather... conflicting. I'm harping on the USD design and the shader performance. How can you get 2x pixel and 12x vertex? It would have to be an absolute figure, since the ratio of vertex to pixel units isn't fixed relative to what's being processed.
October 2, 2006 5:04:35 PM

Quote:
I still don't understand the specs and why/who originally came up with that info. First big thing being the core speeds... how the hell do you manage to obtain that on a 90nm process, still? The memory config I bet is going to be interesting... going from an even 512 to 768?? Why not just 1GB?

Plus the info we see is rather... conflicting. I'm harping on the USD design and the shader performance. How can you get 2x pixel and 12x vertex? It would have to be an absolute figure, since the ratio of vertex to pixel units isn't fixed relative to what's being processed.


12 times as many vertex shaders as the G71 core, and 2 times as many pixel shaders as the G71 core.

Need any more clearing up?

Note that the other 256MB and the 128-bit interface will most likely be for physics effects, while the other 512MB and 256-bit interface will be what we were expecting in the first place.
October 2, 2006 5:08:31 PM

I am sure you are going to see HDR as standard on all new cards (except the bottom-of-the-barrel "business cards"). And I imagine that AA will keep moving forward too... 32x perhaps?... but I wonder if my 1.4GHz Celeron will keep up with this thing?

Just Kidding !!!
October 2, 2006 5:28:03 PM

650... will probably be around 700 when it first comes out due to mark-up... add another 200 due to supply and demand...

Yep... just as I thought. Highway-robbery prices.
October 2, 2006 5:49:53 PM

Quote:


12 times as many vertex shaders as the G71 core, and 2 times as many pixel shaders as the G71 core.

Need any more clearing up?



Apparently either A) you didn't understand what I said, or B) you really are somewhat confused. Because in a USD design there is no set amount of dedicated pixel and vertex shaders... it's a flexible, free-assignment design, where you can have certain units working on vertex, then turn right around and switch them to pixel shading. Which is why I don't understand... how can they give a ratio of performance when there is no dedicated unit that's only assigned to X, or assigned to Y? These two statements contradict each other... with the info provided I'd say it's not a USD and NV is just trying to generate much-needed hype. Or it's a clever cover to mask a product from ATI.

So, to recap: if that's the MAX potential improvement (which in one case sucks and in the other is fine), then I can understand why those numbers were thrown out for the world to see. But if it is a USD, then whoever leaked the info shot themselves in the foot and maybe didn't read it over all that well.
October 2, 2006 6:19:04 PM

Yeah, as Raven said, the G80 is not going to be unified shader like ATI's R600. Rather, it will be 48 pixel pipelines and an even larger number of vertex shaders, if the website is right. Besides, that information is from a website based in Taiwan. X-bit said it sounds a bit fishy, because 12x vertex performance would mean 96 unified shaders, while 2x pixel shader performance would mean 48 unified shaders. So the information is either wrong or Nvidia didn't take the unified shader design.

Link to X-bit labs:

http://www.xbitlabs.com/web/display/20060919075610.html
October 2, 2006 6:27:10 PM

Maybe with their unified shader, if it shades vertices better than pixels, but then that doesn't make much sense either...

Maybe it's because current graphics cards already have so many pixel shaders compared to vertex shaders, so when all the "pixel shaders" can become "vertex shaders" there would be a greater allotment than before...

Shrug... shot in the dark.
October 2, 2006 6:57:50 PM

Quote:
X-bit said it sounds a bit fishy, because 12x vertex performance would mean 96 unified shaders, while 2x pixel shader performance would mean 48 unified shaders.

That's actually simple to explain; pixel shaders are made with two ALUs (an ALU being a combined unit with an FP unit and a 3+-wide vector unit), while vertex shaders consist of only a single ALU.

Hence, in theory, a PC unified shader would consist of 2 ALUs together and would be able to act as one pixel shader or two vertex shaders.
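To put rough numbers on that (a minimal sketch; the 48-unit figure and the two-ALUs-per-pixel-shader model are only this thread's assumptions, not confirmed specs):

```python
# Back-of-envelope check of the "2x pixel / 12x vertex" claim, using the
# model suggested above: a pixel shader = 2 ALUs, a vertex shader = 1 ALU.
# The 48-unit unified shader count is an assumption from this thread,
# not a confirmed spec; the G71 baseline (24 pixel / 8 vertex shaders)
# matches the shipping GeForce 7900.

G71_PIXEL_SHADERS = 24
G71_VERTEX_SHADERS = 8

unified_units = 48               # hypothetical unified shaders, 2 ALUs each
total_alus = unified_units * 2   # 96 ALUs

as_pixel_shaders = total_alus // 2   # 2 ALUs per pixel shader  -> 48
as_vertex_shaders = total_alus       # 1 ALU per vertex shader  -> 96

print(f"Pixel:  {as_pixel_shaders} vs {G71_PIXEL_SHADERS} on G71 "
      f"= {as_pixel_shaders / G71_PIXEL_SHADERS:.0f}x")
print(f"Vertex: {as_vertex_shaders} vs {G71_VERTEX_SHADERS} on G71 "
      f"= {as_vertex_shaders / G71_VERTEX_SHADERS:.0f}x")
```

Under those assumptions the arithmetic lands exactly on the rumored 2x pixel / 12x vertex ratios.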

This is a similar story to the R500 Xenos in the Xbox 360, only that uses completely fragmented ALUs, 48 of them, that are not really grouped in any particular way.

Of course, this should not really be taken as something to further substantiate these rumors... I personally don't believe them.
October 2, 2006 7:01:16 PM

I believe any extra shaders will be used for the geometry pipeline.
October 2, 2006 7:43:28 PM

Quote:


12 times as many vertex shaders as the G71 core, and 2 times as many pixel shaders as the G71 core.

Need any more clearing up?



Apparently either A) you didn't understand what I said, or B) you really are somewhat confused. Because in a USD design there is no set amount of dedicated pixel and vertex shaders... it's a flexible, free-assignment design, where you can have certain units working on vertex, then turn right around and switch them to pixel shading. Which is why I don't understand... how can they give a ratio of performance when there is no dedicated unit that's only assigned to X, or assigned to Y? These two statements contradict each other... with the info provided I'd say it's not a USD and NV is just trying to generate much-needed hype. Or it's a clever cover to mask a product from ATI.

So, to recap: if that's the MAX potential improvement (which in one case sucks and in the other is fine), then I can understand why those numbers were thrown out for the world to see. But if it is a USD, then whoever leaked the info shot themselves in the foot and maybe didn't read it over all that well.

I understand completely, as I've already posted this question in an identical thread created a week or so ago: Some Info. G80 Specs (read somewhere).

We will just have to wait and see what the unified shaders are all about until we get some concrete information.
October 2, 2006 7:48:04 PM

Quote:
Posted: Wed Sep 20, 2006 4:27 pm - Re: Some Info. G80 Specs (read somewhere)
theaxemaster wrote:
Yeah, it is a dual-core chip. They're still low on pixel shader units if you ask me, though. It will be interesting to see how Nvidia's dual-core solution will stack up against ATI's single-core solution.

The graphics card companies have said it themselves, though: the upcoming generation of cards is going to be the most power-hungry and hottest ever. After that, I've read, they're going to work on improving performance without increasing transistor count and heat on the scale they have been. The G80/R600 is the generation to skip if you want to keep the power requirements down.




* Unified Shader Architecture
* Support FP16 HDR+MSAA
* Support GDDR4 memories
* Close to 700M transistors (G71 - 278M / G70 - 302M)
* New AA mode : VCAA
* Core clock scalable up to 1.5GHz
* Shader Performance : 2x Pixel / 12x Vertex over G71

They label it a unified shader architecture but then go on to say it'll have 2x24=48 pixel shaders and 12x8=96 vertex shaders... that's a decent amount of shaders if you ask me. Considering Nvidia has had fewer shaders than ATI in the last generation, I'm not surprised at this count, although I'm still confused about how they can have "unified shaders" but still label how many pixel and vertex shaders there will be.

I was simply reiterating what the rumor originally said. You labeled it as a max performance ratio, when I was just saying that there could be a total of 48 pixel shaders while there could be a total of 96 vertex shaders. Obviously these numbers are not equal, and therefore don't indicate a unified architecture, but considering that the vertex count is 2x the pixel count, it could still make sense.
October 2, 2006 8:07:17 PM

Quote:
I wonder whether Nvidia will open up the capabilities of the GPU for floating point calculations from outside applications. I'd really like to use my spare CPU and GPU cycles for something worthwhile like Folding@home. I understand the current crop of NV cards aren't supported, but I am sure that only a minor tweak could open them up. The G80 series has some serious horsepower... so let's use it for something more than high dynamic range and obscene frame rates :)

Well I for one am looking forward to Vista, DX10 and Quad cores... I just hope it isn't all hype.

Thanks for the Specs... it got me salivating.


About this Folding@Home thing... to me it seems like we'd all be better off either shutting off our computers or at the very least letting them idle/hibernate... running Folding@home means higher energy consumption (resulting in more pollution)... I just don't think it's as noble an idea as everyone makes it out to be. Just my 2 cents.
October 2, 2006 8:28:09 PM

Quote:
About this Folding@Home thing... to me it seems like we'd all be better off either shutting off our computers or at the very least letting them idle/hibernate... running Folding@home means higher energy consumption (resulting in more pollution)... I just don't think it's as noble an idea as everyone makes it out to be. Just my 2 cents.

I would disagree. That research is going to be done at any rate, and even if you don't endorse research toward finding a cure for cancer, it's still best that the costs, and hence pollution, are minimized.

Were it not for Folding@home, Stanford would have to build another supercomputer for the process; yes, supercomputers tend to be a bit more energy-efficient for the work they do than desktop PCs, but that would be largely negated by the energy expended (and pollution created) in order to construct the pieces for such a machine, as well as the facility to house it.
October 2, 2006 9:00:12 PM

I wasn't saying it was a bad investment... I was merely posing the question of whether it is in fact a good investment of resources. My mind just can't comprehend/calculate the value of the work being done... but I do have a more real-world grasp of the pollution that coal-burning power plants produce.
October 2, 2006 11:48:59 PM

The more and more I hear... this is what I've gathered from the info presented so far.

An added geometry controller plus an extra RAM buffer (hence the weird numbers we're seeing). Almost like two functional chips in one package. The buffer sizes and bus widths are again weird numbers: the 384 is really 256 to the main and 128 to the secondary, or inline. Probably the lower-range 88xx series will have the second buffer crippled, but it's why we're seeing the extra RAM.

So can you call this a unified technology...?
October 3, 2006 12:08:50 AM

Regarding the energy consumption versus Folding@home... well, you have a point. I did some calculations based on carbon dioxide emissions published on the web, and here's what it breaks down to.

If you leave your 500W PC on 24 hours a day for a year... it will take the work of 1000 trees that same year to remove the carbon dioxide produced (during the creation of the energy used by the PC). Personally I find that a pretty scary figure.
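(For what it's worth, here's a minimal back-of-envelope sketch of how a figure in that ballpark can be reached; the emissions factor and per-tree absorption rate below are assumed inputs, not figures from the post above, and changing them moves the result by a factor of several.)

```python
# Rough sketch of the "500W PC for a year" claim above.
# Assumptions (not from the original post, and both vary widely):
#   - ~0.9 kg CO2 emitted per kWh (roughly coal-fired generation)
#   - ~4 kg CO2 absorbed per young tree per year

power_w = 500
hours_per_year = 24 * 365
kwh_per_year = power_w / 1000 * hours_per_year        # 4380 kWh

kg_co2_per_kwh = 0.9                                   # assumed emissions factor
kg_co2_per_tree_per_year = 4.0                         # assumed absorption rate

kg_co2 = kwh_per_year * kg_co2_per_kwh                 # ~3940 kg CO2
trees_needed = kg_co2 / kg_co2_per_tree_per_year       # ~985 trees

print(f"{kwh_per_year:.0f} kWh/year -> {kg_co2:.0f} kg CO2 -> ~{trees_needed:.0f} trees")
```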

In fact I think we are all doomed !!!

8O
October 3, 2006 12:40:13 AM

I agree that folding will boost energy consumption, but I imagine that a lot of people here already leave their computers on... so it's really a matter of how much extra energy is consumed when having a 300+ million transistor GPU crunching away at some mathematical problem. A GPU under load consumes considerably more power than one at idle.
October 3, 2006 12:46:46 AM

I'm at home downloading the client now... I'll tell you how it is with my X1900.
October 3, 2006 5:08:34 PM

8 TCPs & 128 stream processors
Does anyone have any idea what these are?
October 3, 2006 5:25:45 PM

I'd just like to say Nvidia is crazy with their pricing. Before they had unified pipes, I thought ATI was gonna obliterate Nvidia, but now it's a battle I have to see. Anyway, watch ATI's flagship DX10 card be priced at USD 499 or something sweet like that to punch Nvidia in the gut. A $100 difference between ATI/Nvidia cards at equal performance levels (not going past 1280 resolution)? Pfft.
October 3, 2006 6:41:13 PM

Oh man!! I can't wait to buy an 8800 GTX :D 
Long live nVidia :tongue:
October 3, 2006 7:08:36 PM

Quote:
I'd just like to say Nvidia is crazy with their pricing. Before they had unified pipes, I thought ATI was gonna obliterate Nvidia, but now it's a battle I have to see. Anyway, watch ATI's flagship DX10 card be priced at USD 499 or something sweet like that to punch Nvidia in the gut. A $100 difference between ATI/Nvidia cards at equal performance levels (not going past 1280 resolution)? Pfft.


I'm sure the 8800 GTX will command that premium while it is the only DX10 card on the market. When ATI's card comes out, either the GTX will be priced lower or ATI will price their card similarly. ATI's introduction price points have not been lower than nV's in the past; don't expect them to be lower now.
October 3, 2006 7:41:02 PM

I think the price is pretty fair, considering it is dual-core.

Seriously, everyone who's asking how it is possible to get that many times the shader performance, look at the transistor count. It's 700 million; it is a dual-GPU setup. There is currently no way you could do that on a single die and expect reasonable yields.

Go read the Wikipedia entry on stream processing if you're curious. It seems to be some sort of parallelism method they're probably using between the GPUs. It is probably a lot like Raven said: 256-bit memory to the main, 128 to the secondary, and you get 384.
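(As a loose illustration of the stream-processing idea, and nothing G80-specific: the model is one small kernel function applied independently to every element of a data stream, which is what makes the work easy to spread across many parallel units. The names and the use of a Python process pool below are just stand-ins for illustration.)

```python
# Toy sketch of the stream-processing model: the same per-element "kernel"
# runs independently on every item in the stream, so the work can be
# distributed across many parallel processors (shader units, CPU cores, ...).

from multiprocessing import Pool

def kernel(vertex):
    """Per-element work, e.g. scaling a 3D vertex; no dependence on neighbours."""
    x, y, z = vertex
    return (x * 2.0, y * 2.0, z * 2.0)

if __name__ == "__main__":
    stream = [(float(i), float(i + 1), float(i + 2)) for i in range(1000)]
    with Pool() as pool:          # stand-in for many parallel execution units
        results = pool.map(kernel, stream)
    print(results[:3])
```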
October 3, 2006 7:56:22 PM

What's the deal with that 256-bit + xxx-bit memory interface?
October 3, 2006 8:36:38 PM

Quote:
What's the deal with that 256-bit + xxx-bit memory interface?


I started to explain that in my post a page back...
October 3, 2006 8:51:58 PM

The other guy mentioned physics; that might explain the odd amount of RAM on this card. I don't know if there's a possibility that this card can render graphics and physics at the same time.
October 3, 2006 8:59:11 PM

Quote:
The other guy mentioned physics; that might explain the odd amount of RAM on this card. I don't know if there's a possibility that this card can render graphics and physics at the same time.


It's a possibility Nvidia decided to dedicate the secondary 256MB of memory, with the lower bus width, to physics on the second core; the first core with 512MB of RAM at 256-bit would be the standard GPU we've been used to.

This is just speculation, but with NV's and ATI's attitude toward developing their own physics cards, this could surely be an option that Nvidia is going with.
October 3, 2006 9:17:56 PM

That would be nice, if we could finally have good visuals and physics. :D 
October 3, 2006 9:23:40 PM

Wow, some of the specs look kinda weird: a 256-bit + 128-bit hybrid memory bus, 512MB + 256MB hybrid memory. What's that all about? It will be interesting to see how this affects the performance of the graphics card.
October 3, 2006 9:58:17 PM

From what I know, the only thing that prevents true physics calculation on the current NV architecture is memory caching and buffering. I doubt the extra buffer and RAM amount to an onboard physics processor, though...
October 4, 2006 3:51:41 PM

Quote:
The other guy mentioned physics; that might explain the odd amount of RAM on this card. I don't know if there's a possibility that this card can render graphics and physics at the same time.


It's a possibility Nvidia decided to dedicate the secondary 256MB of memory, with the lower bus width, to physics on the second core; the first core with 512MB of RAM at 256-bit would be the standard GPU we've been used to.

This is just speculation, but with NV's and ATI's attitude toward developing their own physics cards, this could surely be an option that Nvidia is going with.

It's not just a possibility anymore. It's the only sane, logical explanation for how such a memory bus configuration can exist in a single chip package with two cores.

It's all explained here, in the thread kaotao pointed out, which you participated in and got this idea from.

Exactly. With these almost identical topics, I was relaying the info from the other section that I thought was worthwhile.
October 4, 2006 4:12:44 PM

If that's the case, I can't wait to see some reviews, especially with the upcoming game Crysis, which I'm hoping will support a physics processing unit.
October 4, 2006 7:25:51 PM

@deathwingx

Thanks, I try to do my part.

@kaotao

Sorry, didn't realise it was already posted.
October 4, 2006 7:29:52 PM

Quote:
@deathwingx

Thanks, I try to do my part.

@kaotao

Sorry, didn't realise it was already posted.


All good.