
GF100 (Fermi) previews and discussion

January 18, 2010 3:18:26 AM

Note that these are previews and not proper reviews. Most of the ones I've seen contain no real benchmarks, except the one from Hardware Canucks, which has a small handful. I believe NVIDIA ran the GF100 benchmarks, though, not Hardware Canucks (well, at least for the 1920x1200 results). But it was just the standard built-in benchmark, nothing special.

http://www.tomshardware.com/reviews/gf100-fermi-directx...
http://bit.ly/7uDKh0 (Hardware Canucks)
http://www.pcper.com/article.php?aid=858
http://www.techpowerup.com/reviews/NVIDIA/GF100_Fermi_A...
http://www.anandtech.com/video/showdoc.aspx?i=3721
January 18, 2010 3:47:59 AM

Your Hardware Canucks link is broken.

Anyway, nothing is final yet so we'll see how great the performance is at launch, as well as how much that performance will cost.
January 18, 2010 3:58:28 AM

Link fixed. For some reason TinyURL no longer works to get around the word filter here.
January 18, 2010 5:14:34 AM

Thanks a lot for the links :) Waiting for final reviews and benchies :D 
January 18, 2010 5:35:25 AM

I was fairly certain they'd regain the performance crown, but what seems odd to me is how tightly the GPU's CUDA core count is tied to the memory interface width: a given number of CUDA cores implies a given memory interface. Seems a bit cumbersome to plan for.
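To illustrate the coupling, here's a rough sketch; the six 64-bit controller layout and the 4.0 Gbps GDDR5 data rate are my own assumptions, not announced specs:

Code:
# Rough sketch (Python): how bus width and bandwidth would track the number
# of enabled 64-bit memory controllers, if cut-down parts disable whole
# controller blocks along with shader blocks. Data rate is a guess.

def memory_bandwidth_gbs(controllers, data_rate_gbps):
    """Theoretical bandwidth in GB/s for `controllers` 64-bit channels."""
    bus_width_bits = controllers * 64
    return bus_width_bits / 8 * data_rate_gbps

for controllers in (6, 5, 4):  # full chip plus two hypothetical cut-downs
    print(f"{controllers * 64:3d}-bit bus -> "
          f"{memory_bandwidth_gbs(controllers, 4.0):.0f} GB/s")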
Anonymous
January 18, 2010 9:50:59 AM

It's not bad, but if you assume these FC2 benches are best case (and you have to), it's not exactly brilliant either. The 5970 will walk all over this while costing about the same to produce.
January 18, 2010 10:33:03 AM

Hmm... I see what nVidia did there, but I am well underwhelmed. In the YouTube videos we see a Fermi card (GTX360 or GTX380) in Ranch Small, not the typical level most review sites use to benchmark cards. The Fermi card scored 84 FPS. Now let's see some benchmarks:

http://www.pcper.com/article.php?aid...e=expert&pid=6

The 5870 scores about 83 FPS at the same resolution and quality. There are two possible conclusions, depending on which Fermi card was tested:

1. The GTX360 was tested (448 CUDA processors) =>
- 5870 ~ GTX360
- 5870 < GTX380 (512 CP) < 5970

2. The GTX380 was tested... => nVidia is screwed.
January 18, 2010 11:27:45 AM

It's always exciting to see a brand new GPU come out. I like that the GF100 is being compared to a dual GPU card.

I'm kind of partial to nVidia, but I appreciate their competition with AMD. (I did have a 9800XT that I was happy with.) It helps to keep them honest, and vice versa.
January 18, 2010 11:33:25 AM

I bet Nvidia will have a 5850 beater at launch, price/performance.
January 18, 2010 11:36:23 AM

notty22 said:
I bet Nvidia will have a 5850 beater at launch, price/performance.


Given the supposed scalability of the Fermi chip, that could be possible, but I wouldn't get my hopes too high until I see some benchmarks of this chip.
January 18, 2010 11:38:18 AM

notty22 said:
I bet Nvidia will have a 5850 beater at launch, price/performance.

All it takes is a price drop from AMD and that win can become a loss.
January 18, 2010 11:43:39 AM

randomizer said:
All it takes is a price drop from AMD and that win can become a loss.


^+1. And given the die size of the Fermi chip and the poor 40nm TSMC yields, we can assume that ATI has a little more room for a price drop.
January 18, 2010 11:49:29 AM

I think another BIG part of the story is going to be that Fermi does DX11 faster, much faster. So then the question will be which one is really more future-proof.
January 18, 2010 11:53:05 AM

notty22 said:
I think another BIG part of the story is going to be that Fermi does DX11 faster, much faster. So then the question will be which one is really more future-proof.


You can only assume that based on the architecture, but that is not always guaranteed. My suggestion is to wait for some real-world game benchmarks.
January 18, 2010 11:59:16 AM

I wonder what nVidia will do about the 5970, because from the look of things they don't have any competitive card against Hemlock.
January 18, 2010 12:14:27 PM

hallowed_dragon said:
My suggestion is to wait for some real-world game benchmarks.


I did that with the HD5000 series.

hallowed_dragon said:
I wonder what nVidia will do about the 5970, because from the look of things they don't have any competitive card against Hemlock.


Well, who knows. Nvidia is also known for its brute force. If they really want the dual-GPU crown they will do it, even if they have to stick two PCBs into one card. :lol: 
January 18, 2010 12:16:58 PM

renz496 said:
I did that with the HD5000 series.



Well, who knows. Nvidia is also known for its brute force. If they really want the dual-GPU crown they will do it, even if they have to stick two PCBs into one card. :lol: 


From the rumor mill, the power required for the Fermi chips will be high, so I don't know how feasible a dual-chip card can be.
January 18, 2010 12:50:33 PM

I hope the small performance hit at 32x AA is real;
not that I'll be buying a video card that expensive, but in the future it'll be good for us all.

I'm just not buying it 100% because Nvidia is making the benchmarks....
a 7% decrease is probably a 20% decrease in games if we benchmarked it ourselves :p 
January 18, 2010 12:54:31 PM

Ehsan w said:
I hope the small performance hit at 32x AA is real;
not that I'll be buying a video card that expensive, but in the future it'll be good for us all.

I'm just not buying it 100% because Nvidia is making the benchmarks....
a 7% decrease is probably a 20% decrease in games if we benchmarked it ourselves :p 


Word.
January 18, 2010 1:28:57 PM

NVIDIA will probably leave the 5970 alone until at least the 2nd quarter of 2010.

At the start they will offer at least two cards in the 5770 and 5850 range.
Then they will go for the 5870,
then for the lower market.

Late in Q2, they will come up with a card to go toe-to-toe with the 5970.

January 18, 2010 1:46:51 PM

mfarrukh said:
NVIDIA will probably leave the 5970 alone until at least the 2nd quarter of 2010.

At the start they will offer at least two cards in the 5770 and 5850 range.
Then they will go for the 5870,
then for the lower market.

Late in Q2, they will come up with a card to go toe-to-toe with the 5970.


I believe by then they will have other problems with the 5000-series refresh products. Six months later... the 6800 series will come, and I simply can't see what nVidia will do to compete against it.
January 18, 2010 2:02:54 PM

It's obvious, isn't it? The 6000 series will compete with nvidia's 400 series. :na: 

Btw, when the 6000 series arrives, will it use a whole new architecture, or will AMD just double everything the 5000 series has today and add some new features?
January 18, 2010 2:07:23 PM

renz496 said:
It's obvious, isn't it? The 6000 series will compete with nvidia's 400 series.

Btw, when the 6000 series arrives, will it use a whole new architecture, or will AMD just double everything the 5000 series has today and add some new features?


It is rumored that the new series from ATI is a whole new architecture. And as for your comment about the 400 series from nVidia... hmm... they are already late against the 5000 series; where do you think they will be for the 6000 series?
January 18, 2010 2:17:46 PM

If the 300 series is absolutely no match for the 6000 series, then they have to come up with a 400 series; it doesn't matter how late they are. But from a market and profit standpoint, it won't be good for them if they are late again, just like now.
January 18, 2010 3:32:18 PM

mfarrukh said:
NVIDIA will probably leave the 5970 alone until at least the 2nd quarter of 2010.

At the start they will offer at least two cards in the 5770 and 5850 range.
Then they will go for the 5870,
then for the lower market.

Late in Q2, they will come up with a card to go toe-to-toe with the 5970.

I doubt it.
This is the one thing nvidia does: no stone left unturned to make a profit. I think they'll do a dual-GPU card right out of the gate.
January 18, 2010 3:33:41 PM

renz496 said:
It's obvious, isn't it? The 6000 series will compete with nvidia's 400 series. :na: 

Btw, when the 6000 series arrives, will it use a whole new architecture, or will AMD just double everything the 5000 series has today and add some new features?

I missed the 6000-series news. Are you guys joking, or is there an actual press release?

http://forums.pureoverclock.com/showthread.php?threadid...

I don't see how taping out in Q4 is going to do much to bother nvidia.
January 18, 2010 4:36:39 PM

verndewd said:
I missed the 6000-series news. Are you guys joking, or is there an actual press release?

http://forums.pureoverclock.com/showthread.php?threadid...

I don't see how taping out in Q4 is going to do much to bother nvidia.


We have absolutely nothing on the 6000 series other than that it will be a new architecture, will probably be on 28nm at GlobalFoundries, and will probably not be here until late Q4 2010 at the earliest.

As for this:

Quote:
NVIDIA will probably leave the 5970 alone until at least the 2nd quarter of 2010.

At the start they will offer at least two cards in the 5770 and 5850 range.
Then they will go for the 5870,
then for the lower market.


We only have info on the highest-end cards. Nvidia will go top-down like always. We will probably see something faster than the 5870 first (the 360); then, at the same time but in limited quantity, we will see the 380 slipping in, likely under the 5970. Around late spring they might have a working dual-360 card. We won't see 5850-level cards until the spring/summer time frame, starting higher and moving lower.
January 18, 2010 6:56:16 PM

http://www.hardocp.com/article/2010/01/17/nvidias_fermi...

And my favorite:

http://www.hardocp.com/images/articles/1263608214xxTstzDnsd_1_18_l.gif

Looks like Fermi blows the 5000 series out of the water... Redesigning the entire pipeline is paying off giant dividends for NVIDIA right now.

Also, I was right on the 512 cores while everyone else claimed "only" 448. :D 
January 18, 2010 7:11:18 PM

gamerk316 said:
http://www.hardocp.com/article/2010/01/17/nvidias_fermi...

And my favorite:

http://www.hardocp.com/images/articles/1263608214xxTstzDnsd_1_18_l.gif

Looks like Fermi blows the 5000 series out of the water... Redesigning the entire pipeline is paying off giant dividends for NVIDIA right now.

Also, I was right on the 512 cores while everyone else claimed "only" 448. :D 


Most of the major sites have stated that the tessellation implementation is too different to really compare using that benchmark. On Fermi it is tied to the shader cores, so it will hit performance more than on ATI. Though that may not matter if it is sufficiently faster.

As for the "only" 448... didn't most people claim that 512 would come around, just not in great numbers right away? Well, except for the trolls; I don't think anyone rational was claiming otherwise.

Also, until I am installing my Fermi, it is blowing nothing out of the water...

I'm excited about the possibility, but we have nothing yet. I can't say I'm too keen on slides from nvidia either, as the ATI slides for the 5870 claimed some ridiculous things as well.
January 18, 2010 7:32:09 PM

daedalus685 said:
Most of the major sites have stated that the tessellation implementation is too different to really compare using that benchmark. On Fermi it is tied to the shader cores, so it will hit performance more than on ATI. Though that may not matter if it is sufficiently faster.

As for the "only" 448... didn't most people claim that 512 would come around, just not in great numbers right away? Well, except for the trolls; I don't think anyone rational was claiming otherwise.

Also, until I am installing my Fermi, it is blowing nothing out of the water...

I'm excited about the possibility, but we have nothing yet. I can't say I'm too keen on slides from nvidia either, as the ATI slides for the 5870 claimed some ridiculous things as well.

I agree, but I do get a feeling Fermi will be faster; waiting for the 3rd-party benches to bring some realism to the table. I flip nvidia fans some pooh, but NV does a great job, and in areas where ATI fails to compete. Someone showed me a GPU sales trend over the span of a few years, and the entire market followed nvidia's lead; ATI never rose above them in discrete sales, and when you think in terms of rendering farms and commercial graphics capability, ATI is like nowheresville.
January 18, 2010 7:54:47 PM

Implementation != performance, true. But as others have pointed out, performance is what people buy, and the first comparison across a single benchmark gives a 20 FPS advantage to Fermi.

The only questions I have now are actual gaming performance in non-DX11 software, and price.
January 18, 2010 9:12:11 PM

RAWR, need more benchmarks, preferably from 3rd-party people, not nvidia; as we all know from reading their driver changelogs, they apparently know magic or something when it comes to performance increases and benchmarks.

Oh well, their little **** tease with this *** they were showing off put me in a better mood about it, although the price is still *** in the air.

If they can throw down 5770 and 5850 competitors that outperform them in DX11, they can gain quite a bit of market back imo.
January 18, 2010 10:02:04 PM

verndewd said:
I doubt it.
This is the one thing nvidia does: no stone left unturned to make a profit. I think they'll do a dual-GPU card right out of the gate.

There is an ATX-specified 300W limit (which is why the 5970 uses downclocked but higher-binned 5870 chips).

AMD circumvented the 300W limit by making their 5970 highly overclockable in the hands of the end user, but the limit remains.

We know that a Fermi-based card with 448 SPs (and a 600-750MHz core clock) has a power envelope of >225W. How much of a cut-down would nVIDIA need to get two Fermis onto a single card? Too much, I would think; so much that a dual-GPU card is no longer feasible, as it could not compete with the 5970.

I think nVIDIA will wait for a die shrink before they release a dual-GPU product.
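To put rough numbers on that squeeze (the 300W and >225W figures are from above; the 30W board overhead and the linear power scaling are my own guesses):

Code:
# Back-of-envelope: power budget per GPU on a dual-Fermi card under the
# 300W ATX cap. The 225W single-card figure is from the post above; the
# overhead and scaling assumptions are guesses, not specs.

ONE_CARD_W = 225        # 448SP single-GPU card, per the estimate above
ATX_LIMIT_W = 300       # ATX board ceiling
OVERHEAD_W = 30         # hypothetical shared board overhead (fans, VRM, RAM)

per_gpu_w = (ATX_LIMIT_W - OVERHEAD_W) / 2          # budget per GPU
fraction = per_gpu_w / (ONE_CARD_W - OVERHEAD_W)    # vs. a single-card GPU
print(f"Each GPU gets ~{per_gpu_w:.0f} W, about {fraction:.0%} of what a "
      f"single-card GPU draws")
# ~69%: each chip would have to shed roughly a third of its power draw,
# via fewer SPs, lower clocks, lower voltage, or some mix.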
January 18, 2010 10:04:14 PM

ElMoIsEviL said:
There is an ATX-specified 300W limit (which is why the 5970 uses downclocked but higher-binned 5870 chips).

AMD circumvented the 300W limit by making their 5970 highly overclockable in the hands of the end user, but the limit remains.

We know that a Fermi-based card with 448 SPs (and a 600-750MHz core clock) has a power envelope of >225W. How much of a cut-down would nVIDIA need to get two Fermis onto a single card? Too much, I would think; so much that a dual-GPU card is no longer feasible, as it could not compete with the 5970.

I think nVIDIA will wait for a die shrink before they release a dual-GPU product.

Good to know. TSMC is at 28nm in Q4, so likely not any earlier, right? If so, it seems ATI/AMD has done a bang-up strategy with the 5970.
January 18, 2010 10:06:18 PM

IzzyCraft said:
RAWR, need more benchmarks, preferably from 3rd-party people, not nvidia; as we all know from reading their driver changelogs, they apparently know magic or something when it comes to performance increases and benchmarks.

Oh well, their little **** tease with this *** they were showing off put me in a better mood about it, although the price is still *** in the air.

If they can throw down 5770 and 5850 competitors that outperform them in DX11, they can gain quite a bit of market back imo.

:lol:  :lol:  :lol:  Really, tell us how you truly feel.
January 18, 2010 10:08:41 PM

ElMoIsEviL said:
There is an ATX-specified 300W limit (which is why the 5970 uses downclocked but higher-binned 5870 chips).

AMD circumvented the 300W limit by making their 5970 highly overclockable in the hands of the end user, but the limit remains.

We know that a Fermi-based card with 448 SPs (and a 600-750MHz core clock) has a power envelope of >225W. How much of a cut-down would nVIDIA need to get two Fermis onto a single card? Too much, I would think; so much that a dual-GPU card is no longer feasible, as it could not compete with the 5970.

I think nVIDIA will wait for a die shrink before they release a dual-GPU product.

Or they can just lop off parts (if I remember the various architecture articles right) until they get it down to the power requirements. They never said the dual-GPU single card would be two chips from their flagship card.
January 18, 2010 10:09:28 PM

ElMoIsEviL said:
There is an ATX-specified 300W limit (which is why the 5970 uses downclocked but higher-binned 5870 chips).

AMD circumvented the 300W limit by making their 5970 highly overclockable in the hands of the end user, but the limit remains.

We know that a Fermi-based card with 448 SPs (and a 600-750MHz core clock) has a power envelope of >225W. How much of a cut-down would nVIDIA need to get two Fermis onto a single card? Too much, I would think; so much that a dual-GPU card is no longer feasible, as it could not compete with the 5970.

I think nVIDIA will wait for a die shrink before they release a dual-GPU product.


I am almost certain they will release a beast of a dual-GPU card, if only to ensure that the fastest thing on the planet is Nvidia for some unknown amount of time.

While the Radeons are certainly more powerful per area of die, I'm not sure how Fermi will stack up on performance/power. We might see a dual-8-pin 'special' edition 5980 that is clocked slightly higher with better cooling (overclocking the 5970 well requires special VRM cooling; the VRMs are not dealt with well by the stock cooler), but that is as far as ATI would bother going this gen. I would not at all be surprised, though, to see a GTX390 that was effectively two 448 parts together with no regard for staying in spec. I'm sure there is some stupid way around the spec they will exploit. They would only be after claiming the single-card crown; if one Fermi does not do it, they will figure out a way to smash exactly 300W of computing power together, as maintaining the crown seems important to their business model.
January 18, 2010 10:18:50 PM

gamerk316 said:
http://www.hardocp.com/article/2010/01/17/nvidias_fermi...

And my favorite:

http://www.hardocp.com/images/articles/1263608214xxTstzDnsd_1_18_l.gif

Looks like Fermi blows the 5000 series out of the water... Redesigning the entire pipeline is paying off giant dividends for NVIDIA right now.

Also, I was right on the 512 cores while everyone else claimed "only" 448. :D 

That benchmark is tessellation-heavy, more than most games will ever be. It doesn't represent the real world much more than 3D Mark.
January 18, 2010 10:24:35 PM

randomizer said:
That benchmark is tessellation-heavy, more than most games will ever be. It doesn't represent the real world much more than 3D Mark.


It is still impressive to watch it chug it out...

Mind you, I'm more interested in how light that benchmark is on everything else, as opposed to tessellation-heavy. It doesn't look nice without tessellation, giving me the impression that, other than tessellation, it is very easy on a GPU... I'd like to see how well Fermi handles tessellation while it is also computing a texture-fill-heavy scene with AA. At what point does the dedicated hardware of ATI's tessellation engine overtake Fermi, if ever? Or is the computing power of Fermi such that it won't make a difference?
January 18, 2010 10:26:17 PM

Quote:
Yes, the Unigine engine used was created very much for tessellation. Still, I see not much worth discussing till they are out in the wild. So much speculation without any real numbers.

Odd that gamer posted a [H] link considering how much flak they are giving nvidia right now.


I thought it was odd as well, though Kyle didn't write that article, just the bit at the end. He seemed rather impartial; a few stern words here and there, though he did seem impressed.
January 18, 2010 11:08:20 PM

Speaking of Eyefinity, Nvidia will get it in their new cards through 3D Vision Surround. I've always thought that Nvidia would have to come out with their own version sooner or later, and well, here it is! The only really, incredibly stupid thing I see with Nvidia's implementation is that they force you to have TWO cards in SLI. You'd think that they would learn from ATI's mistake (the 3rd monitor having to be DisplayPort) and just put 3 usable outputs on the back of the card...
January 18, 2010 11:43:41 PM

randomizer said:
That benchmark is tessellation-heavy, more than most games will ever be. It doesn't represent the real world much more than 3D Mark.


Still, considering how badly the 5770 and below did in actual DX11 benchmarks, I'd say anything over 40 FPS is outstanding at this point. Tessellation is the hardest part of DX11 computation, and if the first DX11 entry can handle that benchmark, which is overkill, I would expect to see much better numbers in DX11 titles (flawed as they may be at this point) compared to ATI.

If NVIDIA can keep the price of the top card under $450 and beat ATI in performance, we might yet have a repeat of the HD2000 vs G80... Everyone here knows I view the 5000 series as too weak for DX11/tessellation (except maybe the 5870 and above; we'll see), so if these numbers hold, I don't think that's a bad comparison...

Having an actual DX11 game that wasn't a DX9 add-on would be interesting right about now...
Anonymous
January 19, 2010 12:03:40 AM

gamerk316 said:
Still, considering how badly the 5770 and below did in actual DX11 benchmarks, I'd say anything over 40 FPS is outstanding at this point. Tessellation is the hardest part of DX11 computation, and if the first DX11 entry can handle that benchmark, which is overkill, I would expect to see much better numbers in DX11 titles (flawed as they may be at this point) compared to ATI.


Wait..lol?

Can we see half a G100 running tessellation before you claim the 5770 and below do DX11 'badly'? :whistle: 

Quote:
If NVIDIA can keep the price of the top card under $450 and beat ATI in performance, we might yet have a repeat of the HD2000 vs G80...


I really, really, really, really, really hope you are being sarcastic.
January 19, 2010 12:21:11 AM

It will cost them $450 just to make the GPU :lol: 
January 19, 2010 12:41:47 AM

A repeat of R600 vs G80? 800 bucks for just a single GPU, but in that situation nvidia won by far in terms of performance.
January 19, 2010 12:50:03 AM

IzzyCraft said:
Or they can just lop off parts (if I remember the various architecture articles right) until they get it down to the power requirements. They never said the dual-GPU single card would be two chips from their flagship card.

Yes... two cut-down variants, but how much of a cut-down would they need to fit the 300W power envelope? Probably something to the tune of a dual 320SP part, which only means theoretical 640SP performance (not enough to compete with a Radeon HD 5970, I would think).

I am thinking around 15-20% better performance for GF100 (Fermi) over a Radeon 5870 1GB (averaged out over various titles).

The Radeon HD 5970 is two RV870 (Radeon HD 5870) cores (highly binned ones) on a single board, with the stock clocks lowered to fit the 300W envelope.

You would need around a 40%+ win for GF100 over the Radeon 5870 across the board to make a dual-GPU 320SP part feasible (and we have to remember that memory units in GF100 are tied to SP blocks, so a 320SP card is lacking several memory blocks to boot).

I don't see a dual-GPU card at launch (it could still happen, I just don't see it). With the move to 32 or 28nm, though, anything is possible in the second half of 2010.
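For what it's worth, here is the arithmetic behind that as a sketch; the linear SP scaling, the ~80% dual-GPU scaling, and the ~1.6x 5970 figure are my own illustrative assumptions, while the 15-20% estimate is from above:

Code:
# Sketch of the dual-320SP argument, normalized to a 5870 = 1.0.
# Assumptions (mine): GF100 performance scales linearly with SP count,
# a dual-GPU board scales at ~80%, and a 5970 averages ~1.6x a 5870.

GF100_512SP = 1.20                        # mid of the 15-20% estimate above
gf100_320sp = GF100_512SP * 320 / 512     # one cut-down chip: 0.75x
dual_320sp = gf100_320sp * 2 * 0.80       # dual-GPU at 80% scaling: 1.20x
hd5970 = 1.60

print(f"one 320SP GF100 : {gf100_320sp:.2f}x a 5870")
print(f"dual 320SP card : {dual_320sp:.2f}x a 5870")
print(f"Radeon HD 5970  : {hd5970:.2f}x a 5870")
# 1.20x vs 1.60x: on these numbers the cut-down dual card falls well short
# of Hemlock, which is why a ~40% single-GPU win would be needed first.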
January 19, 2010 12:54:52 AM

The nVidia 3D implementation is similar to ATI's six-monitor one. If it's that much better, you shouldn't need SLI.
My take is this: Fermi is most likely the first of a three-generation run, taking the baby steps toward what's coming down the road, much like the R600 was for the 5xxx series and the next new gen, where we'll most likely see similar implementations. Early DX11 games won't be that hard to run, since most won't be ground-up DX11 games, and top cards should handle them; as usual, it's the second gen that'll do better, as the DX model and the games' usage of it mature.
I would say HDAO is fairly demanding in DX11, and where used, it's not nearly as scalable as tessellation is.
There are other conversations going on elsewhere saying some things; most aren't forgetting where these "benches" are coming from.
The nVidia slide showing the 8xAA abilities at 25x16 is a great example of how these "benches" may be applied.
If the Fermi marks are indeed 2.33 times better than the G200 at 25x16 using 8xAA, all this proves is what we already know.
We do know G200 8xAA sucks and often fails at 25x16, so "2.33x" is very subjective (if the GTX 285 manages, say, 12 FPS there, 2.33x is still only ~28 FPS), and it is what nVidia shows as a triumph.

As for the high end? I think we need to look at the board partners here, as it'll be them doing an end-around on the PCI specs and creating some kind of Frankensteinian "top" card using more than 300 watts.
If this is so, I'm not sure how nVidia will be able to claim victory here, as such a card will be outside their specs, even though they can unofficially help in its design.
January 19, 2010 1:32:53 AM

Eh, as long as I don't see a tri-slot card anytime soon.
January 19, 2010 1:41:41 AM

gamerk316 said:
Still, considering how badly the 5770 and below did in actual DX11 benchmarks, I'd say anything over 40 FPS is outstanding at this point. Tessellation is the hardest part of DX11 computation, and if the first DX11 entry can handle that benchmark, which is overkill, I would expect to see much better numbers in DX11 titles (flawed as they may be at this point) compared to ATI.

If NVIDIA can keep the price of the top card under $450 and beat ATI in performance, we might yet have a repeat of the HD2000 vs G80... Everyone here knows I view the 5000 series as too weak for DX11/tessellation (except maybe the 5870 and above; we'll see), so if these numbers hold, I don't think that's a bad comparison...

Having an actual DX11 game that wasn't a DX9 add-on would be interesting right about now...

A few more things we know
http://www.xtremesystems.org/forums/showpost.php?p=4204...
Read the bottom of the post; this is coming from Rys, who's had the best and closest understanding of Fermi thus far.
I quote:
As for my clock estimates, I doubt a 1700 MHz hot clock at launch :sad:, but the base clock should be usefully higher, up past 700 MHz. They still haven't talked about GeForce productisation or clocks, but at this point it looks unlikely the fastest launch GeForce will texture faster than a GTX 285.

Since texturing is tied to the hot clock, going smaller will require large jumps in clocks to make up the scaling, so the little Fermis may not show much promise at all, or worse.
January 19, 2010 2:48:42 AM

That's not a few more things we know; that's one person's supposition. Here's another, on textures, from the front page at HardOCP.
http://www.hardocp.com/article/2010/01/17/nvidias_fermi...

Some GF100 Specifications

GF100 will have 512 CUDA cores, which more than doubles the core count of the GeForce GTX 285 GPU's 240. There are 64 texture units, compared to the GTX 285's 80, but the texture units have been moved inside the third-generation Streaming Multiprocessors (SMs) for improved efficiency and clock speed. In fact, the texture units will run at a higher clock speed than the core GPU clock. There are 48 ROP units, up from 32 on the GTX 285. The GF100 will use 384-bit GDDR5, so depending on the clock speeds it actually operates at, there is potential for high memory bandwidth. These changes seem logical and encouraging, but without knowing clock speeds, actual shader performance is anyone's guess.
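For perspective, here is a quick calculation of what that 384-bit GDDR5 bus could deliver; the per-pin data rates are hypothetical, since memory clocks weren't announced, while the GTX 285 reference point (~159 GB/s on 512-bit GDDR3) is known:

Code:
# Theoretical bandwidth of a 384-bit GDDR5 bus at a few guessed data rates.

BUS_WIDTH_BITS = 384

for data_rate_gbps in (3.6, 4.0, 4.8):   # effective Gbps per pin (guesses)
    gb_per_s = BUS_WIDTH_BITS / 8 * data_rate_gbps
    print(f"{data_rate_gbps} Gbps/pin -> {gb_per_s:.0f} GB/s")
# Anything above ~3.3 Gbps/pin would beat the GTX 285's ~159 GB/s.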