
Pics of G280?

May 20, 2008 10:35:04 PM

Here's the link; you decide, me, I'm not sure. The link also says this will be nVidia's last single-chip high-end GPU: http://www.pczilla.net/en/post/13.html


May 20, 2008 10:52:07 PM

It couldn't be smaller...
And it looks like a GF8.
May 20, 2008 10:58:55 PM

The chip is too far back for it to be a GF8.
May 20, 2008 11:27:14 PM

As long as it performs as predicted and doesn't consume energy like a nuclear reactor, I really don't care how big it is! If it fits in my case, obviously :F
May 20, 2008 11:44:02 PM

Progress is progress... unless it's like the 9800GTX and then it's just an annoying market ploy. I think I might actually get one of these next gen graphics cards this time around.

-mcg
May 21, 2008 5:43:58 AM

Any idea of the card length? I'm planning to replace the 8600GTS in one of my computers with a slim case; I certainly need to check what the performance drawback of PCI-E 1.1 will be with this new generation.
May 21, 2008 8:57:33 AM

MrCommunistGen said:
Progress is progress... unless it's like the 9800GTX and then it's just an annoying market ploy. I think I might actually get one of these next gen graphics cards this time around.

-mcg

Indeed, the 8800GTX was at $500+ when it arrived, but anyone who invested that much money in it got a huge return, as that card still hasn't been convincingly beaten two years afterward. I might buy one myself, after I get 8 GB of RAM, a quad core, and Vista 64. Oh, BTW, will these cards be PCI-E 2.0 exclusive?
May 21, 2008 12:37:13 PM

It says "it's for showing off, not for sale in itself". If that's so, when will we see this card come down to $300 US like the 8800GTX did? Then it'll be more mainstream. The last paragraph says this. That's crazy; I wonder what the pricing of the G260 will be then? At that point it'll be the only thing nVidia has in the $300 range, and if it's priced that low, that's competitive.
May 21, 2008 12:59:42 PM

I'll have to see the performance difference between the two cards to decide whether making the jump to the GTX280 will be worth it. Again, are these cards PCI-E 2.0 exclusive? Maybe they should be, since they're too fast for gen 1 PCI-E.
May 21, 2008 1:08:28 PM

There's not one PCI-E 2.0 card made that'll be exclusive to PCI-E 2.0. They'll always be backwards compatible, just like the AGP cards: 8x cards worked in 4x as well as 2x slots. And yes, they are PCI-E 2.0 compatible.
May 21, 2008 1:11:25 PM

Well, that quells my fears.
May 21, 2008 1:26:47 PM

No worries, just make sure (a) you have room in your case and (b) you have a reeeeaaaallll good PSU, heheh.
May 21, 2008 3:47:39 PM

JAYDEEJOHN said:
There's not one PCI-E 2.0 card made that'll be exclusive to PCI-E 2.0. They'll always be backwards compatible, just like the AGP cards: 8x cards worked in 4x as well as 2x slots. And yes, they are PCI-E 2.0 compatible.


However, there may come a time when a card is PCIe 2.0 exclusive, just as many AGP 8X cards could not work in 2X and lower slots.
The issue of power at the connector versus adding more 6-pin or 8-pin external power plugs may push some card makers to a PCIe 2.0 requirement.
IMO, I would expect the X2 cards of future generations to require PCIe 2.0 for practical purposes, with regard to both power and the bandwidth needed by the bridge/split.

Probably nothing until maybe late 2008, but by 2009 I would expect to see more than one card with PCIe 2.0 as a minimum requirement. They may still work on PCIe 1.0/1.1, but with reduced functionality.
May 22, 2008 10:38:37 AM

This guy's good ^v^
May 23, 2008 12:19:52 AM

Still can't see that link, but Fuad reposted them @ Fudzilla.

By the looks of things we have the return of the NVIO,
http://img357.imageshack.us/img357/7162/gtx280nviozu9.j...

But you can tell it's not a G80 by the 8 memory modules (which gives credence to a symmetric memory interface, unlike the G80's). Still not enough detail to make out traces or memory type, though.

PS: Why a close-up of the back of the card and not the front, where the exposed memory module and NVIO are? Poor choice. (As if you can't make out the 6+8-pin power connectors without that pic.)
May 23, 2008 3:28:45 AM

Trying to hold to the NDA? Or he doesn't want to be exposed for breaking the NDA? That's the only thing I can think of.
May 23, 2008 6:08:43 AM

Yeah, could be, but I think even what they posted would be a problem under an NDA.

Weird choice for someone trying to get/give info, but good choice for someone selectively leaking info to keep the product in the news.

Seriously, you have the perfect opportunity to give a pic of the memory modules, and you give people power-regulation components?
May 23, 2008 8:00:15 AM

I have the feeling the last leaked info on the launch of the R600 hurt ATI. Could this be a rub-it-in-your-face by nVidia? Keep the hype going, knowing AMD is staying tight-lipped. Without really showing anything, the hype still goes on.
May 23, 2008 9:07:50 AM


Quote:
I have the feeling the last leaked info on the launch of the R600 hurt ATI. Could this be a rub-it-in-your-face by nVidia? Keep the hype going, knowing AMD is staying tight-lipped. Without really showing anything, the hype still goes on.



GeForce GTX 280 will be capable of folding slightly more than 500 mol/day, which is three times more than what Radeon HD 3870 can do ...

http://www.nordichardware.com/news,7777.html

Now the HD 4870 should be like 150% of the HD 3870 at default clock speed, with the HD 4870 X2 almost doubling that at a smaller price margin. If we take the power consumption factor into account (which I think will be in favour of ATI) and the DX10.1 support, I think it'll be a tough sell for Nvidia.
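
To put rough numbers on that, here's a quick sketch (the 500 mol/day figure and the 3x ratio come from the NordicHardware piece; the 150% and "almost doubling" multipliers are my guesses from above):

# Rough folding estimates from the figures in this thread (all rumors)
gtx280 = 500.0               # mol/day, NordicHardware's GTX 280 claim
hd3870 = gtx280 / 3          # "three times more" than the HD 3870 -> ~167
hd4870 = 1.5 * hd3870        # the ~150% guess -> ~250
hd4870x2 = 2.0 * hd4870      # "almost doubling", optimistic scaling -> ~500
print(round(hd3870), round(hd4870), round(hd4870x2))  # the X2 lands right in GTX 280 territory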
May 23, 2008 9:16:46 AM

Did you see my post? It includes a link to the graph of how fast the GTX280 is compared to a 3870; it's already been posted. You may be right, however. Those are some rough estimates, using F@H and rounding on top of that. But we can at least hope there's competition this time around.
May 23, 2008 9:17:25 AM

What it'll come down to, I believe, is this: the X2 will have its CF problems, but it'll be competitively fast, with a little less power, and maybe even cheaper. But at this point, who knows?
May 23, 2008 12:11:50 PM

The 3870 X2 was a good card; it scaled up to 200% in some titles, but it had problems with games that didn't support CF (which I see becoming fewer with time). The 4870 X2 should scale even better because of the improved architecture, drivers, and the PCIe 2.0 connection between the two GPUs.
For the time being, while there are still some games that don't support CF, the performance of a single 4870 should be enough for them. But you're right after all... it's still too early to say.
May 23, 2008 12:41:39 PM

Does anyone have an idea about the power consumption of these two cards?
NV and ATI!
May 23, 2008 3:43:21 PM

Zen911 said:
The 3870 X2 was a good card; it scaled up to 200% in some titles


Correct me if I'm wrong, but if you add one additional card that doubles your performance, I believe the scaling would be 100%. Are you talking about a quad-GPU setup with two 3870 X2s? If so, that is pretty cool; I thought the quad-GPU thing was basically a flop for both red and green this go-around.
May 23, 2008 4:44:31 PM

Zen911 said:

Now the HD 4870 should be like 150% of the HD 3870 at default clock speed,


Actually, it's closer to 200% of the HD3870's SPU power, with 50% more SPUs running 35% faster (remember the SPUs are clocked higher than the core now).
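
The arithmetic behind that, as a quick sketch (the 50% and 35% figures are the rumored deltas):

# Relative SPU throughput ~ (SPU count ratio) x (SPU clock ratio)
spu_count_ratio = 1.50   # 50% more SPUs (rumored)
spu_clock_ratio = 1.35   # SPUs clocked ~35% faster (rumored)
print(spu_count_ratio * spu_clock_ratio)  # 2.025 -> ~200% of the HD3870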

Quote:
with the HD 4870 X2 almost doubling that at a smaller price margin. If we take the power consumption factor into account (which I think will be in favour of ATI) and the DX10.1 support, I think it'll be a tough sell for Nvidia.


Yep, it'll be interesting.
The thing I'm interested in seeing is the memory situation, because I know they were talking about the HD2900 doing some folding and GPGPU operations better than the HD3870 due to memory bandwidth, so running on GDDR5 may help that bottleneck.

Also, it'll be interesting to see what's new that finally allows nV to have a GPU client after almost 3 years. And at least this is a solid number and not some 'missing-MUL'-based GFLOPS number.

If you think about it from a pure 'folding giant' perspective, buying a ton of old X1900s or HD2900s would likely give you significantly more folding power per dollar than anything new from either AMD or nV. Without looking hard, the X1950XTX can be had for $100 on NewEgg; that's more folding power than the PS3 (about 85-90% of the HD series), so for $400 you would have more power than both of those solutions, about twice that of the HD4K and about 50% more than the GTX280. Of course it requires more front-end tweaking by the user, but once set up, voila: more power than both for the same price, from old tech. :D 
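
For what it's worth, here's that folding-per-dollar argument as a sketch; the per-card mol/day numbers derive from the NordicHardware estimate above, the X1950XTX price is the one I quoted, and the GTX280 price is a placeholder guess, so treat it as illustration rather than measurement:

# Hypothetical folding-per-dollar comparison using this thread's loose figures
hd3870 = 500.0 / 3                       # ~167 mol/day ("GTX280 = 3x HD3870")
cards = {
    # name: (mol/day estimate, price estimate in $)
    "X1950XTX": (0.875 * hd3870, 100),   # "85-90% of the HD series", $100 on NewEgg
    "GTX280":   (500.0, 600),            # rumored perf; launch price is a placeholder
}
for name, (mol_day, price) in cards.items():
    print(name, round(mol_day), "mol/day ->", round(mol_day / price, 2), "mol/day per $")
# Four X1950XTXs at $400 total ~585 mol/day, more than a single GTX280's ~500.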
May 23, 2008 4:47:45 PM

scooterlibby said:
Correct me if I'm wrong, but if you add one additional card that doubles your performance, I believe the scaling would be 100%.


I didn't think you wanted me to actually correct your post, so I'll explain instead. :kaola: 
He's talking about adding a card resulting in 200% performance; you're talking about a 100% performance boost (starting from 100%). Same thing, just said differently.
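
Put as a trivial sketch, the two framings are the same number:

baseline = 100             # one card = 100%
scaled = 200               # "scales to 200%" framing
print(scaled - baseline)   # 100 -> "a 100% boost" framing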

Quote:
Are you talking about a quad-GPU setup with two 3870 X2s? If so, that is pretty cool; I thought the quad-GPU thing was basically a flop for both red and green this go-around.


No, he was talking about a simple dual-VPU X2.
As for success at quad, ATi had markedly more success than nV with that, but yeah, neither could really make it a compelling reason to buy. However, their groundwork is important for the next generation, which is why their relative success in a crap situation does matter for this generation. I still don't like the solution in its current implementation, but maybe GDDR5 gets us closer to the 2-die, 1-package solution we've been talking about, maybe for the R800 series; I'm not sure what push the Fusion/Larrabee considerations have on those designs.
May 23, 2008 5:00:29 PM

Milos-stancene said:
Does anyone have an idea about the power consumption of these two cards?
NV and ATI!


Here's my idea.

Power consumption GTX280 >> HD4870

Which isn't surprising, as performance should be GTX280 > HD4K (likely by a smaller ratio, IMO).

The big question IMO will be power consumption between the X2 and the GTX280; that could be closer in both respects, but I'm not sure which would favour which without actual hard tests of the single-chip solutions first.
May 23, 2008 8:20:15 PM

TheGreatGrapeApe said:
I didn't think you wanted me to actually correct your post, so I'll explain instead. :kaola: 
(starting from 100%). Same thing, just said differently.



Hey, I'll give you my login and password so you really can correct me! It'd be nice to have a personal fact checker/editor, haha. ;) 

--
Wink back [:thegreatgrapeape:6] from TGGA.


May 23, 2008 11:06:24 PM

No need for login/pass, as you can see, but I'd rather not be your fact checker; mod is almost too much as it is. :sweat: 
May 24, 2008 12:13:16 AM

TGGA... if what you were saying is that the 4870 is nearly 200% as fast as the 3870, then the 4870 X2 should be ahead of the GTX280 by a good margin.

I didn't think about it from a pure 'folding giant' perspective, because I know too little about folding :D 

About the power consumption: I think the GTX280 will actually consume more than the 4870 by a big margin. But again, as you said, we should be concerned about the GTX280 vs the 4870 X2, which, as said, is still too early to call.
May 24, 2008 12:37:46 AM

Zen911 said:
TGGA... if what you were saying is that the 4870 is nearly 200% as fast as the 3870, then the 4870 X2 should be ahead of the GTX280 by a good margin.


Yeah, about 45% faster assuming 100% efficiency (which there never is), so I suspect it'll be a few tens of percent faster in practice, maybe 20-30%, since the guys at Stanford talked about above-90% processing efficiency.
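
A sketch of where numbers like that come from, using the folding estimates from earlier in the thread (the per-GPU multiplier and the efficiency are rough guesses, so the output is only as good as those inputs):

# X2-vs-GTX280 margin as a function of the per-GPU estimate (all rumors)
gtx280 = 3.0                     # GTX280 folding, in HD3870 units (the 3x claim)
for per_gpu in (2.0, 2.1, 2.2):  # HD4870 as a multiple of the HD3870
    for eff in (1.0, 0.9):       # X2 scaling efficiency (ideal vs ~90% per Stanford)
        x2 = 2 * per_gpu * eff
        print(per_gpu, eff, round(100 * (x2 / gtx280 - 1)), "% vs GTX280")
# Ideal scaling gives roughly +33% to +47%; at ~90% efficiency it drops to +20-32%.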

Quote:
I didn't think about it from a pure 'folding giant' perspective, because I know too little about folding :D 


Yeah, I only mentioned it because someone asked me late last year whether it was worth buying an HD2900 for folding, to which I said no, just get two X1900s cheap. So I thought I'd mention that when talking about efficiency and value.

Quote:
About the power consumption: I think the GTX280 will actually consume more than the 4870 by a big margin. But again, as you said, we should be concerned about the GTX280 vs the 4870 X2, which, as said, is still too early to call.


Yep, it's tough. For gaming it's different; for folding it's more about raw compute power, which achieves very good efficiency in multi-GPGPU setups, whereas gaming kinda goes downhill fast even where it's supported.
And who knows how efficient the reduced memory will be versus the power consumption of another PCIe bridge (this time 2.0), etc. Which is more or less efficient really depends on the final numbers once they've been tested for the first time.

The other interesting thing about an HD4K X2 solution is that it's supposed to share memory, which is both more efficient and may also help GPGPU apps that share the same data (you would likely have to tweak the client a bit, but it should be easy). That helps for single instances, but wouldn't be as helpful when you run one instance per GPU, since they would work on different sets of WUs. You would want them to split a WU and work on it together.

Both sides should offer interesting solutions in the GPGPU field, and an area I'll focus on is the improvement in double-precision speed, which is currently very, VERY slow on the HD3K (1/2-1/4 speed) and not available in hardware on the G92.
May 24, 2008 7:29:56 PM

What is the rumored core clock of the GTX280?
May 24, 2008 10:43:03 PM

The rumours I've heard range from (too low, IMO) 600MHz to 700MHz.

I personally would have thought 650MHz, but according to folks like the INQ they're having issues getting it that fast; they say 602MHz core and 1296MHz shaders for the GTX280, which is a faster core but slower shaders (by only ~50MHz) than the GF8800GTX. So the processing power will, for all intents and purposes, be close to double the G80GTX's at 190%, just a little less than double; around 163% of the G80 Ultra, 150% of the G92GTS, and only 144% of the G92GTX.

They're still saying 32 ROPs, but there's still not much detail on the TMUs.

Of course we'll know for sure at launch, and I can't wait to see the changes to the SPUs that produce such an interesting shift in ratios with not quite a doubling of SPUs.

PS: interesting to think that if that early AMD slide is correct (50% more shaders @ 1050MHz shader clock), then the RV770XT will have 212% the shader power of the R600XT and 203% the shader power of the HD3870.
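
Those percentages follow directly from SPU count times shader clock; a quick check, assuming the R600XT's and HD3870's SPUs ran at their 742MHz and 775MHz core clocks and taking the rumored 50%-more SPUs at 1050MHz:

# Shader power ratio ~ (SPU count ratio) x (shader clock ratio)
rv770_shader_clk = 1050.0
for name, clk in (("R600XT", 742.0), ("HD3870", 775.0)):
    ratio = 1.5 * rv770_shader_clk / clk
    print("RV770XT vs", name, "->", str(round(100 * ratio)) + "%")  # ~212% and ~203%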

To me this is somewhat humorous, because ATi was ahead in shader power but lacked texture strength, while nVidia was ahead in texture ability (and hardware AA) but had a little less raw shader power, and both have ended up strengthening the thing they were already doing well. ATi did hopefully double their TMU count, if the early specs are true, so that plus a 50MHz core boost over the HD3870 is nice, but still lagging. So to me it's funny that, relative to each other, it's still ATi with the shader advantage and nVidia with the texture advantage, almost by the same margins as before.

The question is how these new strengths present themselves and where they shift the new bottlenecks. I'm a little more optimistic now that this will be an entertaining early matchup.

Anywhoo, I'm interested in finding out about that NVIO too: are they adding DisplayPort to the new NVIO, and does it support higher bit depths to allow for 12-bit HDMI 1.3 @ 1080P? Technically they could've supported it before, but it wasn't built into either the ATi or the nV hardware. Now supposedly the RV770 offers the added deep colour support from 1.3 (as I mentioned in another thread, I think that'll be 12-bit only, not 16-bit, just based on the dual-link TMDS limitations [which would be just at the extreme edge of 16-bit at best]).

Anywhoo, just a few more weeks. :sol: 
May 25, 2008 1:14:07 AM

June will be a busy month. Review-checking galore!!!
May 25, 2008 3:38:50 AM

Reynod said:

Bit more info for you.


Yeah, you may have noticed it already in my post when I referenced it. [:mousemonkey:1]
May 25, 2008 7:19:48 AM

Aww damn ... you hid it ... I didn't read it properly ... It's Sunday !!

I have no excuse ... I apologise profusely.

Charlie is probably right on the money with this too.

I can't see him wearing another bunny suit ... :) 

May 25, 2008 7:35:09 AM

I can't (and don't want to) see him wearing a bunny suit again, period, heheh.
May 25, 2008 9:07:10 PM

Do you wanna buy any of these cards?
May 25, 2008 9:40:59 PM

If the 280 isn't as power-hungry as I hear, and performs at the level it's rumored to, and has all the additional things that'll come with it, and they work well (GPGPU, onboard physics processor), and it's used to its potential, then maybe on the refresh or a smaller die shrink, yes, maybe.
May 26, 2008 3:04:59 AM

BTW, yet again, I say the 'onboard physics processor' is a misunderstanding of how nV does things. What benefit is it to them to put in an onboard physics processor versus simply using CUDA-like programming to enable PhysX features in their GPU lineup, essentially building on the old 'Quantum Effects' marchitecture?

I'm pretty certain they would not waste transistors on a dedicated PPU; if Ageia taught us anything, it's that it's not worth the chip space. Especially for a card looking to reduce transistor count, power, and heat, there's no point in putting in something that's at best a 5% market concern and currently not even a 1% benefit.
May 26, 2008 11:10:46 AM

Apart from the 512-bit memory bus (whatever that means compared to the G92's 256-bit), the clocks are actually a lot slower than the G92 GPUs'.
May 26, 2008 11:15:12 AM

It's almost like an IPC improvement on a CPU: lower frequencies, higher output. Also, this card is maxed out; they simply couldn't run the clocks any higher, since it's running at a 234-watt TDP. I agree, Ape, and hope they didn't waste the space and just use the CUDA client, but this is Jen-Hsun's baby... ya never know.