
NV40 ~ GeForce 6800 specs

Anonymous
April 13, 2004 2:51:07 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

the following is ALL quote:


http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...


Tuesday, April 13, 2004

NVIDIA GeForce 6800 GPU family officially announced — Cormac @ 17:00
It's time to officially introduce the new GPU generation from NVIDIA
and shed light on its architecture and features.

So, the GeForce 6800 GPU family, codenamed NV40, today officially
entered the distribution stage. Initially it will include two chips,
GeForce 6800 Ultra and GeForce 6800, with the same architecture.


These are the key innovations introduced in NVIDIA's new chips:

*16-pipeline superscalar architecture with 6 vertex modules, GDDR3
support and real 32-bit pipelines
*PCI Express x16, AGP 8x support
*222 million transistors
*400MHz core clock
*Chips made by IBM
*0.13µm process


40x40mm FCBGA (flip-chip ball grid array) package
ForceWare 60+ series
Supports 256-bit GDDR3 with over 550MHz (1.1GHz DDR) clock rates
NVIDIA CineFX 3.0 supporting Pixel Shader 3.0, Vertex Shader 3.0;
real-time Displacement Mapping and Tone Mapping; up to 16
textures/pass, 16-bit and 32-bit FP formats, sRGB textures, DirectX
and S3TC compression; 32bpp, 64bpp and 128bpp rendering; lots of new
visual effects
NVIDIA HPDR (High-Precision Dynamic-Range) based on OpenEXR technology,
supporting FP filtering, texturing, blending and AA
Intellisample 3.0 for extended 16xAA, improved compression
performance; HCT (High-resolution compression), new lossless
compression algorithms for colors, textures and Z buffer in all modes,
including hi-res high-frequency, fast Z buffer clear
NVIDIA UltraShadow II for 4 times the performance in highly shadowed
games (e.g. Doom III) compared to older GPUs


Extended temperature monitoring and management features
Extended display and video output features, including int.
videoprocessor, hardware MPEG decoder, WMV9 accelerator, adaptive
deinterlacing, video signal scaling and filtering, int. NTSC/PAL
decoder (up to 1024x768), Macrovision copy protection; DVD/HDTV to
MPEG2 decoding at up to 1920x1080i; dual int. 400MHz RAMDAC for up to
2048x1536 @ 85Hz; 2 x DVO for external TMDS transmitters and TV
decoders; Microsoft Video Mixing Renderer (VMR); VIP 1.1 (video
input); NVIDIA nView
NVIDIA Digital Vibrance Control (DVC) 3.0 for color and image clarity
management
Supports Windows XP/ME/2000/9X; MacOS, Linux
Supports the latest DirectX 9.0, OpenGL 1.5


http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...



We have only just received a GeForce 6800 sample, so it's still too
early to speak of GPU power consumption, though a giant core with 222
million transistors implies a high appetite for power. At the least,
NVIDIA recommends that testers use 480W or higher power supplies. By the
way, GeForce 6800 Ultra reference cards will occupy two standard slots.
However, that's not obligatory for all vendors, so we might see
single-slot models as well.

Well, having seen the GPU, we now have to wait a bit for its test
results. Please be patient; we are going to publish the respective
article in the near future.

To close this news item, I'll mention the NVIDIA partners that will
support the new release with products based on it. They are Albatron,
AOpen, ASUSTeK Computer, Chaintech, Gainward, Leadtek Research, MSI,
Palit Microsystems, PNY Technologies, Prolink Computer, Shuttle and XFX
Technologies.


http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...

quote:

"8 shader units per pipeline and 16 pipelines..."

http://www.beyond3d.com/forum/viewtopic.php?t=11484
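
For scale, the memory spec quoted above works out to roughly 35GB/s of
peak bandwidth. A quick sanity check of the arithmetic in Python (this is
just the quoted 256-bit bus times the 1.1GHz effective data rate, not an
official figure):

# Peak memory bandwidth = bus width in bytes * effective transfer rate.
bus_bits = 256
effective_hz = 1.1e9            # 550MHz GDDR3, double data rate
bytes_per_second = (bus_bits / 8) * effective_hz
print(f"{bytes_per_second / 1e9:.1f} GB/s")   # ~35.2 GB/s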
Anonymous
April 14, 2004 12:56:48 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On 13 Apr 2004 10:51:07 -0700 As truth resonates honesty
nvidianv55@mail.com (NV55) wrote :

>[snip]
>
>*PCI Express x16, AGP 8x support

Looks like new motherboards required?


--
Free Windows/PC help,
http://www.geocities.com/sheppola/trouble.html
email shepATpartyheld.de
Free songs to download and,"BURN" :o )
http://www.soundclick.com/bands/8/nomessiahsmusic.htm
Anonymous
April 14, 2004 2:41:01 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"NV55" <nvidianv55@mail.com> wrote in message
news:1c4cde47.0404130951.5eccb20@posting.google.com...
>
> the following is ALL quote:
>


Regardless of whether someone wants the new high-end nVidia or ATI product,
I've read that a person better have a monster power supply and excellent
case cooling before even considering such cards. I also wonder how loud
the fans on these new cards are going to need to be. It'd be
interesting to see what they can do with regards to cooling and power
consumption on future video cards too - I see this as getting to be more
and more of a problem with time.
April 14, 2004 4:10:26 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Tue, 13 Apr 2004 20:56:48 +0100, Shep© wrote:


>
>>*PCI Express x16, AGP 8x support
>
> Looks like new motherboards required?
>
>

If there is AGP 8x support, why would you need a new motherboard?

K
April 14, 2004 4:10:27 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

K wrote:

> On Tue, 13 Apr 2004 20:56:48 +0100, Shep© wrote:
>
>
> >
> > > *PCI Express x16, AGP 8x support
> >
> > Looks like new motherboards required?
> >
> >
>
> If there is AGP 8x support, why would you need a new motherboard?
>
> K

Because most well known manufacturers will eventually stop carrying AGP
cards altogether.

If you're into business at all, you know that it's more cost effective
to produce one version of a product than two... unless you're Microsoft
or Donald Trump (aka God).

The voltages on PCI-E and AGP are entirely different, so different
components (such as resistors) must be used.

In order to avoid confusion between the two, you'd have to run two
different production lines, have twice as many labs, and pay for two
types of packaging, manuals, etc.

Having one version of a product cuts down on confusion and returns,
which helps both consumers and retail sales.
Anonymous
April 14, 2004 4:47:41 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote in news:uI_ec.26606$F9.15486
@nwrddc01.gnilink.net:

> K wrote:
>
>> On Tue, 13 Apr 2004 20:56:48 +0100, Shep© wrote:
>>
>> If there is AGP 8x support, why would you need a new motherboard?
>>
>> K
> Because most well known manufacturers will eventually stop carrying AGP
> cards altogether.

The thing is, we've had AGP slots out for over 5 years now, and yet you
still find vendors making PCI video cards. So I wouldn't be too worried
about any lack of AGP video cards for some time to come. They'll be around
long enough to follow any of the current motherboards into obsolescence, at
which point you wouldn't want to be buying a video card or any other
upgrade for them anyways.
April 14, 2004 5:14:30 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Tue, 13 Apr 2004 23:28:26 +0000, teqguy wrote:


>
> Because most well known manufacturers will eventually stop carrying AGP
> cards altogether.
>

Eventually, yes, but AGP will be with us well into next year. DDR2 will
replace DDR1, Socket 939 will replace Socket 940, Socket T will replace
Socket 478, BTX will eventually replace ATX; the list goes on in the never
ending upgrade cycle.


>
> Having one version of a product cuts down on confusion and returns,
> which helps both consumers and retail sales.

Absolutely, and I'm sure that the likes of ATI and Nvidia as
well as the motherboard makers will push us to PCI Express as soon as they
can. But it would be suicide for one of them to bring out a new card and
only cater for those who are prepared to buy new motherboards. It's just
that the poster I replied to implied there would be an immediate need to
replace your motherboard, which is clearly not the case.

I have a gut feeling that PCI Express will do very little for performance,
just like AGP before it. Nothing can substitute lots of fast RAM on the
videocard to prevent shipping textures across to the much
slower system RAM. You could have the fastest interface imaginable for
your vid card; it would do little to make up for the bottleneck that
is your main memory.


K
Anonymous
April 14, 2004 8:37:19 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Wed, 14 Apr 2004 00:10:26 +0000 As truth resonates honesty K
<kayjaybee@clara.net> wrote :

>On Tue, 13 Apr 2004 20:56:48 +0100, Shep© wrote:
>
>
>>
>>>*PCI Express x16, AGP 8x support
>>
>> Looks like new motherboards required?
>>
>>
>
>If there is AGP 8x support, why would you need a new motherboard?
>
>K

Because it's my understanding that although the new protocol/cards
support AGP 8X, this is merely a data-rate comparison and the new cards
will only fit a "PCI-Express" slot, not an AGP one.
http://www.pcstats.com/articleview.cfm?articleID=1087

HTH :) 




--
Free Windows/PC help,
http://www.geocities.com/sheppola/trouble.html
email shepATpartyheld.de
Free songs to download and,"BURN" :o )
http://www.soundclick.com/bands/8/nomessiahsmusic.htm
April 14, 2004 10:16:37 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

NightSky 421 wrote:

> "NV55" <nvidianv55@mail.com> wrote in message
> news:1c4cde47.0404130951.5eccb20@posting.google.com...
> >
> > the following is ALL quote:
> >
>
>
> Regardless of whether someone wants the new high-end nVidia or ATI product,
> I've read that a person better have a monster power supply and
> excellent case cooling before even considering such cards. I also
> wonder how loud the fans on these new cards are going to need to be.
> It'd be interesting to see what they can do with regards to cooling
> and power consumption on future video cards too - I see this as
> getting to be more and more of a problem with time.

The power consumption should stay below 15v.

The Geforce FX does NOT use the 12v rail, for anyone wondering.

All 4 pins are connected for potential usage, but the overall
consumption never rises above 5.5v so 17v is not necessary.

Most companies are starting to push for water cooling. Gainward is one
of them that announced they are going to start shipping a version of
their cards that has a waterblock in place of a conventional heatsink
and fan.

As far as the reference Nvidia cards go... I'm pretty sure we'll start
out with the dustbuster again... at least until someone can decide on a
more effective method.

Solid silver heatsink anyone? =P
April 14, 2004 10:22:44 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

K wrote:

> [snip]
>
> I have a gut feeling that PCI Express will do very little for
> performance, just like AGP before it. Nothing can substitute lots of
> fast RAM on the videocard to prevent shipping textures across to the
> much slower system RAM. You could have the fastest interface
> imaginable for your vid card; it would do little to make up for the
> bottleneck that is your main memory.
>
>
> K

Current high end graphics cards do very little with an AGP 4x bus, let
alone an 8x bus.

The best possible optimization that could ever be made, would be to
start manufacturing motherboards with sockets for a GPU and either
sockets or slots for video memory.

This would allow for motherboards to potentially reduce in size, while
increasing in performance and upgradability.

The price would increase, but it would be worth it.
Anonymous
April 14, 2004 11:45:25 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote:

>[snip]
>
>The power consumption should stay below 15v.
>
>The Geforce FX does NOT use the 12v rail, for anyone wondering.
>
>All 4 pins are connected for potential usage, but the overall
>consumption never rises above 5.5v so 17v is not necessary.

Surely you can't believe that we can take the advice of someone who
thinks that power "consumption" is measured in Volts. What you wrote
is complete drivel, sorry.
Anonymous
April 14, 2004 11:47:13 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote:

>The best possible optimization that could ever be made, would be to
>start manufacturing motherboards with sockets for a GPU and either
>sockets or slots for video memory.
>
>
>This would allow for motherboards to potentially reduce in size, while
>increasing in performance and upgradability.
>
>
>The price would increase, but it would be worth it.

No it wouldn't.
Anonymous
April 14, 2004 1:40:59 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

Ah,

http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...

I see from the pictures (assuming they're not fakes ;) that the card should
fit reasonably into a "single" AGP (8x) slot, more or less.. that's nice, but
the best part about this debacle is the two DVI ports. That is the part I
like the most, currently using DVI + DB25 to two TFT's.

Looks like a winner to me compared to ATI. Performance alone isn't what
turns me on; the RADEON 9700 PRO - RADEON 9800 XT are plenty fast as they
come. The enhanced feature set is what turns me on. Especially these two:

- 3.0 shaders (vertex samplers will be SO cool)
- 32 bit precision for the whole pipeline from vertex to output fragment, v.
cool

The rest is yada yada yada.. but those two features are what 'do it', at
least for me from a coder's point of view. Extra performance is so
yesterday. ;-)
April 14, 2004 3:25:45 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

K <kayjaybee@clara.net> wrote in message news:<pan.2004.04.14.01.14.28.698900@clara.net>...
>
> I have a gut feeling that PCI Express will do very little for performance,
> just like AGP before it. Nothing can substitute lots of fast RAM on the
> videocard to prevent shipping textures across to the much
> slower system RAM. You could have the fastest interface imaginable for
> your vid card; it would do little to make up for the bottleneck that
> is your main memory.
>
>


But what about for things that don't have textures at all?

PCI Express is not only bi-directional, but full duplex as well. The
NV40 might even use this to great effect, with its built-in hardware
accelerated MPEG encoding/decoding plus "HDTV support" (which I assume
means it natively supports 1920x1080 and 1280x720 without having to
use Powerstrip). The lower cost version should be sweet for Shuttle
sized Media PC's that will finally be able to "tivo" HDTV.

I can also see the 16X slot being used in servers for other things
besides graphics. Maybe in a server you'd want your $20k SCSI RAID
Controller in it. Or in a cluster box a 10 gigabit NIC.

There's more to performance than just gaming. And there's more to PCI
Express than just the 16X slot which will be used for graphics cards
initially. AGP was a hack, and (as others have said) it hit the wall
at "4X". PCI Express is a *VERY* well thought out bus that should be
a lot better than PCI, PCI-X, and AGP... not to mention things bolted
directly to the Northbridge. If it helps games a little in the
process, it's just gravy.
April 14, 2004 7:57:06 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

Yeah. That's one odd feature I don't get. Those power connectors are tied
to the same source. I guess the wires can only carry so much current. But
what about the traces on the power supply?

DaveL


"teqguy" <teqguy@techie.com> wrote in message
news:2_ffc.8446$hg1.4378@nwrddc02.gnilink.net...
>
> The two power connectors will eventually come down to one... right now
> testing is only showing that stability is better achieved using 4 rails
> instead of two.
April 14, 2004 8:13:13 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

It just occurred to me that it may be the traces on the 6800U that Nvidia
is worried about. Why not just use wider traces?

DaveL


"DaveL" <dave1027@comcast.net> wrote in message
news:LO6dnTxq3rK2XODdRVn-tA@comcast.com...
> Yeah. That's one odd feature I don't get. Those power connectors are tied
> to the same source. I guess the wires can only carry so much current. But
> what about the traces on the power supply?
>
> DaveL
>
>
> "teqguy" <teqguy@techie.com> wrote in message
> news:2_ffc.8446$hg1.4378@nwrddc02.gnilink.net...
> >
> > The two power connectors will eventually come down to one... right now
> > testing is only showing that stability is better achieved using 4 rails
> > instead of two.
>
>
Anonymous
April 14, 2004 9:17:07 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"NV55" <nvidianv55@mail.com> wrote in message
news:1c4cde47.0404130951.5eccb20@posting.google.com...
> [snip]
>
> These are the key innovations introduced in NVIDIA's new chips:
>
> *16-pipeline superscalar architecture with 6 vertex modules, GDDR3
> support and real 32-bit pipelines
> *PCI Express x16, AGP 8x support
> *222 million transistors
> *400MHz core clock
> *Chips made by IBM
> *0.13µm process
>

Isn't it time for NVidia to use a 0.09µm process? How could they put so
many features in if still using a 0.13µm process?
April 14, 2004 9:17:08 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

I think Nvidia learned their lesson about that from the 5800U debacle. It
was ATI that stayed with the old standard and took the lead in performance.
Meanwhile, Nvidia was struggling with fab problems.

DaveL


"Ar Q" <ArthurQ283@hottmail.com> wrote in message
news:nmefc.10025$A_4.6776@newsread1.news.pas.earthlink.net...
>
> Isn't it time for NVidia to use a 0.09µm process? How could they put so
> many features in if still using a 0.13µm process?
>
>
April 14, 2004 9:22:07 PM

Archived from groups: alt.comp.periphs.videocards.ati

"Shep©" <nospam@nospam.net> wrote in message
news:p acp70dea7e25ld76iuu78v659og2mt2eu@4ax.com...
> On Wed, 14 Apr 2004 00:10:26 +0000 As truth resonates honesty K
> <kayjaybee@clara.net> wrote :
>
> [snip]
> >If there is AGP 8x support, why would you need a new motherboard?
>
> Because it's my understanding that although the new protocol/cards
> support AGP 8X, this is merely a data-rate comparison and the new cards
> will only fit a "PCI-Express" slot, not an AGP one.
> http://www.pcstats.com/articleview.cfm?articleID=1087
>

They're still releasing AGP 8x versions alongside PCI-E x16. I read
somewhere nvidia is doing something with a bridging device while ATI is
making totally separate cards, ie R420 is AGP 8x and R423 is a proper PCI-E
x16 card. I cannot for the life of me remember where I read it though,
sorry. It *could* have been anandtech.
April 14, 2004 11:02:22 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> >The best possible optimization that could ever be made, would be to
> >start manufacturing motherboards with sockets for a GPU and either
> >sockets or slots for video memory.
> >
> >This would allow for motherboards to potentially reduce in size, while
> >increasing in performance and upgradability.
> >
> >The price would increase, but it would be worth it.
>
> No it wouldn't.

haha! I agree completely. Videocards have reached such complexity that
it's doubtful a single company could produce both successfully. Not to
mention the question of upgradeability, which is why we have PCI/AGP in
the first place.

rms
April 14, 2004 11:08:04 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

chrisv wrote:

> "teqguy" <teqguy@techie.com> wrote:
>
> > The best possible optimization that could ever be made, would be to
> > start manufacturing motherboards with sockets for a GPU and either
> > sockets or slots for video memory.
> >
> >
> > This would allow for motherboards to potentially reduce in size,
> > while increasing in performance and upgradability.
> >
> >
> > The price would increase, but it would be worth it.
>
>> No it wouldn't.

You're a moron.
Anonymous
April 14, 2004 11:08:05 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote:

>chrisv wrote:
>
>> "teqguy" <teqguy@techie.com> wrote:
>>
>> > The best possible optimization that could ever be made, would be to
>> > start manufacturing motherboards with sockets for a GPU and either
>> > sockets or slots for video memory.
>> >
>> >
>> > This would allow for motherboards to potentially reduce in size,
>> > while increasing in performance and upgradability.
>> >
>> >
>> > The price would increase, but it would be worth it.
>>
>> No it wouldn't.
>
>You're a moron.

My irony meter is off the scale. You're obviously clueless, if you
think what you proposed above is a good idea.
April 14, 2004 11:13:30 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

G wrote:

> [snip]
>
> But what about for things that don't have textures at all?
>
> PCI Express is not only bi-directional, but full duplex as well. The
> NV40 might even use this to great effect, with its built-in hardware
> accelerated MPEG encoding/decoding plus "HDTV support" (which I assume
> means it natively supports 1920x1080 and 1280x720 without having to
> use Powerstrip). The lower cost version should be sweet for Shuttle
> sized Media PC's that will finally be able to "tivo" HDTV.
>
> I can also see the 16X slot being used in servers for other things
> besides graphics. Maybe in a server you'd want your $20k SCSI RAID
> Controller in it. Or in a cluster box a 10 gigabit NIC.
>
> There's more to performance than just gaming. And there's more to PCI
> Express than just the 16X slot which will be used for graphics cards
> initially. AGP was a hack, and (as others have said) it hit the wall
> at "4X". PCI Express is a VERY well thought out bus that should be
> a lot better than PCI, PCI-X, and AGP... not to mention things bolted
> directly to the Northbridge. If it helps games a little in the
> process, it's just gravy.

Most MPEG encoding is processor dependent... I wish developers would
start making applications that let the graphics card do video encoding,
instead of dumping the work on the processor.

The bandwidth of AGP 2X can carry a high definition signal... so I
don't understand how you can expect PCI-Express to do it any better.

Last time I checked, an HD signal operates at 8Mb/s.... DVD @
2.5Mb/s... VCR @ 250Kb/s

PCI Express can potentially carry up to 4.3Gb/s... so do the math.

SCSI only operates at 320Mb/s.

In RAID stripe 0, it's roughly 460Mb/s.

So again... a lot more bandwidth than required.

And definitely a lot more expensive than using onboard SCSI.
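
Taking the poster's figures at face value (they're his numbers, not
official ones, and he's a little loose with bits versus bytes), the math
he's inviting looks like this in Python:

# Headroom check using the bit rates quoted above, all in megabits/s.
rates_mbps = {"HD signal": 8.0, "DVD": 2.5, "VCR": 0.25}
pcie_mbps = 4.3 * 1000   # the quoted "4.3Gb/s" figure for PCI Express

for name, rate in rates_mbps.items():
    print(f"{name}: {pcie_mbps / rate:,.0f}x headroom")

Even on these numbers a single HD stream uses well under a percent of the
claimed PCI Express figure, which is the poster's point.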
April 15, 2004 12:27:44 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote in message news:<u3gfc.31618$F9.31354@nwrddc01.gnilink.net>...
>
> The bandwidth of AGP 2X can carry a high definition signal... so I
> don't understand how you can expect PCI-Express to do it any better.

Nope. AGP is only half-duplex, and its upstream bandwidth is a fraction
of its downstream. It's not the raw bandwidth that's the problem.

Here's an article that explains it in detail (with further links to
PCI Express info as well): "PCI Express and HD Video: Marriage Made in
Heaven?"

http://www.extremetech.com/article2/0,1558,1533061,00.a...


> SCSI only operates at 320Mb/s.
> In RAID stripe 0, it's roughly 460Mb/s.
> So again... a lot more bandwidth than required.

That's not the point. SCSI controllers don't sit in the AGP slot. If
you're switching to comparing PCI Express with PCI/PCI-X then you have
to switch to talking about total bandwidth in the whole system.
Besides, SCSI is up to 640Mb/s.

> And definitely a lot more expensive than using onboard SCSI.

Being onboard has nothing to do with it either. The onboard controller
has to be connected somehow. It's on some bus or another even if it's
not sitting in a slot.
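
For anyone keeping score on the duplex point, here's a rough sketch of
the peak numbers (first-generation PCI Express assumed; the AGP 8x peak
is the standard 2.1GB/s figure). The direction column is what matters
for video capture and encoding:

# Peak bandwidth comparison; the asymmetry, not the total, is the issue.
agp8x_down_gbs = 2.1      # GB/s toward the card; half-duplex, and reads
                          # back from the card are far slower in practice
pcie_x16_gbs = 4.0        # GB/s *each way*, both directions at once

print(f"AGP 8x  : {agp8x_down_gbs} GB/s, one direction at a time")
print(f"PCIe x16: {pcie_x16_gbs} GB/s down + {pcie_x16_gbs} GB/s up, simultaneously")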
Anonymous
April 15, 2004 2:30:59 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> pfft. You don't even know what the ATI offering is as yet, much less
> are you able to buy a 6800 until well into next month.

No, I do not. I wrote that the rumor is that ATI wouldn't have 3.0 level
shaders.. I was commenting on a rumor; if that isn't true then the situation
is naturally entirely different. The confidentiality/NDA ends the 19th this
month, so soon after that we should begin to see cards dripping to the
shelves like always (just noticed a trend in the past 5-7 years, could be
wrong, but I wouldn't die if I had to wait even 2 months.. or 7.. or 3
years.. the stuff will get here sooner or later.. unless the world explodes
before that ;) =

Relax dude, you don't have to pfff; obviously any intelligent person knows
what you're saying.. I wasn't commenting on that, or claiming that the cards
will be here TOMORROW!!!! Or that ATI will definitely NOT have 3.0 spec
shaders. Now, if you want to argue that fact, look up the person who posted
the RUMOR about that, then PFFFF his ass! Pfff... <- now that is for a valid
reason... heh :) 
April 15, 2004 2:47:39 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5ime2$6pp$1@phys-news1.kolumbus.fi...
>
> Ah,
>
> http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...
>
> I see from the pictures (assuming they're not fakes ;) that the card should
> fit reasonably into a "single" AGP (8x) slot, more or less.. that's nice, but
> the best part about this debacle is the two DVI ports. That is the part I
> like the most, currently using DVI + DB25 to two TFT's.
>

It says right in that same article that the new cards will take two slots,
but it is possible for vendors to come out with single-slot cards.
I find it amazing that it says Nvidia recommended that their testers
use at least a 480W PS. That's going to be a very expensive upgrade for a lot
of people. And a lot of guys who think they have a 480W+ PS will find that
their cheap PS is not up to the task.
So the Ultra is gonna start at $499, plus say another $100 for a quality PS.
Wow, $599 just to play games that probably don't need a fraction of the power
the new card can deliver. Let's hope that Doom 3 runs great on this card.
Of course by the time the game finally comes out this card will probably
cost $150. JLC
April 15, 2004 2:52:51 AM

Archived from groups: alt.comp.periphs.videocards.ati

"Les" <a@aolnot.com> wrote in message news:bydfc.647$pL6.459@newsfe1-win...
>

> They're still releasing AGP 8x versions alongside PCI-E x16. I read
> somewhere nvidia is doing something with a bridging device while ATI is
> making totally separate cards, ie R420 is AGP 8x and R423 is a proper
> PCI-E x16 card. I cannot for the life of me remember where I read it
> though, sorry. It *could* have been anandtech.
>
Right on ATI's site it says that they are the only company to be making a
"True PCI Express card". It's right on their front page.
JLC
Anonymous
April 15, 2004 3:45:59 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"JLC" <j.jc@nospam.com> wrote in news:fcjfc.142647$K91.357088@attbi_s02:
> It says right in that same article that the new cards will take two
> slots. But it is possible for vendors to come out with single slot
> cards. I find it amazing that it says that Nvidia recommended that
> their testers use at lest a 480W PS. That's going to be a very
> expensive upgrade for a lot of people. And a lot of guys that think
> they have a 480+ PS will find that there cheap PS is not up to the
> task. So the Ultra is gonna start at $499 + say another $100 for a
> quality PS, Wow $599 just to play games that probably don't need a
> fraction of the power the new card can deliver. Let's hope that Doom 3
> runs great on with this card. Of course by the time the game finally
> comes out this card will probably cost $150. JLC

I _really_ want a new system right now. I mean, I'm running dual P3-800
with Ti4200 video, and it just doesn't cut it for today's games. But the
game I know I want is Doom 3 and who knows when it will be out. When it
game. There will be the fastest, then there will be the best price /
performance cards, a little slower, a lot cheaper, etc. I'm just going to
have to wait until the game comes out if I don't want to spend too much
money and want to be really sure, making a decision based on real
benchmarks of production code and production hardware.

But I hate waiting! My current setup is killing me! I'm sure it's not
the Ti4200's fault, it's a great card, I'm just too CPU limited. But
again, it will be interesting to see which cpu / video card combo does Doom
3 the best. More waiting!

Argh!
Anonymous
April 15, 2004 6:09:53 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> use at least a 480W PS. That's going to be a very expensive upgrade for a
> lot of people.

It sure will.

> new card can deliver. Let's hope that Doom 3 runs great on this card.
> Of course by the time the game finally comes out this card will probably
> cost $150. JLC

That's a very good point; I was only speaking for myself. I don't play games
much at all; we do some Black Hawk Down and Warcraft III TFT multiplayer a
couple of times a week. For that a pretty old card would suffice. It's the
work that I need the latest features for; I won't even be paying for the
card myself anyway. :) 
Anonymous
April 15, 2004 1:20:28 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> SCSI only operates at 320Mb/s.

320MB/s. But you need a lot of drives to saturate that. Any single
IDE drive could easily do 320Mb/s :)  That's only 40MB/s.

Eric
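
The bits-versus-bytes distinction Eric is pointing out, as a one-liner:

# 8 bits to the byte: 320 megabits/s is only 40 megabytes/s.
mbit = 320
print(f"{mbit} Mb/s = {mbit / 8:.0f} MB/s")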
Anonymous
April 15, 2004 1:25:44 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

gaf1234567890@hotmail.com (G) wrote in message news:<b7eb1fbe.0404141025.24e572f4@posting.google.com>...
> [snip]
>
> I can also see the 16X slot being used in servers for other things
> besides graphics. Maybe in a server you'd want your $20k SCSI RAID
> Controller in it. Or in a cluster box a 10 gigabit NIC.

Why even mess with a 16X PCI-e slot? A 10Gbit NIC could be handled by
3-4x PCIe. All you need is a 1X slot for most of what is out there
today. I would like to see something that could handle 8GB/s
bandwidth :) 

Eric
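
A rough sketch of the lane arithmetic behind that claim, assuming
first-generation PCI Express (2.5GT/s per lane with 8b/10b encoding, so
2Gb/s of payload per lane per direction):

import math

gt_per_lane = 2.5e9        # raw line rate per lane, first-gen PCIe
payload_fraction = 8 / 10  # 8b/10b encoding: 8 data bits per 10 line bits
payload_per_lane = gt_per_lane * payload_fraction   # 2 Gb/s

nic_rate = 10e9            # a 10 gigabit NIC
print(f"lanes for 10GbE: {math.ceil(nic_rate / payload_per_lane)}")  # 5
print(f"x16 payload: {16 * payload_per_lane / 8 / 1e9:.0f} GB/s per direction")

So a 10Gb NIC actually wants about five lanes of payload, close to Eric's
3-4x guess, and nowhere near needing the x16 slot.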
April 15, 2004 9:37:06 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

DaveL wrote:

> I think Nvidia learned their lesson about that from the 5800U
> debacle. It was ATI that stayed with the old standard and took the
> lead in performance. Meanwhile, Nvidia was struggling with fab
> problems.
>
> DaveL
>
>
> "Ar Q" <ArthurQ283@hottmail.com> wrote in message
> news:nmefc.10025$A_4.6776@newsread1.news.pas.earthlink.net...
> >
> > > Isn't it time for NVidia to use a 0.09µm process? How could they put
> > > so many features in if still using a 0.13µm process?
> >
> >





Heat generation is still too much of a risk for moving to 90-nm.



If AMD moved to .09...... I'd have a new toaster. Say goodbye to
overclocking at that point.




The "features" can be expandable as much as they like.... right now
they aren't even using the entire wafer for such optimizations, only a
small section.



A lot of those optimizations are software based too... the GPU just has
to be able to support the relative ballpark of them.
Anonymous
April 15, 2004 9:37:07 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"teqguy" <teqguy@techie.com> wrote in message
news:6Lzfc.33399$hd3.22511@nwrddc03.gnilink.net...
> [snip]
>
> Heat generation is still too much of a risk for moving to 90-nm.
>
> If AMD moved to .09...... I'd have a new toaster. Say goodbye to
> overclocking at that point.
>
> The "features" can be expandable as much as they like.... right now
> they aren't even using the entire wafer for such optimizations, only a
> small section.
>

That's going to be some big honking chip when they use the whole
wafer.

Jim M

April 15, 2004 9:58:13 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

ewitte@hotmail.com (Eric Witte) wrote in message news:<3e738765.0404150825.6b7c0fa7@posting.google.com>...
> [snip]
>
> Why even mess with a 16X PCI-e slot? A 10Gbit NIC could be handled by
> 3-4x PCIe. All you need is a 1X slot for most of what is out there
> today. I would like to see something that could handle 8GB/s
> bandwidth :) 
>
> Eric


Absolutely. The 16X comment was just an example. It's way more likely
that a server would have 1@16x, 3@4x, and 4@1x (or something like
that). In fact I don't see the number and/or speed of expansion slots
being a big "server vs desktop" differentiator after PCIe catches on.
I've even heard that external bus expansion housings are possible.

Anyway, PCI Express looks like it has tons of flexibility. Not that
PCI-X couldn't have lasted for a while longer. But one bus to get rid
of the four we have now *AND* increase headroom for the future *AND*
add new features that AGP lacks *AND* reduce the wire/pin count at the
same time is a Good Thing.
Anonymous
April 16, 2004 12:55:42 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Wed, 14 Apr 2004 19:13:30 GMT, "teqguy" <teqguy@techie.com> wrote:


>Most MPEG encoding is processor dependent... I wish developers would
>start making applications that let the graphics card do video encoding,
>instead of dumping the work on the processor.

I'm pretty sure I read somewhere that the (new & improved) Prescotty
processor has been given a special hard-wired instruction set which is
dedicated to encoding video, so that should speed things up somewhat.

I remember reading an article over a year ago which had Intel giving a
demo of a future release CPU which apparently was running 3 full screen
HD videos simultaneously rotating in a 3d cube. The processor prototype
was not specified, but it may have been a Tejas as it was rated at 5GHz.

Ricardo Delazy
Anonymous
April 16, 2004 12:55:43 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Thu, 15 Apr 2004 20:55:42 +1000, Ricardo Delazy <celcius@ozemail.com.au> wrote:

>On Wed, 14 Apr 2004 19:13:30 GMT, "teqguy" <teqguy@techie.com> wrote:
>
>
>>Most MPEG encoding is processor dependent... I wish developers would
>>start making applications that let the graphics card do video encoding,
>>instead of dumping the work on the processor.
>
>I'm pretty sure I read somewhere that the (new & improved) Prescotty
>processor has been given a special hard-wired instruction set which is
>dedicated to encoding video, so that should speed things up somewhat.
>
>I remember reading an article over a year ago which had Intel giving a
>demo of a future release CPU which apparently was running 3 full screen
>HD videos simultaneously rotating in a 3d cube. The processor prototype
>was not specified, but it may have been a Tejas as it was rated at 5GHz.


SSE3 won't make Intel CPUs as fast as dedicated DSPs for video encoding. It
can be an improvement over SSE and SSE2, but it's still not fast enough.
They should have embedded a full DSP (or more than one) inside CPUs to
achieve the same performance. SSE subsets are still too much tied to the
general purpose x86 architecture, and their efficiency is poor compared to
dedicated DSPs. A $40-50 floating point DSP can be 3x faster than any SSE3
capable CPU at MPEG2/MPEG4 encoding.

If it's true that Nvidia has designed the NV40 as a full DSP, then it's just
a matter of time and SDK availability to let programmers access the NV40 DSP
through DirectX or other dedicated APIs before known codecs such as DivX
would be able to take advantage of GPU power. The only problem is that
Nvidia needs a mainstream set of GPUs derived from this one, with MPEG
encoding/decoding, on the market ASAP to set a standard, before ATI releases
its own DSP GPUs with MPEG encoding/decoding capability.

If the MPEG encoding/decoding in NV40 were fixed in hardware (hardwired),
then it would be a pretty low quality implementation, and I really hope the
claims that the GPU is a full DSP are true, so that programmers with DSP
experience could upload their own filter code onto the GPU DSP to perform
their own MPEG video encoding. I also hope that the SDK to access DSP
features and reprogram MPEG video encoding will be free, so that even
non-commercial, freeware encoders could be available in the future to
further exploit GPU capabilities.
Anonymous
April 16, 2004 8:55:58 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

On Wed, 14 Apr 2004 17:17:07 GMT, "Ar Q" <ArthurQ283@hottmail.com>
wrote:

>
>"NV55" <nvidianv55@mail.com> wrote in message
>news:1c4cde47.0404130951.5eccb20@posting.google.com...
>> the following is ALL quote:
>>
>>
>> http://frankenstein.evilgeniuslabs.com/~pszuch/nv40/new...
>>
>>
>> Tuesday, April 13, 2004
>>
>> NVIDIA GeForce 6800 GPU family officially announced - Cormac @ 17:00
>> It's time to officially introduce the new GPU generation from NVIDIA
>> and shed the light on its architecture and features.
>>
>> So, the GeForce 6800 GPU family, codenamed NV40, today officially
>> entered the distribution stage. Initially it will include two chips,
>> GeForce 6800 Ultra and GeForce 6800, with the same architecture.
>>
>>
>> These are the key innovations introduced in NVIDIA's novelties:
>>
>> *16-pipeline superscalar architecture with 6 vertex modules, DDR3
>> support and *real 32-bit pipelines
>> *PCI Express x16, AGP 8x support
>> *222 million transistors
>> *400MHz core clock
>> *Chips made by IBM
>> *0.13µm process
>>
>
>Isn't it time for NVidia to use a 0.09µm process? How could they put so
>many features in if still using a 0.13µm process?
>

The NV40 die is .75 inches square and all the features are in there.
The part will have been stress-tested by a vector-test program to
completely exercise all of its functions before it is ever supplied to
a 3rd party for incorporation into the 6800 video card.

Future generations of this GPU will be on a smaller process. The
current NV40 chip is made by IBM. IBM is working on a 65nm (0.065µm)
process that AMD will use when it is sufficiently mature. No doubt nVidia
will also be one of the first users of the process.
existing die area by a factor of 4 and also drop the power by about a
factor of 6. Will probably take a couple of years to get there...
nVidia will not make the mistake of ever using an immature process
again.

John Lewis
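
The factor-of-4 area claim is just the linear shrink squared. A quick
check (idealized scaling from 130nm to 65nm, and reading the quoted die
size as 0.75in on a side):

# Ideal area scaling: area goes with the square of the feature-size ratio.
old_nm, new_nm = 130, 65
print(f"area shrink: {(old_nm / new_nm) ** 2:.0f}x")   # 4x, as claimed

side_mm = 0.75 * 25.4          # the quoted ".75 inches square" die
print(f"die area today: ~{side_mm ** 2:.0f} mm^2")     # ~363 mm^2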



Anonymous
April 19, 2004 10:19:50 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5k3hr$hn1$1@phys-news1.kolumbus.fi...
> > pfft. You don't even know what the ATI offering is as yet, much less
> > are you able to buy a 6800 until well into next month.
>
> No, I do not. I wrote that the rumor is that ATI wouldn't have 3.0 level
> shaders.. I was commenting on a rumor; if that isn't true then the
> situation is naturally entirely different. The confidentiality/NDA ends
> the 19th this month, so soon after that we should begin to see cards
> dripping to the shelves like always (just noticed a trend in the past 5-7
> years, could be wrong, but I wouldn't die if I had to wait even 2 months..
> or 7.. or 3 years.. the stuff will get here sooner or later.. unless the
> world explodes before that ;) =
>

[Snipped]

19th? Where did you get that date from?

--
Derek
Anonymous
April 19, 2004 2:32:52 PM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> 19th? Where did you get that date from?

"Confidential until April 19th 2004" stamped over slides, etc. material you
find from here and there.
Anonymous
April 21, 2004 1:55:33 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c5vvb4$99l$1@phys-news1.kolumbus.fi...
> > 19th? Where did you get that date from?
>
> "Confidential until April 19th 2004" stamped over slides, etc. material
you
> find from here and there.
>
>

Can't say I noticed much yesterday. :) 

--
Derek
Anonymous
April 21, 2004 4:31:21 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

> Can't say I noticed much yesterday. :) 

Sorry, 14th.. which you obviously noticed..

http://mbnet.fi/elixir/NV40/
Anonymous
April 21, 2004 4:31:22 AM

Archived from groups: alt.comp.periphs.videocards.nvidia,alt.comp.periphs.videocards.ati,comp.sys.ibm.pc.hardware.video

"joe smith" <rapu@ra73727uashduashfh.org> wrote in message
news:c644rf$6nh$1@phys-news1.kolumbus.fi...
> > Can't say I noticed much yesterday. :) 
>
> Sorry, 14th.. which you obviously noticed..
>
> http://mbnet.fi/elixir/NV40/
>
>

Actually I thought you were talking about the R420. :) 

--
Derek