Will ATinVidia use cHT?

  • Yes, it's a great technology.

    Votes: 7 28.0%
  • No, it means increased costs and needs Intel support.

    Votes: 7 28.0%
  • Maybe, if the enthusiasts want it.

    Votes: 11 44.0%

  • Total voters
    25

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
It is known by some that nVidia wanted to build a plug-in (socketed) graphics chip. Could coherent HyperTransport (cHT) be leveraged so that PCIe is only needed for the monitor output? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected over cHT, since on a dual-socket board each socket usually has a connection to 4 DIMM slots - up to 16 GB (DDR2 will come in 4 GB sizes soon).

I know this is kind of a graphics post, but cHT is based on a CPU bus, so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.
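For a sense of scale, here is a back-of-the-envelope comparison of the raw link bandwidths involved. This is a sketch, not a spec: it assumes a Socket-940-era 16-bit HyperTransport link at a 1 GHz clock and PCIe 1.x at 250 MB/s per lane per direction, and it ignores protocol overhead.

```python
# Rough peak link bandwidth: 16-bit HyperTransport vs. PCIe 1.x x16.
# Both figures are per direction; the clock and lane rates are
# era-typical assumptions, not numbers quoted in this thread.

ht_clock_hz = 1.0e9                 # 1 GHz HT clock (Socket-940 era)
ht_width_bytes = 2                  # 16-bit link
ht_gbs = ht_clock_hz * 2 * ht_width_bytes / 1e9   # x2 for DDR signaling

pcie_lanes = 16
pcie_gbs = pcie_lanes * 250e6 / 1e9               # 250 MB/s per lane

print(f"HT 16-bit @ 1 GHz: {ht_gbs:.1f} GB/s per direction")    # 4.0
print(f"PCIe 1.x x16:      {pcie_gbs:.1f} GB/s per direction")  # 4.0
```

On raw numbers the two links are comparable; the interesting part of cHT would be cache coherence and direct access to the CPU's memory pool, not extra link bandwidth.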
 

spud

Distinguished
Feb 17, 2001
3,406
0
20,780
BaronMatrix said:
It is known by some that nVidia wanted to build a plug-in (socketed) graphics chip. Could coherent HyperTransport (cHT) be leveraged so that PCIe is only needed for the monitor output? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected over cHT, since on a dual-socket board each socket usually has a connection to 4 DIMM slots - up to 16 GB (DDR2 will come in 4 GB sizes soon).

I know this is kind of a graphics post, but cHT is based on a CPU bus, so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.

Could you be ever so kind to fill me in on what you are talking about?
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
spud said:
BaronMatrix said:
It is known by some that nVidia wanted to build a plug-in (socketed) graphics chip. Could coherent HyperTransport (cHT) be leveraged so that PCIe is only needed for the monitor output? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected over cHT, since on a dual-socket board each socket usually has a connection to 4 DIMM slots - up to 16 GB (DDR2 will come in 4 GB sizes soon).

I know this is kind of a graphics post, but cHT is based on a CPU bus, so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.

Could you be ever so kind to fill me in on what you are talking about?

Word.
 

Action_Man

Splendid
Jan 7, 2004
3,857
0
22,780
Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall it seems like a stupid idea to me.
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
Action_Man said:
Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall it seems like a stupid idea to me.

DDR3 is coming eventually, a-hole. I know you're used to just telling people they don't know anything, but I think it's an interesting path that GPU makers could POSSIBLY take, especially with nVidia already having mentioned it.

You think the slinky and KB will fit? Give it a try. As a favor to me.

Thx.
 

dvdpiddy

Splendid
Feb 3, 2006
4,764
0
22,780
BaronMatrix said:
Action_Man said:
Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall it seems like a stupid idea to me.

DDR3 is coming eventually, a-hole. I know you're used to just telling people they don't know anything, but I think it's an interesting path that GPU makers could POSSIBLY take, especially with nVidia already having mentioned it.

You think the slinky and KB will fit? Give it a try. As a favor to me.

Thx.

DDR3 is here; it's just that Intel hasn't made a memory controller for it yet.
 

dvdpiddy

Splendid
Feb 3, 2006
4,764
0
22,780
BaronMatrix said:
DDR3 is coming eventually, a-hole.

So is GDDR4, dipsh!t, which will extend the lead even further. *sigh* Guys, will you think for a second? DDR3 and GDDR4 are here already, OK? It's just that DDR3 is still in Intel's labs because they're working on a memory controller for it, and GDDR4 is coming to graphics cards first, next year.
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
BaronMatrix said:
DDR3 is coming eventually, a-hole.

dvdpiddy said:
So is GDDR4, dipsh!t, which will extend the lead even further.

I got a little excited. Your little doll makes me crazy. Anyway, since coProc makers are seeing the ability to do 300x the work of an Opteron, why wouldn't graphics work? Server owners will sacrifice speed for amount of RAM to get 4 GB of PC2100. And since speed gives bandwidth, somewhat, DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

I still wonder at the possibility. Even if you are hung up on Intel-only developments - unless it's anti-AMD. The initiative has already caught the eye of Cray, so coProcs are not so silly - or was it stupid?

Does anyone have any rational input, or just trolling fanboy noise?
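That bandwidth claim can be checked with simple arithmetic. A minimal sketch, assuming dual-channel DDR2-1066 with standard 64-bit channels (peak rate = transfers/s x bus width in bytes):

```python
# Peak theoretical bandwidth of dual-channel DDR2-1066.
transfers_per_sec = 1066e6   # DDR2-1066 effective transfer rate
bytes_per_channel = 8        # each channel is 64 bits wide
channels = 2                 # dual channel

bw_gbs = transfers_per_sec * bytes_per_channel * channels / 1e9
print(f"Dual-channel DDR2-1066 peak: {bw_gbs:.1f} GB/s")  # ~17.1 GB/s
```

So even dual-channel DDR2-1066 tops out around 17 GB/s - short of the 25 GB/s figure quoted above, and well short of contemporary GDDR3 cards (see the next post).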
 

ltcommander_data

Distinguished
Dec 16, 2004
997
0
18,980
BaronMatrix said:
DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900 XTX has its GDDR3 running at 1550 MHz with 49.6 GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4 GB of RAM. In fact, most graphics cards see no benefit from 512 MB of RAM compared to 256 MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory, they will be no better than integrated graphics competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this, simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.
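The 49.6 GB/s figure falls straight out of the data rate and bus width. A quick sketch, assuming the X1900 XTX's 256-bit memory bus and, for the X1600 XT mentioned later in the thread, a 128-bit bus at 1380 MHz effective (clock figures as commonly quoted, not verified here):

```python
def peak_bw_gbs(effective_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers/s times bus width in bytes."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

# X1900 XTX: GDDR3 at 1550 MHz effective on a 256-bit bus
print(f"X1900 XTX: {peak_bw_gbs(1550, 256):.1f} GB/s")  # 49.6 GB/s
# X1600 XT: GDDR3 at 1380 MHz effective on a 128-bit bus
print(f"X1600 XT:  {peak_bw_gbs(1380, 128):.1f} GB/s")  # ~22.1 GB/s
```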
 

Action_Man

Splendid
Jan 7, 2004
3,857
0
22,780
Your idiocy amazes me.

BaronMatrix said:
And since speed gives bandwidth, somewhat, DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

The X1600 XT has 22 GB/s of bandwidth, and as ltcommander_data pointed out, the X1900 XTX has 49.6 GB/s.

BaronMatrix said:
Even if you are hung up on Intel-only developments - unless it's anti-AMD.

Intel has something new and interesting coming out, AMD doesn't. It's the way it's presented that pisses me off.

BaronMatrix said:
The initiative has already caught the eye of Cray, so coProcs are not so silly - or was it stupid?

:roll: CPU != GPU, moron.

BaronMatrix said:
Does anyone have any rational input, or just trolling fanboy noise?

Christ, talk about the pot calling the kettle black, or whatever that clichéd phrase is.

BaronMatrix, you're an AMD fanboy and you don't know sh!t.
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
ltcommander_data said:
BaronMatrix said:
DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900 XTX has its GDDR3 running at 1550 MHz with 49.6 GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4 GB of RAM. In fact, most graphics cards see no benefit from 512 MB of RAM compared to 256 MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory, they will be no better than integrated graphics competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this, simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.

So I take it you DON'T think this is a possibility. You guys act like you're chip designers at Intel. It was a simple observation, just like some of my others that came true even though the usual suspects disagreed. By using the 940 socket, they may be able to hang the RAM off of it. The coProcs I've seen are not as large as the slot, so maybe there would be enough room for 512 MB.

I still wonder at the possibility.
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
Action_Man said:
Your idiocy amazes me.

BaronMatrix said:
And since speed gives bandwidth, somewhat, DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

The X1600 XT has 22 GB/s of bandwidth, and as ltcommander_data pointed out, the X1900 XTX has 49.6 GB/s.

BaronMatrix said:
Even if you are hung up on Intel-only developments - unless it's anti-AMD.

Intel has something new and interesting coming out, AMD doesn't. It's the way it's presented that pisses me off.

BaronMatrix said:
The initiative has already caught the eye of Cray, so coProcs are not so silly - or was it stupid?

:roll: CPU != GPU, moron.

BaronMatrix said:
Does anyone have any rational input, or just trolling fanboy noise?

Christ, talk about the pot calling the kettle black, or whatever that clichéd phrase is.

BaronMatrix, you're an AMD fanboy and you don't know sh!t.

Listen up. coProcs are used to increase CPU power. The only thing that I see as an issue is connecting the video-out port. They are designed to be faster than even GPUs in order to do scientific research. One of them supposedly will get 250 GFLOPS, which is MUCH faster than a video card. Maybe you just need a hobby.
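For what it's worth, the 250 GFLOPS claim can be sanity-checked against peak shader arithmetic. A sketch using the figures ATI quoted for the X1900 XTX at the time (48 pixel shaders, 12 FLOPs per shader per clock, 650 MHz core - marketing peak rates, not measured throughput):

```python
# Peak programmable shader throughput: units x FLOPs/unit/clock x clock.
pixel_shaders = 48
flops_per_shader_per_clock = 12   # ATI's quoted per-shader FLOP count
core_clock_ghz = 0.65             # 650 MHz core clock

gflops = pixel_shaders * flops_per_shader_per_clock * core_clock_ghz
print(f"X1900 XTX peak shader throughput: ~{gflops:.0f} GFLOPS")  # ~374
```

By that peak-rate yardstick, a 250 GFLOPS co-processor would not be "MUCH faster" than a contemporary high-end GPU, though the two target very different workloads.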
 

BaronMatrix

Splendid
Dec 14, 2005
6,655
0
25,790
ltcommander_data said:
BaronMatrix said:
DDR2-1066+ would give sufficient bandwidth, since I don't think games really need 25 GB/s+ of bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900 XTX has its GDDR3 running at 1550 MHz with 49.6 GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4 GB of RAM. In fact, most graphics cards see no benefit from 512 MB of RAM compared to 256 MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory, they will be no better than integrated graphics competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this, simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.

Even system RAM affects some games - BF2, for example. Storing 5x the amount of textures seems like a good place to start. These coProcs will have their own RAM banks, so it won't actually be shared in the "classic sense." This is just a thought. Since some of you would rather call names, I'll remember you in the future.

I guess that none of my other opinions went anywhere either.....NOT.

How about this for a post.........
Intel has finally made a chip as fast as the Alpha 21264. AMD has now eclipsed the 21264, though the 21264 only reached ~1200 MHz while x86 clocks are above 2 GHz.
 

Action_Man

Splendid
Jan 7, 2004
3,857
0
22,780
BaronMatrix said:
Listen up. coProcs are used to increase CPU power.

Again, CPU != GPU.

BaronMatrix said:
They are designed to be faster than even GPUs in order to do scientific research.

Again, completely different task.

BaronMatrix said:
One of them supposedly will get 250 GFLOPS, which is MUCH faster than a video card.

Sigh. The X1900 does far more. Link.

BaronMatrix said:
Maybe you just need a hobby.

I do, shooting you down.

BaronMatrix said:
Storing 5x the amount of textures seems like a good place to start.

Textures don't use that much bandwidth.

BaronMatrix said:
You guys act like you're chip designers at Intel.

No, we're just not idiots, and we know something.

BaronMatrix said:
Just like some of my others that came true even though the usual suspects disagreed.

Example?

It's funny how you constantly get shot down but completely ignore it and say we're wrong. :roll:
 

WINDSHEAR

Distinguished
Jan 25, 2006
626
0
18,980
I know that more RAM helps BF2... but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense. Things are fine the way they are, with PCI-E.
 

WINDSHEAR

Distinguished
Jan 25, 2006
626
0
18,980
Yeah, but BF2 is so poorly written.

WINDSHEAR said:
but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense.

Summed up nicely.

Yes... but wouldn't games be faster if they put more RAM on the card itself (like 1 GB - I don't see a need for more than that yet), and wrote drivers and apps to take advantage of it? Would there be a potential use for that?

I don't know much about games/programming.
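On that question, a rough sense of scale helps. A sketch of an on-card memory budget - the resolution, AA level, texture count, and compression figures below are illustrative assumptions, not measurements from any real game:

```python
# Rough VRAM budget at 1600x1200 with 4x multisample AA (illustrative).
width, height = 1600, 1200
bytes_per_pixel = 4                 # 32-bit color
aa_samples = 4

color = width * height * bytes_per_pixel * aa_samples   # multisampled color
depth = width * height * 4 * aa_samples                 # 32-bit depth/stencil
front = width * height * bytes_per_pixel                # resolved front buffer
framebuffer_mb = (color + depth + front) / 2**20
print(f"Framebuffer with 4x AA: ~{framebuffer_mb:.0f} MB")   # ~66 MB

# Hypothetical texture set: 500 textures at 512x512, DXT1-compressed
# (0.5 bytes per texel), plus ~1/3 extra for mipmaps.
textures_mb = 500 * 512 * 512 * 0.5 * (4 / 3) / 2**20
print(f"Texture set: ~{textures_mb:.0f} MB")                 # ~83 MB
```

Even with generous assumptions, the total lands in the low hundreds of MB, which fits ltcommander_data's point that 512 MB already shows little benefit over 256 MB except at extreme settings.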