Will ATinVidia use cHT?


Will ATinVidia use cHT?

Total: 25 votes

  • Yes, it's a great technology. (28%)
  • No, it means increased costs and needs Intel support. (28%)
  • Maybe, if the enthusiasts want it. (44%)
May 4, 2006 11:50:55 PM

It is known by some that nVidia wanted to have a plug-in graphics chip. Could cHT be leveraged so that PCIEx is only used for a monitor port? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected to cHT, since on a dual-socket board each socket usually connects to 4 DIMM slots - up to 16 GB (DDR2 will come in 4GB sizes soon).

I know this is kind of a graphics post but cHT is based on a CPU bus so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.
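For a rough sense of the numbers behind the 16 GB claim above, here is a back-of-the-envelope sketch in C. The dual-channel DDR2-800 configuration is my assumption, not something stated in the post.

/* Per-socket capacity and plain DDR2 bandwidth for the "GPU in a cHT socket" idea.
 * Assumptions (mine): 4 DIMM slots per socket, 4 GB modules,
 * dual-channel 64-bit DDR2-800. */
#include <stdio.h>

int main(void) {
    int slots = 4, dimm_gb = 4;
    int channels = 2;
    double bytes_per_transfer = 8;      /* one 64-bit channel */
    double transfers_per_sec = 800e6;   /* DDR2-800 */

    printf("capacity per socket: %d GB\n", slots * dimm_gb);             /* 16 GB     */
    printf("memory bandwidth:    %.1f GB/s\n",
           channels * bytes_per_transfer * transfers_per_sec / 1e9);     /* 12.8 GB/s */
    return 0;
}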


May 4, 2006 11:51:58 PM

Quote:
It is known by some that nVidia wanted to have a plug-in graphics chip. Could cHT be leveraged so that PCIEx is only used for a monitor port? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected to cHT, since on a dual-socket board each socket usually connects to 4 DIMM slots - up to 16 GB (DDR2 will come in 4GB sizes soon).

I know this is kind of a graphics post but cHT is based on a CPU bus so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.


Could you be ever so kind as to fill me in on what you are talking about?
May 4, 2006 11:52:41 PM

Quote:
It is known by some that nVidia wanted to have a plug-in graphics chip. Could cHT be leveraged so that PCIEx is only used for a monitor port? It was something I was thinking about after the announcements of several co-processors for ASIC/FPGA-type software.
It seems that graphics could be afforded even more bandwidth AND RAM if connected to cHT, since on a dual-socket board each socket usually connects to 4 DIMM slots - up to 16 GB (DDR2 will come in 4GB sizes soon).

I know this is kind of a graphics post but cHT is based on a CPU bus so what do people think?

That would make for seriously powerful CAD stations and maybe make things like PhysX even faster.


Could you be ever so kind as to fill me in on what you are talking about?

Word.
May 4, 2006 11:54:25 PM

Your poll is missing "No because that'd be a stupid idea"
May 4, 2006 11:56:14 PM

Last year nVidia was talking about moving the GPU to a mobo socket.

nVidia wants mobo socket


It would increase bandwidth - especially with HT3. Could nVidia use it for Quadro, or ATi for FireGL?
May 4, 2006 11:57:16 PM

Quote:
Your poll is missing "No because that'd be a stupid idea"



Well, thx for your input. Maybe you can find something "COHERENT" - pun intended - to say.
May 4, 2006 11:57:24 PM

Quote:
Last year nVidia was talking about moving the GPU to a mobo socket.

nVidia wants mobo socket


It would increase bandwidth - especially with HT3. Could nVidia use it for Quadro, or ATi for FireGL?


Oh, very cool. I was unaware of this; thanks for the heads-up.
May 5, 2006 12:07:54 AM

Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall seems like a stupid idea to me.
May 5, 2006 12:11:56 AM

Quote:
Your poll is missing "No because that'd be a stupid idea"

Give him a slinky; he at least deserves that.
May 5, 2006 1:21:45 AM

Quote:
Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall seems like a stupid idea to me.


DDR3 is coming eventually a-hole. I know you're used to just telling people they don't know anything, but I think it's an interesting path that GPU makers could POSSIBLY take, especially with nVidia already having mentioned it.

You think the slinky and KB will fit? Give it a try. As a favor to me.

Thx.
May 5, 2006 1:23:44 AM

Quote:
Sockets are worse than being directly on the board. Also, why would they use cHT? DDR2 has way less bandwidth than GDDR3, and they don't need GBs of memory, which is also expensive.

Overall seems like a stupid idea to me.


DDR3 is coming eventually a-hole. I know you're used to just telling people they don't know anything, but I think it's an interesting path that GPU makers could POSSIBLY take, especially with nVidia already having mentioned it.

You think the slinky and KB will fit? Give it a try. As a favor to me.

Thx.

DDR3 is here; it's just that Intel hasn't made a mem controller for it yet.
May 5, 2006 1:25:09 AM

Quote:
DDR3 is coming eventually a-hole.


So is GDDR4 dipsh!t which will extend the lead even further.
May 5, 2006 1:29:45 AM

Quote:
DDR3 is coming eventually a-hole.


So is GDDR4 dipsh!t which will extend the lead even further.

*sigh* Guys, will you think for a second? DDR3 and GDDR4 are here already, OK? It's just that DDR3 is still in Intel's lab because they're working on a memory controller for it, and GDDR4 is coming next year, to graphics cards first.
May 5, 2006 1:32:37 AM

GDDR4 comes out this year.
May 5, 2006 3:51:21 AM

Quote:
DDR3 is coming eventually a-hole.


So is GDDR4 dipsh!t which will extend the lead even further.


I got a little excited. Your little doll makes me crazy. Anyway, since coProc makers are seeing the ability to do 300x the work of an Opteron, why wouldn't graphics work? Server owners will sacrifice speed for amount of RAM to get 4GB PC2100. And since speed gives bandwidth somewhat, DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.

I still wonder at the possibility. Even if you are hung up on Intel only developments - unless it's anti-AMD. The initiative has already caught the eye of Cray so coProcs are not so silly - or was it stupid?

Does anyone have any rational input or just trolling fanboy noise?
May 5, 2006 4:01:20 AM

Quote:
DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900XTX has its GDDR3 running at 1550MHz with 49.6GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4GB of RAM. In fact, most graphics cards see no benefit from 512MB of RAM compared to 256MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory they will be no better than integrated graphics cards competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.
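For reference, that 49.6 GB/s figure falls straight out of the memory clock multiplied by the bus width. A quick sketch in C; the 256-bit bus width is my assumption, since the post only gives the effective clock.

/* Graphics memory bandwidth = effective transfer rate x bus width.
 * X1900XTX as quoted above: GDDR3 at 1550 MT/s effective;
 * the 256-bit bus width is assumed, not stated in the post. */
#include <stdio.h>

int main(void) {
    double transfers_per_sec = 1550e6;
    double bus_width_bits = 256;
    printf("%.1f GB/s\n", transfers_per_sec * bus_width_bits / 8 / 1e9);  /* 49.6 */
    return 0;
}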
May 5, 2006 4:27:16 AM

Your idiocy amazes me.

Quote:
And since speed gives bandwidth somewhat, DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.


The x1600xt has 22GB/s of bandwidth and, as ltcommander data pointed out, the X1900XTX has 49.6GB/s of bandwidth.

Quote:
Even if you are hung up on Intel only developments - unless it's anti-AMD.


Intel has something new and interesting coming out, AMD doesn't. It's the way it's presented that pisses me off.

Quote:
The initiative has already caught the eye of Cray so coProcs are not so silly - or was it stupid?


:roll: CPU != GPU, moron.

Quote:
Does anyone have any rational input or just trolling fanboy noise?


Christ, talk about the pot calling the kettle black or whatever that lame cliche phrase is.

BaronMatrix you're an AMD fanboy and you don't know sh!t.
May 5, 2006 1:35:15 PM

Quote:
DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900XTX has its GDDR3 running at 1550MHz with 49.6GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4GB of RAM. In fact, most graphics cards see no benefit from 512MB of RAM compared to 256MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory they will be no better than integrated graphics cards competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.


So I take it you DON'T think this is a possibility. You guys act like you're chip designers at Intel. It was a simple observation. Just like some of my others that came true even though the usual suspects disagreed. By using the 940 socket, they may be able to hang the RAM off of it. The coProcs I've seen are not as large as the slot so maybe there would be enough room for 512MB.

I still wonder at the possibility.
May 5, 2006 1:38:54 PM

Quote:
Your idiocy amazes me.

And since speed gives bandwidth somewhat, DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.


The x1600xt has 22GB/s of bandwidth and, as ltcommander data pointed out, the X1900XTX has 49.6GB/s of bandwidth.

Quote:
Even if you are hung up on Intel only developments - unless it's anti-AMD.


Intel has something new and interesting coming out, AMD doesn't. It's the way it's presented that pisses me off.

Quote:
The initiative has already caught the eye of Cray so coProcs are not so silly - or was it stupid?


:roll: CPU != GPU, moron.

Quote:
Does anyone have any rational input or just trolling fanboy noise?


Christ, talk about the pot calling the kettle black or whatever that lame cliche phrase is.

BaronMatrix you're an AMD fanboy and you don't know sh!t.


Listen up. coProcs are used to increase CPU power. The only thing that I see as an issue is connecting the video out port. They are designed to be faster than even GPUs in order to do scientific research. One of them supposedly will get 250 GFLOPS, which is MUCH faster than a video card. Maybe you just need a hobby.
May 5, 2006 1:46:05 PM

Quote:
DDR2 1066+ would give sufficient bandwidth since I don't think games really need 25GB+ bandwidth.

Are you kidding me? Take a look at the latest graphics cards. For instance, the X1900XTX has its GDDR3 running at 1550MHz with 49.6GB/s of memory bandwidth, and it's still bandwidth-starved considering it has 48 pixel shaders. The point is that a graphics card will see absolutely no benefit from 4GB of RAM. In fact, most graphics cards see no benefit from 512MB of RAM compared to 256MB except at the highest resolutions and quality settings. If graphics cards become co-processors and share system memory they will be no better than integrated graphics cards competing over system resources. A dedicated graphics card is far more appropriate. Besides, ATI and nVidia will never do this simply because that would eliminate all the manufacturers of add-in cards once expansion cards are no longer needed.



Even system RAM affects some games - BF2. Storing 5x the amount of textures seems like a good place to start. These coProcs will have their own RAM banks, so it won't actually be shared as in the "classic sense." This is just a thought. Since some of you would rather call names, I'll remember you in the future.

I guess that none of my other opinions went anywhere either.....NOT.




How about this for a post.........
Intel has finally made a chip as fast as the Alpha 21264. AMD has now eclipsed the 21264, though it only reached ~1200MHz while x86 clocks are above 2GHz.
May 5, 2006 8:47:29 PM

Quote:

Listen up. coProcs are used to increase CPU power.


Again, CPU != GPU.

Quote:
They are designed to be faster than even GPUs in order to do scientific research.


Again, completely different task.

Quote:
One of them supposedly will get 250 GFLOPS, which is MUCH faster than a video card.


Sigh. The x1900 does far more. Link.

Quote:
Maybe you just need a hobby.


I do, shooting you down.

Quote:
Storing 5x the amount of textures seems like a good place to start.


Textures don't use that much bandwidth.

Quote:
You guys act like you're chip designers at Intel.


No, we're just not idiots and we know something.

Quote:
Just like some of my others that came true even though the usual suspects disagreed.


Example?

It's funny how you constantly get shot down but completely ignore it and say we're wrong. :roll:
May 5, 2006 9:00:53 PM

I know that more RAM helps BF2, but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense. Things are fine the way they are, with PCI-E.
May 5, 2006 9:02:22 PM

You create the most insipid threads... by far.
May 5, 2006 9:10:12 PM

Yeah but BF2 is so poorly written.

Quote:
but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense.


Summed up nicely.
May 5, 2006 9:18:53 PM

Quote:
Yeah but BF2 is so poorly written.

but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense.


Summed up nicely.

Yes, but wouldn't games be faster if they put more RAM on the card itself (like 1GB - I don't see a need for more than that yet), and wrote drivers and apps to take advantage of that? Would there be a potential use for that?

I don't know much about games/programming.
May 5, 2006 9:34:40 PM

Quote:

Listen up. coProcs are used to increase CPU power.


Again, CPU != GPU.

Quote:
They are designed to be faster than even GPUs in order to do scientific research.


Again, completely different task.

Quote:
One of them supposedly will get 250 GFLOPS, which is MUCH faster than a video card.


Sigh. The x1900 does far more. Link.

Quote:
Maybe you just need a hobby.


I do, shooting you down.

Quote:
Storing 5x the amount of textures seems like a good place to start.


Textures don't use that much bandwidth.

Quote:
You guys act like you're chip designers at Intel.


No, we're just not idiots and we know something.

Quote:
Just like some of my others that came true even though the usual suspects disagreed.


Example?

It's funny how you constantly get shot down but completely ignore it and say we're wrong. :roll:


Who said don't use a GPU? I asked if it's a possibility that nVidia may want to use the cHT socket method. The point was that cHT sockets CAN provide the bandwidth of PCIEx without using the bus, and they were talking about it. This would let them do it. At least for workstations with two sockets.



As far as what I know, I told you idiots that Intel would bleed money until next year, that AMD would release Opteron first, etc.


Again I'm sorry if you have no life and feel that you need to call names but what can I say.


The question still stands.
May 5, 2006 9:37:37 PM

Quote:
You create the most insipid threads... by far.


Well, you're not the boss of me. EAT ME and don't come on MY threads.

Did I say EAT ME? It was a simple question. No non-A-Hole answer? Go bother someone who ACTUALLY likes you.
May 5, 2006 9:47:41 PM

Quote:
and don't come on MY threads.


And I'm sure if you had the capability, you'd just delete all the posts you disagree with, wouldn't ya?

Quote:
Go bother someone who ACTUALLY likes you.


LOL. Ok...
May 5, 2006 9:54:30 PM

Quote:
and don't come on MY threads.


And I'm sure if you had the capability, you'd just delete all the posts you disagree with, wouldn't ya?

Quote:
Go bother someone who ACTUALLY likes you.


LOL. Ok...


No, I wouldn't. I'd follow you around from post to post and tell really BAD jokes. Maybe if someone had said something like, "No, I don't think so because....." but the name calling started. Sounds like some of us need some sex or something.
May 5, 2006 10:01:43 PM

This is totally off topic, but how many jobs have you had in the past 6 months? It seems like you represent a different company in your thread every 4 weeks.

Are you the one who claims to be a Dr.? Or is that someone else?
May 5, 2006 10:25:37 PM

Quote:
This is totally off topic, but how many jobs have you had in the past 6 months? It seems like you represent a different company in your thread every 4 weeks.

Are you the one who claims to be a Dr.? Or is that someone else?




It depends on what your opinion is about GPUs and CPU sockets. At least I CAN put up ANY company. SmartSoft is my baby, but I consult for other companies. No, I claim to be SuperGeniusGuy.
May 5, 2006 10:32:46 PM

OK, well, it would be cool if you could have RAM for the chip on the motherboard that is like DDR3 and dedicated, so you aren't sharing system resources.
May 5, 2006 10:38:01 PM

Quote:
I asked if it's a possibility that nVidia may want to use the cHT socket method.


And like I've said many times now, stupid idea!

Quote:
As far as what I know, I told you idiots that Intel would bleed money until next year


What does that have to do with anything? Also, Intel still makes more profit than AMD does in revenue.

Quote:
that AMD would release Opteron first, etc.


You never said anything like that.

Quote:
The question still stands.


I've already answered. Why don't you answer my points? Because you can't. STFU you stupid noob.
May 5, 2006 11:51:22 PM

Quote:
And like I've said many times now, stupid idea!


That's not a reply or justification, it's a rant.

Quote:
What does that have to do with anything? Also, Intel still makes more profit than AMD does in revenue.



It was another case where my forward-thinking attitude predicted CPU happenings.



Quote:
You never said anything like that.



I have just spent a few minutes looking for my quote that said they would bring the Opterons forward and release them first. Lately it's been said that X2 would be first, but Turion X2 was ORIGINALLY scheduled for 4 days from now.

Anyway, why do you insist on getting off-topic for your interpretation of a rational question? Cray feels strongly enough about coProcs for Opteron to make them standard - Google it yourself - and nVidia talked about wanting to have a mobo socket, so it is a legitimate question. Will GPUs look at cHT?

Imagine this scenario:


nVidia creates a reference design that allows for overclocking and addition of RAM or plug-in BIOS. Maybe they could use nForce (BIOS) to reroute RAM if a coProc GPU is detected - the separate banks could use a NUMA-derivative to "lock" data to the socket closest to the GPU.

A DMA channel could be used to link to a DVI or HDMI video port (maybe even the new DisplayPort 1.0 from VESA).

I still think it's something that may be CONTEMPLATED by nVidia for workstations. I could be wrong. Ideas come from postulation, not capitulation.
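As a side note, the "lock data to the socket closest to the GPU" part of that scenario is essentially what NUMA-aware allocation already does on multi-socket Opteron boards. Below is a minimal sketch using libnuma (compile with -lnuma); the node number is hypothetical, and nothing here reflects an actual nVidia design.

/* Minimal sketch of node-local allocation with libnuma.
 * The node number is hypothetical; a real design would have to
 * discover which NUMA node sits closest to the GPU's socket. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }
    size_t len = 64 * 1024 * 1024;                 /* 64 MB buffer               */
    int gpu_node = 1;                              /* hypothetical GPU-side node */
    void *buf = numa_alloc_onnode(len, gpu_node);  /* pages placed on that node  */
    if (buf == NULL)
        return 1;
    memset(buf, 0, len);                           /* touch pages so they are committed */
    numa_free(buf, len);
    return 0;
}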
May 6, 2006 12:57:44 AM

Quote:
That's not a reply or justification, it's a rant.


Nice selective quoting.

Quote:
Cray feels strongly enough about coProcs for Opteron to make them standard


Again, CPU != GPU. Will you acknowledge this? Or just continue to be an idiot?

Quote:
and nVidia talked about wanting to have a mobo socket, so it is a legitimate question.


That doesn't mean it's going to happen.

Quote:
Will GPUs look at cHT?


NO!

Quote:
nVidia creates a reference design that allows for overclocking and addition of RAM or plug-in BIOS. Maybe they could use nForce (BIOS) to reroute RAM if a coProc GPU is detected - the separate banks could use a NUMA-derivative to "lock" data to the socket closest to the GPU.


Imagine having way less bandwidth, wow that'd be cool.

Again you've completely ignored every point raised. Will you answer any or will you continue to be an idiot?
May 6, 2006 1:44:30 AM

Quote:
That's not a reply or justification, it's a rant.


Nice selective quoting.

Quote:
Cray feels strongly enough about coProcs for Opteron to make them standard


Again, CPU != GPU. Will you acknowledge this? Or just continue to be an idiot?

Quote:
and nVidia talked about wanting to have a mobo socket, so it is a legitimate question.


That doesn't mean it's going to happen.

Quote:
Will GPUs look at cHT?


NO!

Quote:
nVidia creates a reference design that allows for overclocking and addition of RAM or plug-in BIOS. Maybe they could use nForce (BIOS) to reroute RAM if a coProc GPU is detected - the separate banks could use a NUMA-derivative to "lock" data to the socket closest to the GPU.


Imagine having way less bandwidth, wow that'd be cool.

Again you've completely ignored every point raised. Will you answer any or will you continue to be an idiot?


My 7800 GT lets me play every game I like at 1280. I didn't state anything about DDR2 vs DDR3 bandwidth. AMD managed to have something for Conroe to be faster than with DDR vs DDR2, so implementation would be the key to using the available bandwidth efficiently. Conroe's efficiency comes from CACHE, not DDR2 or DDR3 or GDDR3 or GDDR4. So perhaps cache on the GPU would make the difference like L3 does for CPUs.

I believe HT has 21 GB/s and PCIEx has 8.4 GB/s. The speed reflected by the link posted is INTERNAL, not the transfer available. That WOULD NOT change if the GPU were optimized to use larger banks of DDR2-1066. Maybe this could be the FIRST implementation of DDR3. Maybe it would be a good use for FB-DIMMs.
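For what it's worth, the 21 GB/s figure roughly matches one direction of a 32-bit HyperTransport 3.0 link, and the 8.4 GB/s figure is in the neighborhood of a PCIe 1.x x16 slot counting both directions. A rough sketch; the link widths, clocks, and encoding overhead are my assumptions.

/* Rough per-direction link bandwidths behind the figures quoted above.
 * Assumptions (mine): 32-bit HyperTransport 3.0 link at 2.6 GHz DDR,
 * and PCIe 1.x x16 with 8b/10b encoding. */
#include <stdio.h>

int main(void) {
    /* HT 3.0: 2.6 GHz clock, double data rate, 32-bit (4-byte) link */
    double ht_gbs = 2.6e9 * 2 * 4 / 1e9;                    /* ~20.8 GB/s each way */

    /* PCIe 1.x: 2.5 GT/s per lane, 8b/10b -> 250 MB/s per lane, 16 lanes */
    double pcie_gbs = 2.5e9 * (8.0 / 10.0) / 8 * 16 / 1e9;  /* ~4.0 GB/s each way  */

    printf("HT3 x32:  %.1f GB/s per direction\n", ht_gbs);
    printf("PCIe x16: %.1f GB/s per direction\n", pcie_gbs);
    return 0;
}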

Just because you flare and flame and ask for what seems like doctorate thesis material for a simple ponderance doesn't mean I don't feel that it is possible.

GIVE UP! The emphatic no you posted sounds more like an intimidation tactic than an actual argument.

AGAIN I SAID WHO SAID GET RID OF THE GPU? An HT link is faster than PCI Ex. It is a proven technique. IT WAS A SIMPLE QUESTION. Hostility breeds hostility.
May 6, 2006 2:11:47 AM

Quote:
Conroe's efficiency comes from CACHE


Wrong, just wrong.

Quote:
not DDR2 or DDR3 or GDDR3 or GDDR4


No sh!t.

Quote:
So perhaps cache on the GPU would make the difference like L3 does for CPUs.


GPUs DO have caches.

Quote:
I believe HT has 21 GB/s and PCIEx has 8.4 GB/s.


See, this is the problem. 21 GB/s is less than what an x1600XT has. Do you understand this at all?

Quote:
GIVE UP! The emphatic no you posted sounds more like an intimidation tactic than an actual argument.


You are truely an idiot.

Quote:
AGAIN I SAID WHO SAID GET RID OF THE GPU?


You did, just then.

Quote:
An HT link is faster than PCI Ex.


So? PCIe isn't a bottleneck.

Quote:
It is a proven technique.


And PCIe isn't?

Ok here it is in point form.

* HTT would not provide enough bandwidth.
* GDDR3 and GDDR4 are faster than DDR2 and DDR3.
* Sockets are inferior to being directly on the board.
* PCIe is not a bottleneck.
* It'd be more of a pain in the ass for devs.
* There are no advantages with using a socket.

Do you understand this? You also didn't answer any of my questions.
May 6, 2006 3:08:08 AM

Quote:
I know that more RAM helps BF2, but I'm not sure I even understand what BaronMatrix is trying to get at. It doesn't make much sense. Things are fine the way they are, with PCI-E.



Do I need to use "special-bus" language? Simple. nVidia spoke about mobo sockets. AMD released cHT for coProcs. Several specialty procs have come out and been employed with 300x the power of an Opteron - RAM type notwithstanding.

1+1 = 2 for a legitimate usage of the protocol for specialized CAD workstations or medical imaging. I still wouldn't be surprised if the subject comes up.
May 6, 2006 3:24:51 AM

Quote:
Wrong, just wrong.


Anandtech's comparison of Core vs. K8 would beg to differ.

Quote:
GPUs DO have caches



So they could add more with a larger socket? Take away the cache and add GDDR5 and see what happens (CPUs go splat).


Quote:
See, this is the problem. 21 GB/s is less than what an x1600XT has. Do you understand this at all?



And of course 8.4 GB/s of PCIEx is more than 21 GB/s.


Quote:
You are truely an idiot.


At least I know that in America it's "truly."


Quote:
You did, just then.



I said nVidia was looking at getting rid of the board and using a mobo socket for their GPU chip last year, and even posted a link. It would be nearly impossible for them to get an extra socket on even an enthusiast mobo, but cHT may be a way for them to achieve it.


Maybe I'm an idiot. Maybe I'm a pretty good designer/developer of SW. Maybe I'm a licensed MechEngr. Again, we'll see. I'm not an idiot if they don't look at it and not a genius if they do.


I'm just a guy with a PC and an opinion. Don't like it, don't sign my paycheck. Wait, I don't work for you.
May 6, 2006 3:31:25 AM

Quote:
Anandtech's comparison of Core vs. K8 would beg to differ.


No it wouldn't.

Quote:
So they could add more with a larger socket?


WTF!? What does a larger socket (whatever the hell that is) have to do with the amount of cache a GPU can have?

Quote:
Take away the cache and add GDDR5 and see what happens (CPUs go splat).


GDDR5 exists?

Quote:
And of course 8.4 GB/s of PCIEx is more than 21 GB/s.


And do you realise that PCIe isn't a bottleneck?

Quote:
At least I know that in America it's "truly."


Wow, picking out the typo.

Quote:
Maybe I'm an idiot.


Now we're getting somewhere!

Quote:
Maybe I'm a pretty good designer/developer of SW.


I find this hard to believe.

Quote:
Maybe I'm a licensed MechEngr.


Maybe I'm the president.

Quote:
I'm just a guy with a PC and an opinion. Don't like it, don't sign my paycheck. Wait, I don't work for you.


:roll:

So have you realised that a CPU != GPU yet?
May 6, 2006 4:03:02 AM

The only thing I can think this would be cool in is tiny PCs where you don't have a massive card sticking out. Or laptops.
May 6, 2006 4:12:50 AM

Laptops have their GPUs directly on the board.
May 6, 2006 4:20:30 AM

Not if it's an nVidia one with an MXM port.
May 6, 2006 5:02:54 AM

Anand's analysis clearly states that Conroe's cache speed helps a lot, not its use of DDR2. You posted the link so.....


I already knew that a GPU does more functions related to NURBS (non-uniform rational B-splines) and complex multi-dimensional matrices (sounds like a specialty chip to me). The biggest difference is that GPUs have to construct the actual output to the screen. It all translates to floating-point ops so.........

Unfortunately, I work for a large consulting firm as a programmer and have at least one finished article on www.msd2d.com.........


More space = More space.......

Imagine a notebook GPU. It does have PCIEx but looks nothing like a desktop part PCB wise.


GDDR5 is a sarcastic overblowing of non-cache memory. The result would still be the same - a no-cache CPU/GPU sucks.

I was not implying that PCIEx wasn't good enough, merely that nVidia/ATi COULD POSSIBLY use this since at least nVidia was talking about a GPU socket on board. Tell them it's not possible and they're fools for letting it get out that they were pursuing it since the greatest chip/infrastructure designer (some guy on a forum) says it's a dumb idea in the first place.
May 6, 2006 5:38:01 AM

Quote:
Anand's analysis clearly states that Conroe's cache speed helps a lot, not its use of DDR2. You posted the link so.....


Would you highlight the part that says the efficiency of Conroe is purely from the L2 cache? :roll:

Quote:
Imagine a notebook GPU. It does have PCIEx but looks nothing like a desktop part PCB wise.


And like I said before they're soldered on.

Quote:
I was not implying that PCIEx wasn't good enough, merely that nVidia/ATi COULD POSSIBLY use this since at least nVidia was talking about a GPU socket on board.


If they do it, it'll be for marketing purposes.

Quote:
Tell them it's not possible and they're fools for letting it get out that they were pursuing it since the greatest chip/infrastructure designer (some guy on a forum) says it's a dumb idea in the first place.


If it were such an awesome idea, they'd have done it by now.

And you've still ignored pretty much every point.

Once again, you're an idiot.
May 6, 2006 5:54:33 AM

Quote:
Anand's analysis clearly states that Conroe's cache speed helps a lot, not its use of DDR2. You posted the link so.....


I'm gonna give you one to read for homework. Now check out the part where they reduced the pipeline to 14 stages and increased the instructions issued per cycle to 4 - that's the magic of Conroe in a nutshell. There are several other reasons it performs better than K8/P4, like executing SSE instructions in 1 cycle instead of 2.
May 6, 2006 5:58:23 AM

OK. Get off my thread. My feelings won't be hurt.
May 6, 2006 5:59:17 AM

Damn you with your....facts. :p 
May 6, 2006 6:03:36 AM

Sometimes it's just hard to argue with the truth :twisted:
May 6, 2006 6:39:07 AM

I'd say it's a huge possibility, and ATI already has the tech to make it work. The 360's GFX chip is basically DX10 from what I understand, and even exceeds it in some ways. The 360 has to share slow mem compared to today's GFX RAM, but can do more than anything that has been released to this point on the PC (and yes, I know overhead and all that BS slow a PC down, but we haven't even begun to see what a fully programmable GPU will do for us). Its secret? The daughter die with 10 MB of cache with insane bandwidth. That's the future right there... You could throw that chip onto a HyperTransport bus, and the CPU and fully programmable GPU would have a jolly good time working together, which has already been foretold with DX10's new physics API and GPUs getting to the point of being able to do anything programmers want.