
AMD's plan?.....Glueless Chipsets and modular pixel pipes??

July 25, 2006 12:18:46 AM

As everyone knows, I was VERY skeptical about this merger because AMD has such a close relationship with both companies.

But after I saw a diagram they posted on Anand I could understand the purpose of the merger.

I also thought about how nVidia and ATi are actually in different AMD market segments in terms of the chipset. nVidia enjoys wksta/small server chipsets along with high end SLI.
ATi is in the desktop gaming market but their chipsets are only getting good enough OVERALL to compete.

At this point it is very important that AMD get a glueless chipset ready for K8L without relying on a "separate" company. They will need to scale to the 1000s and they would be better off doing the work in-house.

The diagram reminded me of the modular architecture AMD promoted at Analyst Day. By using cHT between the GPU and CPU it would be possible to have the pixel/vertex on the GPU with a high speed cache while the cores on the CPU handle the normal duties plus the feeding of pixel data to the pipes.

Of course they could still then put the full blown monster in a socket next to the CPU for enthusiasts and pro content people. The 4 socket mobo could then have 2 GPUs and 2 K8Ls.

This is somewhat based on conjecture but a lot on the various news AMD has let out about their plans. The good thing is that even though they haven't merged yet, ATi can still work on AMD chipsets. The engineers may still even be able to share info.

At any rate, with nVidia making the SLI chipsets and ATi settling into specialty GPU parts and high end video, there should be little change in how many of either are sold compared to today.

This is a bad thing for Dell/Intel though, because SLI only works on nVidia chipsets, and both the G80 and R600 IGPs will be Premium ready while Intel is still working on GMA X3000.

Another thing this may improve is video on VMs. If they can partition the GPU the way they have the CPU, it will be possible to have entire dev environments on one K8L 8 socket machine with 16 VMs and 4 GPU modules. People can then log on and use a 32-bit video card instead of the emulated S3 there is now. nVidia could then license this under Torrenza and perhaps even improve it with their GPU knowledge.

Also I can see them making Sun chipsets since they will be sharing a socket. The possibilities are exciting to say the least.

CPU/GPU diagram

I'm sure MS would love that.
July 25, 2006 12:26:53 AM

I'll say it again: AMD and ATI are going to work together to turn ATI's R600 technology into a general purpose processing monster in the 2008-2011 timeframe to compete with the same type of technology which Intel and Sun are developing. Massively parallel, with relatively simple cores but massive processing power. CPUs like this will cure cancer.
July 25, 2006 12:38:20 AM

Quote:
I'll say it again: AMD and ATI are going to work together to turn ATI's R600 technology into a general purpose processing monster in the 2008-2011 timeframe to compete with the same type of technology which Intel and Sun are developing. Massively parallel, with relatively simple cores but massive processing power. CPUs like this will cure cancer.



No, this is about glueless chipsets in house for K8L. They could work with ATi if it was about GPUs.
July 25, 2006 1:19:31 AM

Sry BM I meant to add to the conversation, not disagree with you. :wink:
July 25, 2006 1:22:23 AM

I'm excited to see what they have in store. Quad core: 2 CPUs + 2 GPUs in 1 chip *dreams* :roll:
July 25, 2006 1:30:03 AM

That's all fine and dandy, but why the hell would you want your R600 using system ram across the mobo? It's gonna take something other than that to feed texture info into a 48 pipe pixel crunching monster.

I know! They should keep the CPU in a flip chip design, then put the GPU on its OWN board with its OWN ram that's actually fast enough for such a thing. OOOh OOOh. . . they could also make a special slot that's uber high speed in case the GPU has to access the system ram! Genius, I tell you! Brilliant!


Oh wait, I'm not the first to think of that, am I? :lol: 
July 25, 2006 1:47:38 AM

Quote:
I'm excited to see what they have in store. Quad core: 2 CPUs + 2 GPUs in 1 chip *dreams* :roll:
We're a long way from that.
Quote:
That's all fine and dandy, but why the hell would you want your R600 using system ram across the mobo?
To increase latency, duh! :p 
July 25, 2006 1:54:44 AM

Quote:
Sry BM I meant to add to the conversation, not disagree with you. :wink:


You have an opinion, don't you? I don't expect you to agree with or be on the same page as me. I enjoy hearing opinions.
No offense meant or taken.
July 25, 2006 2:04:27 AM

Quote:
I'm excited to see what they have in store. Quad core: 2 CPUs + 2 GPUs in 1 chip *dreams* :roll:


They will have to gut the GPU to do that. It will take a year at least. The idea behind integration is a balance of power between the execution units. With DirectConnect, it is possible to share the same parallel OR serial bus so by moving execution units to the CPU it would then be possible to make pixel chips that only process DX10 in HW.

But then I have been known to be faster than the times.


MS did already make the desktop 3D so now X64 desktops NEED more power. HD will need more throughput, games will need more throughput, etc. AMD is following a very ingenious track towards a unified CPU platform based on open standards.

OEMs won't be able to resist.
July 25, 2006 2:07:55 AM

I'm always in tears reading your posts. This is one of your funniest yet.
July 25, 2006 2:18:50 AM

Quote:
That's all fine and dandy, but why the hell would you want your R600 using system ram across the mobo? It's gonna take something other than that to feed texture info into a 48 pipe pixel crunching monster.

I know! They should keep the CPU in a flip chip design, then put the GPU on its OWN board with its OWN ram that's actually fast enough for such a thing. OOOh OOOh. . . they could also make a special slot that's uber high speed in case the GPU has to access the system ram! Genius, I tell you! Brilliant!


Oh wait, I'm not the first to think of that, am I? :lol: 


That's where DC and DDR3 come in. With a low latency high bandwidth connection the only question is processing power. GPU tech pumps 50+ GB/s across its internal bus; while games do not need that much, they need low latency number crunching.

That's why CPUs run at 3GHz and GPUs run at 600MHz. GPUs are throughput bound while CPUs are IPC bound. HT3 will have 25.6GB/s PER 16bit link. Connecting 3 in parallel will give you 76.8GB/s of bandwidth.

Dividing CPU functionality and GPU functionality (in the case of games) will mean that careful caching will give you 50+GB/s of pixel bandwidth and 25+GB/s of CPU/IO bandwidth.

ATi has experience in memory controllers for GDDR4 so CPUs could actually use DDR3 AND DDR4 through two different ports. HT would allow GPU RAM AND CPU RAM.
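The link arithmetic above can be sanity-checked with a quick sketch (the clock figure is an assumption chosen to reproduce the 25.6 GB/s number, not a spec quote):

```python
# Rough HT3 link-bandwidth arithmetic (assumed figures, not spec quotes).
# A 16-bit HyperTransport link moves 2 bytes per transfer, double-pumped
# (2 transfers per clock), in each of two directions.
clock_hz = 3.2e9            # assumed link clock that yields the 25.6 figure
transfers_per_clock = 2     # DDR signaling
width_bytes = 2             # 16-bit link

per_direction = clock_hz * transfers_per_clock * width_bytes / 1e9
aggregate = per_direction * 2                      # both directions combined
print(f"{aggregate:.1f} GB/s per 16-bit link")     # 25.6
print(f"{aggregate * 3:.1f} GB/s across 3 links")  # 76.8
```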


Imagine glueless functionality for some new special FX movie in two to three years. I bet nVidia will shine with a socketed GPU.

I'm actually impressed with what can be done.
July 25, 2006 2:21:12 AM

Not this nonsense again.
July 25, 2006 2:38:57 AM

Quote:
That's where DC and DDR3 come in. With a low latency high bandwidth connection the only question is processing power. GPU tech pumps 50+ GB/s across its internal bus; while games do not need that much, they need low latency number crunching.

That's not really true.

http://techreport.com/onearticle.x/10403

Quote:
Today's fastest graphics cards already have substantially more memory bandwidth than AMD's processors, so the reason to move a GPU into the CPU's cache coherency loop would presumably be to reduce latency. Yet real-time graphics performance is really dependent on bandwidth rather than latency, since memory access latency can be hidden fairly easily. GPUs hide latency by keeping many pixels in flight at once, using custom caching algorithms, and attempting to exploit graphics’ characteristic locality when accessing RAM. A Torrenza-style CPU-GPU mating would address a problem that modern GPU designs have largely solved.

Latency isn't a problem for GPUs, it's always been raw bandwidth. In that case, an HTX implementation just isn't beneficial. You could link up 3 HT3 links, but that's highly unlikely: it would require all motherboard makers to run triple traces between the CPU socket and the GPU socket, which is costly and unlikely due to space constraints. Besides, even with triple HT3 links a CPU's IMC can't provide enough bandwidth to fill all three links anyway, unless you have a nice 6 channel FB-DIMM implementation, which isn't coming to desktop anytime soon. Again, bandwidth is key.

What I can agree on though, is that HTX GPUs would be a great replacement for IGPs. They would probably be slightly more expensive, but noticeably faster. HTX is viable for low-end graphics cards because those don't need as much bandwidth.
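For scale, the IMC side of that argument can be put in numbers (theoretical peak figures; the configurations are illustrative examples):

```python
# Peak DRAM bandwidth = channels * effective transfer rate * bus width.
# Figures are theoretical peaks for illustrative configurations.
def peak_gb_per_s(channels, megatransfers, bytes_wide=8):
    return channels * megatransfers * 1e6 * bytes_wide / 1e9

ddr2_800_dual = peak_gb_per_s(2, 800)    # common desktop IMC of the era
print(ddr2_800_dual, "GB/s")             # 12.8 GB/s
# Even a hypothetical 6-channel DDR2-667 setup only reaches ~32 GB/s,
# still short of what three HT3 links could carry.
print(round(peak_gb_per_s(6, 667), 1), "GB/s")
```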
July 25, 2006 2:51:33 AM

To continue the lesson, GPUs only run @ 600MHz not because "they don't need to go faster" but because they have to keep parallel coherency across 48 cores on the die. That is no easy task.

Please stop making up this crap and spouting it off. People come here to learn, and make informed decisions on their future hardware purchases. Not only are you confusing them with facts that run perpendicular to reality, but you are also sullying up the good name of Tom's.
July 25, 2006 12:17:27 PM

A hybrid GPU/CPU chip might work as a replacement for today's integrated graphics chips, but it won't kill stand-alone graphics cards.

HTX would be a viable replacement for PCI-e. Lower latency, direct low level access, and vastly superior bandwidth compared to PCI-e.

Sidenote: HTX is a slot, and it uses the same mechanical connector as PCI-e 16x.
July 25, 2006 12:47:34 PM

At first glance I saw "AMD's plan?.....Clueless Chipsets and modular pixel pipes??"

This (A hybrid GPU/CPU chip?) may be good for super low budget / mid performance parts...
July 25, 2006 1:41:28 PM

I'm beginning to believe that all this attention towards the AMD/ATI merger has all the Intelebies' and Conroe fanboys' tights in a tizzy because it's taking the fizz out of Conroe's release hype.

Think about it: an existing company (Intel) releasing a new product (Core2) compared to a merger of two technology leaders (AMD/ATI). Who's the media gonna pay attention to? Whose stocks are gonna catch the interest of Wall Street?
July 25, 2006 3:21:16 PM

Let's not forget about the AMD/Rambus licensing deal earlier this year as well. That gives AMD and ATI products access to XDR2 and FlexIO (think of your FSB running at 8 GHz) among other things. I don't know what they have planned, but the next 5 years should be very exciting.
July 25, 2006 5:31:10 PM

Quote:
Let's not forget about the AMD/Rambus licensing deal earlier this year as well. That gives AMD and ATI products access to XDR2 and FlexIO (think of your FSB running at 8 GHz)

Interesting.

XDR2 would provide enough bandwidth to feed both the CPU and the GPU.

Perhaps having a GPU in a socket is not a bad idea after all.
July 25, 2006 8:03:43 PM

Quote:
That's where DC and DDR3 come in. With a low latency high bandwidth connection the only question is processing power. GPU tech pumps 50+ GB/s across its internal bus; while games do not need that much, they need low latency number crunching.

That's not really true.

http://techreport.com/onearticle.x/10403

Quote:
Today's fastest graphics cards already have substantially more memory bandwidth than AMD's processors, so the reason to move a GPU into the CPU's cache coherency loop would presumably be to reduce latency. Yet real-time graphics performance is really dependent on bandwidth rather than latency, since memory access latency can be hidden fairly easily. GPUs hide latency by keeping many pixels in flight at once, using custom caching algorithms, and attempting to exploit graphics’ characteristic locality when accessing RAM. A Torrenza-style CPU-GPU mating would address a problem that modern GPU designs have largely solved.

Latency isn't a problem for GPUs, it's always been raw bandwidth. In that case, an HTX implementation just isn't beneficial. You could link up 3 HT3 links, but that's highly unlikely: it would require all motherboard makers to run triple traces between the CPU socket and the GPU socket, which is costly and unlikely due to space constraints. Besides, even with triple HT3 links a CPU's IMC can't provide enough bandwidth to fill all three links anyway, unless you have a nice 6 channel FB-DIMM implementation, which isn't coming to desktop anytime soon. Again, bandwidth is key.

What I can agree on though, is that HTX GPUs would be a great replacement for IGPs. They would probably be slightly more expensive, but noticeably faster. HTX is viable for low-end graphics cards because those don't need as much bandwidth.


That's what I meant by throughput bound. They need things to move through fast with little processing. Whereas a CPU doesn't need things to move as fast but they need to process more things.

Pixel texturing is not as complex as say FP math on 10 digits.

Of course this is not something I think will happen next year, and it will more than likely be on FireGL first since those can be more expensive.


The article you posted mentions how GPUs carefully cache and keep pixels in flight to hide latency. A high speed cache connected to a DDR2 dual channel at 1600MHz could definitely hide a lot of latency and allow the GPU to run full speed.

But do you think they are serious about partitioning GPUs into smaller units? The diagram on Anand's site made me think about it. If they do, it will mean one base "pixel pipe group" and additional units add more pixel processing.

Hopefully you understand now what I meant by throughput bound and processing bound.
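The latency-hiding point can be made concrete with Little's law (the bandwidth and latency numbers here are assumed purely for illustration):

```python
# Little's law: outstanding work = throughput * latency. To keep a
# 50 GB/s internal bus saturated despite ~100 ns DRAM latency, a GPU
# needs this many bytes of requests in flight at any moment.
bandwidth_bytes_s = 50e9    # assumed internal bus throughput
latency_s = 100e-9          # assumed memory access latency
in_flight = bandwidth_bytes_s * latency_s
print(int(in_flight), "bytes in flight")       # 5000
# At 4 bytes per 32-bit pixel, that's over a thousand pixels outstanding,
# which is exactly the "many pixels in flight" trick described above.
print(int(in_flight / 4), "pixels")            # 1250
```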
July 25, 2006 8:22:10 PM

Quote:
At first glance I saw "AMD's plan?.....Clueless Chipsets and modular pixel pipes??"

This (A hybrid GPU/CPU chip?) may be good for super low budget / mid performance parts...



Because of HT, this doesn't have to be on the same die. The link I posted shows a diagram of different CPU/GPU configs. Someone else mentioned XDR, which has bandwidth so insane it could be used just for the GPU but would power both with 3 HT links.

Careful DDR3 caching would have the same benefit. Either way you look at it, be prepared for socketed GPUs possibly by the end of next year.

Expect a Cell-like config.
July 25, 2006 8:48:08 PM

I can see that AMD is desperate. It might just help them though.
July 25, 2006 9:21:07 PM

I hope AMD/ATI make like 32/64 tiny cores which can be used either for in-game logic or graphics duties :p 
Hopefully this will be like 10 times better than a 7950 is now. lol
July 25, 2006 9:28:45 PM

:lol:  Give up BaronBS.
July 25, 2006 9:50:45 PM

Quote:
To continue the lesson, GPUs only run @ 600MHz not because "they don't need to go faster" but because they have to keep parallel coherency across 48 cores on the die. That is no easy task.

Please stop making up this crap and spouting it off. People come here to learn, and make informed decisions on their future hardware purchases. Not only are you confusing them with facts that run perpendicular to reality, but you are also sullying up the good name of Tom's.


if you talked to me like that on the streets......

You guys need to really watch your mouths. They don't go faster because throughput is the key, not IPC. Most of the work for GPUs is just painting pixels. There are very few FP calcs necessary to do that. Sure 48 pipes will get hot, but the limit is throughput. That's why early video cards couldn't play HD.

They can't process the sheer number of pixels to display. The pixels don't need floating point processing; there are just A WHOLE LOT OF THEM.


I said AMD would be in Dell desktops. I said AMD would release K8L in 07. I said 5000+ would be the hottest thing on the market. I said the X2 3600+ is meant to replace the single core Sempron.

True, I didn't think the merger would happen, but nobody's perfect.

I make good predictions. If you disagree you can. If you have an opinion, state it. If you want to call people names find someone else.
July 25, 2006 9:54:53 PM

Quote:
:lol:  Give up BaronBS.

I think you're in denial. AMD invented the x86 core logic. They invented both the north and southbridge. AMD invented boolean logic and DSP. With Al Gore's direction and supervision, AMD invented the internet.
AMD is most known for its invention of sliced bread - Check the Inquirer for a link to its patent. :lol: 
July 25, 2006 10:57:12 PM

Quote:
if you talked to me like that on the streets......


Why, because your gangsta crew would kick our asses? :lol:  What's your gangsta name, btw?

Quote:
I said AMD would be in Dell desktops.


Where?

Quote:
I said AMD would release K8L in 07.


For servers, yeah. No one disagreed with that one, moron.

Quote:
I said 5000+ would be the hottest thing on the market.


So hot you can't even buy it! :roll:

Quote:
I said the X2 3600+ is meant to replace the single core Sempron.


And this would be happening where?

Quote:
You guys need to really watch your mouths. They don't go faster because throughput is the key, not IPC. Most of the work for GPUs is just painting pixels. There are very few FP calcs necessary to do that. Sure 48 pipes will get hot, but the limit is throughput. That's why early video cards couldn't play HD.

They can't process the sheer number of pixels to display. The pixels don't need floating point processing; there are just A WHOLE LOT OF THEM.


:lol: 
July 25, 2006 11:44:52 PM

Quote:
I'll say it again: AMD and ATI are going to work together to turn ATI's R600 technology into a general purpose processing monster in the 2008-2011 timeframe to compete with the same type of technology which Intel and Sun are developing. Massively parallel, with relatively simple cores but massive processing power. CPUs like this will cure cancer.


I'm all in favor of that. The heat output and the upgrade costs of GPUs are a nuisance. I'm glad it's ATI and AMD instead of Nvidia and AMD because I like AVIVO and All in Wonder cards. I like my multimedia cards to do games well but provide all the bells and whistles for video recording, playback and editing.

I wonder what Nvidia will do in the long run? Intel has their plans to copy Sun, and will probably improve their onboard graphics only to eventually integrate it into the CPU. It would be ideal if graphics subsystems cooperated in a future PC, i.e. ATI in an 8 core AMD processor, Nvidia in a PCIe slot, and all working together using a standard that replaces Crossfire and SLI. That way, everyone wins and everyone has choice.
July 25, 2006 11:54:04 PM

Quote:
I'll say it again: AMD and ATI are going to work together to turn ATI's R600 technology into a general purpose processing monster in the 2008-2011 timeframe to compete with the same type of technology which Intel and Sun are developing. Massively parallel, with relatively simple cores but massive processing power. CPUs like this will cure cancer.


Nah stem cell research will.
July 25, 2006 11:59:16 PM

Quote:
A hybrid GPU/CPU chip might work as a replacement for today's integrated graphics chips, but it won't kill stand-alone graphics cards.

HTX would be a viable replacement for PCI-e. Lower latency, direct low level access, and vastly superior bandwidth compared to PCI-e.

Sidenote: HTX is a slot, and it uses the same mechanical connector as PCI-e 16x.


Extra side note.

The first chips with onboard FPU weren't as fast as good ones with a coProc.

Also, HT is very complicated and there are different flavors of it. The connector separates them sort of, but Direct Connect is basically an HT flavor that is located within some "speed of light vs. resistance tolerance distance." (guess?)

Direct Connect is only used for two chips and not peripherals. HTX uses the same protocol but is tuned to not depend on proximity, hence the larger surface area and more interconnects.

This won't happen for 18 months minimum, but it is an exciting idea. If you look at the link in the first post you'll see it was AMD's idea.


Cell-like.
July 26, 2006 10:13:44 AM

Quote:
To continue the lesson, GPUs only run @ 600MHz not because "they don't need to go faster" but because they have to keep parallel coherency across 48 cores on the die. That is no easy task.

Please stop making up this crap and spouting it off. People come here to learn, and make informed decisions on their future hardware purchases. Not only are you confusing them with facts that run perpendicular to reality, but you are also sullying up the good name of Tom's.


if you talked to me like that on the streets......



You'd do what? Pee on me?
July 26, 2006 12:49:37 PM

Quote:
You have an opinion, don't you? I enjoy hearing opinions.


I think you saw the word glueless for the first time today and are desperately keen to ram it into any new technology speculation thread as many times as possible.

Quote:
if you talked to me like that on the streets......


I wouldn't talk to you. I think it is rude to interrupt someone at work.
July 26, 2006 12:52:19 PM

He would biatch slap you... his definition is to hold his hand up and say, "you didn't just say that, talk to the hand!" in the whiniest girly voice evar.
July 26, 2006 12:58:41 PM

Quote:
What's your gangsta name, btw?
Dr Ruiz....homey!! :wink:
July 26, 2006 4:34:39 PM

Quote:
Also, HT is very complicated and there are different flavors of it. The connector separates them sort of, but Direct Connect is basically an HT flavor that is located within some "speed of light vs. resistance tolerance distance." (guess?)

Direct Connect is only used for two chips and not peripherals. HTX uses the same protocol but is tuned to not depend on proximity, hence the larger surface area and more interconnects.

Your point?
July 26, 2006 6:52:58 PM

From the horse's mouth, so to speak.
One reason for AMD's acquisition of ATI:

http://www.digitimes.com/systems/a20060726VL203.html

Discussing the AMD-ATI deal:
Q&A with AMD EVP Henri Richard, part two
Chris Hall,
DigiTimes.com, Taipei

Q: You're absolutely confident, then, at this stage, that you can now start to compete with Intel in terms of quality of integrated graphics?
A: Oh I think we are definitely capable of offering higher quality graphics than they can, in the integrated space.
Q: And obviously here we're talking more about a consumer segment than specialized gaming platforms?
A: Right.
Q: So have you, at AMD, looked carefully at what particular percentage or slice of the processor market you could hope to gain by offering integrated graphics?
A: Well, our objectives are not changing. We've said clearly that our target is to get a 30% revenue share of the industry by 2008. That remains our target. Based on the second quarter of this year, we seem to be well on track to reach that in the server space. We've got good momentum as well in the desktop space. We are a fairly new entrant in the mobile space, and I think that particularly in the mobility space, the ATI acquisition is going to help us accelerate in the market.
Another thing I could mention that's interesting for us is the depth and quality of industry relationships that ATI brings to the company, including relationships with OEMs that don't do business with AMD. So I see all these elements as opportunities for us to accelerate our penetration of the mobile market and to continue our aggressive new-customer acquisition strategy.
July 26, 2006 9:51:15 PM

Quote:
Most of the work for GPUs is just painting pixels. There are very few FP calcs necessary to do that. Sure 48 pipes will get hot, but the limit is throughput. That's why early video cards couldn't play HD.

They can't process the sheer number of pixels to display. The pixels don't need floating point processing; there are just A WHOLE LOT OF THEM.

Can someone please confirm or refute the above statements? I'm interested in learning about this stuff but am not sure what portions can be trusted as factual (or if any can). I was under the impression that HD (highest right now is 1080p = 1920x1080) didn't play well on past generation cards simply because they didn't support H.264 decoding, meaning that the CPU had to do all the work. The number of pixels alone shouldn't be a problem because games can definitely be played at higher resolutions than that (for example, an X800XT could run Q3A @ 2048x1536 but it won't improve HD playback). Or is BM actually referring to much older cards, like GeForce 256 era and older?

In any case, I was under the impression that graphics rendering was very FP intensive. I thought that was the reason why AMD created 3DNow! FP enhancements back in the day. I also thought that was why the academic community uses the X1800 gpu for FP math, because it set records for FP crunching when it came out. Or am I confused? Someone please clarify for me.



@beerandcandy - Granted that it's not a new idea, but if they pull it off and get to market first then it certainly is innovative.
July 26, 2006 10:18:39 PM

Quote:
Most of the work for GPUs is just painting pixels. There are very few FP calcs necessary to do that. Sure 48 pipes will get hot, but the limit is throughput. That's why early video cards couldn't play HD.

They can't process the sheer number of pixels to display. The pixels don't need floating point processing; there are just A WHOLE LOT OF THEM.

Can someone please confirm or refute the above statements? I'm interested in learning about this stuff but am not sure what portions can be trusted as factual (or if any can). I was under the impression that HD (highest right now is 1080p = 1920x1080) didn't play well on past generation cards simply because they didn't support H.264 decoding, meaning that the CPU had to do all the work. The number of pixels alone shouldn't be a problem because games can definitely be played at higher resolutions than that (for example, an X800XT could run Q3A @ 2048x1536 but it won't improve HD playback). Or is BM actually referring to much older cards, like GeForce 256 era and older?

In any case, I was under the impression that graphics rendering was very FP intensive. I thought that was the reason why AMD created 3DNow! FP enhancements back in the day. I also thought that was why the academic community uses the X1800 gpu for FP math, because it set records for FP crunching when it came out. Or am I confused? Someone please clarify for me.



@beerandcandy - Granted that it's not a new idea, but if they pull it off and get to market first then it certainly is innovative.

That's what I want to know, I'm learning too!
July 26, 2006 10:23:39 PM

Not commenting on the facts you are inquiring about, but take info from most posters with a grain of salt, and even more with others...
Following the board for a few days and reading should enlighten you on which posters have more credibility and which ones almost continually blow smoke and BS.
And ask questions; sometimes, if still unclear, retry the question while posting some of your theories, and that might lead you closer to your answers.
July 26, 2006 10:35:51 PM

Quote:
I was under the impression that graphics rendering was very FP intensive. I thought that was the reason why AMD created 3DNow! FP enhancements back in the day. I also thought that was why the academic community uses the X1800 gpu for FP math, because it set records for FP crunching when it came out. Or am I confused? Someone please clarify for me.


http://www.jonpeddie.com/Back_Pages/2006/05-22-06_embarrassed.shtml

Quote:
A “pipe” usually consists of at least one 32-bit floating-point processor (often inaccurately expressed as “32-bit IEEE floating point,” when in fact it is merely representative of the IEEE floating-point functions called for in DirectX9 and soon 10, not true IEEE floating point functionality). Many GPU designs have multiple floating-point processors (FPPs) in one pipe, and some even have a scalar or vector processor (SIMD) as well.
July 26, 2006 10:37:21 PM

Quote:
What's your gangsta name, btw?
Dr Ruiz....homey!! :wink:


Seems like we're showing our redneck side today. Are you jealous?
July 26, 2006 10:58:51 PM

What is it with you and being stupid? And how come you can't come back to people's points on this?
July 26, 2006 11:12:06 PM

Because even though the truth is 2 posts above this one, he'll never acknowledge it.
July 26, 2006 11:19:08 PM

Quote:
What is it with you and being stupid? And how come you can't come back to people's points on this?



Because you're not the boss of me. And because glueless chipsets for 16-way are the reason for the purchase. The biggest reason. They also need mobile chipsets since Intel took a big dump on the desktop market with Core 2.

If you want to talk about the topic fine. If you want to fight then the Internet isn't the place for it.

Maybe you should just admit that I'm very rarely wrong, just

Faster Than The Times.


I've got a post about advanced wire work with motor curve generation.
July 26, 2006 11:30:53 PM

Quote:
I was under the impression that graphics rendering was very FP intensive. I thought that was the reason why AMD created 3DNow! FP enhancements back in the day. I also thought that was why the academic community uses the X1800 gpu for FP math, because it set records for FP crunching when it came out. Or am I confused? Someone please clarify for me.


http://www.jonpeddie.com/Back_Pages/2006/05-22-06_embarrassed.shtml

Quote:
A “pipe” usually consists of at least one 32-bit floating-point processor (often inaccurately expressed as “32-bit IEEE floating point,” when in fact it is merely representative of the IEEE floating-point functions called for in DirectX9 and soon 10, not true IEEE floating point functionality). Many GPU designs have multiple floating-point processors (FPPs) in one pipe, and some even have a scalar or vector processor (SIMD) as well.


When pixels are "painted" the only FP operation is not a calculation but the assignment of color values. These color values may be added, but never multiplied. FP is used EXACTLY BECAUSE COLOR VALUES CAN be represented as a decimal. Either by percentage of RGB/CMYK or by the fact that pixels are smaller than an inch.


More complex FP calculations are done for collision and object placement than for actually painting the pixels. That's why you can have so many pixels in flight.

That's also why SIMD was invented so that you could apply one color to multiple pixels in a parallel fashion.


Does that explain it for games?

For video, there is even less actual hardcore math. If you look at what MPEG is, it's just an "overblown" series of JPEGs, just like AVI was an overblown series of BMPs; the difference between the pixels in subsequent frames is how the algorithm functions.


Because of the sheer number of pixels in a 640x480 scene, GPUs had to improve throughput, meaning as little processing as possible.

I'll search for some more info, but the basic idea is that pixels themselves don't need that much multiplication for processing. Explain what FP processing is needed to paint a pixel red or blue or a combination of the two. That would be ADDITION, which is VERY FAST.
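The frame-difference idea can be sketched as a toy (grossly simplified next to real MPEG, which also uses DCT, quantization, and motion compensation; the frame contents here are made up):

```python
# Toy frame-difference "codec": store the first frame whole, then only
# per-pixel deltas for later frames. Most deltas are zero when little
# moves between frames, which is where the compression comes from.
W, H = 640, 480
frame0 = [[(7 * x + 13 * y) % 256 for x in range(W)] for y in range(H)]

# Next frame: identical except a small 10x10 region changes.
frame1 = [row[:] for row in frame0]
for y in range(100, 110):
    for x in range(200, 210):
        frame1[y][x] = (frame1[y][x] + 5) % 256   # wrap instead of clamp

delta = [[frame1[y][x] - frame0[y][x] for x in range(W)] for y in range(H)]
changed = sum(1 for row in delta for d in row if d != 0)
print(changed, "changed pixels out of", W * H)    # 100 out of 307200
```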
July 27, 2006 12:50:19 AM

Quote:
To continue the lesson, GPUs only run @ 600MHz not because "they don't need to go faster" but because they have to keep parallel coherency across 48 cores on the die. That is no easy task.

Please stop making up this crap and spouting it off. People come here to learn, and make informed decisions on their future hardware purchases. Not only are you confusing them with facts that run perpendicular to reality, but you are also sullying up the good name of Tom's.


if you talked to me like that on the streets......



You'd do what? Pee on me?

You would be infected with Baron Dumb Ass disease. It's a new STD. The symptoms include convenient memory loss, an irresistible urge to start and remain in conflict, hearing voices, attaching your ego to an American corporation, sudden bowel movements, unemployment, living in your parents' bedroom because the dark scares you, enjoying Backstreet Boys pornography, and delusions of grandeur, for example believing you were an MS programmer, auto body mechanic, or a stock analyst, just to list off a few of the symptoms.

If you believe you have 1 or more of these symptoms please seek medical attention before you develop stage 2 of the virus, Baron BS, where you develop venereal warts, smell like curry, and attempt to pee on individuals on the street in an attempt to spread the virus further.
July 27, 2006 12:55:27 AM

LMAO, you're such a dipsh!t. I wouldn't link to your lame blog either.

Why can't you address It_commanderdata's post? Because you're wrong, that's why.
July 27, 2006 12:58:20 AM

Quote:
When pixels are "painted" the only FP operation is not a calculation but the assignment of color values. These color values may be added, but never multiplied. FP is used EXACTLY BECAUSE COLOR VALUES CAN be represented as a decimal. Either by percentage of RGB/CMYK or by the fact that pixels are smaller than an inch.


More complex FP calculations are done for collision and object placement than for actually painting the pixels. That's why you can have so many pixels in flight.

That's also why SIMD was invented so that you could apply one color to multiple pixels in a parallel fashion.


Does that explain it for games?

For video, there is even less actual hardcore math. If you look at what MPEG is, it's just an "overblown" series of JPEGs, just like AVI was an overblown series of BMPs; the difference between the pixels in subsequent frames is how the algorithm functions.


Because of the sheer number of pixels in a 640x480 scene, GPUs had to improve throughput, meaning as little processing as possible.

I'll search for some more info, but the basic idea is that pixels themselves don't need that much multiplication for processing. Explain what FP processing is needed to paint a pixel red or blue or a combination of the two. That would be ADDITION, which is VERY FAST.


I think you're thinking of 2d graphics, and as such I agree with you. In 3d, however, it's a different ballgame. Painting pixels on the screen is becoming the least of the GPU's duties. Pixel Shading, Procedural Geometry Deformations, Environmental and bump-mapping, Accelerated High-order Surfaces, Reflection/Refraction calculations are all being pushed onto the GPU - not to mention physics! How are you going to interpolate vertices without floating point?

Remember that in 2d, the gpu only cares about a pixel. In 3d, the gpu thinks in terms of triangles (well, 3 vertices anyway). If you could plot a moving textured sphere, reflecting off rippling water, having 4 light sources with trees and grass waving realistically in the wind (you can put a dead guy realistically swinging from a noose for effect if you want. (The dead guy can be wearing a K8 t-shirt with textured fabric (Sorry, couldn't resist that one :p  ))) all using integers, Pythagoras would be proud.
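A minimal sketch of the floating-point interpolation in question (toy values; real rasterizers also do perspective correction):

```python
# Barycentric interpolation of a vertex attribute across a triangle:
# each covered pixel gets a weighted blend of the three vertex values.
# The weights are fractions, so this is float multiply-add, not just
# integer addition.
def lerp3(a, b, c, w0, w1, w2):
    """Blend three vertex values by barycentric weights (w0+w1+w2 == 1)."""
    return a * w0 + b * w1 + c * w2

# Vertex colors at the corners: pure red, green, blue (channels 0..1).
red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# A pixel equidistant from all three corners blends them equally.
w = (1 / 3, 1 / 3, 1 / 3)
pixel = tuple(lerp3(r, g, b, *w) for r, g, b in zip(red, green, blue))
print(pixel)   # roughly (0.333, 0.333, 0.333)
```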