Why AMD had to buy ATI

February 9, 2007 5:53:54 PM

http://www.theinquirer.net/default.aspx?article=37548


Because INTEL is going balls out with its integrated GPU/CPU core....

Nvidia better look to change its business model


February 9, 2007 6:05:23 PM

While I agree that a Fusion-esque APU is the future, discrete graphics cards won't be disappearing for a long time, as long as there are gamers that will buy them and they perform better than whatever is integrated. I think the Inquirer is exaggerating things a bit here.

nVidia is far from dead.
February 9, 2007 6:10:33 PM

Quote:
While I agree that a Fusion-esque APU is the future, discrete graphics cards won't be disappearing for a long time, as long as there are gamers that will buy them and they perform better than whatever is integrated. I think the Inquirer is exaggerating things a bit here.

nVidia is far from dead.


you are assuming that fusion-type performance will not be able to compete with discrete at some point
February 9, 2007 6:50:17 PM

Quote:
http://www.theinquirer.net/default.aspx?article=37548


Because INTEL is going balls out with its integrated GPU/CPU core....

Nvidia better look to change its business model


Hopefully it's better than the last GPU Intel released. I mean, they're talking about 16x faster than G80.

I'd like to see how they manage it considering the power reqs for G80/R600.

They are saying it will be x86 "mini-CPUs" like Cell or the Xbox 360 CPU. It will be interesting to see, but hopefully no one really thinks nVidia can be beat.
R700 is already set to be modular and nVidia will undoubtedly want to really out-distance Intel after this announcement. And considering the time between G70 and G80, Intel will NEVER catch up.

If anything it'll be like ATi, where Intel releases a chip that's faster than nVidia's current one and nVidia releases a new monster that blows the competition away.

nVidia has the most experience in 3D GPUs and Intel - no matter the money they throw at it - will not be able to make up the 10+ year headstart.
February 9, 2007 7:12:41 PM

Unless they steal/hire Nvidia's employees :wink:
February 9, 2007 7:23:16 PM

Sure, it sounds like a monster, but how well will it really push the pixels? Who knows. It sounds like it will be one helluva server add-on type chip, not a GPU.
February 9, 2007 7:40:51 PM

Discrete graphics have a long life ahead of them. I don't see discrete graphics going away, because there is a foreseeable demand for the sheer power they put out as long as games and video get more realistic/advanced. When we reach the point where that isn't possible and all that is left is to shrink stuff, then yes, maybe Nvidia will be "in trouble", but I doubt they are that stupid. If anything, I bet nVidia helps lead us into the next revolution with VR or something like that.

Let's not get all bent out of shape at something that is 7+ years off in the future. 8O
February 9, 2007 7:46:20 PM

Quote:
Discrete graphics have a long life ahead of them. I don't see discrete graphics going away, because there is a foreseeable demand for the sheer power they put out as long as games and video get more realistic/advanced. When we reach the point where that isn't possible and all that is left is to shrink stuff, then yes, maybe Nvidia will be "in trouble", but I doubt they are that stupid. If anything, I bet nVidia helps lead us into the next revolution with VR or something like that.

Let's not get all bent out of shape at something that is 7+ years off in the future. 8O


But hence the thread title... AMD did what it had to do early... to be viable in 7 years...
February 9, 2007 7:59:47 PM

Unless there is a paradigm shift in graphics processing that I'm completely missing, I don't see how from a raw performance perspective a GPU could compete while integrated into a CPU. You'd run into heat issues just like cramming two dual-cores onto one die and then trying to run at previous frequencies (almost twice the heat in the same socket space). I don't see this changing with a more advanced litho node as long as the discrete card doesn't seriously trail the CPU process.

However, I do see clearly some power efficiency, component-retasking, and manufacturing advantages from an integrated GPU that would make it suitable for budget and low-power systems.
February 9, 2007 8:03:37 PM

Quote:
Unless they steal/hire Nvidia's employees :wink:


Employees by themselves won't do it, and if they (the employees) steal manufacturing or trade secrets, Intel can be prosecuted.

They will never REALLY catch nVidia or even ATi.
February 9, 2007 9:35:13 PM

Quote:
Interesting article from August

http://www.theinquirer.net/default.aspx?article=33812


Upon reading this I have concluded that it is not outside the realm of possibility for Intel to become a powerhouse in graphics. With the expertise of 3D Labs engineers and Intel's very deep R&D pockets, it would be very foolhardy to become complacent about Intel becoming a player in the GPU market. This event was probably what triggered the eventual merger of AMD/ATI.

I wonder how much it cost Intel to obtain this level of graphics expertise? I don't know, but I bet it was a lot less than 5 billion dollars.
February 9, 2007 10:46:35 PM

Quote:
Interesting article from August

http://www.theinquirer.net/default.aspx?article=33812


Upon reading this I have concluded that it is not outside the realm of possibility for Intel to become a powerhouse in graphics. With the expertise of 3D Labs engineers and Intel's very deep R&D pockets, it would be very foolhardy to become complacent about Intel becoming a player in the GPU market. This event was probably what triggered the eventual merger of AMD/ATI.

I wonder how much it cost Intel to obtain this level of graphics expertise? I don't know, but I bet it was a lot less than 5 billion dollars.


Perhaps, but they also don't get a suite of GPUs for every segment. Intel better look out because nVidia is WAAAAAAYYYYYY ahead. nVidia is already working on a GPGPU. Yes, Intel has deep pockets but 3D Labs is known for OpenGL pro cards. I doubt they will catch nVidia.
February 9, 2007 10:49:51 PM

It's still amazing to me that AMD bought ATi
February 9, 2007 10:54:43 PM

So it doesn't trouble you to reference Inquirer stories anymore? Interesting change of heart you've had.
February 10, 2007 1:28:13 AM

Quote:
It's still amazing to me that AMD bought ATi


Yep.... nVidia, one would have thunk, would have been a better fit.

yeah, they might have been, but that would have required 5-6x the capital (if not more 8O )
February 10, 2007 2:04:32 AM

Quote:
Unless they steal/hire Nvidia's employees :wink:


Employees by themselves won't do it, and if they (the employees) steal manufacturing or trade secrets, Intel can be prosecuted.

They will never REALLY catch nVidia or even ATi.

That won't do Nvidia any good, though. If Intel decided to go that route (and Intel isn't quite the nice guy when it comes to laws and morals), they would damage Nvidia to a degree that would make it worth paying any fine. The worst-case scenario would be that they compete with nV, devalue them by stealing their tech, and then buy them... 8O
February 10, 2007 2:36:42 AM

True. I know the difference, just showing how large nV is. There are a number of levels of cooperation, they all have their own special meaning. Why we need names for them all, I don't know. It boils down to one thing: sharing information/resources.
February 10, 2007 4:27:10 AM

Quote:
Personally, I think AMD would have been better served by merging with nVidia -- the buy-out option for ATI left them a little cash starved, and nVidia was a bit more profitable than ATI was over time...

But, water under the bridge --- the reasons why, though, are sane; this is the direction the industry is heading.... You just don't plop out 5 billion bucks on a whim --- there is a longer-term strategy going on here, and it was not done on the idea "hey, let's see if we can merge a GPU/CPU"; it was most definitely done in order to secure the necessary resources to be competitive in the future.

nVidia, while reaping some benefit now, will be in some trouble I suspect in the long term.... just my hunch anyway.


So, I will speculate that nVidia will eventually buy the company with an x86 license, VIA.
February 10, 2007 4:34:47 AM

Quote:
Wooowwwaaa, :)  :)  .... I had not thought of this.... interesting. I would not bet against your speculation :) 


I just had a quick look at VIA, which had less than US$1.4B in cap ex. for 2006; that suits nVidia's US$11B cap ex. best. :wink:
February 10, 2007 4:46:42 AM

Quote:
Unless they steal/hire Nvidia's employees :wink:


Employees by themselves won't do it, and if they (the employees) steal manufacturing or trade secrets, Intel can be prosecuted.

They will never REALLY catch nVidia or even ATi.

Only a Sith Lord deals in absolutes...
February 10, 2007 5:26:36 AM

What worries me is that AMD wants to do it on-die in Q1 2009 while Intel wants to do it on-core in Q2 2009. Intel will be at a big advantage over AMD if that happens; as Charlie pointed out, this is a CGPU, so Intel is looking forward to making discrete gfx obsolete. The problem is that beyond gaming etc., how will 16 cores benefit us? Developers are already struggling with quad-core and octo-core. AMD and Intel share the same vision for the CGPU, but they are taking different approaches. Not that I am a fanboy of AMD, but I prefer their approach: first on-die, then slowly, slowly go for full integration. But who knows, AMD might change their idea of a CGPU. I previously read that with 1/4 of the raw power, a CPU can do the same work as a GPU, due to memory access and other things that are only available to CPUs. I think a Larrabee mini-core would consist of a mini GPU + mini CPU.
February 10, 2007 5:33:11 AM

People keep talking about the deal as if an integrated GPU was what it was all about.
What about the math co-processor function of today's gfx cards? What about having a physics core in the processor, or the drivers to use the same setup either way?
ATI had a great chipset unit and some great coders. These people might be able to help AMD out with its problem getting SSE working properly.
Then again, if one of the new fabs is a little slow on work, it might be handy.
Bottom line, though, is that ATI is a good investment.
If AMD is ever going to catch up with Intel, they need to diversify.
February 10, 2007 5:34:49 AM

Kinda funny, Intel still leads in the graphics market :lol: 
February 10, 2007 6:08:17 AM

Do not mock the power of the Dark Side.
February 10, 2007 6:47:11 AM

Quote:
I was going to do that, you saved me some time.... Frankly, this is one major piece of the puzzle.... the second??? A FAB.

This is an even bigger problem for nVidia.

It currently only has about $1B in cash. From nVidia's Form 10-K, it earned about $300M last year. AMD's medium-sized Fab 36 has used $2.5B. Even subtracting the $1B of subsidy, nVidia still cannot build a medium-sized fab in 2 or 3 years' time.
February 10, 2007 6:57:21 AM

Quote:
While I agree that a Fusion-esque APU is the future, discrete graphics cards won't be disappearing for a long time, as long as there are gamers that will buy them and they perform better than whatever is integrated. I think the Inquirer is exaggerating things a bit here.

nVidia is far from dead.


you are assuming that fusion-type performance will not be able to compete with discrete at some point

Discrete cards are upgradable....GPU needs grow faster than CPU needs. This is the same reason why you have slots in your PC case.
February 10, 2007 8:02:01 AM

Quote:
Discrete graphics have a long life ahead of them. I don't see discrete graphics going away, because there is a foreseeable demand for the sheer power they put out as long as games and video get more realistic/advanced. When we reach the point where that isn't possible and all that is left is to shrink stuff, then yes, maybe Nvidia will be "in trouble", but I doubt they are that stupid. If anything, I bet nVidia helps lead us into the next revolution with VR or something like that.

Let's not get all bent out of shape at something that is 7+ years off in the future. 8O


But hence the thread title... AMD did what it had to do early... to be viable in 7 years...

Does anyone really think that what is currently known as the "personal computer market" will be even remotely recognizable in 2014 or beyond? I, for one, will be severely disappointed if we are still tapping on QWERTY keyboards designed at the time of the US Civil War to keep fast typists from sticking their mechanical levers. I believe that making a multi-billion-dollar investment at a time when your stock is being driven into the ground, in the expectation of gaining a market advantage over such a ridiculously long term, is a fallacious strategy.
February 12, 2007 5:12:22 AM

OK, I have a small amount of time.

"I, for one, will be severely disappointed if we are still tapping on QWERTY keyboards designed at the time of the US Civil War to keep fast typists from sticking their mechanical levers."

I would be disappointed too. I realize AMD's FUSION initiative is really focused on graphics at this point; however, I believe they also have a Torrenza initiative that has a number of disparate businesses claiming interest. That led me to believe AMD could add in a variety of coprocessors. Think for a minute ... graphics processing uses stuff from image processing which uses stuff from signal processing. Hmm, signal processing, like DSP. Not sure if AMD has contacted TI yet, or vice versa. If so, then good-bye keyboard. Catch my drift?

Hey, anybody from AMD, care to hire me yet? 8)


--M
February 12, 2007 5:49:21 AM

Ok, I'll add my two bits.

Having a marvy-do parallel computer is great. However, if you can't program the dad-burned thang it ain't gonna do nobody no dam good. Transmeta is *RE* discovering this principle [having laid off half the company recently].

Intel in some respects appears to be barking up the wrong tree. I love the idea of having lots of cores on a chip and lots and lots of threads. Kind of like a Barrel Processor [which I researched 15-20 years ago, so my memory may not be tee-totally accurate]. Fascinating.

Intel is building a fab in Israel and they also purchased an Israeli company for the express purpose of using their graphics technology. However, that was hardware technology [which was licensed from yet another company]. Not software technology.

Current GPUs are very much centered around multiply-accumulate (MAC) type operations, not general-purpose x86-type operations. MAC architecture, I believe, tries to have a processor per pixel, or at least that's the idea. Simple operations, very fast. Software-wise it's a bit complicated, but GPUs have been around for over 10 years and ... they have market acceptance, yeah, that's the phrase I'm looking for. Have you ever heard of a programming language for Barrel Processors? But there are languages for shaders.
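
To make the MAC idea concrete, here is a minimal sketch in C of the kind of multiply-accumulate inner loop a shader effectively runs per pixel. The 4-tap filter width and the texel/weight values are made-up illustration values, not taken from any actual GPU or driver.

Code:
#include <stdio.h>

#define TAPS 4  /* made-up filter width, just for illustration */

/* One pixel's worth of work: a chain of multiply-accumulates,
   the simple operation GPUs are built to repeat massively in parallel. */
static float shade_pixel(const float texels[TAPS], const float weights[TAPS])
{
    float acc = 0.0f;
    for (int i = 0; i < TAPS; i++)
        acc += texels[i] * weights[i];  /* the MAC step: acc = acc + a*b */
    return acc;
}

int main(void)
{
    float texels[TAPS]  = {0.2f, 0.5f, 0.9f, 0.4f};    /* example inputs */
    float weights[TAPS] = {0.25f, 0.25f, 0.25f, 0.25f};

    /* A GPU would run something like shade_pixel for every pixel at once;
       a general-purpose x86 core walks through them a few at a time. */
    printf("pixel value: %f\n", shade_pixel(texels, weights));
    return 0;
}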

NVidia's latest architecture is much more general purpose than ever before. ATI is headed in a similar direction. !Easier to program! NVidia is also aware of the so-called physics processors out there. I'd guess you'll be able to do particle physics calculations on the next round of GPUs. So, bottom line, I think Charlie has an interesting article, but don't be so sure about NVidia, nor AMD for that matter, dying just yet. [And I'd actually be surprised if Intel integrates a "GPU" onto a CPU chip. That'd be an elegant solution, which, let's be honest, isn't the Intel way].


--M
February 12, 2007 6:05:34 AM

Quote:

Current GPUs are very much centered around multiply-accumulate (MAC) type operations, not general-purpose x86-type operations.
--M


I do not see this as a problem; they will simply add SSE6 or something like that to the architecture. Actually, I think they will do that for general-purpose CPUs sooner than this GPU comes out.

Personally, while the following is just speculation, there is one aspect of this architecture that has a VERY strong advantage: if they do it right, GPU code will simply be part of a regular x86 program. I mean it will be sitting in normal memory (the GPU will cache it to hide the differences), and it will be produced like normal code; perhaps there will just be a special StartGPUThread API function.

If this is true, this thing will be awesome. Forget about DirectX or OpenGL in that case. This is about ray-tracing.
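
To picture what that speculated model could look like, here is a rough C sketch. Everything in it is illustrative: StartGPUThread is just the hypothetical name from the paragraph above, implemented here as a thin wrapper over pthread_create as a stand-in for whatever the real launch mechanism would be. The only point is that the "GPU" kernel is ordinary x86 code sitting in ordinary memory, launched like a thread.

Code:
#include <pthread.h>
#include <stdio.h>

/* An ordinary x86 function acting as the "GPU kernel": it sits in normal
   memory and is compiled exactly like the rest of the program. */
static void *ray_trace_tile(void *arg)
{
    int tile = *(int *)arg;
    printf("ray-tracing tile %d on a compute core\n", tile);
    return NULL;
}

/* Hypothetical StartGPUThread (the made-up name from above): here it is just
   a thin wrapper over pthread_create, standing in for a launch onto the
   GPU-like cores. */
static int StartGPUThread(pthread_t *t, void *(*kernel)(void *), void *arg)
{
    return pthread_create(t, NULL, kernel, arg);
}

int main(void)
{
    pthread_t t;
    int tile = 0;

    StartGPUThread(&t, ray_trace_tile, &tile);  /* no DirectX/OpenGL involved */
    pthread_join(t, NULL);
    return 0;
}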

Mirek
February 12, 2007 6:26:26 AM

Quote:
OK, I have a small amount of time.

"I, for one, will be severely disappointed if we are still tapping on QWERTY keyboards designed at the time of the US Civil War to keep fast typists from sticking their mechanical levers."

I would be disappointed too. I realize AMD's FUSION initiative is really focused on graphics at this point; however, I believe they also have a Torrenza initiative that has a number of disparate businesses claiming interest. That led me to believe AMD could add in a variety of coprocessors. Think for a minute ... graphics processing uses stuff from image processing which uses stuff from signal processing. Hmm, signal processing, like DSP. Not sure if AMD has contacted TI yet, or vice versa. If so, then good-bye keyboard. Catch my drift?

Hey, anybody from AMD, care to hire me yet? 8)


--M


Now there is the match made in heaven. If TI bought AMD and started implementing its DSP throughout the CPU and GPU range we'd be rockin' tonite! Where do I sign up on the waiting list for the first TI-AMD DSP CPU/GPU?

I'd hire ya. But I have no money. 8)
February 12, 2007 7:37:43 AM

Quote:


How would a DSP help? You have piqued my curiosity. :? :wink:


Check this out and see how nicely it would fit into CPU/GPU tech! 8)


You could not hit Alt+Tab with this. It's for using 1 data stream/thread only.
Instead of loading Windows on your system you could load "Window".
lol

Yeah, but marry it to Intel's 80 core CPU and what do you get? ZZZZZZZZZZZOOOOOOOOOOOOOOOOOOOMMMMMMMMM!!! :D 
February 12, 2007 7:42:07 AM

Hi all,

Speaking about the future is always interesting.

Here's my input:

I think that moving the graphics pipeline onto a single chip is a good idea, I mean with the prospect of having multiple cores on a single chip and the ever-increasing cost of multiple computer components, e.g. the 8800GTX.

Moving everything onto a single chip would not only save the overhead and latency associated with any fancy busses like AGP or PCI Express, but it would also lead to a more elegant solution for home users. It would also prevent incompatibility problems between components. If companies did bother to do some extensive testing, they would also find an optimum point for their hardware configuration, e.g. 16 GB of RAM is enough for the processing power on this particular chip model.

Ultimately the computer would consist of two components: the Bigchip (processor cores, RAM, any other stuff) and the motherboard with a peripheral interface chip, so upgrading would be simple: if you want more USB or SATA ports you get a new motherboard, and if you want more power you get a new processor chip (Bigchip). This would also lead to a smaller number of pins that need to connect to the motherboard, since the CPU has onboard RAM and only needs to connect to the motherboard for input/output and access to storage devices.

Ultimately everyone has a limited amount of silicon real estate in their computer, but if you can redistribute it to make it work far more efficiently then that's obviously the best way to go. I.e., if you put it all onto one big chip you save the silicon used for the overhead of busses and component interconnects and use it most efficiently.

It seems to me that while the past year revealed consoles that resemble computers, it seems quite appropriate that computers in future resemble consoles, in the sense that they are marketed as one single unit that has each component within designed to work optimally with the rest. Consoles of the past have proven to be far more efficient than their PC equivalents; for example, if one tried to get a PC with 32 MB of RAM to run Final Fantasy X I doubt you would have any success. Yes, a PS2 has 32 MB of RAM. Are PS2s just magical? I think not.

In reference to graphics processors, I think that the current architecture will be around for a while; the way that it has been set up is quite good for a graphics pipeline, I mean the CPU moves the polygons around and the graphics card keeps drawing them periodically. All the current games use it, and the only reason to change would be if the integrated solution outdid the current architecture, which is still a fair way off.

One last thing: I do not like Intel's architecture of the 16/8/4-core processor as stated in the Inquirer, many processors simply connected in a ring bus. I think that an architecture like that of the Cell processor is much better suited to home computers, since the operating system can be loaded into a main processor that can implement management functions and can manage the other smaller co-processors as processing resources; easier to program for, and easier to manage many cores.

In this future respect AMD buying ATI was an incredibly smart move, especially if Intel and nVidia relations start straining; however, the benefits are still a while off.
February 12, 2007 7:47:57 AM

Quote:
More like flop; 80 threads would be like 1/10th of a loaded mid-ranged X2 core.


I'm no DSP guru, but with SIMD, VLIW, superscalar, Harvard and mod von Neumann archs, there are ways to get it to haul serious butt. Oh well, for now, it's just a TI-AMD wet dream... :D 
February 12, 2007 7:49:25 AM

Quote:
I think Cell is pretty decent as well; I heard the programming is a freak of nature tho. :p  Seriously, I did hear it was difficult.


Personally I see DSP on the desktop in some sort of Fusion brainfart well before Cell. But that's just me.
February 12, 2007 7:55:59 AM

Quote:
I think Cell is pretty decent as well; I heard the programming is a freak of nature tho. :p  Seriously, I did hear it was difficult.


Personally I see DSP on the desktop in some sort of Fusion brainfart well before Cell. But that's just me.

Scary thing is you could be right. I wouldn't assume that it's not being looked at.

I dunno. TI seems to be in love with placing DSP in HDTV right now. I guess that they're seeing an exploding market and want to strike while the iron's hot. However, introducing DSP into the polycore CPU-GPU fusions would really shake things up!

Quote:
More like flop; 80 threads would be like 1/10th of a loaded mid-ranged X2 core.


I'm no DSP guru, but with SIMD, VLIW, superscalar, Harvard and mod von Neumann archs, there are ways to get it to haul serious butt. Oh well, for now, it's just a TI-AMD wet dream... :D 

:lol:  :lol:  :lol:  :lol:  dewd you are screwed up :lol:  :lol:  :lol:  I mean that in the best way possible. :p 

Hey, do you know how long I've had to work and how many advanced degrees from the University of Ardnox I've had to accumulate to be as screwed up as I am? Insanity of this level does not come easy. It takes perseverance, commitment and years of hard work! :D 
February 12, 2007 8:02:55 AM

Quote:
Hi all,

Speaking about the future is always interesting.

Here's my input:

I think that moving the graphics pipeline onto a single chip is a good idea, I mean with the prospect of having multiple cores on a single chip and the ever-increasing cost of multiple computer components, e.g. the 8800GTX.

Moving everything onto a single chip would not only save the overhead and latency associated with any fancy busses like AGP or PCI Express, but it would also lead to a more elegant solution for home users. It would also prevent incompatibility problems between components. If companies did bother to do some extensive testing, they would also find an optimum point for their hardware configuration, e.g. 16 GB of RAM is enough for the processing power on this particular chip model.

Ultimately the computer would consist of two components: the Bigchip (processor cores, RAM, any other stuff) and the motherboard with a peripheral interface chip, so upgrading would be simple: if you want more USB or SATA ports you get a new motherboard, and if you want more power you get a new processor chip (Bigchip). This would also lead to a smaller number of pins that need to connect to the motherboard, since the CPU has onboard RAM and only needs to connect to the motherboard for input/output and access to storage devices.

Ultimately everyone has a limited amount of silicon real estate in their computer, but if you can redistribute it to make it work far more efficiently then that's obviously the best way to go. I.e., if you put it all onto one big chip you save the silicon used for the overhead of busses and component interconnects and use it most efficiently.

It seems to me that while the past year revealed consoles that resemble computers, it seems quite appropriate that computers in future resemble consoles, in the sense that they are marketed as one single unit that has each component within designed to work optimally with the rest. Consoles of the past have proven to be far more efficient than their PC equivalents; for example, if one tried to get a PC with 32 MB of RAM to run Final Fantasy X I doubt you would have any success. Yes, a PS2 has 32 MB of RAM. Are PS2s just magical? I think not.

In reference to graphics processors, I think that the current architecture will be around for a while; the way that it has been set up is quite good for a graphics pipeline, I mean the CPU moves the polygons around and the graphics card keeps drawing them periodically. All the current games use it, and the only reason to change would be if the integrated solution outdid the current architecture, which is still a fair way off.

One last thing: I do not like Intel's architecture of the 16/8/4-core processor as stated in the Inquirer, many processors simply connected in a ring bus. I think that an architecture like that of the Cell processor is much better suited to home computers, since the operating system can be loaded into a main processor that can implement management functions and can manage the other smaller co-processors as processing resources; easier to program for, and easier to manage many cores.

In this future respect AMD buying ATI was an incredibly smart move, especially if Intel and nVidia relations start straining; however, the benefits are still a while off.


I'd love to see Bigchip and Mama in my desktop, and I agree that it is the way the industry is going. However, I don't know if most PC buyers are ready to buy the one-size-fits-all over the modular philosophy, at least right now.

PS2s get along fine on 32MB RAM since that is what the software is designed to handle. Over on the Windows (and Mac) side, we're seeing RAM requirements skyrocket. I'm looking at getting 8GB in my next system just so I can feed Photoshop 4GB and run the rest of my stuff, including the OS, in the other 4GB. Now, it wasn't too long ago that an 8GB HD was a big 'un! Now I need that much in RAM. Much of the humongous RAM requirements are due to lazy coding and mediocre programming, but some just can't be helped. When I ask Photoshop to manipulate a 2.5GB file, where do I expect it to keep the data?

As for Cell, I can't help but agree with verndewd that writing the code for it is a bear, but then again, I'm not a programmer so I can't say first-hand.

I don't see the ATI buy as being that smart. I see that Wall St. has voted with its wallet and severely punished AMD stock since the buy. And when it comes to the bottom line, nobody quite knows it like the boys on Wall St.!
February 12, 2007 8:05:32 AM

I'm not saying the Cell is perfect, but if someone, anyone, is looking into doing 8 and 16 and more cores, they have to use some kind of management system. I think that an architecture similar to the Cell would be good in such a case, since the main processor could house the operating system and would control the rest of the computer like the operating system should. It would also ease programmer workload, since the operating system would be the only piece of software that needs to manage the multiple cores and could allocate processing resources to any software that needs them in a transparent fashion, or at least as transparently as possible.
February 12, 2007 8:10:44 AM

Quote:
I'm not saying the Cell is perfect, but if someone, anyone, is looking into doing 8 and 16 and more cores, they have to use some kind of management system. I think that an architecture similar to the Cell would be good in such a case, since the main processor could house the operating system and would control the rest of the computer like the operating system should. It would also ease programmer workload, since the operating system would be the only piece of software that needs to manage the multiple cores and could allocate processing resources to any software that needs them in a transparent fashion, or at least as transparently as possible.


Again, speaking from a non-programmer's standpoint, I do wonder why with the immense investment in Cell, IBM has only really cranked out the QS20 blade at 3.2GHz and hasn't released a single real benchmark yet. Seems like Cell might not be quite ready for PC primetime yet. But then again, I'm only guessing.
February 12, 2007 8:18:01 AM

Quote:


DSP on FPGA/LPGA alongside a quad CPU in the 4x4 could be interesting, with an on-die GPU, and maybe by then an 8-core CPU and an accelerator module in place of a DIMM (my insanity speaking there).
It's an interesting concept if it could streamline data to one pipeline and that pipeline was outrageously fast; essentially that would be MMM's RHT.

No one appreciates how hard you have worked to be screwed up as much as I do. I didn't think anyone's bizzarro ball was bigger than mine here; not only was I wrong, but you back your play with solid info more consistently.

You deserve an award; I just don't want to get banned again for naming the award. :wink: Pilot's probably checking my packets for profane code as we speak :lol:  :lol: 


Hey, Pilot, when are you gonna make that #$%*ing verndewd's ban permanent? I can't stand his ugly... Oh, Hi, Verndewd. 'Zup, man? Things ok at home, buddy? :lol: 

I like the idea of having DSP on a QFX-type setup with some monster pipeline big enough to surf in. I wouldn't have a clue as to whether it's even theoretically feasible, but definitely another candidate for nocturnal emissions... of RF signals of course... :wink:

Wow, to be given the honor of a bigger bizzarro ball than you is a real compliment. Thanks, dewd. I will strive to continue to be worthy of that... ball... 8O
February 12, 2007 8:27:01 AM

Yeah, I can see that programs nowadays need the memory, and lazy coding does cause problems, but I'm pretty sure programmers are lazy whether they work for Microsoft or whether they work for Sony or EA or a company that makes PS2 games. Laziness = human. Either way, the people who design the platform are ultimately responsible for the performance.

In reference to your question about the one-size-fits-all thingo, I didn't mean that the companies manufacture only one chip each. The fact is that today, whether you're just an email and internet kind of person or an HDR and 1600x1200 performance junkie, you have a platform to suit your needs. If you're an email/internet guy you don't go out and get a $400 dual PCIe x16/full SLI motherboard; you go and get a cheap mobo and a suitably cheap processor. So in the end you are still fitting together suitable components. My point is that if you look at Dell or Alienware or any computer manufacturer, they have 4 categories: Server, Enthusiast, Medium-High Performance, and Email/Internet Guy.

So whether you buy the computer as one chip or as several different ones it doesn't really make that big a difference to the components you get.

Also, there's the trend toward pretty and fluffy operating systems, cough*Vista*cough cough*OSX*cough. These OSes increasingly having 3D features will make the difference between the Email/Internet Guy's requirements and the Enthusiast's requirements quite a bit smaller.

PS. Ultimately I just want a power house HDTV that does everything, no mess, no fuss.
February 12, 2007 8:43:21 AM

Quote:
Yeah, I can see that programs nowadays need the memory, and lazy coding does cause problems, but I'm pretty sure programmers are lazy whether they work for Microsoft or whether they work for Sony or EA or a company that makes PS2 games. Laziness = human. Either way, the people who design the platform are ultimately responsible for the performance.

In reference to your question about the one-size-fits-all thingo, I didn't mean that the companies manufacture only one chip each. The fact is that today, whether you're just an email and internet kind of person or an HDR and 1600x1200 performance junkie, you have a platform to suit your needs. If you're an email/internet guy you don't go out and get a $400 dual PCIe x16/full SLI motherboard; you go and get a cheap mobo and a suitably cheap processor. So in the end you are still fitting together suitable components. My point is that if you look at Dell or Alienware or any computer manufacturer, they have 4 categories: Server, Enthusiast, Medium-High Performance, and Email/Internet Guy.

So whether you buy the computer as one chip or as several different ones it doesn't really make that big a difference to the components you get.

Also, there's the trend toward pretty and fluffy operating systems, cough*Vista*cough cough*OSX*cough. These OSes increasingly having 3D features will make the difference between the Email/Internet Guy's requirements and the Enthusiast's requirements quite a bit smaller.

PS. Ultimately I just want a power house HDTV that does everything, no mess, no fuss.


Yes, it's definitely lazya$$ coders who create a lot of problems, but the insane aspect of designing software with so many features that absolutely no one ever uses is also creating the huge RAM requirements. Take a look at Office 2007 (BARF). Even when you get past the Easter Egg Blue and the greatest abomination in the history of interfaces, the Evil Ribbon, it has features that, after over 20 years of using MS Word, etc., I have absolutely no idea what they do or who would do anything with them! This throw-the-kitchen-sink-in-so-that-we-can-claim-to-be-better-than-OpenOffice programming mentality is fundamentally flawed and is just contributing to flabby code that requires shovelloads of RAM.

The prob with even breaking it down into 4-sizes-fits-all is that you'll have the rabid gamer who is not going to even blink at dropping $600 or more for a 12-inch monster card, while my requirements for my system are similar to his CPU-wise, but I need that kind of video output like I need another hole in my head. Just like the mix-and-match modularism is still evident in home audio even after 40 years of development, I think it's going to be with PCs for a while longer.

Both Vista and OSX are not welcome in my house. They can just take their DRM, Naziware and EULAs and go straight to hell without passing GO or collecting $200. For my own purposes, I'm pretty well stuck with the conventional PC paradigm for the foreseeable future. Although a large part of my PC use is in video, there is so much more that I do that I guess I'm gonna keep my monitor 10 inches from my nose for a while longer!
February 12, 2007 8:46:30 AM

Quote:
However, this would totally screw up the market. I feel more comfortable having ATI/AMD + Nv + Intel... it's way better for consumers.


Can you just imagine what would happen if MS were to buy any of these three entities? That might be the day I start reviewing my Survivor tapes to find an island with lots of bananas and few mosquitoes and I'm OUTTA HERE!


Quote:
@ cappin robertCoughdyKcoughhedcough hey buddy, I got your back; I'll be next door. :lol:  :lol: 


As long as you don't stick the knife that Dade got in my back, you're more than welcome! :wink:
February 12, 2007 11:22:13 AM

Quote:

Yes, it's definitely lazya$$ coders who create a lot of problems, but the insane aspect of designing software with so many features that absolutely no one ever uses is also creating the huge RAM requirements. (...)


BULLS EYE!

In a recent thread about Vista I stated the same thing and kinda got flamed for it.

Many of the problems we're having are bad coding, poor techniques, lazy programming... you see the picture.

Also, that's why we need so many software updates, patches, service packs.... If developers spent more time debugging and improving actual code and algorithms, rather than creating useless features, life would be better :) 

Nicely stated. Down with fluff apps and up with 3D interactive computing; hell, if we are going to use cards the size of 3rd-world countries, they may as well do something more than give us a hyperlink to the control panel.

Well, Simonetti, let me flame you for it to make you feel at home... Simonetti, you stink, you've got cooties, you're ugly and your momma dresses you funny, and BTW, yo momma's so fat when her beeper goes off people think she's backing up.

Now that we have that outta the way... :lol: 

When you take overwhelming flabware that has to have every imaginable (and some not imaginable) option and combine it with thousands of lines of code that do something only one out of 10,000 users will ever actually apply, then the problems with bad coding, poor techniques and lazy programming end up handicapping the whole piece of software. I mean, how big is MS Word 2007? 150MB? 200MB? To write stinkin' letters? And if I wanna save my letter in PDF I gotta download some more junk?

Tomorrow MS will release 13 updates, some of them major. Every couple of Tuesdays, another couple of dozen MB get downloaded into my PC, which are nothing more than band-aids on top of bandages on top of scars that shouldn't have been there in the first place.

Verndewd, I can hardly wait for the DX11 cards. They'll likely be the size and electricity draw of a 1957 Univac!!!! 8O

In the famous words of Susan Powter (remember her?): STOP THE MADNESS!!!!