We've already talked to product managers representing the graphics industry. But what about the motherboard folks? We are back with ten more unidentified R&D insiders. The platform-oriented industry weighs in on Intel's, AMD's, and Nvidia's prospects.
If you read our recent Graphics Card Survey, then you already know another battle in the graphics war is looming. In fact, Nvidia recently fired off yet another salvo.
With 2011 right around the corner, an even more influential shootout is about to happen, as AMD and Intel both bring new weapons to the front line in the form of CPU/GPU hybrids. However, our first survey back in August was a bit one-sided, because it only sought out the voice of video card makers. Hybrids make this a topic for two industries. What about the motherboard guys?
So, while we were putting the call out to experts in the graphics business, we were also making the rounds on the motherboard side. We should point out, though, that we have other reasons for wanting a second opinion. If you look at the company structure of the tier-one and tier-two motherboard manufacturers, those companies selling both motherboards and graphics cards keep the two in completely separate divisions. And while they do collaborate on some marketing and technical matters, they are usually left to their own devices and operate independently of one another. After all, the technical people in the motherboard division have different goals and agendas. The worries and problems on one side don’t translate well to the other. What does the motherboard team care if their GPU-obsessed colleagues can’t find the right balance of performance to heat?
We think it is important to get the whole story. That is what a survey should be about. There is nothing wrong if the responses turn out the same. A universal answer means there is a universal opinion. However, for those people who actually dig a little deeper, it becomes apparent that there is a little more “meat on the bones.” Similar answers are often similar for different reasons, and it is the reasoning we find important. “Yes” and “No” answers don’t sate our appetite, simply because there is no context for understanding. This is why we will always try and solicit additional comments on all of our questions.
Background
We should make clear these are not marketing representatives sent to evangelize certain agendas. If they are, they’re pulling double duty as product managers. The primary duty of public relations is to get good press, and sometimes it is hard to get those folks out of that mode without having to resort to alcohol (Chris and I are both in agreement that it would probably be unwise to do so, anyway).
We specifically chose to talk to people in charge of the technical aspect of their company’s motherboard business. Depending on the organization, we carefully selected GMs, VPs, heads of departments, and R&D engineers. It is important to note that these are people from headquarters, meaning they bring us their ideas from a global perspective.
There were no barriers in our quest. If we needed to use another language to find the people we wanted, we used it (that’s the beauty of working for a global media company). Distance did not deter us, and if you saw our international phone bill, you’d understand the time we dedicated to this project. No stone was left unturned to find the people we needed. To our participants out there, we extend our most gracious thanks and sincerest apologies for the constant pestering.
Ultimately, we see this as a way to bring a better sense of industry dialog, answer a lot of your questions, end a lot of speculation, and provide insights on current and upcoming industry trends.
If you factor in that a CPU/GPU fusion will cost a bit more than a plain CPU, then if you plan on doing any gaming at all, why not invest an extra $30 or so (over the cost of the CPU/GPU fusion, not just the CPU) and get something that will game roughly twice as well and likely support more monitors to boot?
Edit: Although, after the slow release of Fermi, I bet everyone's wondering what exactly is in store for Nvidia in the near future; as this article says, there seems to be a lot of ambivalence on the subject.
Unless you need more memory, or are adding numbers larger than about 2 billion, there's absolutely no point in it. The jump from 8-bit to 16-bit was huge, since adding past 128 is pretty common. The jump from 16-bit to 32-bit was huge because segments were a pain in the neck, and 32-bit mode essentially removed them; plus, adding past 32K isn't that uncommon. 64-bit mode adds some registers and the like, but even with that, 64-bit code is often slower than 32-bit code.
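The 2-billion cutoff comes from the range of a signed 32-bit integer, which tops out at 2^31 − 1 = 2,147,483,647. A minimal Python sketch of the wraparound the comment is alluding to (the helper name is mine, for illustration only):

```python
def add_i32(a, b):
    """Add two integers with signed 32-bit two's-complement wraparound."""
    total = (a + b) & 0xFFFFFFFF                 # keep only the low 32 bits
    return total - 0x100000000 if total >= 0x80000000 else total

print(add_i32(2_000_000_000, 2_000_000_000))     # wraps to -294967296
print(2_000_000_000 + 2_000_000_000)             # with 64-bit headroom: 4000000000
```

Two billion plus two billion silently overflows in 32-bit arithmetic, which is exactly the case where moving to 64-bit buys you something.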
SSE and SSE2 would be better comparisons. Four years after they were introduced, they had pretty good support.
It's hard to imagine discrete graphics cards lasting indefinitely; more likely they will go the way of the math co-processor, though not in the near future. Low latency should make a big difference, but I'd guess it won't happen unless Intel introduces a uniform instruction set for graphics, essentially folding it into the CPU/GPU complex, which would allow for greater compiler efficiency and tighter integration. I'm a little surprised they haven't attempted it, but that would leave Nvidia out in the cold, and maybe there are non-technical reasons they haven't done it yet.
A lot of scientific software vendors I have communicated with about this sort of thing have actually been hesitant to code for CUDA because, until the release of the Fermi cards, CUDA only supported single-precision floating point. They were *very* excited about the hardware releases at SIGGRAPH...
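To see why double precision matters so much for scientific work: single precision carries only a 24-bit significand, so integers beyond 2^24 can't even be represented exactly. A stdlib-only sketch, using `struct` to round a Python double through the 32-bit format:

```python
import struct

def to_f32(x):
    """Round a Python float (double precision) to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 2.0 ** 24                    # 16,777,216: the last gap-free integer in float32
print(to_f32(big + 1) == big)      # True  -> the +1 is lost in single precision
print(big + 1 == big)              # False -> double precision still resolves it
```

Errors like this compound over millions of iterations, which is why pre-Fermi CUDA was a hard sell for numerical code.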
Until then, the performance of processors with an integrated GPU will be pretty much the same as platforms with integrated graphics, as the bottleneck will still be RAM latency and bandwidth.
From what I have read, AMD's Llano hybrid GPU is about equal to a Radeon HD 5570. Llano has no chance of killing sales of $50+ discrete solutions by next year. I think the hybrids will have little effect on discrete solutions, and your $150+ figure is off. The only thing hybrid means is potentially more CPU performance when a discrete card is used. Another difference is that, unlike motherboard-integrated GPUs going to waste, the hybrids will use the integrated GPU for other tasks.
No. There are [at least] two reasons that come to my mind. The first is heat. It is hard to dissipate that much heat in such a small area. Look at how huge both graphics card and CPU coolers already are, even the stock ones.
The second is defect rate in manufacturing. As the die gets bigger, the chances of a defect grow, and it's either a geometric or exponential growth. The yields would be so low as to make the "good" dies prohibitively expensive.
If you scale either of those down enough to overcome these problems, you end up with something too weak to be useful.
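The defect-rate point above can be sketched with the classic Poisson yield model, in which the fraction of defect-free dies falls exponentially with die area. The defect density below is a made-up illustrative number, not from the article:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies expected to have zero defects (Poisson yield model)."""
    return math.exp(-area_mm2 * defects_per_mm2)

# Hypothetical process with 0.002 defects per square millimeter:
for area in (100, 200, 400):                      # doubling the die each step
    print(area, round(poisson_yield(area, 0.002), 3))
```

Because yield decays exponentially, doubling the die area squares the yield fraction, so a die big enough to hold a high-end CPU plus a high-end GPU gets expensive very quickly.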
Although the reasoning around this is mostly sound, I'd say your price point is off. Make that $100+ discrete solutions. A typical home user will be quite satisfied with HD5570-level performance, even able to play many games using lowered settings and/or resolution. As economic realities cause people to choose to do more with less, they will realize that this level of performance will do quite nicely for them. A $50 discrete card doesn't add a whole lot, but $100 very definitely does, and might be the jump that becomes worth taking.
Moreover, my concern about integrated graphics is this: given that ALL CPUs will have it, and it won't match the performance of high-end GPUs, it's going to drive up costs for everyone buying the new generation of CPUs. And as far as I know, there isn't going to be any alternative.
I've seen the feature touted before, but it doesn't appear to have caught on.
I am sure there is a consortium and standard for solid-state drives, but it's non-profit, unlike Nvidia's architecture or Intel's design.
I think this resembles "no one will need more than 6xxKB of memory" (talking about the upcoming Core i and Atom IGPs). They may provide a significant increase in app responsiveness!