"Advanced Micro Devices' ATI graphics chip unit doesn't want to build "huge" chips like rival Nvidia, an executive says.
But an Nvidia exec says smaller isn't always better or more efficient.
Such statements will help define how the two chip giants do battle at the high end of the graphics chip market in the coming years.
Nvidia's upcoming high-end GTX 280 will be one of the largest graphics chips yet. It's the kind of chip that high-end gaming enthusiasts crave. But great performance often means a large transistor count, and the GTX 280 is expected to deliver both.
An Asus board using the AMD-ATI 3870 X2, which will be superseded by the new X2 board. (Credit: Asus)

AMD, of course, also intends to deliver extreme graphics technology with its upcoming X2, a follow-on to the current 3870 X2 series. And AMD wants to be clear: its strategy is fundamentally different from Nvidia's.
"We took two chips and put it on one board (X2). By doing that we have a smaller chip that is much more power efficient," said Matt Skynner, vice president of marketing for the graphics products group at AMD.
"We believe this is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market," he said. "Scaling that large chip down into the performance segment doesn't make sense--because of the power and because of the size."
Skynner said that AMD tries to design GPUs (graphics processing units) for the mainstream segment of the market, then ratchets up performance by adding GPUs rather than designing one large, very-high-performance chip.
Nvidia's "strategy is to design for the highest performance at all cost. And we believe designing for the sweet spot and then leveraging for the extreme enthusiast market with multiple GPUs is the preferred approach," Skynner said.
This applies to memory too. AMD thinks support for technologies like GDDR5 memory is another way to deliver good performance at a reasonable cost. "You don't need a huge chip with a huge data path to get the bandwidth. You can utilize a technology like GDDR5 to get that bandwidth," Skynner said.
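Skynner's bandwidth point is simple arithmetic: peak memory bandwidth is bus width times per-pin transfer rate. A rough sketch follows; the bus widths and data rates are illustrative round numbers, not confirmed specifications for either company's products.

```python
def peak_bandwidth_gbs(bus_width_bits, transfer_gbps_per_pin):
    """Peak memory bandwidth in GB/s: width (bits) x rate (Gb/s per pin) / 8."""
    return bus_width_bits * transfer_gbps_per_pin / 8

# A wide 512-bit bus with slower GDDR3-class memory (illustrative figures)...
wide_bus = peak_bandwidth_gbs(512, 2.2)     # roughly 140 GB/s
# ...versus a narrower 256-bit bus with faster GDDR5-class memory.
narrow_bus = peak_bandwidth_gbs(256, 3.6)   # roughly 115 GB/s
```

Even with half the bus width, the faster per-pin rate of GDDR5 keeps total bandwidth in the same ballpark, which is Skynner's argument for avoiding the huge, expensive data path.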
Nvidia tends to favor very fast, single-chip solutions.
Nvidia, of course, has a different take on why it chooses to develop big, fast chips.
"If you take two chips and put them together, you then have to add a bridge chip that allows the two chips to talk to each other...And you can't gang the memory together," said Ujesh Desai, general manager for GeForce products at Nvidia.
"So when you add it all up, you now have the power of two GPUs, the power of the bridge chip, and the power that all of that additional memory consumes. That's why it's too simplistic of an argument to say that two smaller chips is always more efficient."
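Desai's accounting can be sketched as simple arithmetic. All wattage figures below are invented purely for illustration; neither company has published component-level power numbers here.

```python
# Hypothetical wattages, invented for illustration only.
single_large_gpu = 180  # one monolithic high-end GPU

small_gpu = 110         # one of the two smaller GPUs
bridge_chip = 10        # bridge chip letting the two GPUs talk to each other
extra_memory = 15       # duplicated frame buffer, since the memory can't be ganged

dual_gpu_board = 2 * small_gpu + bridge_chip + extra_memory  # 245 W total

# With these made-up numbers, the dual-GPU board draws more power than the
# single large chip: Desai's point that two smaller chips are not
# automatically more efficient.
```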
Desai takes this argument a bit further. "They don't have the money to invest in high-end GPUs anymore. At the high end, there is no prize for second place. If you're going to invest a half-billion dollars--which is what it takes to develop a new enthusiast-level GPU--you have to know you're going to win. You either do it to win, or you don't invest the money."
(Note: Nvidia does offer GeForce 9800 GX2 technology, but the GX2 uses a dual-board design--two 9800 chips, one on each board--rather than putting two chips on a single board as with AMD's Radeon HD 3870 X2.)
Exactly. If that happens, Nvidia is in for a surprise, and us a treat. Not only that, but I'll go a step further: Intel will eventually get into this market, as it shows higher potential than CPU growth, at least until we can find a way to run multithreaded code on a 32-core CPU. Anyway, Intel isn't going for the huge monolithic chip either, and since that's where the vast majority of the money is made, Nvidia would only be doing itself a good service by investing in this direction as well as it can before Intel's solutions arrive. If shared memory can be made to work, and it's needed every bit as much as multithreading at this point, we may see a different approach from Nvidia in the future.
Of course IGP solutions share memory; however, until a technology like Fusion is introduced, access to that memory on current architectures is painfully slow (and uselessly so for the medium to high end).
That said, the architecture of an IGP can obviously cope with contention for memory (from the CPU), and it can cope with ring-fencing off memory for its own use (and vice versa for the CPU).
Indeed, all of the issues regarding shared memory (that I'm aware of, anyway) are already solved by an IGP.
Strange, then, that Nvidia claims sharing the memory is a problem... sure, it will add a bit of complexity to the GPU memory controller, maybe even compromising usable bandwidth with extra address traffic - but if there are two coherent links from each GPU core to each "stick" of RAM, then you're doubling the total bandwidth (of the communication between memory controller and memory) by accessing shared memory.