ASUS Creates Upgradeable Graphics Cards
Taipei (Taiwan) - The vision of upgradeable graphics cards goes back to the late 1990s, when Micron Technology was experimenting with removable sockets. In 2006, both MSI and Gigabyte showcased upgradeable graphics cards, but their concepts, which were based on GeForce Go MXM boards, never took off. Earlier this year, Asus introduced a single board with three MXM slots for ATI Mobility Radeon 3850 or 3870 cards (upgradeable with future parts), and has now unveiled its single-MXM product.

Called Splendid HD 3850M, this card doesn’t look like anything special until you remove the dual-slot cooler. What you see then is an MXM card with an RV670 chip and 512 MB of memory, attached to a PCB that contains the Splendid HD video processor. The processor features 12-bit gamma correction, 7-region color enhancement and a dynamic contrast engine.
The graphics chip is clocked at 668 MHz, while the 512 MB of GDDR3 memory operates at 828 MHz DDR (1.65 GT/s). According to Asus, this MXM card will score around 600 3DMarks (3DMark06) more than ATI’s own reference design. But what really sets it apart is that the product is significantly shorter than ATI’s Radeon 3850 or 3870 reference designs.
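As a quick sanity check on the numbers above, the effective transfer rate and peak memory bandwidth follow directly from the base clock. Note the 256-bit bus width below is our assumption (the RV670 reference width), not a figure from the article:

```python
# Sanity check on the memory specs quoted above: effective transfer
# rate and peak bandwidth from the base clock. The 256-bit bus width
# is an assumption (RV670 reference width), not from the article.

clock_mhz = 828          # base GDDR3 memory clock (from the article)
transfers_per_clock = 2  # DDR signaling: two transfers per clock
bus_width_bits = 256     # assumed RV670 memory interface width

effective_gt_s = clock_mhz * transfers_per_clock / 1000
bandwidth_gb_s = effective_gt_s * bus_width_bits / 8

print(f"{effective_gt_s:.3f} GT/s")  # 1.656 GT/s, rounded to 1.65 in the text
print(f"{bandwidth_gb_s:.1f} GB/s")  # 53.0 GB/s peak
```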

The Asus Trinity card has three MXM slots. The company is currently selling the card with three modules based on the Radeon HD 3850.
Thanks to the modular design, you will be able to upgrade to upcoming MXM modules, including ones based on ATI’s RV770 and RV870 chips (Radeon HD 4800 and 5800 series). Interestingly, there should be no issue putting an Nvidia GPU-based MXM module on this card, since there is no logic preventing it.
With this design, you can imagine a future where users upgrade their graphics experience simply by buying a small module. If you only had to buy the GPU and memory, this approach could actually save money, since you would not need to buy a complete card over and over again.
This new line of products appears to be much more than an engineering exercise. We hope to see future designs incorporating HDMI-in on graphics cards too, just like on the much anticipated professional sound card, the Xonar AV1.
Asus is now on track to do something new, something that could put the company clearly ahead of the competition.
Anything that would make graphics cards cheaper gets an A+ in my book.
We will have to wait and see if this product takes off or not.
I like that it is small. I have two 8800 GTX cards and they take up so much room. My wife has an 8800 GTS 512 and it takes up even more room.
That's the stupidest thing I've ever heard. An MXM module IS a complete card! THIS WILL NOT SAVE THE END USER MONEY!
You won't be able to. According to news reports ATI will make ~10 units. It's like asking when you can buy a Ferrari Enzo FXX, never going to happen, unless you have 'connections', or are on a very short hand-picked list.
15+ years of usage, and my first ASUS board is still in working condition.
Ever since then, every mobo and graphics card I've bought has been from ASUS, with almost zero problems. Almost.
Still, ASUS is an A+ brand, a strong brand which I will recommend without embarrassment.
The gamer wants mass processing power. The artist wants lots of memory. The game developer wants both. So we don't need 3 cards; we need 1 card with options.
I'd love to see this take off, although the initial outlay for the upgradeable card will probably run quite a bit.
http://www.tomshardware.com/forum/248601-33-cheaper-crossfire-build
It basically addressed what ASUS is doing now, and wondered why it wasn't being done.
Laptop graphics are usually several months behind and more expensive.
That's just the thing... a GPU doesn't do anything but interface with GDDR and ship information off to the motherboard, to my understanding... so technically they could install something to scale power input per installed GPU/memory, and just put an inordinate number of lanes from the memory to the GPU... (ever notice there are more memory chips the wider the memory interface, for the same amount of memory?) Basically making it upgradeable without worrying about bus speeds... just increase the lanes from GDDR to GPU and there's no problem, provided it's still on PCIe 1.0 or 2.0... As for smaller 'processes' and 'socket compatibility'... GPUs don't really have the same problems that CPUs do... they're made for one specific purpose, and they do it well... Something like this will only not be seen on shelves because it would provide less instant profit, and alienate even more mainstream consumers from upgrading their graphics card... because then they'd have to know how to apply thermal paste and a heat sink. Oh no! Haha. :-p
Why is it that I can pull out my 65 W CPU and drop in a 125 W CPU with no problems? Why can't that work on another board that functions more independently than any other piece of the PC? Oh yeah... that's because you don't realize that... hmm, the PCIe interface can carry up to 75 watts... so if the card consumes 125 watts, where does the rest come from? Oh yeah, the secondary power input on the back of your card... GPUs can scale power just as well as CPUs, if not better. Argh... people keep forgetting that, depending on how you look at it, a GPU has a ridiculous advantage over a CPU... if they weren't apples to oranges, I'd say GPUs win out... But what the hell do I know?
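The power math in this comment checks out: a PCIe x16 slot supplies up to 75 W, and auxiliary connectors make up the rest. A small sketch of the budget, using the standard PCIe connector limits (75 W for 6-pin, 150 W for 8-pin, which are spec values rather than figures from the article):

```python
# Graphics card power budget, per the PCIe specifications.
SLOT_W = 75        # PCIe x16 slot limit
SIX_PIN_W = 75     # 6-pin auxiliary connector
EIGHT_PIN_W = 150  # 8-pin auxiliary connector

def board_power_limit(six_pin=0, eight_pin=0):
    """Total power available to a card: slot plus auxiliary connectors."""
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_limit())            # slot only: 75 W
print(board_power_limit(six_pin=1))   # 150 W, plenty for a 125 W GPU
```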
It would be even nicer if they managed to make memory sockets so the memory could be upgraded or more could be added at a later date.
I think its all about money, the gfx card manufacturers make money on every bit of the card, the board, the memory, the connectors, the power transforming circuitry, and the GPU. Its in their interest to sell the whole enchilada over and over again.
It's because the memory modules that motherboards use have a 64-bit bus width: the memory controller addresses each stick as a bank, instead of each memory chip individually. Going back to a 64-bit bus width would have a massive effect on 3D performance. However, I understand your point, and I'm sure this could be implemented (with an increase in access time, because of the increase in trace lengths) using a different module design. Besides, memory prices for DDR2 have plummeted, and are so commoditized that the decrease in part price would be negligible compared to the R&D and production cost of the PCBs (number/length of traces, plus the additional socket).
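The bandwidth penalty this comment describes is easy to quantify. A rough sketch, reusing the 828 MHz GDDR3 clock quoted for the Splendid HD 3850M (the 256-bit figure is our assumption, based on the RV670 reference width):

```python
def peak_bandwidth_gb_s(clock_mhz, bus_width_bits, transfers_per_clock=2):
    """Peak bandwidth: clock * transfers per clock * bytes per transfer."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# Soldered chips wired in parallel vs. a single DIMM-style 64-bit module.
soldered = peak_bandwidth_gb_s(828, 256)  # assumed 256-bit RV670 interface
socketed = peak_bandwidth_gb_s(828, 64)   # one 64-bit memory module

print(f"soldered 256-bit: {soldered:.1f} GB/s")  # 53.0 GB/s
print(f"socketed  64-bit: {socketed:.1f} GB/s")  # 13.2 GB/s, a 4x cut
```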