multi-GPU race...is it just me who isn't impressed?

January 15, 2008 5:57:35 PM

So 2008 is starting off with a big multi-GPU race between red and green. Word…I am all for better performing video cards but I am a little disappointed and here is why.

First, I fail to recognize this backpedaling strategy of slapping GPU die shrinks into old cards as innovative technology. Granted, it IS yielding better performance…but this version 2.0, new-and-improved sort of thing seems more and more like a marketing gimmick than new technology. These are pseudo-advances: a little new, a little old. If they significantly upped the clocks and shaders AND had a die shrink…now that would be something worth calling a new generation or advanced GPU technology…but that isn't the case.

Second, the main point I am getting at is the multi-GPU solution. I get the feeling that multi-core GPU advances are being impeded and could be up and running by now if these companies weren't making a marketing pit stop at the multi-GPU idea (well, in the R&D labs they probably are up and running). Granted, yet again, they do yield performance gains…I am just not impressed with this “new and improved” marketing strategy. You thought 1 GPU was sweet…now 2!! OK, cool, so why not make all graphics cards with two GPUs now? Instead they revert to selling single-GPU cards…then the next “breakthrough” is a dual-GPU version. Is that impressive? Honestly, I don’t see it. I know that they want to market low- and mid-range cards too, but “low” and “mid” are relative terms for whatever is on the shelf at the moment.

It all reminds me of the Mach razors…now…3 blades!!! WOW. New and improved…4 blades for a closer, smoother shave… HOLY HECK BATMAN. You thought 4 blades was amazing…now 5!! Ah…OK, I see where this is headed. From a business standpoint I see why they are doing this, but is this really the best technology they have to offer, or are they holding out just to increase profits? Do you feel nVIDIA and ATI are in a marketing race and not a technology race? Don’t get me wrong, I do enjoy the performance gains with each version 2.0 that comes along, but I hesitate to buy into the technology because it seems like a trail of bread crumbs being thrown out to us, leading to something great that already exists.
January 15, 2008 6:08:00 PM

I personally think 3 blades is better than 2 :D  But 5 is just too much.

Multi-GPU is the future, as it becomes harder to increase clock speeds, perhaps. Driver support for things like SLI and CrossFire has a long way to go, though, from what it sounds like. I don't think it's a question of "is it a good idea" so much as "can they get the drivers right?"
Look at what multi-core CPUs have done for computers! A very nice improvement IMO, as Intel and AMD reached a barrier in clock speeds. The future thus most likely belongs to multi-GPU solutions as well, but at this point it is still in its infancy.

Of course it'd be nice if they could make up their minds already.... Even the GX2 doesn't sound like a very good entry into the multi-GPU arena. Bah, well, the G100 and R700 should prove a more worthy addition to the multi-GPU concept, should they both indeed launch as 2x GPU solutions.

January 15, 2008 6:45:15 PM

It all won't matter a jot when they get the Fusion/Nehalem-type CPUs working properly.
Sure, it's gonna be a while, but when they get to a point where 8/16/32 cores are on the die and they are all separately programmable, then you won't need the ultra-crushing cards which we are all wondering where they have got to.
Mactronix
January 15, 2008 7:27:11 PM

Multi-core GPUs are the future, not multi-GPU.
January 15, 2008 8:40:57 PM

That is very true, randomizer; the multi-PCB idea will go nowhere. ATI has the right idea.
January 15, 2008 9:10:46 PM

I thought the GPU was the core?
January 15, 2008 9:31:14 PM

randomizer said:
Multi-core GPUs are the future, not multi-GPU.

I agree. Perhaps even a multi-(small, separate)core single die will work for a while.
We will see if the R700/G100 can add 100% more performance with each new core, rather than a 30% gain per core using an internal PCIe bridge.
GPUs are already parallel. The problem was making them modular so the high-end GPUs don't need one super-sized chip.
The high-end can have 8 cores, mainstream 4 cores, low-end 2 cores, and integrated 1 core. Something like that.
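To put rough numbers on the difference, here is a minimal sketch (Python, treating the 100% and 30% per-core figures above as assumptions, not measurements) of how total throughput would grow if each added core contributes its full performance versus only a fraction of it over an internal PCIe bridge:

# Rough scaling sketch: relative performance vs. core count, assuming each
# added core contributes a fixed fraction of one core's throughput
# (1.0 = ideal scaling, 0.3 = bridge-limited scaling).
def relative_perf(cores, per_core_gain):
    return 1.0 + (cores - 1) * per_core_gain

for cores in (1, 2, 4, 8):
    ideal = relative_perf(cores, 1.0)     # 100% gain per extra core
    bridged = relative_perf(cores, 0.3)   # ~30% gain per extra core
    print(f"{cores} core(s): ideal {ideal:.1f}x, bridged {bridged:.1f}x")

Under those assumptions, at 8 cores that is the difference between roughly 8x and 3x the single-core performance, which is why the interconnect matters as much as the core count.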
January 15, 2008 9:40:53 PM

gamebro said:
I personally think 3 blades is better than 2 :D  But 5 is just too much.



OMG... I thought it was overkill, but 5 blades really is better! :)  For real...

But yeah, marketing is the name of the game. In the end they don’t care about pushing technology; they want to make money, and whatever makes them more money is what they will do.
January 15, 2008 9:47:26 PM

Didn't the same thing happen with CPUs? Nothing different, really! Technology has limits. A company also has to maintain an income stream, so they do what they can within the limits that constrain them. One of the things I have noticed is that with the stagnation of performance they "revert" to advances in doing the same with less power. Then the next step is better performance with less power.

People who went out and bought a 1200 W power supply, expecting they would need it given the projected path of power requirements, may now find that with the new equipment they can use a 750 W PSU and have power to spare.

The Antec P190 case comes with 2 PSUs: a 650 W for the MB/CPU etc. and a 500 W for fans etc. If you purchased that box, you could get away with using just the 650 W with a Q6600 and the new AMD/ATI 3870 X2. That is progress.
January 15, 2008 10:17:50 PM

randomizer said:
Multi-core GPUs are the future, not multi-GPU.

I just looked GPU up on Wikipedia:
Quote:
A GPU can sit on top of a video card, or it can be integrated directly into the motherboard.

This indicates that the GPU is the core of the video card. So wouldn't multi-GPU be correct?
January 15, 2008 10:21:44 PM

Is it me or is it a trend??? Quad core CPUs and dual GPUs?

Let the Multi Core Wars Begin!
January 15, 2008 11:40:21 PM

Shadow703793 said:
Is it me or is it a trend??? Quad core CPUs and dual GPUs?

Let the Multi Core Wars Begin!

I actually think it will scale from 1 to 2 to 4 to 8 or 16, adding cores faster than CPUs did.

Also expect combo chips (Fusion) that may have 2 CPU cores and 4 GPU cores on one die.
January 15, 2008 11:57:30 PM

Massively parallel processing is the fastest way to go, but not the cheapest. Multiple GPUs (CrossFire/SLI) have been put to work in extremely demanding applications (other than gaming) with much success.

As for the market race to sell a new generation of incremental improvements every 6-12 months, it's all BS.
January 16, 2008 12:00:34 AM

Well, to counter your point: if you're disappointed in multi-GPU, then I guess you're still using a single-core CPU? More GPU and CPU cores are the future, no doubt, as stated above. We won't be going back to single core until a whole new CPU is designed.
January 16, 2008 12:03:10 AM

Unless I'm playing at 2560 x 1600, I'll take a single powerful card.
January 16, 2008 1:05:07 AM

jaguarskx,

Generally I would agree, but if that single-GPU card is, say, $100 or so more than a multi... and performs on par or slower... the obvious choice would be the multi-GPU card IMO.
January 16, 2008 1:31:21 AM

I'm unimpressed in the same sense as with this multi-core CPU crap. Just give us BDT processors and be done with it: a 5000-fold performance increase without this complicated multi-core programming.
January 16, 2008 1:52:50 AM

Multi-core is the future, yes.
Multi-core, multi-die/PCB is the now, because it's easier and cheaper.

And cheap will always trump everything in any market.

Once multi-GPU (PCB or die) cards become mainstream, you'll really see the top end take off for multi-core graphics processing.
January 16, 2008 2:19:05 AM

Evilonigiri said:
So wouldn't multi-GPU be correct?


No. Multi-GPU implies 2 GPU chips. A multi-core GPU implies one chip with more than one core.

The difference is that there will still be X logical processors either way, but we think it's not truly high-tech until there is only one physical processor with X logical processors.
January 16, 2008 2:19:55 AM

If we ever make a 3-dimensional CPU or GPU, then we will be making true innovation. God knows what kind of processing power we will have available. We are supposed to reach our limit around either 2012 or 2015, I can't remember, according to some theory. It's on Wikipedia.
January 16, 2008 2:41:05 AM

randomizer said:
Multi-core GPUs are the future, not multi-GPU.


Yeah, but the step to multi-core/die is traditional multi-GPU.

Right now it's not practical to go with multi-core without laying some groundwork for parallelism; it would be hard to jump straight to multi-core, even if it were on a single package/socket.
January 16, 2008 2:54:27 AM

Can Not said:
No. Multi-GPU implies 2 GPU chips. A multi-core GPU implies one chip with more than one core.

The difference is that there will still be X logical processors either way, but we think it's not truly high-tech until there is only one physical processor with X logical processors.


But they were already like that in the past, and the future isn't 1 physical processor with multiple logical processors because that doesn't offer an improvement or solution to the die size & yield issues that are driving the move towards a more modular future.

The future is multi-die designs (which may have more than one inner group of units) that allow you to offer solutions like 1 die for one class and 2+ dies for a power class above it.

With the current rumours/theories about the R700/RV770, if it were a single-die dual-core solution, then easily produced products would be achievable as follows (fictional numbers) with only one part:

HD4800 = 2+ dual-core RV770s, either on one package or one PCB
HD4600 = 1 dual-core RV770
HD4300 = 1 crippled RV770 with only 1 functional core, achieved through binned parts unable to meet HD4600 levels.

However, in order for that future to work, both AMD and nV need to improve their ability to scale and optimize code for setups that do not share schedulers/dispatchers and the other components that make their current multi-processor single-die solutions work now.

IMO the future will also involve a return to a solution like the NV-IO, because that way you can create a single back end for any solution you choose and not have to duplicate that hardware internally for each chip.
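A big part of that modular argument is yield. As a rough illustration (my own toy numbers, not AMD's or nVidia's), a simple Poisson defect model shows why two small dies can be easier to produce than one die of twice the area, and why the salvage bin naturally becomes the lower-end SKU:

import math

# Toy yield model: probability that a die has zero defects, assuming
# defects land randomly with a fixed density (defects per mm^2).
def die_yield(area_mm2, defect_density=0.003):
    return math.exp(-defect_density * area_mm2)

big_die = 400     # hypothetical monolithic high-end die
small_die = 200   # hypothetical "modular" die; two of them make the high-end part

print(f"400 mm^2 monolithic die yield: {die_yield(big_die):.0%}")
print(f"200 mm^2 modular die yield:    {die_yield(small_die):.0%}")
# A 200 mm^2 die with one dead core can still ship as the single-core
# salvage part (the hypothetical HD4300 above) instead of being scrapped.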
January 16, 2008 3:00:07 AM

grieve said:
OMG... I thought it was overkill, but 5 blades really is better! :)  For real...


Is there even a 5-blade razor?

The Schick has 4 and my Fusion has 6, but I love it.
Now 7, THAT would be crazy!!
January 16, 2008 3:21:31 AM

jaguarskx said:
Unless I'm playing at 2560 x 1600, I'll take a single powerful card.

Yeah... my two-year-old 7900GT still runs everything just fine for me at the res I use.
January 16, 2008 9:47:40 AM

enewmen said:
I agree. Perhaps even a multi-(small, separate)core single die will work for a while.
We will see if the R700/G100 can add 100% more performance with each new core, rather than a 30% gain per core using an internal PCIe bridge.
GPUs are already parallel. The problem was making them modular so the high-end GPUs don't need one super-sized chip.
The high-end can have 8 cores, mainstream 4 cores, low-end 2 cores, and integrated 1 core. Something like that.


If you had something like HyperTransport between the cores on a multi-core GPU, and the cores were explicitly designed to work with each other, then there's hope in it, unlike our present system of CrossFire or SLI ("right, well, if you do that, I'll get on with this and we'll meet again in one frame's time"), bumbling and time-wasting all the way. You're right, GPUs are parallel, and I think the present system of getting them to work together is a bit of a 'round the houses' way of doing it. It seems they are melded together rather than being intended from the very outset of their design to work closely together.
January 16, 2008 10:32:11 AM

OMG guys!!!!
Multi-GPU is the future and you know it. I mean, ATI is down to a 55nm process, and going with a dual-GPU architecture is the most logical thing to do!! What the heck do you expect??? We have 45nm CPUs, for Christ's sake!!!! What more do you people want?? 32nm GPUs?? It just won't happen soon, so you all have to live with the dual-GPU options from ATI and NVidia.
The only problem I see, though, is that back in the days of the 7950GX2, that card had tons of issues!!!
The dilemma!!! What to buy????? Should I go with the slapstick NVidia two-PCBs-stuck-together or the true monolithic dual GPU on one PCB??
...that kinda reminds me of something...
January 16, 2008 10:52:57 AM

It seems sort of silly to go beyond two GPUs. Why, you ask? Well, as I see it, having a two-card setup is an attempt to squeeze the most from the last/present generation of hardware. Let me expand on that. The GPU itself is parallel by nature. The stream processors (as we've come to know them) are parallel. They are programmed for a purpose and fed data by the GPU's thread arbiter. Each new generation of GPU adds more stream processors; nVidia has 128 in the GTX, and the HD2900XT (and the new HD3000 series also) has 320. Of course the number of ROPs plays a significant role as well, as ATI is finding out. Adding cards/GPUs does not scale as well as adding stream processors/ROPs within a single GPU. The future lies in increasing the number of stream processors/ROPs in the GPUs themselves, not in continually adding more cards with diminishing rates of return. This is all without discussing the appalling game compatibility with either CrossFire or SLI.

So the future lies in a single GPU's increased parallelism, not adding more cards or GPUs on a card. Once you add external buses, latency becomes a significant factor, not to mention the software overhead needed to manage a multi-card or multi-GPU setup. I think that 2-card solutions will always have a niche, in that they extend the performance capabilities (bound by compatibility) of the present and last generation of GPUs. It also allows for the extension of a few elite e-penises.
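One way to see the diminishing returns techgeek is describing is a quick Amdahl-style estimate (the fractions here are hypothetical, just to show the shape of the argument): extra stream processors speed up work that is already parallel inside one GPU, while a second card only helps the portion of the frame the driver can actually split across cards.

# Amdahl-style sketch: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of frame time that benefits from the extra hardware and n is
# how much extra hardware you add.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical fractions: ~95% of frame time scales with more shader
# units inside one GPU, but only ~70% of it splits cleanly across two
# cards once driver and synchronization overhead are counted.
print(f"2x shader units in one GPU: {speedup(0.95, 2):.2f}x")
print(f"2 cards (SLI/CrossFire):    {speedup(0.70, 2):.2f}x")

Under those assumptions, doubling the hardware inside the chip gets you about 1.9x, while doubling the number of cards gets you about 1.5x, before any game-compatibility issues even enter the picture.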
January 16, 2008 12:02:40 PM

TheGreatGrapeApe said:
Is there even a 5-blade razor?

The Schick has 4 and my Fusion has 6, but I love it.
Now 7, THAT would be crazy!!


Well, technically the Fusion has 5 that you actually shave with, the trimmer thingy on the back shouldn't count towards the total IMO.
January 16, 2008 12:08:23 PM

techgeek said:
It seems sort of silly to go beyond two GPUs. Why, you ask? Well, as I see it, having a two-card setup is an attempt to squeeze the most from the last/present generation of hardware. Let me expand on that. The GPU itself is parallel by nature. The stream processors (as we've come to know them) are parallel. They are programmed for a purpose and fed data by the GPU's thread arbiter. Each new generation of GPU adds more stream processors; nVidia has 128 in the GTX, and the HD2900XT (and the new HD3000 series also) has 320. Of course the number of ROPs plays a significant role as well, as ATI is finding out. Adding cards/GPUs does not scale as well as adding stream processors/ROPs within a single GPU. The future lies in increasing the number of stream processors/ROPs in the GPUs themselves, not in continually adding more cards with diminishing rates of return. This is all without discussing the appalling game compatibility with either CrossFire or SLI.

So the future lies in a single GPU's increased parallelism, not adding more cards or GPUs on a card. Once you add external buses, latency becomes a significant factor, not to mention the software overhead needed to manage a multi-card or multi-GPU setup. I think that 2-card solutions will always have a niche, in that they extend the performance capabilities (bound by compatibility) of the present and last generation of GPUs. It also allows for the extension of a few elite e-penises.


I see where you're going with this. Personally, I have nothing against SLI/Xfire, but it is kind of a band-aid solution to the problem. I think the key point you touched on is diminishing returns. Don't we all remember the excitement over the 7950GX2 and the promise of quad-SLI? And the rather dismal benchmarks and driver difficulty? I can't imagine things will change much with current-generation tech, though I'm willing to be wrong and surprised.
January 16, 2008 12:31:18 PM

I agree with randomizer. Multi-core GPUs are the way forward. Shorter and fatter pipelines... Slapping two dies on the same PCB doesn't cut it for me, and that's because I'm also one for technology progression through thorough and evolving engineering. That's also why I don't like the VW Polo GTI: it's a Polo with a Golf 4 GTI engine... whoop-de-doo.

I'm particularly fond of incremental revision changes on a new-generation GPU; they yield a lot of insight for upcoming generations or iterations.
January 16, 2008 2:13:10 PM

I love the razor thread that's striped into the GPU thread. It's kinda like a RAID 0 thread. Mach 5 is awesome.
January 16, 2008 2:34:26 PM

cb62fcni said:
I think the key point you touched on is diminishing returns. Don't we all remember the excitement over the 7950GX2 and the promise of quad-SLI? And the rather dismal benchmarks and driver difficulty?


Yeah, that is the general point. Dismal performance gains when something better could be available if there were a real technology race instead of a marketing race. But yeah…that’s business, especially when there are only two competing companies and one has such a strong grasp on the market at the moment.

I didn't bring up xFire and SLI because that is an entirely different can of worms. Quad-SLI...yeah, a good idea in theory, but look how successful that ended up. Will the performance gains with tri-SLI be worth it? Will 3 blades be better than two? Haha, I love the razor discussion, btw. I actually use an electric, so I lost track of how many blades and what pulsing frequency they are up to these days!

There seems to be a hierarchy of efficiency and gains...first multi-core, second multi-GPU, and last, dual card (SLI/xFire). It is well known from CPU technology that stacking the cores is more efficient than having them side by side, so there is no doubt that the GPU caravan is headed down the same road as the CPU. It seems like engineers could cut out the middle man (multi-GPU) and go straight to developing multi-core and SLI/xFire.

But yeah, point taken…baby steps…they can’t just jump out with a quad-core GPU tomorrow on the current architecture.




January 16, 2008 3:50:10 PM

Interesting and thought-provoking post, SpinachEater. I think there's already resistance to multiple GPUs and SLI/CrossFire. Most people on this forum, and most people I know, only have one GPU and prefer that over a mere 30% gain through SLI/Xfire.

That being said, I think Red and Green can offer better technology and still make more money. They just charge more for the new technology as they always have. So in a sense, I disagree that marketing and technology can't mix. They're both interested in better technology to get a leg up on their competition. If there was a better solution out there, believe me, they would be touting it and at a higher price.

January 16, 2008 3:56:06 PM

randomizer said:
Multi-core GPUs are the future, not multi-GPU.


Doesn't the PS3 even have an 8-core GPU/CPU? That's what I heard; I think it will be the future too.
January 16, 2008 4:02:13 PM

TheGreatGrapeApe said:
Is there even a 5-blade razor?

The Schick has 4 and my Fusion has 6, but I love it.
Now 7, THAT would be crazy!!


LoL, they will have it sometime in the future where all you have to do to shave your entire cheek is put the razor on your cheek, move it down 1/2", and you're all done.
January 16, 2008 4:14:44 PM

I think there definitely needs to be a dramatic shift towards the 'shorter, fatter' pipeline on a single GPU. The Japanese are planning on having a 33-megapixel broadcast _standard_ in place by 2015. That's a lot of pixels to process, and the Joe Schmoe consumer isn't going to spend money on multiple cards to do it. They'll expect it done on a single card.
January 16, 2008 4:17:43 PM

bildo123 said:
LoL, they will have it sometime in the future where all you have to do to shave your entire cheek is put the razor on your cheek, move it down 1/2", and you're all done.


Lasers will automatically target individual hairs and blast them off your face. :na: 
January 16, 2008 4:32:21 PM

It doesn't make any sense whatsoever to put 2 GPU cores on the same die. Maybe going with an MCM-type deal I could see for yield/binning/price reasons, but there's no reason to ever put 2 GPUs on one physical die.
January 16, 2008 4:42:19 PM

I believe it will be more logical when they return to the days of discrete 3D accelerator boards. That way either onboard or add-in 2D boards will handle the displays, and then additional boards, which can be SLI'd/CrossFired, can be added with less redundant technology and less power consumption. Having separate R&D for the "display adaptors" and "3D accelerators" should make things cheaper in the long run, especially since 2D should plateau once again with digital displays being the norm.
January 16, 2008 4:49:15 PM

BTW, the whole idea of calling for a "multi-core" GPU doesn't make a heck of a lot of sense, as it would involve plenty of redundant circuits and require about as much engineering as simply making the same GPU more parallel. GPUs are already massively parallel and will continue down that route. It is more efficient and cheaper to work within a single core and simply add more ROPs, shaders, etc. You can already see the difference within a single generation, i.e. the 8600 vs the 8800.
January 16, 2008 6:16:07 PM

pongrules said:
If there was a better solution out there, believe me, they would be touting it and at a higher price.


I feel that would be the case for ATI, but not with nVidia. This goes back to the fact that nVidia has held the tip-top GPU torch for over a year now. From a business standpoint...if you hold the top product, there is no point in throwing something new and competitive out on the market if your competition can't answer.

I guess the alternative, like you say, is to put it out to market anyway and inflate the prices.

I am all for xFire and SLI...I am rooting for the software engineers to snap that puppy into place and kick out some wowing efficiencies. In the meantime, I will skip buying the tri-SLI package until it comes with complimentary KY lube.

January 16, 2008 6:56:35 PM

bildo123 said:
Doesn't the PS3 even have an 8-core GPU/CPU? That's what I heard; I think it will be the future too.


You're thinking of the Cell processor, which has 7 SPEs and a "main" core to manage things. The actual graphics processor is an Nvidia G71.

I don't actually know what type of processing graphics generation is, but the Cell is specialized to single precision floating point calculations. I think graphics is double precision.

With recent process shrinks I would expect to see dual-core-style GPUs instead of this space-hogging dual-GPU business. It should be a lot cheaper if they ever get around to it.
January 16, 2008 9:36:08 PM

bildo123 said:
Doesn't the PS3 even have an 8-core GPU/CPU? That's what I heard; I think it will be the future too.

I heard this too. I also heard the Cell CPU is dog-slow, even slower than a Pentium 4.
The Cell should be more scalable, but I haven't seen this (or any multi-Cell computer).
January 17, 2008 5:12:54 PM

enewmen said:
I heard this too. I also heard the Cell CPU is dog-slow, even slower than a Pentium 4.
The Cell should be more scalable, but I haven't seen this (or any multi-Cell computer).


It's not slow; it's just basically a big FPU and lacks branch prediction. This makes it great at cranking out heavy math, but not so much at common PC-type applications.
January 17, 2008 5:16:14 PM

SpinachEater said:
I feel that would be the case for ATI, but not with nVidia. This goes back to the fact that nVidia has held the tip-top GPU torch for over a year now. From a business standpoint...if you hold the top product, there is no point in throwing something new and competitive out on the market if your competition can't answer.

I guess the alternative, like you say, is to put it out to market anyway and inflate the prices.

I am all for xFire and SLI...I am rooting for the software engineers to snap that puppy into place and kick out some wowing efficiencies. In the meantime, I will skip buying the tri-SLI package until it comes with complimentary KY lube.


Well, the problems inherent in an SLI/xfire approach are pretty big. First, you're doubling power consumption without coming close to doubling performance. Why doesn't the performance double? Overhead, the same reason quad-SLI was effectively crippled from the get-go. I don't know, I just don't see a short-term solution, aside from throwing more transistors at the problem.
January 17, 2008 5:27:31 PM

Further driver development, and developer support to a much higher degree. SLI scaling in Crysis needs to reach something like 80% plus before it could be thought of as an all-round good, workable upgrade path. Why replace your old GPU when you can buy another one and gain nearly 80% across the board? Unfortunately no one can say that right now: SLI scaling in Crysis is sub-50% at best. Why is this so important, though? (I'm saying it is, that's why, lol.) Because it needs to be good in the one game that needs it most. I'm rather disappointed that neither ATI nor nVidia have really cracked this so far. I've recently changed my mind, and now I think the 3870 X2, fully fledged tri-SLI, and quad CrossFire won't scale very well or live up to expectations. :( 
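For concreteness, here is the arithmetic behind that 80% figure, using a made-up 25 fps single-card baseline in Crysis: sub-50% scaling barely moves the needle, while 80%+ scaling would make a second card a genuinely attractive upgrade path.

# What a second card buys you at different scaling efficiencies,
# starting from a hypothetical 25 fps single-card baseline.
base_fps = 25.0
for scaling in (0.45, 0.60, 0.80, 1.00):
    fps = base_fps * (1.0 + scaling)
    print(f"{scaling:.0%} scaling: {base_fps:.0f} fps -> {fps:.1f} fps")

At 45% scaling you go from 25 to about 36 fps; at 80% you reach 45 fps, which is the kind of jump that justifies buying the same GPU twice.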
January 17, 2008 7:18:56 PM

Cell processors are actually superior; they are SERVER CPUs (RISC). At the least, DOUBLE the speed of any x86 CPU on the market.
http://en.wikipedia.org/wiki/Cell_microprocessor

The problem is, companies like x86 code, not RISC. And, of course, IBM first shipping the cheapo x86 in place of the RISC CPU they were considering changed the planet.

BTW, lol, I love the stupid comment that multi-GPU is useless. So what are CrossFire and SLI?
January 17, 2008 7:46:31 PM

techgeek said:
Of course the number of ROPs plays a significant role as well, as ATI is finding out. Adding cards/GPUs does not scale as well as adding stream processors/ROPs within a single GPU. The future lies in increasing the number of stream processors/ROPs in the GPUs themselves, not in continually adding more cards with diminishing rates of return. This is all without discussing the appalling game compatibility with either CrossFire or SLI.


Actually, the future is less ROP-bound, because the move is away from dedicated AA hardware in the ROPs towards shader-based AA, and thus the ROPs would satisfy themselves with primarily output-based work. That is why I suspect the future is an NV-IO-style daughter chip holding the ROPs, TMDS, RAMDACs, VIVO, etc., and then the Lego-like addition of cores to allow for a more modular design. Check the scaling and performance of the G8x when it can't use its dedicated ROP hardware, and then see what you find for the scaling of both.

Quote:
So the future lies in a single GPU's increased parallelism, not adding more cards or GPUs on a card. Once you add external buses, latency becomes a significant factor, not to mention the software overhead needed to manage a multi-card or multi-GPU setup.


You're missing the direction they're already talking about and already went through in the CPU market: multiple GPUs on a single package (i.e. socket), where there is little/no latency and driver development is easier, since there is no need for card-to-card communication with the information copied into each card's VRAM; they share the same VRAM and only need to cross-communicate buffers. This also saves you the added cost of extra VRAM. Right now even the X2/Gemini cards have 2x the VRAM they can actually use, because they need a copy for each VPU.

Wire traces are still an issue, though, as the processes shrink, and by 32nm you're going to be talking about extremely small wires.

IMO the future of the multi-core/die approach depends a lot on their success in modularizing the process, because if it isn't cheaper and easier to scale using a single base unit, then there won't be much advantage over a more traditional higher-transistor-count single-die solution.
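The VRAM point is easy to put numbers on. With today's AFR-style dual-GPU cards each VPU keeps its own copy of textures and buffers, so the usable memory is roughly half of what is soldered onto the board; a shared-memory package would avoid that duplication. A sketch with a hypothetical 2 x 512 MB card:

# Memory duplication sketch for a dual-GPU card.
per_gpu_vram_mb = 512
num_gpus = 2

installed = per_gpu_vram_mb * num_gpus   # what is on the board (and the price tag)
usable_duplicated = per_gpu_vram_mb      # each GPU mirrors the same working set
usable_shared = installed                # one shared pool, no mirroring

print(f"Installed VRAM         : {installed} MB")
print(f"Usable with copies     : {usable_duplicated} MB")
print(f"Usable with shared pool: {usable_shared} MB")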
January 17, 2008 7:52:50 PM

cb62fcni said:
Well, technically the Fusion has 5 that you actually shave with, the trimmer thingy on the back shouldn't count towards the total IMO.


Yes, I know, I was making fun, but really I LOVE the 6th blade, because those of us with trimmed beards, goatees or short sideburns prefer the precision of that back blade. I use both sides of my Fusion every time I use it. And like I said, I LUVs it, pure genius from Gillette, and it replaces my Mach3, which replaced my Sensor Excel..... all the way back to my first single-blade BIC. Also, I find the Mach3 lasted longer before needing replacement than the Sensor, and I find the Fusion needs replacement even less often because the blades share the workload and last longer.

Speaking of which, I used a single-blade BIC over the Xmas holidays... and yeah, 5-6 blades IS better than 1: no razor burn, which is great for this ski-hill wind-burnt face!

Anywhoo, maybe we should propose adding a 'GPU/VPU - RAZOR' subsection. :sol: 