
Why can’t GPUs have two cores on one card instead of having two cards

Tags:
  • Graphics Cards
  • GPUs
  • Crossfire
  • Graphics
Last response in Graphics & Displays
September 16, 2010 2:47:16 AM

I’ve been thinking: why do we need two video cards with ATI’s CrossFire and NVIDIA’s SLI? I mean, couldn’t the people at ATI and NVIDIA put two cores on one video card and sell that? If you want to answer a thread that I made a week ago, here’s the link; please answer in that thread only:

http://www.tomshardware.com/forum/forum2.php?config=tom...


September 16, 2010 2:57:55 AM

We have the HD 5970, HD 4870 X2, HD 4850 X2, GTX 295, etc., just to name a few cards that have two GPUs on one card.
September 16, 2010 2:59:33 AM

I believe he is asking about a multi-core GPU chip.

September 16, 2010 3:10:49 AM

ITsonic said:
I’ve been thinking: why do we need two video cards with ATI’s CrossFire and NVIDIA’s SLI? I mean, couldn’t the people at ATI and NVIDIA put two cores on one video card and sell that? If you want to answer a thread that I made a week ago, here’s the link; please answer in that thread only:

http://www.tomshardware.com/forum/forum2.php?config=tom...

First off, what do the two threads have to do with each other? And a GTX 480 has over 400 cores, and a 5970 has two GPUs, each of which has multiple "cores", so what's the point of this thread?
September 16, 2010 8:21:37 AM

I think rolli answered what ITsonic meant to ask. Obviously, he meant "cores" in the CPU, not GPU, sense.
September 16, 2010 5:12:32 PM

GPUs are massively multi-core to begin with. If by "core" he means an entire processing unit, you'd have to almost double the size of the chip, hence the two-physical-GPU solutions.
September 16, 2010 5:18:11 PM

Yes, but answer in his other thread, lol
September 17, 2010 1:32:59 AM

jaguarskx said:
I believe he is asking about a multi-core GPU chip.


Yes, I am. Thank you, that's what I'm saying.

Mousemonkey said:
First off, what do the two threads have to do with each other? And a GTX 480 has over 400 cores, and a 5970 has two GPUs, each of which has multiple "cores", so what's the point of this thread?


The other thread has nothing to do with this one. I just want someone to help me out with that problem, so I put the link in this thread so the problem might get looked at.

gamerk316 said:
GPUs are massively multi-core to begin with. If by "core" he means an entire processing unit, you'd have to almost double the size of the chip, hence the two-physical-GPU solutions.


No, no, not that type of core, but you're right: the GPU would have to almost double in size.
September 17, 2010 1:44:17 AM

So, as your other thread makes little to no sense no matter how many times one reads through it, do you want it deleted?
September 17, 2010 2:11:57 AM

The problem with this has always been the mirroring of data between the GPUs. You do not gain the full benefit of having more than one GPU in your system; you only gain the extra processing power. The only way to get around this under current APIs is to create a shared cache between the GPUs as a common pool, while leaving the rest of each GPU's memory as a dedicated cache or pool. The mirrored data is texture and frame data, and no matter what, the frame is finalized by the GPU driving the primary display in most modes. The unique data is frame data, which is what allows the increase in performance beyond conventional setups.

Believe it or not, this isn't new at all and has been around for years. 3dfx did this, and it scaled far better than anything out there today. Their dedicated cache was a small amount of memory reserved for textures, while frame data wasn't shared, allowing for a doubling of memory minus the texture pool. The later models had a unified memory setup, but a certain amount of each card's total memory was reserved for textures only, and where it was stored was dynamic rather than dedicated. Second, the frame data was unique until the frame was finalized. SLI back then was short for scan-line interleave: each line of pixels was drawn by one GPU, one doing the odd lines and the other the evens. In two-way, the workload was split evenly between both GPUs, and their clocks were synchronized, as was the completed frame data. The later VSA-100 was configurable up to 32 GPUs. This was a popular solution for high-resolution simulators for military and aviation training, where both image quality and performance were paramount.

Today the max is 4 GPUs for both Nvidia and ATI due to limitations of the API. Under DX you cannot have unique data except in the GPU cores; everything else is mirrored. This creates overhead for the bus, with the CPU forced to duplicate data so that all two, three, or four GPUs have the same data. The completed frame is then sent over the bridge to the primary GPU for display.

The way to improve on this bottleneck is to provide a faster means of mirroring data, or to create a dedicated pool of necessary data while allowing unique data to be stored outside the GPU. The limitation is that the common cache would be small and costly. It cannot be moved off the cards without a severe impact on total system performance, so the natural solution would be either to change the API or to improve the mirroring process while reducing overhead at the same time. The advantage of a local pool is already having the textures stored on the card, leaving the CPU to provide only geometry data. That is what made 3dfx fast, besides an API that was even more efficient than OpenGL: it took a less powerful GPU to render the same scene as the competition.

There is one trick they could use in modern GPUs: tile-based rendering. The reason is simple: don't bother with what is not in view. It saves on resources, letting a less powerful GPU render the same scene as the competition. Imagine playing the latest game on an IGP while the other guy has to sink $500+ into a high-end card. ATI and Nvidia made sure that tile-based rendering was dead to eliminate future competition that could endanger their market, so we're stuck with conventional rendering models for the foreseeable future. Ray tracing isn't going to improve things very much.
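The odd/even split in the old scan-line interleave scheme can be sketched in a few lines. This is only a toy illustration of the idea, not real driver code; the `render_row` function here is a made-up stand-in for the actual per-line rendering work:

```python
# Toy sketch of 3dfx-style scan-line interleave: two "GPUs" split a
# frame by alternating rows, and the primary merges the halves.
def render_row(y, width):
    # Stand-in for real rendering: each pixel just records its row index.
    return [y] * width

def render_frame_sli(width, height):
    # GPU 0 draws the even rows, GPU 1 draws the odd rows.
    gpu0 = {y: render_row(y, width) for y in range(0, height, 2)}
    gpu1 = {y: render_row(y, width) for y in range(1, height, 2)}
    # The primary GPU interleaves both halves into the final frame.
    frame = [gpu0[y] if y % 2 == 0 else gpu1[y] for y in range(height)]
    return frame, len(gpu0), len(gpu1)

frame, rows0, rows1 = render_frame_sli(4, 6)
print(rows0, rows1)  # each GPU handled 3 of the 6 rows
```

Because neighboring scan lines cost roughly the same to draw, this split balances the load between the two chips almost perfectly, which is part of why the old SLI scaled so well.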

September 18, 2010 3:12:04 AM

Mousemonkey said:
So as your other thread makes little to no sense no matter how many times one reads through it do you want it deleted?


OK, I'll break it down for you. My Dell mobo couldn't see the Seagate or Maxtor hard drives; it would see one but not the other. I called Seagate and they said I needed to update my BIOS. To make a long story short, my mom bought me a new barebones case, I plugged both of them up, the problem is still here, and I don't know what to do now. :cry:

nforce4max said:
The problem with this has always been the mirroring of data between the GPUs. You do not gain the full benefit of having more than one GPU in your system; you only gain the extra processing power. The only way to get around this under current APIs is to create a shared cache between the GPUs as a common pool, while leaving the rest of each GPU's memory as a dedicated cache or pool. The mirrored data is texture and frame data, and no matter what, the frame is finalized by the GPU driving the primary display in most modes. The unique data is frame data, which is what allows the increase in performance beyond conventional setups.

Believe it or not, this isn't new at all and has been around for years. 3dfx did this, and it scaled far better than anything out there today. Their dedicated cache was a small amount of memory reserved for textures, while frame data wasn't shared, allowing for a doubling of memory minus the texture pool. The later models had a unified memory setup, but a certain amount of each card's total memory was reserved for textures only, and where it was stored was dynamic rather than dedicated. Second, the frame data was unique until the frame was finalized. SLI back then was short for scan-line interleave: each line of pixels was drawn by one GPU, one doing the odd lines and the other the evens. In two-way, the workload was split evenly between both GPUs, and their clocks were synchronized, as was the completed frame data. The later VSA-100 was configurable up to 32 GPUs. This was a popular solution for high-resolution simulators for military and aviation training, where both image quality and performance were paramount.

Today the max is 4 GPUs for both Nvidia and ATI due to limitations of the API. Under DX you cannot have unique data except in the GPU cores; everything else is mirrored. This creates overhead for the bus, with the CPU forced to duplicate data so that all two, three, or four GPUs have the same data. The completed frame is then sent over the bridge to the primary GPU for display.

The way to improve on this bottleneck is to provide a faster means of mirroring data, or to create a dedicated pool of necessary data while allowing unique data to be stored outside the GPU. The limitation is that the common cache would be small and costly. It cannot be moved off the cards without a severe impact on total system performance, so the natural solution would be either to change the API or to improve the mirroring process while reducing overhead at the same time. The advantage of a local pool is already having the textures stored on the card, leaving the CPU to provide only geometry data. That is what made 3dfx fast, besides an API that was even more efficient than OpenGL: it took a less powerful GPU to render the same scene as the competition.

There is one trick they could use in modern GPUs: tile-based rendering. The reason is simple: don't bother with what is not in view. It saves on resources, letting a less powerful GPU render the same scene as the competition. Imagine playing the latest game on an IGP while the other guy has to sink $500+ into a high-end card. ATI and Nvidia made sure that tile-based rendering was dead to eliminate future competition that could endanger their market, so we're stuck with conventional rendering models for the foreseeable future. Ray tracing isn't going to improve things very much.


So, in other words, ATI and Nvidia pushed 3dfx out because it endangered their future market of SLI and CrossFireX. With tile-based rendering and their own API, 3dfx had an advantage. The abstraction layer in their cards saved game developers a great deal of programming effort and gained them flexibility: write the 3D rendering code once, for a single API, and the abstraction layer lets it run on hardware from multiple manufacturers. When ATI and Nvidia saw this, they were like, "Hell no, we can't have that," so they pushed 3dfx out of the GPU business to stop them from competing with them, or anyone else for that matter.
September 18, 2010 3:15:28 AM

3dfx was bought by Nvidia; it wasn't pushed out so much as swallowed up. And there is still no reason for me to have put this response in your other thread.
September 18, 2010 3:46:02 AM

At the time, of the two, ATI had the Rage Fury MAXX, which was all fail and no game, and Nvidia had nothing in the way of a dual-GPU solution, while Matrox tried several times to retake market share. 3dfx wasn't tile-based rendering; that was PowerVR's advantage. PowerVR is little known and working samples are hard to find, but they were around in the mainstream consumer market for a few years. 3dfx was the largest threat to Nvidia's market share, while PowerVR was just easy pickings. 3dfx, however, didn't have as much raw power on their side, and they were late to market with the VSA-100 along with basic features such as 32-bit color. Nvidia started to gain the upper hand after the TNT2 was replaced with the GeForce 256 DDR (the SDR version still performed well against the others). ATI overall only survived thanks to OEM marketing in volume shipments. 3dfx made one critical mistake with the purchase of STB, and the rest is history. The V5 6000 never reached production aside from some 200+ prototypes, far fewer of which are still functional. Quantum3D moved over to Nvidia and now only serves the military-industrial complex and limited commercial products. Rampage was the last project they were working on, right up until the end. Had it entered the market it would have been a very memorable card, but it did not support Glide; it was a pure OpenGL and DX card. Single, dual, and quad GPU configurations were planned, along with a "Sage" chip that was never sampled. Two main batches were produced, with the second arriving from TSMC after 3dfx had gone under.

Overall, a two-player market only yields like-performing cards.
September 22, 2010 3:08:21 AM

Mousemonkey said:
3dfx was bought by Nvidia; it wasn't pushed out so much as swallowed up. And there is still no reason for me to have put this response in your other thread.

Ok fine you can answer it on but make sure you add this is for the Seagate doesn’t see the my Maxtor hard dive tread ok

nforce4max said:
At the time, of the two, ATI had the Rage Fury MAXX, which was all fail and no game, and Nvidia had nothing in the way of a dual-GPU solution, while Matrox tried several times to retake market share. 3dfx wasn't tile-based rendering; that was PowerVR's advantage. PowerVR is little known and working samples are hard to find, but they were around in the mainstream consumer market for a few years. 3dfx was the largest threat to Nvidia's market share, while PowerVR was just easy pickings. 3dfx, however, didn't have as much raw power on their side, and they were late to market with the VSA-100 along with basic features such as 32-bit color. Nvidia started to gain the upper hand after the TNT2 was replaced with the GeForce 256 DDR (the SDR version still performed well against the others). ATI overall only survived thanks to OEM marketing in volume shipments. 3dfx made one critical mistake with the purchase of STB, and the rest is history. The V5 6000 never reached production aside from some 200+ prototypes, far fewer of which are still functional. Quantum3D moved over to Nvidia and now only serves the military-industrial complex and limited commercial products. Rampage was the last project they were working on, right up until the end. Had it entered the market it would have been a very memorable card, but it did not support Glide; it was a pure OpenGL and DX card. Single, dual, and quad GPU configurations were planned, along with a "Sage" chip that was never sampled. Two main batches were produced, with the second arriving from TSMC after 3dfx had gone under.

Overall, a two-player market only yields like-performing cards.


So why do we need two GPUs? My only guess is that they did it for the money.
September 22, 2010 3:38:22 AM

ITsonic said:
Ok fine you can answer it on but make sure you add this is for the Seagate doesn’t see the my Maxtor hard dive tread ok

That makes absolutely no sense whatsoever.
September 22, 2010 1:39:51 PM

As I have said, you only gain the benefit of the extra processor in CrossFire and SLI, and at a price. It is only there for those who wish to make the most of their first card, or to max out an existing game that wouldn't be playable on a single high-end GPU. That is the purpose of dual, tri, and quad GPU configurations.
September 26, 2010 12:54:15 AM

Mousemonkey said:
That makes absolutely no sense whatsoever.

ok fine just leave it on here and i'll look at it.

nforce4max said:
As I have said, you only gain the benefit of the extra processor in CrossFire and SLI, and at a price. It is only there for those who wish to make the most of their first card, or to max out an existing game that wouldn't be playable on a single high-end GPU. That is the purpose of dual, tri, and quad GPU configurations.


What do you mean by "at a price?"
September 26, 2010 12:55:36 AM

My GTX 460 has 336 cores.
September 26, 2010 1:28:11 AM

ITsonic said:
ok fine just leave it on here and i'll look at it.



What do you mean by "at a price?"


Price meaning such things as more CPU lag, much more power consumption and heat dissipation, and scaling that is never 100%.
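To put a rough number on the "never 100% scaling" point (the FPS figure and the 80% efficiency here are made-up illustrative values, not benchmarks of any real card):

```python
# Illustrative only: why two GPUs rarely deliver twice the performance.
single_gpu_fps = 60.0          # hypothetical single-card frame rate
scaling_efficiency = 0.8       # assumed: the 2nd GPU adds ~80% of its power
dual_gpu_fps = single_gpu_fps * (1 + scaling_efficiency)
print(dual_gpu_fps)            # 108.0 fps, not the "ideal" 120.0
```

Meanwhile the second card still draws roughly its full power and puts out its full heat, which is why the cost per extra frame goes up.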
September 27, 2010 3:29:12 AM

fatedcloud said:
My GTX 460 has 336 cores.


:o How many GPUs do you have, man?
September 27, 2010 3:32:03 AM

ITsonic said:
:o How many GPUs do you have, man?

That's just one GPU. :heink: 
September 27, 2010 3:33:46 AM

nforce4max said:
Price meaning such things as more CPU lag, much more power consumption and heat dissipation, and scaling that is never 100%.

It's because of the two cards, right? That's why most PSUs have to be CrossFireX and SLI ready.
September 27, 2010 5:43:41 PM

I can tell you everything, but without understanding it would be pointless to continue. You need to do some research on your own.
September 29, 2010 4:25:28 PM

Mousemonkey said:
That's just one GPU. :heink: 

Oh ok.

ITsonic said:
It's because of the two cards, right? That's why most PSUs have to be CrossFireX and SLI ready.


OK, I see, but I still think that GPUs should double in size a little bit.
October 2, 2010 2:39:03 AM

What do you mean, bump the thread? Anyway, I didn't put my thoughts together completely. The reason why I said GPU size should be doubled is that it might make the job easier; also, I'd like it if they split the GPU in two and put xxx cores in one half and xxx cores in the other.
October 2, 2010 3:03:09 AM

We going around in circles now?
October 2, 2010 3:37:11 AM

rolli59 said:
We going around in circles now?


No we are going around in squares silly.
October 2, 2010 3:42:43 PM

Randomacts said:
No we are going around in squares silly.


I thought that it was more like Metatron's cubes or golden spirals :whistle:
October 4, 2010 12:00:27 PM

We're not running a Metatron.
October 4, 2010 1:36:57 PM

ITsonic said:
We're not running a Metatron.


You need to take a few minutes to think through what you are trying to say before typing, and your spelling needs checking. Overall, this thread is quite convoluted, and you have not been clear about what it is you want us to answer. So far I have determined the following:

You believe that GPUs should be larger: This is determined by the pace of technology; GPUs will only grow in size if technology dictates that they should. There are billions of transistors inside a GPU, making up many cores. The more transistors and cores a GPU contains, the more complex and difficult it is to develop, and the greater the chance of failure. If you build a GPU with 5 transistors, there is a good chance they will all work perfectly right off the bat. If you build a GPU with 5 billion transistors, there is a good chance there will be a problem with at least one, which could make it completely inoperable. The ratio of operational chips to failed chips is called yield, and on new chips the yield is low because of how complex (big) GPUs are.
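The 5-transistors-versus-5-billion comparison above can be made concrete with a toy probability model. The per-transistor failure rate below is invented purely for illustration; real yield models are driven by defects per unit area, not a per-transistor probability:

```python
# Toy yield model: if each transistor independently fails with a tiny
# probability p, the chance that a whole chip works is (1 - p) ** N.
p_fail = 1e-10                              # assumed per-transistor failure rate
small_chip = (1 - p_fail) ** 5              # 5 transistors: essentially always works
big_chip = (1 - p_fail) ** 5_000_000_000    # 5 billion transistors
print(small_chip, big_chip)                 # the big chip works only ~61% of the time
```

Even with an absurdly small per-transistor failure rate, multiplying it across billions of transistors wipes out a large fraction of the chips, which is exactly why big new GPUs start out with low yields.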

You believe that SLI/Crossfire should be on one board: This exists in many different cards as mentioned (5970, 295, etc.), and it increases the cost. People don't always want to spend $600+ on a card; some only want to spend $100, and thus many different varieties of cards need to exist to suit all the different markets. I myself purchased one card and then decided at another time that I wanted to increase performance, so I bought a second. If I had bought one card and then had to replace it with a dual-GPU card like the 5970, I would have opted not to, as I had already invested in my original card and felt the extra performance would not be enough to offset the extra cost.

Spend some time, look things up on the web about GPUs, and then ask some informed questions so we are not having to answer questions that don't make sense. There are quite a few nice people out there willing to take the time and try to break it down, but it would be much easier on them if you had some background knowledge to begin with.
October 4, 2010 1:39:51 PM

This topic has been closed by Mousemonkey