ATI Hyperthreading rumour

ecar016

Distinguished
Mar 3, 2002
144
0
18,680
Has anyone heard the rumour that ATI has a cross-patent agreement with Intel? Intel is to use some of ATI's graphics technology to improve its integrated graphics performance and retain market share when ATI's chipsets ship in volume, and ATI is to receive a derivative of Intel's hyperthreading technology. The idea is for ATI to release a new add-in card that supports multiple displays by putting a hyperthreading GPU on one card.

I've not seen any online info to substantiate this rumour...
Any thoughts?

EC


Quantum Computers! - very interesting
 

sargeduck

Distinguished
Aug 27, 2002
407
0
18,780
Haven't heard anything like that...
If this is true, it could be good or bad.
Good because ATI gets more resources/technology to make killer graphics cards/motherboards. Bad because it means we'll be seeing more companies snuggling up close. Case in point: Nvidia and AMD are getting mighty close together. If ATI and Intel follow suit, this might not be good for consumers.

*News Flash!* ATI has released their new card, which is 10x better than Nvidia's offering. Unfortunately, you have to use it on an Intel motherboard that costs 5x more than an AMD mb.
Granted, this is a worst-case scenario, but with Nvidia snuggling up to AMD, and Nvidia getting into bed with EA, I don't know...

No matter where you go, there you are.
 

Col_Kiwi

Distinguished
Aug 8, 2002
429
0
18,780
I don't see how hyperthreading or anything that is a part of it could possibly be useful in graphics chipsets.

Hyperthreading lets the OS assign two threads (threads are a part of the OS and don't exist at all in GPU BIOS/setup, afaik) to the CPU at once; the CPU jumps back and forth between them, leaving one as it hits a memory wait state and working on the other for a bit, which increases efficiency. Traditional CPUs only handled a single thread at a time.
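A rough toy model of that juggling, in C++ (the op counts and the 3-cycle memory wait are made-up numbers, purely to show the effect, not real P4 behaviour):

#include <cstdio>
#include <vector>

// One instruction stream: ops left to issue, and cycles still spent waiting on memory.
struct Stream { int ops; int stall; };

// Run the streams on a single execution resource: each cycle, issue one op from
// the first stream that isn't waiting on memory, then advance the wait counters.
long run(std::vector<Stream> streams) {
    long cycles = 0;
    bool busy = true;
    while (busy) {
        ++cycles;
        for (Stream& s : streams)
            if (s.stall == 0 && s.ops > 0) { --s.ops; s.stall = 3; break; }
        busy = false;
        for (Stream& s : streams) {
            if (s.stall > 0) --s.stall;
            if (s.ops > 0 || s.stall > 0) busy = true;
        }
    }
    return cycles;
}

int main() {
    // Traditional CPU: the two threads take turns owning the whole core.
    long serial = run({{100, 0}}) + run({{100, 0}});
    // Hyperthreading-style: both threads share the core, and one fills the
    // memory-wait bubbles the other leaves behind.
    long shared = run({{100, 0}, {100, 0}});
    printf("one thread at a time: %ld cycles\n", serial);
    printf("two threads sharing:  %ld cycles\n", shared);
    return 0;
}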

As for multiple displays, that's not really related.. current graphics cards can do that just fine; the single GPU is plenty adequate, they just fit two RAMDACs.

-Col.Kiwi
 

Gastrian

Distinguished
May 26, 2003
169
0
18,680
Didn't ATI use a dual GPU before, in the RageMaxx card, and didn't the performance increase turn out not to be worth the extra money ATI spent on the second GPU?
 

Col_Kiwi

Distinguished
Aug 8, 2002
429
0
18,780
Yes and yes :smile:

Are you trying to draw a parallel to this somehow though? I'm not sure if you are and don't see the connection if there is one.

-Col.Kiwi
 

Gastrian

Distinguished
May 26, 2003
169
0
18,680
The extra threads in HyperThreading were meant to simulate a dual-processor setup. If you look at the POST screen on many MBs it shows two Intel processors.

If a HyperThreading-type setup were applied to a GPU, it would only try to act like two GPUs, and ATI has already learned that this does not work.
 

Col_Kiwi

Distinguished
Aug 8, 2002
429
0
18,780
True, but hyperthreading and multithreading work in different ways. However, you've got a good point about them offering a similar advantage.

-Col.Kiwi
 

eden

Champion
GPUs are probably the most active processing units out there. There are few bubble moments in the pipeline; games stream data like there's no tomorrow. CPUs have tons of things to do and choke.

HT would not even help, as GPUs are usually already running at max. Although their real efficiency is questionable, and I still think that's driver-related.

--
Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you! - http://www.lochel.com/THGC/album.html
 

coolsquirtle

Distinguished
Jan 4, 2003
2,717
0
20,780
ATi with hyperthreading... all I can say is ROFL :D

What's next? nVidia with HyperTransport?

The fanATics here are so blinded they actually believe this can happen. All I can say is ROFL.

And did you guys know, IBM is going to team up with nVidia and the new Deep Blue is going to have an nVidia GeForce FX 10000 Ultra.

Proud Owner of the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
 
Just think of it like this...

If it weren't for them, would you REALLY need to post on this forum? LOL!!!

Ok, so you have to put your "2 cents" in, but its value is only "A penny's worth". Who gets that extra penny?
 

davepermen

Distinguished
Sep 7, 2002
386
0
18,780
actually, for all those crying out against hyperthreading in here, it _could_ get rather useful!
remember that ps3.0 and vs3.0 are about equal in technology/featureset?
well, it would be great if you could share the resources of both, no? transistor resources, that is. and to do so, you would need some sort of hyperthreading. not that you would call it that, but you'd need it.
we'll see..
but yes, i don't want to see intel and ati get too close myself either. just as i don't actually want an nforce3 to run an athlon64, but what are my choices?

"take a look around" - limp bizkit

www.google.com
 

endyen

Splendid
On the other hand, if ATI could do for Intel chipsets what Nvidia did for AMD, who would bitch? Unless, of course, their video cards did a header into the john like Nvidia's did.
 

eden

Champion
The question is just how complex the pipelines are, how deep they are, how many there are, and how many are used on average per clock.

I am betting modern GPUs are able to keep them maxed out 90% of the time, making HT rather pointless.

Pentium 4s utilize at most about 2.5 of their 6 execution units per clock, and Athlon XPs churn through maybe 4 of their 9, which is rather LOW. HT would do its best on K7/K8 any day.
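(Quick arithmetic on those numbers, just to make the gap explicit: 2.5 of 6 is roughly 42% busy, and 4 of 9 is roughly 44%, so more than half the CPU's execution resources sit idle on an average clock; that's the slack a second thread can fill. A GPU keeping its units ~90% busy leaves almost nothing to share.)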

--
Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you! - http://www.lochel.com/THGC/album.html
 

davepermen

Distinguished
Sep 7, 2002
386
0
18,780
current gpu's are never 90% if you think about one thing:
they are either:
bandwidth limited (agp)
transform limited (vertex shaders)
fillrate limited (pixel shaders)

and most today are fillrate limited (dx9 stuff, i mean), a.k.a. the pixel shaders are the bottleneck.

in this case, an imaginary r300ht could just schedule more of its shader units to the pixel processor => we would get more pixel shading capability. so it could simply balance out what needs more resources, pixel or vertex shading, and allocate accordingly.

this is a dream of mine, because if it became real, it would mean i could use all of, say, 8 vs + 8 ps units to do ps only, as that's all i need for raytracing. the vertex processor would go into "idle mode", while the pixel processor grabs all 16 shading units for itself to do pixel shading.

such a gpu could speed up to 200% of a non-ht version. that's the theoretical max, of course, and about unreachable in real-world situations, as other limits will [-peep-] the performance up.

but this should give an idea.
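a very rough sketch of what that balancing could look like, in c++ (the "r300ht" and all the numbers here are imaginary, it's just to show the idea of splitting one pool of shader units by demand):

#include <cstdio>
#include <algorithm>

struct FrameLoad { long vertex_work; long pixel_work; };   // arbitrary work units

// Split `total_units` in proportion to the queued work, keeping at least
// one unit on any side that has work at all.
void schedule(const FrameLoad& f, int total_units, int& vs_units, int& ps_units) {
    long total = f.vertex_work + f.pixel_work;
    if (total == 0) { vs_units = ps_units = 0; return; }
    vs_units = static_cast<int>(total_units * f.vertex_work / total);
    if (f.vertex_work > 0) vs_units = std::max(vs_units, 1);
    if (f.pixel_work > 0)  vs_units = std::min(vs_units, total_units - 1);
    ps_units = total_units - vs_units;
}

int main() {
    const int units = 16;               // e.g. 8 "vs" + 8 "ps" worth of hardware
    int vs, ps;

    FrameLoad typical{2000, 6000};      // fillrate-limited dx9-style frame
    schedule(typical, units, vs, ps);
    printf("typical frame: %d units on vertices, %d on pixels\n", vs, ps);

    FrameLoad raytrace{0, 8000};        // pure pixel-shader workload
    schedule(raytrace, units, vs, ps);
    printf("raytracing:    %d units on vertices, %d on pixels\n", vs, ps);
    return 0;
}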

and yes, the vertex shaders of today's hw are there and feel rather unused compared to the pixel shaders. and as vs3.0 and ps3.0 are technically the same, it _could_ get done.. who knows?

and it would help to finally move away from rasterizers, on to raytracers, which would run on such a system at "200% of the rasterizer speed". or so. at least it sounds great for marketing purposes :D

"take a look around" - limp bizkit

www.google.com
 

Col_Kiwi

Distinguished
Aug 8, 2002
429
0
18,780
in this case, an imaginary r300ht could just schedule more of its shader units to the pixel processor => we would get more pixel shading capability. so it could simply balance out what needs more resources, pixel or vertex shading, and allocate accordingly.

Interesting thought, but although sort of similar, that's not really hyperthreading.

-Col.Kiwi
 

eden

Champion
"Hyper Rendering"
Oh yeah!

--
Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you! - http://www.lochel.com/THGC/album.html
 

coolsquirtle

Distinguished
Jan 4, 2003
2,717
0
20,780
The thing is, I find absolutely no reason why nVidia couldn't do something like this too~~~ they could probably even develop "hyperrendering" before ATi can, because THEY ARE BIGGER (yes they are, fellow fanATics), so if ATi does start developing "hyperrendering", nVidia will come up with something to compete against it.

anyways~~~ that's just my fanboyism~~~ lolz

Proud Owner of the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
 

davepermen

Distinguished
Sep 7, 2002
386
0
18,780
yes it is hyperthreading. it is simply having, say, 16 pipelines which get scheduled between two processors, depending on which are free and which are not. a gpu essentially has 2 threads internally, the vertex processing one and the pixel processing one. hyperthreading means sharing hw resources between two different threads, which would result in about that.

and yes, it's a very interesting thought. because half of the gpu is idling half its time currently, yes. so you're using about 75% in a general dx9 game.
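(that 75% is just the average of the two halves: the pixel shading half busy ~100% of the time, the vertex half only ~50%, so the chip overall sits at (100% + 50%) / 2 = 75%.)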

"take a look around" - limp bizkit

www.google.com
 

davepermen

Distinguished
Sep 7, 2002
386
0
18,780
that said, i would prefer to have a card with no hw vertex shading, but all those transistors reused for more pixel shading pipelines. with an ht p4, i could do the whole vs very fast anyway, and non-blocking for the application.

and pixel shading is THE main bottleneck today.

"take a look around" - limp bizkit

www.google.com