
ATI Hyperthreading rumour

Last response: in Graphics & Displays
September 5, 2003 2:51:55 PM

Has anyone heard the rumour that ATI has a cross-patent agreement with Intel? Intel is to utilize some of ATI's graphics technology to improve its integrated graphics performance and retain market share when ATI's chipset ships in volume, and ATI is to receive a derivative of Intel's hyperthreading technology. The idea is for ATI to release a new add-in card that supports multiple displays by having a hyperthreading GPU on one card.

I've not seen any online info to substantiate this rumour....
any thoughts?

EC


<font color=red> Quantum Computers! - very interesting </font color=red>
September 5, 2003 7:13:32 PM

bump,
anyone??

<font color=red> Quantum Computers! - very interesting </font color=red>
September 5, 2003 8:11:33 PM

Haven't heard anything like that....
If this is true, it could be good/bad.
Good because ATI gets more resources/technology to make killer graphics cards/motherboards. Bad because this means we'll be seeing more companies snuggling up close. Case in point, Nvidia/AMD are getting mighty close together. If ATI/Intel follow suit, this might not be good for consumers.

*News Flash!* ATI has released their new card, which is 10x better than Nvidia's offering. Unfortunately, you have to use it on an Intel motherboard which costs 5x more than an AMD mb.
Granted, this is a worst case scenario, but with Nvidia snuggling up to AMD, and Nvidia getting into bed with EA, I don't know.......

No matter where you go, there you are.
September 5, 2003 9:39:54 PM

I don't see how hyperthreading or anything that is a part of it could possibly be useful in graphics chipsets.

Hyperthreading involves allowing the OS to assign two threads (threads are a part of the OS and don't exist at all in GPU BIOS/setup, AFAIK) to the CPU at once; the CPU jumps back and forth between processing the two threads, leaving each one as it reaches a memory wait state and running the other for a bit, increasing efficiency. Traditional CPUs only handled a single thread at a time.
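The switching-on-stall behaviour described above can be sketched with a toy cycle counter. Everything here is invented for illustration (one compute cycle per op, then a fixed memory stall); a real P4 scheduler is far more involved:

```python
# Toy model: each thread does one cycle of compute, then stalls on
# memory for `stall` cycles. One hardware core runs the first ready
# thread it finds; with a second thread available it fills the stalls,
# without one it just idles.

def cycles(threads, stall):
    """threads: list of compute-op counts. Returns cycles to drain all."""
    remaining = list(threads)
    ready = [0] * len(threads)   # cycle at which each thread may run again
    t = 0
    while any(remaining):
        for i, left in enumerate(remaining):
            if left and ready[i] <= t:
                remaining[i] -= 1         # one cycle of useful work
                ready[i] = t + 1 + stall  # then this thread waits on memory
                break                     # only one core, one op per cycle
        t += 1                            # if nothing was ready, we idled
    return t

single = cycles([4], stall=1) + cycles([4], stall=1)  # two runs, back to back
hyper  = cycles([4, 4], stall=1)                      # interleaved on stalls
print(single, hyper)  # → 14 8
```

With the second thread covering the first one's memory waits, the same total work finishes in far fewer cycles, which is the efficiency gain the post describes.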

As for multiple displays, that's not really related.. current graphics cards can do that just fine; a single GPU is plenty adequate, they just fit two RAMDACs.

-Col.Kiwi
September 5, 2003 9:46:13 PM

Didn't ATI use a dual GPU before in the Rage Fury MAXX card, and wasn't the performance increase not worth the extra money ATI spent on the 2nd GPU?
September 5, 2003 9:47:39 PM

Yes and yes :smile:

Are you trying to draw a parallel to this somehow though? I'm not sure if you are and don't see the connection if there is one.

-Col.Kiwi
September 5, 2003 10:01:06 PM

The extra threads in HyperThreading were meant to simulate a dual-processor setup. If you look at the POST screen on many MBs it shows two Intel processors.

If a HyperThreading-type setting were applied to a GPU it would only try to act like 2 GPUs, and ATI has already learned that this does not work.
September 5, 2003 10:09:05 PM

True, but hyperthreading and multithreading work in different ways. However, you've got a good point about them being a similar advantage.

-Col.Kiwi
September 5, 2003 10:14:09 PM

or, in respect to GPUs, the lack thereof.
September 5, 2003 10:18:49 PM

touché :D 

-Col.Kiwi
September 6, 2003 2:11:56 AM

Ok, your sig makes me ask, what do you know about quantum comps?

The one and only "Monstrous BULLgarian!"
September 6, 2003 3:13:36 AM

GPUs are probably the most active processing units out there. There are few bubble moments in the pipeline. Games stream like there is no tomorrow, while CPUs have tons of things to do and choke.

HT would not even work, as GPUs are usually already functioning at max. Although their real efficiency is questionable, and I still think it's driver related.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 6, 2003 4:33:11 AM

ATi with hyperthreading....... all i can say is ROFL :D 

what's next? nVidia with HyperTransport?

the fanATics here are so blinded they actually believe this can happen. all i can say is ROFL

and did u guys know, IBM is going to team up with nVidia and the new Deep Blue is going to have a nVidia GeForce FX 10000Ultra

Proud Owner the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
September 6, 2003 6:19:34 AM

(shakes head) damn nerds

I help because you suck.
September 6, 2003 9:10:19 AM

Just think of it like this...

If it weren't for them, would you REALLY need to post on this forum? LOL!!!

<font color=blue> Ok, so you have to put your "2 cents" in, but its value is only "A penny's worth". Who gets that extra penny? </font color=blue>
September 6, 2003 9:39:33 AM

actually, for all the ones crying against hyperthreading in here, it _could_ get rather useful!
remember that ps3.0 and vs3.0 are about equal in technology/featureset?
well, it would be great if you could share the resources of both, no? transistor-resources, that is. and to do so, you would need some sort of hyperthreading. not that you would call it that, but you'd need it.
we'll see..
but yes, i don't want to see intel and ati too close myself either. just as i don't actually want an nforce3 to run an athlon64, but what are my choices?

"take a look around" - limp bizkit

www.google.com
September 6, 2003 10:42:02 AM

On the other hand, if ATi could do for Intel chipsets what Nvidia did for AMD, who would bitch? Unless of course their video cards did a header into the john like Nvidia's did.
September 6, 2003 6:27:52 PM

The question is just how complex the pipeline is and how deep, how many there are, and how many are used on average per clock?

I am betting modern GPUs are able to max them 90% of the time, making HT rather pointless.

Pentium 4s utilize at most 2.5 of their 6 execution units per clock and Athlon XPs churn through about 4 of their 9 units, which is rather LOW. HT would do its best on K7-K8 any day.
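For scale, the fractions quoted above work out to very similar utilization on both chips. A quick back-of-envelope check (the 2.5/6 and 4/9 inputs are the poster's estimates, not measured data):

```python
# Per-clock execution-unit utilization implied by the figures above.
estimates = {"Pentium 4": (2.5, 6), "Athlon XP": (4, 9)}
for cpu, (busy, total) in estimates.items():
    print(f"{cpu}: {busy}/{total} units busy = {busy / total:.0%} per clock")
# → Pentium 4: 2.5/6 units busy = 42% per clock
# → Athlon XP: 4/9 units busy = 44% per clock
```

Both land near 40-45%, so by this arithmetic neither chip is dramatically "lower" than the other.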

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 7, 2003 2:58:07 AM

current gpu's are never at 90% if you think about one thing:
they are either:
bandwidth limited (agp)
transform limited (vertex shaders)
fillrate limited (pixel shaders)

and most today are fillrate limited (dx9 stuff i mean), a.k.a. the pixelshaders are the bottleneck.

in this case, an imaginary r300ht could just schedule more of its shaders to the pixelprocessor => we could get more pixelshading capability. so it could simply balance out what needs more resources, pixel or vertexshading, and allocate units accordingly.

this dream is a dream of mine, because if it became real, it would mean that i could use all of, say, 8 vs + 8 ps units to do ps only, as that's all i need for raytracing. the vertex-processor would go into "idle mode", while the pixel-processor catches all 16 shading units for itself, to do pixelshading.

this would mean such a gpu could speed up to 200% of a non-ht version. that's the theoretical max, of course, practically unreachable in realworld situations as other limits will [-peep-] the performance up.

but this should give an idea.

and yes, the vertexshaders of today's hw are there and feel rather unused compared to the pixelshaders. and as vs3.0 and ps3.0 are technically the same, it _could_ get done.. who knows?

and it would help us finally move away from rasterizers, on to raytracers, which would run on such a system at "200% of the rasterizer speed". or so. at least it sounds great for marketing purposes :D 
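None of this hardware existed at the time (unified shader pools arrived years later), but the balancing arithmetic in this post can be sketched. The 16-unit pool, the workload numbers, and the `split` heuristic below are all made up for illustration:

```python
# Sketch of the load-balancing idea: 16 identical shader units that can
# run either vertex (VS) or pixel (PS) work, versus a fixed 8+8 split.
# Workloads are arbitrary "work units per frame".

def frame_time(vs_work, ps_work, vs_units, ps_units):
    # Both pools run in parallel; the frame is done when the slower pool is.
    vs_t = vs_work / vs_units if vs_work else 0.0
    ps_t = ps_work / ps_units if ps_work else 0.0
    return max(vs_t, ps_t)

def split(vs_work, ps_work, total=16):
    # Hand each kind of work units in proportion to its share of the load.
    if vs_work == 0:
        return 0, total
    if ps_work == 0:
        return total, 0
    vs = max(1, round(total * vs_work / (vs_work + ps_work)))
    return vs, total - vs

# Pixel-heavy DX9-style load: a fixed split leaves vertex units idle.
print(frame_time(100, 300, 8, 8))              # → 37.5
print(frame_time(100, 300, *split(100, 300)))  # → 25.0
# Pure raytracing load (PS only): all 16 units go to pixel work → 2x.
print(frame_time(0, 400, 8, 8))                # → 50.0
print(frame_time(0, 400, *split(0, 400)))      # → 25.0
```

The PS-only case halves the frame time, which is exactly the "200% of a non-ht version" theoretical maximum the post arrives at.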

"take a look around" - limp bizkit

www.google.com
September 7, 2003 3:20:28 AM

Quote:
in this case, an imaginary r300ht could just schedule more of its shaders to the pixelprocessor => we could get more pixelshading capability. so it could simply balance out what needs more resources, pixel or vertexshading, and uses it according to that.


Interesting thought, but although sort of similar, that's not really hyperthreading.

-Col.Kiwi
September 7, 2003 5:22:27 AM

"Hyper Rendering"
Oh yeah!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 7, 2003 6:14:06 AM

the thing is, I find absolutely no reason why nVidia couldn't do something like this too~~~ they could probably even develop "hyperrendering" before ATi can, cause THEY ARE BIGGER, yes they are, fellow fanATics, so if ATi do start developing "hyperrendering" nVidia will come up with something to compete against it.

anyways~~~ that's just my fanboyism~~~ lolz

Proud Owner the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
September 7, 2003 12:10:17 PM

yes it is hyperthreading. it is simply having, say, 16 pipelines which get scheduled between two processors, depending on which are free and which are not. a gpu essentially has 2 threads internally, the vertexprocessing one and the pixelprocessing one. hyperthreading means sharing hardware resources between two different threads. which would result in about that.

and yes, it's a very interesting thought. because half of the gpu is idling half its time currently, yes. so you're using about 75% in a general dx9 game.
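The 75% figure follows from simple averaging. A one-line check (the 100%/50% split between the two halves is the poster's assumption, not a measurement):

```python
# One half of the chip (say the pixel side) stays busy all the time;
# the other half (the vertex side) idles half its time.
pixel_half = 1.0    # fully busy
vertex_half = 0.5   # busy only half the time
overall = 0.5 * pixel_half + 0.5 * vertex_half
print(overall)  # → 0.75
```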

"take a look around" - limp bizkit

www.google.com
September 7, 2003 12:11:28 PM

having said that, i would prefer a card with no hw vertexshading, but all those transistors reused for more pixelshading pipelines. with a ht p4, i could do the whole vs very fast anyway, and non-blocking for the application.

and pixelshading is THE main bottleneck today.

"take a look around" - limp bizkit

www.google.com
September 7, 2003 12:12:38 PM

oh, and, btw, it's just a rumour.

for now..

you haven't heard anything, okay? :D 

"take a look around" - limp bizkit

www.google.com
September 7, 2003 5:21:21 PM

Your example shares the pipelines between threads as needed. Hyperthreading in the P4 switches the pipeline between threads when a thread is in memory-wait.

Similar, but different.

-Col.Kiwi
September 7, 2003 9:35:30 PM

the difference is only at the start.

on a gpu, we essentially have two processors now, and hyperthreading would merge them.

on a cpu, we had only one processor, and hyperthreading did "split" it.

"take a look around" - limp bizkit

www.google.com
September 7, 2003 10:08:52 PM

my intention was to elaborate on the reality of the rumour. The potential technical benefits would be left to the engineers and programmers, just as hyperthreading only has benefits when software is optimized for the hardware. With upcoming PCI Express to increase bandwidth, and Intel touting the next generation platform, it would make sense for ATI or another graphics company to tout a next generation hyperthreading GPU. If anything, it would be a major marketing/PR campaign. As for what I know about quantum computers....not much, but I'm fascinated by the idea of a biological supercomputer. With what I know of physics....we already know it's possible. It's just a matter of time.

But as for the immediate short term, I see an Intel/ATI alliance as quite possible. Since the tech bubble, even large companies are looking for a leg up.... gone are the days of corporate takeover.... everyone is partnering now.

ec


<font color=red> Quantum Computers! - very interesting </font color=red>
September 7, 2003 10:50:19 PM

hmmm Intel+ATi VS IBM+nVidia+AMD

this could get interesting
but of course the real loser will be TSMC :D  cause if they do go head to head, IBM will manufacture AMD and nVidia chips, Intel will manufacture ATi chips, and TSMC will lose their two bigass customers :D 



Proud Owner the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
September 8, 2003 3:05:23 PM

Quote:
The potential technical benifits would be left to the engineers and programmers just as hyperthreading only has benifts when software is optimized for the hardware

you obviously never were able to work with a HT P4? i am working on one now, and i can tell you it helps in every situation.

"take a look around" - limp bizkit

www.google.com
September 8, 2003 3:11:27 PM

I thought you had a Celery?
Any P4 before the 2.4C has no HT enabled. Didja upgrade?

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 8, 2003 8:47:29 PM

actually, no....i have little exp with HT P4s. My old system is a P3 666. stupid Slot 1, otherwise i would have upgraded. anyhow, I went through troubles with that system with the MTH recall, remember???

Ideally, i prefer to build systems for business use. for my customers, that is....about 100+ systems. But the only time I can experiment is when I decide to build for myself. HT for a GPU sounds exciting and potentially viable....from a layman's point of view. Troubleshooting a system keeps getting tougher and tougher. An extremely complicated vid. card would be of interest to someone of a business background such as myself.

Which, by the way, this thread is very confusing....are u saying that HT GPUs would not be beneficial??

EC


<font color=red> Quantum Computers! - very interesting </font color=red>
September 8, 2003 9:08:39 PM

Depends on utilisation. Dual GPUs are not beneficial; ATI and 3DFX found this out. But from the other posts it appears "HyperRendering" would be a completely different kettle of fish to HT.
September 8, 2003 11:16:19 PM

That's not true, Gastrian. Just because in those situations it may not have been of too much benefit doesn't mean that it couldn't be... of benefit.

It in fact was beneficial to the 3dfx cards; there were other reasons they didn't succeed. Think about it, the GPUs were interlaced, each one rendered half the screen. Bad news is they had to share memory :(  Multiple GPUs, just like multiple CPUs, have lots of potential.

"Mice eat cheese." - Modest Mouse

"Every Day is the Right Day." -Pink Floyd
September 9, 2003 2:26:11 AM

LOL, so funny. I actually pioneered a new word, "Hyper Rendering". Was just inventing crap ya know heh!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 9, 2003 5:21:59 AM

And a new use for your word.
e.g.

"Eden is hyperrendering out of his ass bigtime!"
or
"Eden is full of hyperrender" :smile:

<b>I am not a AMD fanboy.
I am not a Via fanboy.
I am not a ATI fanboy.
I AM a performance fanboy.
And a low price fanboy. :smile:
Regards,
Mr no integrity coward.</b>
September 9, 2003 7:30:35 AM

it's my current work PC, not my home PC. there it's still my 2GHz Celeron. and next will be an athlon64

"take a look around" - limp bizkit

www.google.com
September 9, 2003 7:32:45 AM

Quote:
Which by the way, this thread is very confusing....are u saying that HT GPU's would not be benefitial??


i say it could get useful for vs3.0/ps3.0 gpu's, which could then share their pipelines and reuse them for both vs and ps, depending on the workload of each. or disable some pipes altogether if not really needed (cpu is bottleneck) to save energy..

we'll see

"take a look around" - limp bizkit

www.google.com
September 9, 2003 8:49:06 PM

Willamette, but didn't the price of putting the 2 chips on the 3DFX card become unviable considering the performance increase they had?

Eden, I knew you made that term up, but it's a good term. TM it and sell it to Carmack or someone. Hyper Rendering is the term that no one truly understands but sounds so good they must have it. A lot like Hyper-Threading.
September 9, 2003 10:15:46 PM

"Hyper Rendering" returns one hit on Google. and it looks fun, that document. and has nothing to do with what eden wants it to get used for.

"take a look around" - limp bizkit

www.google.com
September 11, 2003 3:20:07 AM

Dave, considering the Prescott will have even more interesting multimedia aspects that you yourself admit are great (SSE2), why would a programmer of your kind not turn to that or consider it as well as the A64?

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 11, 2003 4:26:09 PM

because first: the athlon64 gives much more.. 64bit processing is great, ya know.. i can feel free! :D 

because second: the athlon64 gives much more.. registers.. which is great for programming, makes life much easier, especially mapping ps2.0/3.0 directly to sse is very easy thanks to it

because third: it's a very good chip overall, meaning it has no lazy components that don't perform nicely while others are ultraoptimized

because fourth: the surroundings are well done as well, meaning memory access per clock is high, and memory delay compared to clockspeed is small, both of which are very important and the reason why an athlon64 at X GHz is as fast as a P4 at 2X GHz in quite a lot of "real" benches.

because fifth: prescott doesn't add much additional sse, and what it adds is proprietary. the athlon64 has sse just the way the p4 has, so i can code for most processors out there (using sse means athlonXP, athlon64, pentium3, pentium4.. which is what most people have today).

because sixth: prescott is a p4 design. that means very fast if you can map your code specifically to the p4 design, very slow otherwise. the athlon64 is just fast, no matter what code you feed it. no need to do those lowlevel optimisations, so i can develop faster, more easily, more simply, and much more readably and better structured, and it will still be rocking fast..

seventh: why not? i've never had an amd, i'd like to feel one this time.. and it will be the only nvidia product i'll ever buy.. the nforce3 :D 

"take a look around" - limp bizkit

www.google.com
September 11, 2003 6:44:08 PM

No one mentioned in this thread the fact that "DUAL GPU" did exist...

Do you remember that it was possible to hook 2 Voodoo based cards together to render more stuff! I think that each card was rendering half the lines of the resulting image. I don't think that scheme would work well with today's features (ANISO/FSAA), but this concept actually worked and gave good performance at that time.

--
Would you buy a GPS enabled soap bar?
September 11, 2003 7:43:04 PM

We did mention dual GPUs though. The ATI Rage Fury MAXX and 3DFX cards did use them, but for the price you bought them at they didn't offer a whole lot more performance than a single-GPU setup.
September 12, 2003 12:31:17 AM

Ya, I can see your POV and yeah, it's not a bad thing to try somethin' new. However the best A64 FXs will not be as good as P4s in multimedia, mind you, not to mention the current SSE2 implementation is relatively shoddy. Here on ExtremeTech <A HREF="http://www.extremetech.com/print_article/0,3998,a=59324..." target="_new">http://www.extremetech.com/print_article/0,3998,a=59324...</A> the Opteron 2GHz rules in games, but absolutely dies out in multimedia. Take 3DS Max and POV-Ray for example.

Quote:
and its proprietary

I've read up on it, and I believe only PNI is proprietary. However, that's logical, it's "Prescott New Instructions".

Bah anyways, we'll just have to wait and see if it really lives up to expectations in graphics programming.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 12, 2003 9:13:22 AM

i don't care about sse2, as it's only 64bit precision floatingpoint (2 in parallel), which i don't use. i use sse, which is 32bit floatingpoint, 4 in parallel (which is useful as i always have 3d or 4d coordinates => 3 or 4 dimensions in parallel)

it doesn't rock in multimedia, which is mainly decompression/compression of data. sure it doesn't, as the ultra-high-pipelined p4 can then (and about only then) run at full performance.
it does rock in every other application, as there the general codeflow is not as streamlined, and that [-peep-]s up the pipeline of each and every p4. that's why normal apps don't run _that_ well on a p4. branching is expensive.
a raytracer has to run fast, but it's not a multimedia application, more like a game application. lots of branching, not much parallel execution actually, and definitely not _that_ streamlined.

just look how well all the old amd cpu's run realstorm. the athlon64 ROCKS in realstorm! realstorm is a raytracer. which is what i code, too..
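What "4 in parallel" buys you for 3D coordinates can be sketched in plain code. `addps`/`mulps` are the actual SSE packed single-precision instructions; the Python below is only a lane-by-lane stand-in, not real SIMD:

```python
# Pure-Python stand-ins for SSE packed operations: one "instruction"
# touches all four 32-bit float lanes (x, y, z, w) at once.

def addps(a, b):
    # packed add: four independent float lanes in one step
    return tuple(x + y for x, y in zip(a, b))

def mulps(a, b):
    # packed multiply, same lane-wise idea
    return tuple(x * y for x, y in zip(a, b))

pos      = (1.0, 2.0, 3.0, 1.0)   # homogeneous xyzw coordinate
velocity = (0.5, 0.0, -1.0, 0.0)
print(addps(pos, velocity))       # → (1.5, 2.0, 2.0, 1.0)
print(mulps(pos, (2.0,) * 4))     # → (2.0, 4.0, 6.0, 2.0)
```

This is why 4-wide single-precision SSE maps so naturally onto 3D/4D vector math, while 2-wide double-precision SSE2 does not.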

"take a look around" - limp bizkit

www.google.com
September 13, 2003 2:10:12 AM

Over in the CPU forum we are wondering what's up with the insane gaming performance boost it brings. I speculated that HyperTransport might be doing what PCI-Express would do eventually, in line with what you said about faster CPU-to-GPU data.

Couldja speculate?

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>