
Can Hyper-Threading slow down the execution of a single thread?

January 26, 2011 2:11:42 AM

Hello,

If a system has only one active thread at the moment, can Hyper-Threading slow down the execution of that thread? No other threads contend for CPU resources with it.

Thanks,
January 26, 2011 5:52:07 AM

icoming said:
Hello,

If a system has only one active thread at the moment, can Hyper-Threading slow down the execution of that thread? No other threads contend for CPU resources with it.

Thanks,


The problem with that logic is that at any given time the OS has many threads running. I just checked Task Manager, and with just two IE windows, MS Outlook, and a docking program open, I am running roughly 850 threads.
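For the curious, a rough Linux equivalent of that Task Manager check is to sum the `Threads:` field of every `/proc/<pid>/status` file. Just a sketch, assuming a Linux system with `/proc` mounted:

```python
# Count every thread of every process on the system, the Linux
# equivalent of the Task Manager thread count. Assumes /proc exists.
import os

def total_threads():
    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():       # skip non-process entries like /proc/meminfo
            continue
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Threads:"):
                        total += int(line.split()[1])
                        break
        except OSError:
            pass                    # process exited or is inaccessible; skip it
    return total

print(total_threads())
```

Even a nearly idle machine typically reports hundreds of threads, which is the point being made above.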
Dadiggle's link was very interesting, but it is from 2002. Cache thrashing has been greatly reduced nowadays due to the cache queuing found on modern Hyper-Threading processors in the i-series. Also, most modern software is coded more efficiently for multithreading than in the past, so there is very little performance lost, if any at all. And if you are using Windows 7, the thread scheduler is much better than in the past.

With modern software and Windows 7 it is definitely better to have HT turned on. The idea that HT decreases performance goes back ten years, to when software devs were not properly coding multithreaded apps. Apps like Photoshop, Maya, etc. love multiple cores and HT logical cores; rendering programs particularly love them. Look at the Cinebench 11.5 rendering benchmark and you will see that HT CPUs dominate. And, referring to your original post, an OS is running hundreds of threads at a time.
January 26, 2011 12:18:55 PM

The only way it would really slow down is if you were running two threads and the scheduler was dumb enough to run both on the two logical processors of the same physical core.
January 26, 2011 1:04:30 PM

There was an issue with XP where that could happen; I believe it was resolved with SP3. On XP I still recommend setting affinities for certain programs (e.g., encoding programs), but with Win7, and with the newer architecture on modern HT processors, that is really no longer an issue.
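For what it's worth, the same affinity trick can be done in code. A minimal sketch, assuming a Linux box (`os.sched_setaffinity` is Linux-only; on XP/Win7 you would use Task Manager's "Set Affinity" instead):

```python
# Pin the current process to a single logical CPU, then restore the
# original mask. Linux-only: os.sched_setaffinity is not available on
# Windows, where Task Manager's "Set Affinity" does the same job.
import os

original = os.sched_getaffinity(0)   # set of CPUs this process may run on
target = min(original)               # pick one logical CPU from that set
os.sched_setaffinity(0, {target})    # pin the process to it
assert os.sched_getaffinity(0) == {target}
os.sched_setaffinity(0, original)    # undo the pinning
print(sorted(original))
```

Pinning an encoder this way keeps it off the HT sibling of a core another hot thread is using.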
It is funny that my old dual-Xeon P4-based SMP rig with HT will bench close to my Core 2 Duo on rendering. If a program is running multiple threads that utilize different resources on the CPU, then it is very efficient; but if the threads are trying to use the same resources, then the gains achieved are minimal. Overall, running HT on modern CPUs with modern software will almost always show a gain in performance. HT keeps the CPU more fully occupied, with fewer idle cycles.
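That last point can be made concrete with a toy model: each thread does a few cycles of work, then stalls waiting on memory; a lone thread leaves the core idle during its stalls, while a second hardware thread can fill them. The cycle counts below are invented purely for illustration:

```python
# Toy model of why HT "keeps the CPU more fully occupied": each thread
# alternates COMPUTE cycles of work with STALL cycles waiting on memory.
# The core issues for at most one thread per cycle; stalled threads'
# timers keep ticking in the background. Numbers are illustrative only.
COMPUTE, STALL = 4, 4

def utilization(num_threads, total_cycles=1000):
    period = COMPUTE + STALL
    phase = [0] * num_threads   # position within each thread's work/stall loop
    busy = 0
    for _ in range(total_cycles):
        issued = False
        for i in range(num_threads):
            if phase[i] < COMPUTE and not issued:
                busy += 1                            # core executes this thread
                phase[i] = (phase[i] + 1) % period
                issued = True
            elif phase[i] >= COMPUTE:
                phase[i] = (phase[i] + 1) % period   # stall timer keeps ticking
            # a compute-ready thread that lost arbitration simply waits
    return busy / total_cycles

print(utilization(1), utilization(2))
```

With equal work and stall lengths, the single thread leaves the core idle half the time (utilization 0.5), while two interleaved hardware threads keep it fully busy (1.0), which is exactly the "fewer idle cycles" argument.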
A nice i7 with HT being used is a beautiful thing, especially how well the OS responds while running multiple apps. Though I only know this in theory, since I am running a Core 2 Duo :(

It really depends on how the computer is used. In gaming machines it is not really crucial, but in workstations running rendering/encoding software it is a huge performance increase. There is a freeware program called SeeSaw that was very useful in XP, especially for SMP rigs, but with Win7 I haven't found a need for it.

BTW, does anybody feel that Win7 is kind of like Vista with an SP3? It is funny that when I was in the hospital for my wife's operation, all the systems there, even though designed for Vista, were running XP!
Even now I am hearing that Windows 8 is going to borrow heavily from Mac OSes. I feel like M$ is getting stagnant and not responding to the latest trends. They should be looking at the hackers who are hacking the Kinect and seeing what technologies they can buy out. It is like they have run into a tech plateau and are not responding well to the changes that are happening. Between Kinect and touchscreen systems combined with advanced voice recognition, they should be coming up with something radical. Win7 is an awesome OS, but not any kind of radical departure.

IMO, faster processors are not the answer. It seems that heavily multicore systems with improved architectures, running well-written multithreaded programs, are the way to go. AMD might be behind Intel in performance for now, but their focus on hexa-core chips is really the right direction. For the mainstream market, incredibly high CPU speeds are nice, but software development that utilizes many physical/logical cores, more on-die instruction sets, and larger caches really is the more logical way. Don't get me wrong, getting over 4 GHz on air with some CPUs is nice, but I would rather have a slower 6-core for what I do. I mean, if you are a gamer, then just get an E8400, water-cool it, get it to 5 GHz, slap on a GTX 580/HD 6970, and you're fine. But if you want to encode and do office work and also play games, then a 6-core setup is ideal.
I come from the old days when you basically ran only one or two programs at a time. Now, on my lame 2.4 GHz C2D, I can have multiple IE windows open, plus Outlook, Photoshop, Publisher, and Word, run other apps besides, and still experience almost no lag. I am a big fan of multiple extended displays and multitasking between them. Really, five years ago SMP CPUs could handle a lot of that, just a little more slowly. The thing now is that the future looks like everybody will have "web appliances" which primarily access cloud computing resources over the web. But what it really comes down to is being able to handle the most threads at one time efficiently. The more physical/logical cores, the better.
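One common way software exploits however many physical/logical cores are present is to size a worker pool from the reported logical-core count. A standard-library sketch (for CPU-bound pure-Python work, a `ProcessPoolExecutor` would be the better fit because of the GIL, but the structure is identical):

```python
# Size a worker pool by the logical-processor count the OS reports;
# os.cpu_count() includes Hyper-Threading siblings as "CPUs".
import os
from concurrent.futures import ThreadPoolExecutor

def busy_sum(n):
    return sum(range(n))          # stand-in for a real unit of work

workers = os.cpu_count() or 1     # fall back to 1 if the count is unknown
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(busy_sum, [10, 100, 1000]))

print(results)   # [45, 4950, 499500]
```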
Software devs are coding more and more heavily for multithreading, even in games. There is a heat dissipation barrier that CPUs hit, and multicore setups are the way around that obstacle. Sorry for the book :)

January 26, 2011 1:43:08 PM

Thank you for your replies. Actually, my original question is just whether a single thread runs much more slowly on a logical processor (when HT is enabled) than on the processor without HT enabled, assuming there is only one active thread running in the system. I guess the answer is that it might be slower (since some resources in the processor are partitioned), but there shouldn't be much difference.
January 26, 2011 2:03:55 PM

Well, the thread scheduler is always biased towards the physical core. It is just that there are so many threads (look under Task Manager > Performance) that it is not like one thread runs at a time. It also really comes down to whether the thread shares CPU resources with other threads. To answer the question: it is definitely better for a thread to run on a physical processor than on an HT logical one, but having HT enabled will utilize the CPU much more efficiently. Back in the Pentium 4 days it made a big difference in multitasking.
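On Linux you can actually see which logical CPUs are HT siblings of the same physical core. The sysfs path below is the standard topology location, though some containers don't expose it, so this sketch falls back to `None`:

```python
# Read which logical CPUs share a physical core with the given CPU.
# On an HT machine the file holds two entries (e.g. "0,4" or "0-1");
# without HT it is just the CPU's own number. Linux-only, and the
# sysfs file may be absent in some containers.
def smt_siblings(cpu=0):
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None   # topology info not exposed here

print(smt_siblings(0))
```

A scheduler (or a user setting affinities) uses exactly this kind of topology information to prefer spreading threads across physical cores before doubling up on siblings.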
January 26, 2011 11:02:37 PM

I'm running old-school HT on a Socket 478 P4 Prescott at 3.2 GHz, and I've often wondered about HT vs. no HT. I'll run my system for a few months with it on, then turn it off, then back on. I see more threads get 99% of one logical core and take longer to execute than with HT off. I guess the system as a whole runs better with HT on, but on the off chance that a single thread goes on a power trip, it seems to take longer to clear. Task Manager reports that 50% of the processor is being used: one logical core at 100% and the other at 0%. But is the processor really only running that thread at half speed?

I know that when they moved on from NetBurst they shortened the pipeline, and that brought other improvements, probably in this same situation as well. I've been wondering how a 2600K vs. a 2500K would compare to my experience with basically Intel's first attempt at HT.
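On the 50% reading above: Task Manager averages utilization across logical processors, so one logical core at 100% and one at 0% reports as 50% even though the busy thread may be running at nearly full speed on its core. The arithmetic:

```python
# The overall CPU figure is just the mean of the per-logical-core
# utilizations; it says nothing about the speed of any single thread.
per_core = [100, 0]   # one logical core saturated, its sibling idle
overall = sum(per_core) / len(per_core)
print(overall)        # 50.0
```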
January 26, 2011 11:47:24 PM

To quote myself: "Cache thrashing has been greatly reduced nowadays due to the cache queuing found on modern Hyper-Threading processors in the i-series. Also, most modern software is coded more efficiently for multithreading than in the past, so there is very little performance lost, if any at all. And if you are using Windows 7, the thread scheduler is much better than in the past."

The article Dadiggle listed is very informative, and I DID read the whole article. On today's newer CPUs cache thrashing has been greatly reduced with the improved HT. Also, programs written in the last five years, especially enterprise-level ones like Photoshop, Maya, etc., are much more efficient with multithreading. Since the early days of HT, software developers have had to revise the way they write code, primarily because of the prevalence of HT and multicore chips. If a program has multiple threads using the same resources, then that can cause a problem between a physical and an HT logical core. Also, NetBurst used an incredibly long pipeline (31 stages?), and in going to the Core series they reduced the pipeline, which is partially responsible for the greater efficiency.
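A back-of-the-envelope sketch of why the shorter pipeline helps: a branch mispredict flushes roughly the whole pipeline, so the average penalty scales with its depth. The stage counts below are the commonly cited ones (31 for Prescott, 14 for Core); the mispredict rate is an assumed, purely illustrative figure:

```python
# Rough model: each mispredict costs about one pipeline's worth of
# cycles, so average loss per instruction ~ depth * mispredict rate.
# The 10-per-1000 rate is an assumption for illustration only.
def avg_penalty(stages, mispredicts_per_1000_instructions):
    return stages * mispredicts_per_1000_instructions / 1000

prescott = avg_penalty(31, 10)   # ~0.31 extra cycles per instruction
core     = avg_penalty(14, 10)   # ~0.14 extra cycles per instruction
print(prescott, core)
```

Under the same mispredict rate, the deeper pipeline loses more than twice as many cycles per instruction, one reason the Core designs were more efficient per clock.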
February 3, 2011 1:59:33 PM

someone19 said:
I'm running old-school HT on a Socket 478 P4 Prescott at 3.2 GHz, and I've often wondered about HT vs. no HT. I'll run my system for a few months with it on, then turn it off, then back on. I see more threads get 99% of one logical core and take longer to execute than with HT off. I guess the system as a whole runs better with HT on, but on the off chance that a single thread goes on a power trip, it seems to take longer to clear. Task Manager reports that 50% of the processor is being used: one logical core at 100% and the other at 0%. But is the processor really only running that thread at half speed?

I know that when they moved on from NetBurst they shortened the pipeline, and that brought other improvements, probably in this same situation as well. I've been wondering how a 2600K vs. a 2500K would compare to my experience with basically Intel's first attempt at HT.


My understanding is that the thread might run more slowly, but it should be able to use all the ALUs, according to that article. I think the CPU usage reported by Task Manager is very misleading when HT is enabled.
February 5, 2011 11:48:50 PM

Best answer selected by icoming.