Is the future multi-threaded, or high single-threaded performance?

kebbz

Honorable
Jul 27, 2012
212
0
10,680
Hey guys, I'd like to know your take on this.

What is the future of CPU workloads in computer applications (gaming, editing, etc.)?

Why would devs create software/games to run more efficiently on multi-core CPUs?
 
Because (IIRC) dynamic power scales linearly with clock frequency but quadratically with voltage, and hitting higher clocks generally requires raising the voltage. That means power, and therefore heat, grows much faster than linearly with clock speed, roughly with its cube in practice. Adding more cores only increases power linearly. As such, chip manufacturers are pushing multiple cores and threads to stay within their power envelopes.
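A rough sketch of that scaling (the constants and voltages here are made up; only the proportionality matters):

```python
# Rough model of dynamic CPU power: P = cores * C * V^2 * f.
# Raising the clock usually requires raising voltage too, so power grows
# much faster than linearly with frequency, while adding cores scales it linearly.

def dynamic_power(cores, voltage, freq_ghz, c=1.0):
    """Relative dynamic power for `cores` identical cores (arbitrary units)."""
    return cores * c * voltage**2 * freq_ghz

# One core pushed 50% higher in clock, needing ~20% more voltage:
base        = dynamic_power(cores=1, voltage=1.0, freq_ghz=3.0)
overclocked = dynamic_power(cores=1, voltage=1.2, freq_ghz=4.5)
dual        = dynamic_power(cores=2, voltage=1.0, freq_ghz=3.0)

print(overclocked / base)  # ~2.16x the power for 1.5x the clock
print(dual / base)         # exactly 2x the power for 2x the cores
```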

Because future chips are going to have more threads, but not necessarily far better single-threaded performance, you're going to limit how fast you can get something done if you only do it on one thread. And people are more likely to buy software that runs faster.
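That single-thread limit is Amdahl's law in action; here's a quick sketch (the 90% parallel fraction is just an example figure):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup when only `parallel_fraction` of the work can use extra cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelized, piling on cores hits a wall fast:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
# 2 cores -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 16 -> 6.4x
```

The serial 10% dominates quickly, which is why single-threaded speed still matters even on many-core chips.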
 


Computer Engineer here,

I'm a bit too tired to write out a long winded explanation, but I'll give you the rundown.

On PCs, the software designed by the members of the software industry tends to revolve around whatever hardware the computer hardware industry is willing to throw at them. This is a natural consequence of the many-to-one relationship between software entities and hardware entities. There are hundreds of large software studios around the world (plus thousands of smaller ones) that write software for PCs, but all of them write software that is designed for the x86 platform.

It would be impossible for Intel and AMD to tailor their designs to particular software vendors. Some software vendors such as Microsoft, Apple, and RedHat do have quite a bit of sway with the hardware designers but most are stuck making do with whatever capabilities Intel and AMD are willing to send their way.

In this way, the PC computing segment is very different from the mainframe computing segment. Mainframes are often regarded as a relic of the mid 20th century, but they're still in use today and have in fact seen a resurgence in popularity as the world becomes increasingly connected. Unlike the approach taken by AMD and Intel, which requires that software be designed around the confines of the hardware, IBM designs their hardware to meet the needs of the software. This is a result of the much smaller number of developers in that particular market segment. In fact, this sounds a lot like... game consoles!

So, with that little bit of background knowledge in mind, it may be clear that PC software developers will simply design their software around whatever hardware they have available. If studio A and studio B both offer competing products, consumers will gravitate towards the software product that offers the best monetary value, and that will usually be the software that performs the best. The software that performs the best will be the software that takes best advantage of the hardware that it has to work with. The hardware that it has to work with will be whatever is most cost effective for the hardware designers to produce.

Designing hardware is hard, really hard. However, once some particular piece of hardware has been designed it's rather trivial to simply copy and paste it all over the place and then pull it all together with interconnect busses. This is why Intel and AMD offer the same architecture in 2, 4, 6, 8, 10, 12, 14, and 16 core variants with 2, 3, or 4 memory channels and 4, 6, 8, 10, 12, 15, 20, 30 MiB of L3 cache, etc... From the mid 1990s through mid 2000s the size of transistors didn't allow for this to occur. Creating large, multi-core microprocessors resulted in massive chips with huge amounts of power dissipation and high failure rates. It was a better strategy to redesign the microarchitecture than to duplicate it; the performance gains were acceptable and it kept the product within acceptable parameters.

Over time the law of diminishing returns has kicked in on refining microarchitectures. Branch prediction is great, cache misses are low, DRAM bandwidth is not an issue, execution units are easily kept busy via CMT/SMT. Anyone who has paid attention to Intel's benchmarks since 2009 will have noticed only marginal IPC improvements. So, all major manufacturers have shifted gears away from shrinking gains in the vertical direction, and are instead working on the horizontal direction. Intel in particular has put a huge amount of effort into cutting power consumption and increasing transistor density such that adding more and more IA-32 cores easily masks the lack of improvement on the IA-32 cores themselves.

The end result is that yes, the software industry is heading towards highly concurrent workloads and has been for some time. Sequential workloads will still exist, but the performance will be bounded in comparison. Software typically lags between 3 and 5 years behind hardware. We started seeing desktop CPUs capable of high concurrency around 2007, with games in particular starting to take real advantage of it about 2-3 years ago. Games are far from the only demanding type of application though. Many others including web servers, databases, multimedia editing tools, EDA tools, and simulators are all showing great improvements in handling concurrent workloads and much of this is thanks to their simply having more horizontal resources to work with.
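The concurrent-workload shift described above can be sketched with a toy example: one big job cut into independent chunks that run one per core. The chunking and the prime-counting task here are illustrative, not from any real tool:

```python
# Toy "horizontal" scaling: a big job split into independent chunks,
# each chunk handled by a separate worker process, results combined at the end.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound chunk of work: count primes in [lo, hi)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    chunks = [(i * 5_000, (i + 1) * 5_000) for i in range(8)]
    with ProcessPoolExecutor() as pool:              # one worker per core by default
        total = sum(pool.map(count_primes, chunks))  # same answer as a serial loop
    print(total)  # number of primes below 40,000
```

The sequential version gives the same answer; the parallel one just finishes sooner on a chip with more cores, which is exactly the trade the hardware vendors are betting on.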
 

RobCrezz

Expert
Ambassador
Both are important IMO. As things go forward there will be more multi-threading; however, high single-threaded performance will always be important. Intel could make a CPU with 50 cores in the same die space as a regular i7, but the single-threaded performance would be terrible.
 
@Pinhedd: I think that is a long winded explanation.

Single-threaded performance is only really important for poorly coded and legacy software. With the exception of stuff like MP3 encoding; but even there, you can encode multiple tracks simultaneously even if you can't parallelize a single song.
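That per-track trick is just data parallelism across independent files. A minimal sketch with Python's standard process pool; `encode_track` is a hypothetical stand-in for a real encoder, not an actual MP3 library:

```python
from concurrent.futures import ProcessPoolExecutor

def encode_track(track_name):
    """Hypothetical stand-in for a CPU-bound per-track encode step."""
    # A real version would invoke an encoder here; this just derives a dummy value.
    checksum = sum(ord(c) for c in track_name) * 1000
    return track_name, checksum

tracks = ["intro.wav", "song_a.wav", "song_b.wav", "outro.wav"]

if __name__ == "__main__":
    # Each track encodes serially, but four of them run at once on four cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for name, _ in pool.map(encode_track, tracks):
            print("encoded", name)
```

Each individual encode is still bound by single-threaded speed; the throughput of the whole album scales with core count.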

@Cons29: That's a bit of an over-simplified explanation, IMO. And somewhat incorrect.
 

St0rm_KILL3r

Honorable
Sep 14, 2013
1,086
0
11,960
I agree with you all. Right now single-threaded performance wins, even in games that use 8 threads. But in 1-2 years you may start to see a difference, with lower-power 8-core CPUs doing as well as Intel's stronger quad cores (probably better). That's not my own claim, but I've read it a lot of times. And it may well be worth buying a cheaper 8-core CPU that performs as well in gaming as Intel's somewhat more expensive quad-core CPUs.
 
Not as much of an expert as Pinhedd on the topic, but I had the same basic understanding: the efficiency of quad cores (on the Intel side) is about as good as it will get at consumer prices (if I understood correctly), and higher core counts may be becoming more common. Software will generally be designed around the hardware available, and with the new consoles' octo-cores in particular, even lazy ports to PC may run better with more cores.

Once a mainly multi-core model has been adopted, there will probably once again be work done on efficiency and on squeezing the most performance out of those cores.

Just my 2c. Seems like a whole lot of speculation; we'll see what happens in the next few years. The top-end Intel i5 quad cores are so much stronger than the FX (83xx) cores that they'll perform similarly even when all of the cores are utilised, at least in games. In productivity apps the disparity sways in favour of the FX, though.

I expect Intel's next gen of CPUs to be multi-core but much stronger and more efficient than AMD's current ones. That would maintain their lead and market share, which I'm sure they won't give up, even if the enthusiast crowd is a small portion of the market.