kebbz :
Hey guys, I'd like to know your take on this.
What is the future of CPU workloads in computer applications (gaming, editing, etc.)?
I'd like to know: why would devs create software/games to run more efficiently on multi-core CPUs?
Computer Engineer here,
I'm a bit too tired to write out a long-winded explanation, but I'll give you the rundown.
On PCs, the software written by the software industry tends to evolve around whatever hardware the computer hardware industry is willing to throw at it. This is a natural consequence of the many-to-one relationship between software entities and hardware entities. There are hundreds of large software studios around the world (and thousands of smaller ones) that write software for PCs, but all of them write software designed for the x86 platform.
It would be impossible for Intel and AMD to tailor their designs to particular software vendors. Some software vendors, such as Microsoft, Apple, and Red Hat, do have quite a bit of sway with the hardware designers, but most are stuck making do with whatever capabilities Intel and AMD are willing to send their way.
In this way, the PC computing segment is very different from the mainframe computing segment. Mainframes are often regarded as a relic of the mid-20th century, but they're still in use today and have in fact seen a resurgence in popularity as the world becomes increasingly connected. Unlike the approach taken by AMD and Intel, which requires that software be designed around the confines of the hardware, IBM designs its hardware to meet the needs of the software. This is a result of the much smaller number of developers in that particular market segment. In fact, this sounds a lot like... game consoles!
So, with that little bit of background knowledge in mind, it may be clear that PC software developers will simply design their software around whatever hardware they have available. If studio A and studio B both offer competing products, consumers will gravitate towards the product that offers the best monetary value, and that will usually be the software that performs the best. The software that performs the best will be the software that takes best advantage of the hardware it has to work with. The hardware it has to work with will be whatever is most cost effective for the hardware designers to produce.
Designing hardware is hard, really hard. However, once a particular piece of hardware has been designed, it's rather trivial to copy and paste it all over the die and then pull it all together with interconnect buses. This is why Intel and AMD offer the same architecture in 2-, 4-, 6-, 8-, 10-, 12-, 14-, and 16-core variants with 2, 3, or 4 memory channels and 4, 6, 8, 10, 12, 15, 20, or 30 MiB of L3 cache, etc. From the mid-1990s through the mid-2000s, transistor sizes didn't allow for this. Creating large multi-core microprocessors resulted in massive chips with huge power dissipation and high failure rates. It was a better strategy to redesign the microarchitecture than to duplicate it; the performance gains were acceptable and it kept the product within acceptable parameters.
Over time, the law of diminishing returns has kicked in on refining microarchitectures. Branch prediction is great, cache misses are low, DRAM bandwidth is not an issue, and execution units are easily kept busy via CMT/SMT. Anyone who has paid attention to Intel's benchmarks since 2009 will have noticed only marginal IPC improvements. So all major manufacturers have shifted gears away from shrinking gains in the vertical direction and are instead working in the horizontal direction. Intel in particular has put a huge amount of effort into cutting power consumption and increasing transistor density, such that adding more and more IA-32 cores easily masks the lack of improvement in the IA-32 cores themselves.
The end result is that yes, the software industry is heading towards highly concurrent workloads and has been for some time. Sequential workloads will still exist, but their performance will remain bounded by single-core speed, which is barely improving. Software typically lags three to five years behind hardware: desktop CPUs capable of high concurrency appeared around 2007, with games in particular starting to take real advantage of them about 2-3 years ago. Games are far from the only demanding type of application, though. Many others, including web servers, databases, multimedia editing tools, EDA tools, and simulators, are showing great improvements in handling concurrent workloads, and much of that is thanks to simply having more horizontal resources to work with.
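To make "concurrent workload" concrete, here's a toy sketch in Python (my own example, not taken from any particular game or engine): the same CPU-bound job done on one core, then carved into chunks and farmed out to a pool of worker processes, one per chunk. The prime-counting function and chunk sizes are made up for illustration; the point is just that independent chunks can run on separate cores at once.

```python
# Toy illustration of a concurrent CPU-bound workload.
# The work function (naive prime counting) is a stand-in for any
# embarrassingly parallel job; chunk sizes are arbitrary.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 50_000

    # Sequential: one core does everything.
    sequential = count_primes((0, limit))

    # Concurrent: split the range into chunks, run one worker per
    # chunk across the available cores, then combine the results.
    chunks = [(i, i + 10_000) for i in range(0, limit, 10_000)]
    with Pool() as pool:
        concurrent = sum(pool.map(count_primes, chunks))

    # Same answer either way; the concurrent version just finishes
    # sooner on a multi-core CPU.
    assert sequential == concurrent
```

The catch, and the reason the software side lagged, is that the problem has to decompose into independent chunks like this; work with sequential dependencies doesn't benefit from extra cores.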