What is most important for CPUs today?

What is most important in CPU architecture?

  • architecture/bus width/cache size/instructions & data per cycle/instruction set

    Votes: 10 50.0%
  • floating-point performance

    Votes: 0 0.0%
  • clock speed

    Votes: 3 15.0%
  • FSB bandwidth/turbo bandwidth/I/O bus speed

    Votes: 2 10.0%
  • pipeline stage count

    Votes: 0 0.0%
  • multicore/multithreading/cloud/virtualization

    Votes: 4 20.0%
  • modulize/shaderize

    Votes: 0 0.0%
  • Fusion/SoC

    Votes: 1 5.0%

  • Total voters
    20

cheesesubs

Distinguished
Oct 8, 2009
Can anyone predict what future processors will be like? More cores? Higher clock speeds? Stronger IPC and single-threaded performance? More instruction sets created down the line (e.g. SSE5/6/7...)? Or will they become GPU-like designs (like Bulldozer)?

Please discuss.
 

amdfangirl

Expert
Ambassador
With constraints on power and die size, and with die shrinks not always reducing power consumption, the push is towards more energy-efficient designs that use that efficiency headroom to boost performance.
 
The first option, and it's a no-brainer:

1) cores do NOT scale well with current programming languages (everything still gets compiled down to ASM, which is simply a program listing with line labels and goto statements; not exactly very parallel, is it? See the sketch at the end of this post)

2) floating point performance, frankly, isn't THAT important for the general user

3) clock speed is pointless if IPC is low

4) I/O buses will ALWAYS be far slower than the rest of the PC (never mind the inherent locking when the hardware they talk to is accessed)

A good architecture with the ability to execute instructions at an efficient speed will always win out in the end. (Then again, we're stuck with x86, which has to be one of the worst instruction sets I've ever worked with...)
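To make point 1 concrete, here's a minimal C++ sketch (my own illustration, not from anyone's post) of a loop-carried dependency, the kind of thing compiled serial code is full of. No number of cores can speed this loop up, because each iteration needs the previous one's result:

```cpp
#include <cstdio>

int main() {
    const int N = 1000000;
    double x = 1.0;

    // Loop-carried dependency: iteration i reads the result of
    // iteration i-1, so the iterations must run one after another.
    // Extra cores cannot help here; the work is inherently serial.
    for (int i = 0; i < N; ++i) {
        x = x * 1.0000001 + 0.5;
    }

    printf("%f\n", x);
    return 0;
}
```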
 

N.Broekhuijsen

Distinguished
Jun 17, 2009
I can predict what future CPUs are gonna be like: AWESOME!!!!!!! I WANT IT NAO!!

Cheerz :sol:

On a more serious note, the near future is probably going to be focused on performance per clock (IPC), to get the most performance out of as few GHz as possible. We hit the GHz wall a couple of years ago, and now we just need to start getting more out of each cycle. We will also probably be seeing more and more cores in the near future.

In the long term we are probably looking at more energy-efficient designs. A couple of months back I also read something about carbon-based CPUs instead of silicon; supposedly they would be a lot smaller and able to reach much higher frequencies.
 

amdfangirl

Expert
Ambassador


This.

There's a finite number of cores we can add before nothing can take advantage of them, save a few programs (case in point: the Phenom II X6 in gaming).
 

N.Broekhuijsen

Distinguished
Jun 17, 2009

Exactly. That's why I love rubbing it in my friends' faces that my 965 games better than their 1065 CPUs.... :lol:
We tested it, and it's true. Suckaaaas! :p

Cheerz :sol:
 
To be fair, a few games do scale really well (Bad Company 2 is a perfect example).

I blame a lot of the lack of decent threading across multiple CPUs on a few factors, all CPU-independent:
1: Creating a thread is CPU-heavy (see the sketch after this list)
2: Few programming languages are designed with multi-CPU interaction in mind, and aren't optimized as such
3: Windows' scheduler, for memory/performance reasons, typically likes to allocate different threads from the same process to the same CPU/core (and since EVERYTHING in Windows inherits from a few .sys and DLL files that are active at Windows startup, the scheduler thus puts almost every thread on the first CPU/core available)

So the OS, the memory management model, the programming languages, and finally the programmer are the biggest reasons massively parallel CPUs won't take off.
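As a rough illustration of point 1 (my own sketch; the numbers will vary wildly by OS and hardware), here's how you might time raw thread creation. Every spawn is a round trip into the kernel:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const int N = 1000;

    // Spawn and join a fresh thread N times. Each spawn allocates a
    // stack and registers the thread with the kernel scheduler, which
    // is why spawning per-task is expensive compared to reusing
    // long-lived worker threads (a thread pool).
    auto t0 = clock::now();
    for (int i = 0; i < N; ++i) {
        std::thread t([] {});  // trivial body; we only pay the spawn cost
        t.join();
    }
    auto t1 = clock::now();

    long long us =
        std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    printf("%d spawns took %lld us (~%lld us each)\n", N, us, us / N);
    return 0;
}
```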
 
Not sure what "modulize/shaderize" means, unless the CPU is big enough to shade you from the hot summer sun :p. In which case the ability to dispense plenty of cold beer upon demand should also be on the list :D.

However, the future does seem to be incorporating the GPU into the CPU, at least for the tablet/phone/mobile market. As for the GPGPU stuff, it's too early to tell, IMO.
 
It's not really on the list; I guess it would fit under the architecture point. But features like Intel's Turbo Boost and Hyper-Threading are really awesome. HT isn't the best, but it's good. Turbo Boost is where it's at, though. It would be awesome to be able to custom-OC the turbo multipliers, haha.
 

CsG_kieran_2

Distinguished
Nov 17, 2010
Floating point as an option was a weird one if we are talking about the average consumer laptop/desktop. We don't all own an IBM Roadrunner, lol.
 

amdfangirl

Expert
Ambassador


Thing is, games will only scale up to a point of diminishing returns.
 

cheesesubs

Distinguished
Oct 8, 2009


Why I called it "shaderize" is that the current Bulldozer design divides a "core" into several small computing units (like the shader units on AMD's GPUs) and groups these small units into modules with a shared L1 instruction cache and L2, rather than Intel's traditional L1/L2 caches dedicated to each core. Such a parallelized design can let a module multi-thread more efficiently than Intel's bigger cores: Intel's cores conflict with the Windows scheduler as thread counts increase, while a Bulldozer module is less affected by the scheduler... though single-threaded performance will be worse, since the smaller units can't perform as well on their own. That's the tradeoff between serialized (Intel) and parallelized (AMD) designs.

However, the manufacturing cost will be a lot less than the current Phenom II, and AMD seems to be betting everything on future Fusion and cloud computing. Pitting small "computing units" against Intel's larger cores is just what they did against Nvidia recently..... (5D stream units vs. single CUDA cores..)
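A conceptual sketch of the tradeoff being described here (my own illustration; the unit counts are invented, not real Bulldozer or Intel specs):

```cpp
#include <cstdio>

// SMT (Intel-style): two threads share one wide core, so a lone
// thread gets the whole core, but two busy threads contend.
struct SmtCore {
    int integerPipes = 4;  // shared by both threads (invented number)
    int threads = 2;
};

// CMT (Bulldozer-style): two narrower integer cores share only the
// front end, FPU, and L2, so two busy threads don't fight over
// integer pipes, but a lone thread only ever sees a narrow core.
struct CmtModule {
    int integerPipesPerCore = 2;  // private per thread (invented number)
    int cores = 2;
};

int main() {
    SmtCore smt;
    CmtModule cmt;
    printf("SMT: 1 thread sees %d pipes; 2 threads share them\n",
           smt.integerPipes);
    printf("CMT: each of %d threads keeps %d pipes to itself\n",
           cmt.cores, cmt.integerPipesPerCore);
    return 0;
}
```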
 

joefriday

Distinguished
Feb 24, 2006
The future of the CPU is the GPU. There will be more GPU hardware acceleration of the "strenuous" CPU chores, rendering the CPU less significant. It's already happened for Web 2.0, with hardware Blu-ray and Flash acceleration making even my current Sempron LE-1100 able to play 1080p HD video using just an 8400GS. GPU acceleration will be the future after this core race ends in the next year or two.
 


Not just games; anything that isn't 100% parallel will eventually see diminishing returns (see the Amdahl's law sketch below). And again, due to Windows' very design, nothing is 100% parallel.

And again, people need to realize everything that goes on when a thread is actually created; it's a massive CPU operation, complete with multiple locks on the system kernel. From a performance standpoint, while doing more threading is good for multi-core systems, there is an overhead for creating those threads in the first place, and the majority of people are still on dual-cores... At the end of the day, it's FAR more attractive to optimize for the masses, not the few.
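Those diminishing returns are just Amdahl's law: with a parallel fraction p, the best-case speedup on n cores is 1 / ((1 - p) + p/n). A quick sketch of the numbers (mine, not from the thread):

```cpp
#include <cstdio>

// Amdahl's law: best-case speedup on n cores when a fraction p
// of the work can run in parallel.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even at 90% parallel code, 6 cores only reach 4x,
    // and no number of cores can ever exceed 10x.
    for (int n : {1, 2, 4, 6, 16, 1024}) {
        printf("p = 0.90, n = %4d -> speedup %.2fx\n", n, amdahl(0.90, n));
    }
    return 0;
}
```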

Hence why I was never thrilled with the i7/i5; I never saw the performance benefit to justify moving from a C2Q platform.
 


OK, gotcha now. The choice should probably be written as Intel's SMT vs. AMD's CMT. IIRC, AMD's brand of stronger multitasking will matter in highly threaded situations where each thread doesn't have a lot of empty clock cycles (so-called strong threads). You'd probably see that kind of scenario in a server workload more than in a desktop or gaming situation.

But I still like my idea of the CPU dispensing cold beer :D..
 


I agree, but IIRC that's mostly true for highly parallel, non-branching workloads that don't need out-of-order execution, like the graphics rendering you mentioned. In a sense, that's what SSE and AVX on the CPU do - give it the ability to process many data elements simultaneously in one or two clock cycles. Which is why certain encryption stuff on Sandy Bridge is something like 10x or 20x faster than on Westmere, which doesn't support it. But for a lot of branchy, out-of-order x86 code which is not parallel, I don't see the GPU helping much - sorta like the FPU not being too useful for games, which are still mostly integer code, I think.
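To make the SSE/AVX point concrete, here's a minimal sketch (my own, assuming an AVX-capable CPU and a compiler flag such as -mavx) where one instruction adds eight floats at once:

```cpp
#include <immintrin.h>  // AVX intrinsics
#include <cstdio>

int main() {
    // 32-byte alignment lets us use the aligned load/store forms.
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);      // load 8 floats
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb);  // 8 additions in one instruction
    _mm256_store_ps(c, vc);

    for (float v : c) printf("%.0f ", v);  // prints: 9 9 9 9 9 9 9 9
    printf("\n");
    return 0;
}
```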