
What is most important for CPUs today?


What is most important in CPU architecture?

Total: 22 votes (3 blank votes)

  • architecture / bus width / cache size / instructions & data per cycle / instruction set: 52 %
  • floating point: 0 %
  • clock speed: 11 %
  • FSB bandwidth / turbo bandwidth / I/O bus speed: 11 %
  • pipeline stage count: 0 %
  • multicore / multithread / cloud / virtualization: 21 %
  • modulize/shaderize: 0 %
  • fusion / SoC: 7 %
December 7, 2010 7:53:06 AM

Can anyone predict what future processors will be? More cores? Higher clock speed? Strong IPC and single-threaded performance? More instruction-set extensions in the future (e.g. SSE5/6/7...)? Or a more GPU-like design (like Bulldozer...)?

Please discuss.


December 7, 2010 10:42:12 AM

With power and die-size constraints, and with die shrinks no longer reliably reducing power consumption, the push is toward more energy-efficient designs that use that efficiency headroom to boost performance.
December 7, 2010 11:00:11 AM

First, and it's a no-brainer:

1) Cores do NOT scale well with current programming languages (everything still gets compiled down to ASM, which is simply a program listing with line labels and goto statements; not exactly very parallel, is it?)

2) Floating-point performance, frankly, isn't THAT important for the general user

3) Clock speed is pointless if IPC is low

4) I/O buses will ALWAYS be far slower than the rest of the PC (never mind the inherent locking whenever the hardware they talk to is accessed)

A good architecture with the ability to execute instructions at an efficient speed will always win out in the end. (Then again, we're stuck with x86, which has to be one of the worst instruction sets I've ever worked with...)
December 7, 2010 11:03:41 AM

I can predict what future CPU's are gonna be like: AWESOME!!!!!!! I WANT IT NAO!!

Cheerz :sol: 

On a more serious note, the near future is probably going to be focused on performance per clock (IPC), to get the most performance out of as few GHz as possible. We hit the GHz wall a couple of years ago, and now we just need to start getting more out of it. We will also probably be seeing more and more cores in the near future.

In the long term we are probably looking at more energy-efficient designs, and a couple of months back I read something about carbon-based CPUs instead of silicon. Supposedly they are going to be a lot smaller, and able to reach much higher frequencies.
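The "performance per clock" idea is the IPC half of a rough throughput model: performance is roughly IPC times clock. A back-of-the-envelope sketch (the IPC and clock numbers below are made up for illustration, not measurements of any real chip):

```python
# Toy first-order model: throughput ~ IPC x clock.
def perf(ipc, clock_ghz):
    # Billions of instructions retired per second (very rough).
    return ipc * clock_ghz

chip_a = perf(ipc=1.0, clock_ghz=3.4)  # higher clock, weaker IPC
chip_b = perf(ipc=1.5, clock_ghz=2.8)  # lower clock, stronger IPC
print(chip_a, chip_b)  # the lower-clocked chip comes out ahead
```

This is why "getting more out of each GHz" can beat simply raising the clock.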
December 7, 2010 11:06:38 AM

gamerk316 said:

A good architecture with the ability to execute instructions at an efficient speed will always win out in the end. (Then again, we're stuck with x86, which has to be one of the worst instruction sets I've ever worked with...)


This.

There's a finite number of cores we can add before almost nothing can take advantage of them, save a few programs (case in point: the Phenom II X6 in gaming)
December 7, 2010 11:16:42 AM

amdfangirl said:
This.

There's a finite number of cores we can add before almost nothing can take advantage of them, save a few programs (case in point: the Phenom II X6 in gaming)

Exactly why I love rubbing it in my friends' faces that my 965 games better than their 1065 CPUs... :lol: 
We tested it and it's true. Suckaaaas! :p 

Cheerz :sol: 
December 7, 2010 11:25:30 AM

Just take a look at the Phenom II X3 vs the Phenom II X4 at the same clock speed in gaming: aside from RTS titles like SC2, the difference between the X3 and X4 is negligible.
December 7, 2010 12:47:22 PM

To be fair, a few games do scale really well (Bad Company 2 is a perfect example).

I blame the lack of decent threading across multiple CPUs on a few factors, all of them CPU-independent:
1: Creating a thread is CPU-heavy
2: Few programming languages are designed with multi-CPU interaction in mind, and aren't optimized as such
3: Windows' scheduler, for memory/performance reasons, typically likes to allocate different threads from the same process to the same CPU/core (and since EVERYTHING in Windows inherits from a few .sys and DLL files that are active at Windows startup, the scheduler ends up putting almost every thread on the first CPU/core available)

So the OS, the memory-management model, the programming languages, and finally the programmer are the biggest reasons massively parallel CPUs won't take off.
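Point 1 (thread creation being expensive) is easy to see even from user space. A quick Python sketch that spawns a fresh thread per trivial task and compares it with plain function calls; the exact ratio is OS- and hardware-dependent, but the one-thread-per-task version loses badly:

```python
import threading
import time

def noop():
    # Trivial "work": the cost we measure is purely call/thread overhead.
    pass

N = 200

# Plain function calls: no threading involved.
t0 = time.perf_counter()
for _ in range(N):
    noop()
direct = time.perf_counter() - t0

# A fresh thread per call: pays creation, startup, and teardown each time.
t0 = time.perf_counter()
for _ in range(N):
    t = threading.Thread(target=noop)
    t.start()
    t.join()
threaded = time.perf_counter() - t0

print(f"direct calls: {direct:.6f}s  one thread per call: {threaded:.6f}s")
```

This is also why real code reuses worker threads (thread pools) instead of creating one per task.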
December 7, 2010 2:56:51 PM

Not sure what "modulize/shaderize" means, unless the CPU is big enough to shade you from the hot summer sun :p . In which case the ability to dispense plenty of cold beer upon demand should also be on the list :D .

However the future does seem to be incorporating the GPU into the CPU, at least for the tablet/phone/mobile market. As for GPGPU stuff, too early to tell IMO.
December 7, 2010 3:21:05 PM

Not really on the list (I guess it would fit under the architecture point), but features like Intel's Turbo Boost and Hyper-Threading are really awesome. HT isn't the best, but it's good. Turbo Boost is where it's at, though. It would be awesome to be able to custom-OC the turbo multipliers, haha.
December 7, 2010 7:15:13 PM

Floating point as a poll option was a weird one if we're talking about the average consumer laptop/desktop. We don't all own an IBM Roadrunner, lol.
December 7, 2010 8:12:48 PM

gamerk316 said:
To be fair, a few games do scale really well (Bad Company 2 is a perfect example).

I blame the lack of decent threading across multiple CPUs on a few factors, all of them CPU-independent:
1: Creating a thread is CPU-heavy
2: Few programming languages are designed with multi-CPU interaction in mind, and aren't optimized as such
3: Windows' scheduler, for memory/performance reasons, typically likes to allocate different threads from the same process to the same CPU/core (and since EVERYTHING in Windows inherits from a few .sys and DLL files that are active at Windows startup, the scheduler ends up putting almost every thread on the first CPU/core available)

So the OS, the memory-management model, the programming languages, and finally the programmer are the biggest reasons massively parallel CPUs won't take off.


Thing is, games will only scale up to a point of diminishing returns.
December 7, 2010 8:21:42 PM

fazers_on_stun said:
Not sure what "modulize/shaderize" means, unless the CPU is big enough to shade you from the hot summer sun :p . In which case the ability to dispense plenty of cold beer upon demand should also be on the list :D .

However the future does seem to be incorporating the GPU into the CPU, at least for the tablet/phone/mobile market. As for GPGPU stuff, too early to tell IMO.


Why I called it "shaderize": the Bulldozer design divides a "core" into several smaller compute units (like the shader units on AMD's GPUs) and groups those units into modules with a shared L1 cache, rather than Intel's traditional L1/L2 caches dedicated to each core. Such a parallelized design can let a module run multiple threads more efficiently than Intel's bigger cores, since Intel's cores start to conflict with the Windows scheduler as thread counts increase, while a Bulldozer module is largely unaffected by the scheduler. Single-threaded performance will be worse, though, since the smaller units can't perform as well on their own; that's the tradeoff between serialized (Intel) and parallelized (AMD) designs.

However, the manufacturing cost will be a lot less than the current Phenom II, and AMD seems to be putting all its bets on future Fusion and cloud computing. Using small "compute units" against Intel's larger cores is just what they did against Nvidia recently... (5D stream units vs. a single CUDA core.)
December 8, 2010 6:35:39 AM

The future of the CPU is the GPU. There will be more GPU hardware acceleration of most of the "strenuous" CPU chores, rendering the CPU less significant. It has already happened for Web 2.0: with hardware Blu-ray and Flash acceleration, even my current Sempron LE-1100 can play 1080p HD video using just an 8400GS. GPU acceleration will be the future after this core race ends in the next year or two.
December 8, 2010 10:57:18 AM

amdfangirl said:
Thing is, games will only scale up to a point of diminishing returns.


Not just games; anything that isn't 100% parallel will eventually see diminishing returns. And again, due to Windows' very design, nothing is 100% parallel.

And again, people need to realize everything that goes on when a thread is actually created: it's a massive CPU operation, complete with multiple locks on the system kernel. From a performance standpoint, while more threading is good for multi-core systems, there is an overhead to creating those threads in the first place, and the majority of people are still on duals... at the end of the day, it's FAR more attractive to optimize for the masses, not the few.

Hence why I was never thrilled with the i7/i5; I never saw the performance benefit to justify moving from a C2Q platform.
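The "diminishing returns" point has a name: Amdahl's law, which bounds the speedup from adding cores by the fraction of the work that is serial. A minimal sketch (the 60% parallel fraction below is a made-up figure for a game-like workload, not a measurement):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only `parallel_fraction` of the
    work can run in parallel (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 60% of the work parallel, piling on cores flattens out fast;
# the speedup can never exceed 1 / 0.4 = 2.5x, no matter the core count.
for cores in (1, 2, 4, 6, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.6, cores):.2f}x")
```

This is exactly why an X6 barely beats an X4 in games: the serial part of the frame dominates once a few cores are busy.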
December 8, 2010 2:52:51 PM

cheesesubs said:
Why I called it "shaderize": the Bulldozer design divides a "core" into several smaller compute units (like the shader units on AMD's GPUs) and groups those units into modules with a shared L1 cache, rather than Intel's traditional L1/L2 caches dedicated to each core. Such a parallelized design can let a module run multiple threads more efficiently than Intel's bigger cores, since Intel's cores start to conflict with the Windows scheduler as thread counts increase, while a Bulldozer module is largely unaffected by the scheduler. Single-threaded performance will be worse, though, since the smaller units can't perform as well on their own; that's the tradeoff between serialized (Intel) and parallelized (AMD) designs.

However, the manufacturing cost will be a lot less than the current Phenom II, and AMD seems to be putting all its bets on future Fusion and cloud computing. Using small "compute units" against Intel's larger cores is just what they did against Nvidia recently... (5D stream units vs. a single CUDA core.)


OK, gotcha now. You should probably write the choice as being between Intel's SMT and AMD's CMT. IIRC, AMD's version of stronger multitasking will matter in highly threaded situations where each thread doesn't have a lot of empty clock cycles (so-called strong threads). You'll probably see that kind of scenario in a server workload more than in a desktop or gaming situation.

But I still like my idea of the CPU dispensing cold beer :D ..
December 8, 2010 3:06:15 PM

joefriday said:
The future of the CPU is the GPU. There will be more GPU hardware acceleration of most of the "strenuous" CPU chores, rendering the CPU less significant. It has already happened for Web 2.0: with hardware Blu-ray and Flash acceleration, even my current Sempron LE-1100 can play 1080p HD video using just an 8400GS. GPU acceleration will be the future after this core race ends in the next year or two.


I agree, but IIRC that's mostly true for highly parallel, non-branching, in-order workloads like the graphics rendering you mentioned. In a sense, that's what SSE and AVX on the CPU do: give it the ability to process a large number of parallel data elements simultaneously in one or two clock cycles. Which is why certain encryption workloads on Sandy Bridge are something like 10x or 20x faster than on Westmere, which doesn't support those instructions. But for a lot of branchy, out-of-order x86 code that isn't parallel, I don't see the GPU helping much; sorta like the FPU not being too useful for games, which I think are still mostly integer code.
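What SSE/AVX buy you is data parallelism: one instruction applied across a whole register of lanes. A conceptual Python model of the idea; the chunk loop here stands in for a 4-wide vector register, which real SIMD hardware processes in a single instruction rather than a loop:

```python
LANES = 4  # SSE handles 4 x 32-bit floats per register; AVX widens to 8.

def scalar_add(a, b):
    # One add per element, like plain scalar x86 code.
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b):
    # Model: each chunk of LANES elements is "one vector instruction".
    out = []
    for i in range(0, len(a), LANES):
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [10.0] * 8
assert simd_add(a, b) == scalar_add(a, b)
print(simd_add(a, b))  # 8 elements added with only 2 "vector instructions"
```

The catch, as the post says, is that all lanes execute the same operation, which is exactly what branchy, divergent code can't provide.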