GPUs instead of CPUs

novasoft

Distinguished
May 8, 2008
45
0
18,530
Hi,

A couple of days ago, I started thinking about GPGPU.
Are today's GPUs better at calculating stuff than a CPU? If so, why not use a GPU as the "CPU" instead? Shouldn't that be better?
Sure, GPGPU programming is harder, but you get a lot more performance than from a regular CPU. For example, with Nvidia's CUDA-based PhysX, a regular CPU manages around 1 - 5 FPS (not sure if that's right) and with a GPU it's much higher.
Hmmm... can a GPU handle the same tasks as a CPU? I'm confused and tired, what a great combination :sarcastic:
Oh well... if anyone understands a little of what I wrote and can answer, I'll be thankful ^^

I should go and have a nap now, so g'night everybody! ;D

I understand if you didn't follow this post, it was a bit weird, I know. Haha. Sorry. :p
 

nirvana21

Distinguished
May 22, 2008
33
0
18,530
A CPU can handle the general-purpose, branch-heavy calculations that a GPU can't do well. GPUs can run lots of calculations in parallel, which makes them ideal for certain workloads. Ultimately you will always need a CPU and a GPU, or some combination of both.
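To make that concrete, here's a rough CUDA sketch (a made-up example, not from any real app): the same element-wise add written once as a serial CPU loop and once as a GPU kernel that spreads the work across thousands of lightweight threads.

```cpp
// Minimal sketch: the same element-wise add on CPU and GPU.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// CPU version: one core walks the array one element at a time.
void add_cpu(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// GPU version: every thread handles exactly one element, all in parallel.
__global__ void add_gpu(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    add_cpu(a.data(), b.data(), c.data(), n);        // serial: n iterations

    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Parallel: ~4096 blocks of 256 threads all run the same tiny function.
    add_gpu<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);                     // 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The add itself is trivial either way; the point is that the GPU only wins when there are huge numbers of independent little jobs like this to hand out.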
 

godmode

Distinguished
Aug 27, 2008
69
0
18,630
Not vaporware, the chip is coming out: http://www.crn.com/hardware/210602586

AMD's rebranding effort seems to be leading up to the release of the chip maker's future Fusion Architecture processors and platforms. These products will feature tighter integration of central processors and graphics processors, including chips that will feature both compute engines on a single piece of silicon that AMD says it will ship in 2009.

 

copasetic

Distinguished
Jun 9, 2008
218
0
18,680
I think once we hit the lower limit of manufacturing (IBM showed off a 22nm process earlier this week; I don't think it'll get much lower than that) we'll start to see a move away from a single CPU toward a cluster of chips each optimized for a different task: one for floating point operations, one for ray tracing, one for pixel shading, and a whole other set for more everyday things like instruction processing and memory fetching. Fusion is the first step toward this kind of setup.

Problem is, as always, heat. It'll take more than heatsinks to cool 10 chips in an area not much bigger than a modern CPU. And then there's the fact that you're talking about rethinking the whole software/hardware interface.
 

San Pedro

Distinguished
Jul 16, 2007
1,286
12
19,295
^ There's still 21.9999999999nm of room before they run out. I'm sure they will continue until CPUs are formed at the molecular level. It's just a matter of time and money.
 


Yeah right, because time and money can change the very nature of the universe, right?
If you have no clue about the basic physics of the chip making process, or even what size a molecule actually is then perhaps you shouldn't comment.
 
Guest

Guest
You can only make something so small before it isn't SOMETHING anymore; past that, it's just a random combination of atoms.

Like, you make it so small that it's 1,000 atoms wide... that's all well and good, but how can you have circuits that actually mean something? It wouldn't be a processor any more.

At that point, however, we might know more about quarks and the other stuff that makes up atoms, so it may be possible, but I doubt it will even be seriously considered for the next couple of decades, or even centuries.
 
Like, you make it so small that it's 1,000 atoms wide... that's all well and good, but how can you have circuits that actually mean something? It wouldn't be a processor any more.

Actually, an atom of silicon is about 0.2 nanometers in diameter, so with the 22nm process IBM claims to have come up with, features are already only about 100 atoms across (22 nm / 0.2 nm ≈ 110).
 
A GPU could actually do everything a CPU does, but it'd be so inefficient it wouldn't matter. By the time you wrote all of the code and the GPU churned through it in parallel, the CPU would have it done in no time. In layman's terms: if the work is short and repetitive, GPUs do well. If it calls for a lot of branching and is even remotely complex, GPUs fail. At least that's how I understand it. Larrabee will supposedly cross this barrier by using a lot of cache and keeping redundancy way down; it runs slower, but supposedly wider. Again, that's how I understand it. I may be wrong, so feel free to correct me, as I like to learn as much as the next guy.
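To put the "branching hurts GPUs" point in code terms, here's a rough CUDA sketch (the kernels are made up purely for illustration). On NVIDIA hardware, threads run in groups of 32 called warps; when threads in one warp take different branches, the hardware runs both paths one after the other, so the divergent kernel below loses much of its parallelism while the branchless one keeps every thread doing identical work.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Divergent: odd and even threads in the same warp take different paths,
// so each warp effectively executes both branches one after the other.
__global__ void divergent(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i % 2 == 0)
        x[i] = x[i] * 2.0f;   // path A
    else
        x[i] = x[i] + 1.0f;   // path B
}

// Branchless: every thread in the warp does identical work, which is the
// "short and repetitive" case GPUs are built for.
__global__ void uniform(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float even = 1.0f - (i & 1);                  // 1 for even threads, 0 for odd
    x[i] = even * (x[i] * 2.0f) + (1.0f - even) * (x[i] + 1.0f);
}

int main() {
    const int n = 1 << 20;
    std::vector<float> h(n, 1.0f);
    float* d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    divergent<<<(n + 255) / 256, 256>>>(d, n);    // warps serialize over both branches
    uniform<<<(n + 255) / 256, 256>>>(d, n);      // warps stay in lockstep

    cudaMemcpy(h.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("x[0] = %f, x[1] = %f\n", h[0], h[1]);
    cudaFree(d);
    return 0;
}
```

With only two paths the cost is small; the real pain starts when "remotely complex" code has deeply nested, data-dependent branches and every warp ends up crawling through most of them.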
 

dagger

Splendid
Mar 23, 2008
5,624
0
25,780


That's exactly right. If a GPU has 200 stream processors, think of it more like 200 very simple cores, compared to a quad-core CPU's 4 large ones. It's not exactly right physically, but logically, that's basically it.
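A rough sketch of that analogy (the thread and launch counts below are just illustrative, not tied to any particular chip): on the CPU you split the array into a few big chunks, one per big core, while on the GPU you launch far more threads than there are stream processors and let the hardware scheduler keep all the simple ALUs fed.

```cpp
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

// CPU style: split the work into 4 big chunks, one per big core.
void scale_cpu(float* x, int n, float s) {
    const int nthreads = 4;
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back([=] {
            for (int i = t * n / nthreads; i < (t + 1) * n / nthreads; ++i)
                x[i] *= s;
        });
    for (auto& th : pool) th.join();
}

// GPU style: tens of thousands of tiny threads, each grabbing a few elements
// via a grid-stride loop; the scheduler maps them onto the stream processors.
__global__ void scale_gpu(float* x, int n, float s) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        x[i] *= s;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> h(n, 1.0f);

    scale_cpu(h.data(), n, 2.0f);                 // 4 heavyweight threads

    float* d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    scale_gpu<<<256, 256>>>(d, n, 2.0f);          // 65,536 lightweight threads
    cudaMemcpy(h.data(), d, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("h[0] = %f\n", h[0]);                  // 4.0
    cudaFree(d);
    return 0;
}
```

Each "core" on the GPU side is doing almost nothing clever; it just has a couple hundred siblings doing the same thing at once.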
 
I think I've read that Larrabee does something like 4 shader ops per core, as they're described by Intel. However, there's something like 32 per actual core, it's all done in double precision, and there's little or no redundancy, since it only takes one pass and caches the rest as it goes. The numbers aren't factual, as I don't remember them exactly. But from what I've read, Larrabee currently has more "shader units/capability" than anything else due to its setup, and even though it's slower per pass, eliminating the extra pass(es) catches it back up, reducing latency. The coding won't be as simple as Intel claims, it will be a large chip since it'll need a lot of cache along with those, say, 32 cores, and its compiler is going to run hot and have to be faster than anything we've seen thus far. And by the time it arrives, by the end of '09 say, it'll most likely be behind ATI and nVidia.
 

dagger

Splendid
Mar 23, 2008
5,624
0
25,780
Wow, that's certainly... different. :na:

I wonder how well it will actually run graphics. Wouldn't the coding required be too different for it to work on existing games?
 
I don't get a lot of what I've read, though from what I sort of do, it's x86, so it'll all work, more like CISC. Most of what I read is over my head, but supposedly, once it's done it's done. I wish I could explain it better, heheh. But there'll be libraries etc. to use in a common language, and things won't have to be redone in hardware all the time, though the software will, but again, it's a common language that's supposedly easy. I'm also getting the feeling that they'll want low frame rates, say locked at 30 like consoles, using motion blur etc. I'm wondering how that'll affect FPS players, though.
 
Even from what I've read, they don't get it either, though what was written is most probable. But like anything, unless you're on the Larrabee team, you won't know exactly how it's meant to work, just like with nVidia or ATI, though of course with ATI and nVidia we have a history and actual hardware.