Is it possible to use a CPU for 3D rendering?

srgess

Distinguished
Jan 13, 2007
556
0
18,990
I asked myself today why a CPU, or one CPU on a dual-CPU system, couldn't do what a video card does. Special RAM, separate from the main RAM, would be used and could be upgraded to give more dedicated video memory. It's probably been thought of already, so why isn't it done?
 

darious00777

Distinguished
Dec 15, 2006
687
0
18,990
I asked this question myself not too long ago, and 3DMark06 answered it. My 2.4 GHz Core 2 with 2 MB of cache couldn't render its test at a steady 1 frame per second, while my low-end PCI Express graphics card could have rendered the same test at hundreds of frames per second. Intel's 80-core processor running at 7-8 GHz would roughly equal what my graphics card does for graphics. That's about what it seems, anyway; I could be way off on that last point.

Still, graphics cards will be more powerful than CPUs by a huge margin, at least for the next five years or so. I don't know how AMD's Fusion is going to work out yet.
 

darkguset

Distinguished
Aug 17, 2006
1,140
0
19,460
ASICs are much faster than general-purpose integrated circuits for the same reason a builder can build a house faster than a car mechanic can: each is specialised in his own job and does it much faster. A scientist (general applications) could probably do anything, but very slowly, because he has to teach himself how first (that is your CPU). Think of it like that.
 
Comparing a CPU to a GPU is like comparing our brains to simple calculators: our brains can handle complex tasks like smell, balance, sight, etc., yet we can't calculate 43534 × 455 + 5 × 3.332 in a split second.
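For what it's worth, the arithmetic in that analogy really is split-second work for any processor; here it is evaluated in a couple of lines of Python:

```python
# The calculator expression from the analogy, evaluated directly; even an
# interpreted language does this in well under a millisecond.
result = 43534 * 455 + 5 * 3.332
print(round(result, 2))
```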
 

SSS_DDK

Distinguished
Jan 28, 2007
136
0
18,680
Your CPU can render 3D; it just cannot do it fast enough. Before being known as graphics cards, these chips were known as 3D accelerators, which is what they are: they execute large instructions in a few clock cycles where a CPU takes hundreds of cycles to do the same. When your CPU handles this code, it has to do long calculations (128-bit, 256-bit, etc.) which, even with SSE, are very hard to push through the complex structure of a CPU.
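To make the "can, but slowly" point concrete, here is a toy software-rendering loop in Python (the gradient shader is made up for illustration): the CPU must visit every pixel serially, whereas a GPU spreads the same per-pixel work across many parallel units.

```python
# Toy "software rendering": the CPU shades every pixel of the frame in a
# serial loop. A GPU runs the same per-pixel function on many pixels at
# once, which is the whole speed difference. The gradient shader is made up.

def shade(x, y, width, height):
    # Per-pixel work: a simple colour gradient.
    r = x * 255 // (width - 1)
    g = y * 255 // (height - 1)
    return (r, g, 128)

def render(width, height):
    return [[shade(x, y, width, height) for x in range(width)]
            for y in range(height)]

frame = render(64, 64)  # 4096 calls to shade(), one pixel at a time
```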
 

leo2kp

Distinguished
Yeah, GPUs have a specific instruction set and architecture designed for 3D rendering, while CPUs simply have a broader range of things to worry about. This is called an ASIC (application-specific integrated circuit), I think, and you see the same idea in a lot of places, such as encryption cards in servers and sound cards.

I believe the architecture itself also limits clock speed. While it's possible to get a C2D to hit 3.4 GHz on air, I think the reason we don't see 3 GHz GPUs is how they're designed, and AFAIK there isn't much room to improve that either; i.e., GPUs will always be slower clock-wise because of the job they do.

Anyone have any more insight on that?
 

slicessoul

Distinguished
Apr 18, 2006
771
0
18,980
Comparing a CPU to a GPU is like comparing our brains to simple calculators: our brains can handle complex tasks like smell, balance, sight, etc., yet we can't calculate 43534 × 455 + 5 × 3.332 in a split second.

I'm confused...

Does that mean the CPU is the biggest bottleneck for the GPU?
You've got a C2D which can render 4-5 fps, but an 8800 which can render 100 fps; does that mean the data doesn't flow through the CPU?
What does the CPU do when you play games, then?
 

SSS_DDK

Distinguished
Jan 28, 2007
136
0
18,680
Comparing a CPU to a GPU is like comparing our brains to simple calculators: our brains can handle complex tasks like smell, balance, sight, etc., yet we can't calculate 43534 × 455 + 5 × 3.332 in a split second.

I'm confused...

Does that mean the CPU is the biggest bottleneck for the GPU?
You've got a C2D which can render 4-5 fps, but an 8800 which can render 100 fps; does that mean the data doesn't flow through the CPU?
What does the CPU do when you play games, then?
What the CPU does is handle game logic, not 3D instructions, though it sometimes needs to emulate missing functionality.
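That division of labour can be sketched as a game loop: each frame the CPU advances the game state, then submits draw calls to the GPU. Everything below (the FakeGPU class, the entity dicts) is an illustrative stand-in, not a real API.

```python
# Sketch of the split described above: each frame the CPU runs the game
# logic (here, trivial physics), then submits draw calls to the GPU.
# FakeGPU and the entity dicts are made-up stand-ins, not a real API.

class FakeGPU:
    def __init__(self):
        self.draw_calls = 0

    def draw(self, entity):
        # On real hardware the card would rasterize the entity's mesh here.
        self.draw_calls += 1

def game_loop(gpu, entities, frames):
    for _ in range(frames):
        # CPU side: advance the game state.
        for e in entities:
            e["x"] += e["vx"]
        # CPU side: tell the GPU what to render this frame.
        for e in entities:
            gpu.draw(e)

gpu = FakeGPU()
entities = [{"x": 0.0, "vx": 1.0}, {"x": 5.0, "vx": -0.5}]
game_loop(gpu, entities, frames=10)
print(gpu.draw_calls)  # 2 entities x 10 frames
```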
 

agfarrugia

Distinguished
May 7, 2007
5
0
18,510
GPUs are designed for graphics calculations; CPUs are general purpose. A CPU can do the work of a GPU, but the CPU may take 10, 20, or 50 clock cycles to do what the GPU does in 1 clock cycle.

The amount of number crunching a GPU can do is far greater than a CPU's; a CPU is just more flexible.
 

slicessoul

Distinguished
Apr 18, 2006
771
0
18,980
Can we say that GPUs are more advanced than CPUs?
If you say yes, let's look at the fastest GPU and the fastest CPU. For $700 you get an 8800 GTX with 768 MB of GDDR3 RAM, while a quad-core X6700 costs $900 and you don't get any RAM with it. Where do GPU manufacturers cut their production costs?
 
The Cell processor is a step toward a hybrid CPU/GPU; it is very good at computing loads of floating-point operations very fast and does away with a graphics card.
However, most game logic requires short, branched operations that such a processor doesn't handle well.
As such, the very old idea of a core CPU targeted at integer computation with an FP coprocessor is coming back today as graphics cards with programmable shaders (which aim at computing large amounts of FP data as fast as possible).
Current processors have an integrated FPU; however, it is much less powerful than a graphics card's dedicated GPU, making software rendering of a scene much slower. That doesn't mean processors can't render a scene; it just means they need MUCH more time.
Note: software like 3D Studio used to be CPU-only. Even now a 3D card is used for preview rendering, not for final rendering, because the various optimizations used in the card and in the driver make several computations approximate and a bit unreliable.
Professional graphics cards in fact come with special drivers that disable such optimizations and make things like OpenGL texture clamping more precise, so as to avoid falling back to software rendering as often as possible.
 

SSS_DDK

Distinguished
Jan 28, 2007
136
0
18,680
Can we say that GPUs are more advanced than CPUs?
If you say yes, let's look at the fastest GPU and the fastest CPU. For $700 you get an 8800 GTX with 768 MB of GDDR3 RAM, while a quad-core X6700 costs $900 and you don't get any RAM with it. Where do GPU manufacturers cut their production costs?
No, you can't. In your cell phone there is a special CPU that specializes in digital transmission. Running at more Hz doesn't make a chip more advanced; it just makes it different. A GPU is a streaming processor. It took a while for GPUs to be able to handle branching (conditional loops), and when they did, we ended up with a huge branching latency on the G80. CPUs generally sacrifice absolute performance for backward compatibility and ease of programming (ever tried to program a GPU, even with C-like code?!).
Maybe we'll get a hybrid CPU/GPU in the future (AMD Fusion? Or Intel's next move, I forget the name). Right now, what you pay for in a CPU is mostly SRAM, not DRAM (an SRAM cell uses about six transistors per bit while a DRAM cell needs only one), which is one reason why a GPU is cheaper (!?) than a CPU.
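A back-of-the-envelope version of that SRAM-vs-DRAM cost argument, using the usual figures of roughly six transistors per SRAM bit and one per DRAM bit (the 4 MB cache size is just an example):

```python
# Rough transistor budget for a cache, comparing SRAM (~6 transistors per
# bit) against DRAM (1 transistor per bit). The 4 MB size is illustrative.

def cache_transistors(num_bytes, transistors_per_bit):
    return num_bytes * 8 * transistors_per_bit

sram = cache_transistors(4 * 1024 * 1024, 6)   # 4 MB cache built from SRAM
dram = cache_transistors(4 * 1024 * 1024, 1)   # same capacity in DRAM
print(sram, dram, sram // dram)  # SRAM needs 6x the transistors
```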
 

m25

Distinguished
May 23, 2006
2,363
0
19,780
I asked myself today why a CPU, or one CPU on a dual-CPU system, couldn't do what a video card does. Special RAM, separate from the main RAM, would be used and could be upgraded to give more dedicated video memory. It's probably been thought of already, so why isn't it done?
The inverse is happening: rather than the CPU taking over graphics, more and more work is being delegated to the GPU, which gets used as an FP accelerator.
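The kind of work that gets delegated is typically large and uniform floating-point math. A classic streaming kernel such as SAXPY (y = a·x + y over whole arrays) shows the shape of it; plain Python below is only a stand-in for what the GPU's stream processors would do in parallel.

```python
# SAXPY (y = a*x + y over whole arrays): the classic uniform FP streaming
# kernel. A GPU would compute each element on a separate stream processor;
# this serial version just illustrates the computation being offloaded.

def saxpy(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

out = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(out)  # [12.0, 24.0, 36.0]
```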
 

srgess

Distinguished
Jan 13, 2007
556
0
18,990
Could they make a CPU specially for GPU instructions? The big point would be to easily replace the core and the RAM.

I guess YES, and the name would be... GPU. Oh, we already have it :lol:

We have PCIe cards where you need to change the whole card if you want to upgrade. I know there are some onboard video chips, but they're far from top performance.
 
Can we say that GPUs are more advanced than CPUs?
If you say yes, let's look at the fastest GPU and the fastest CPU. For $700 you get an 8800 GTX with 768 MB of GDDR3 RAM, while a quad-core X6700 costs $900 and you don't get any RAM with it. Where do GPU manufacturers cut their production costs?

They can't be compared like that.