Is the GPU-centric computer the future?

dwight_looi

Distinguished
Mar 8, 2002
5
0
18,510
I was reading about the new Video card architectures and specs when it dawned on me that, perhaps, today's personal computer should be built around a GPU rather than a CPU! After all, both the GPU and the CPU are big 25+ million transistor devices fabricated using the latest sub-0.25 micron processes.

(1) Think about it, the kind of application that a typical home or office user runs does heavy-duty processing only when dealing with graphics and sound processing. I mean if you take out 3D games, photo-editing, playing back compressed movies or sound files, and the like, all you are left with is the word processor, the spreadsheet and perhaps the web browser. And even for these, if we can be assured that everything to do with displaying the GUI is taken care of, all you are left with is simple, not particularly intense computations that 386- or 030-level CPU power is quite sufficient for!

(2) Hence, I was forced to question why the computer is still built around the CPU, with a GPU on some graphics card sitting multiple bridges away on the slow PCI bus to support it. Why not the other way around? Why not have a GPU in the GeForce4 class, build the whole architecture around it, and throw the CPU out? Everything on the computer supports the video chip. 3D and 2D graphics will be blisteringly fast. The chip handles JPEG, MPEG, PNG, MP3 and other graphics/sound compression/decompression at the hardware level. It also supports matrix manipulation, 3D transforms, 2D scaling and rotation, etc. at the hardware level. On the rare occasion that general computing is required, it has the supplementary instruction set of a 286-class CPU running at a decently high clock speed (compared to the 286, that is).

(3) Instead of having a big, complex, general-purpose CPU powerhouse like the Itanium or Hammer, why not have a powerful GPU as the core of the system and give the general-purpose instructions a paltry 1 million transistors' worth of attention (one instruction pipe, non-superscalar) as a supporting part of the graphics chip? Won't this put performance where it counts?

(4) Also, instead of having to upgrade the CPU and the video card to get better performance, the user simply upgrades the CGPU!

 

Schmide

Distinguished
Aug 2, 2001
1,442
0
19,280
I got one word for you. “Physics”

If you want good games with realistic actions within them, you have to have a processor that can do these calculations for you.

All errors are undocumented features waiting to be discovered.
 

AMD_Man

Splendid
Jul 3, 2001
7,376
2
25,780
I got one word for you. “Physics”
Why can't a GPU be designed to also handle physics? Imagine it! That would be amazing. Unless you're designing a game or program that requires physics that doesn't apply in reality, the GPU could be integrated with the ability to execute certain physics algorithms.

In the future (as with T&L) a programmable physics unit may be designed, but currently all games aim to apply realistic physics rather than custom, game-specific physics.
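For what it's worth, the kind of "standard" physics step being talked about here is small and regular, which is what makes it a plausible candidate for fixed hardware, the same way T&L was. A minimal sketch in C (purely illustrative, hypothetical names, not anyone's actual engine code):

```c
/* Illustrative sketch only: a fixed, standard physics step
   (semi-implicit Euler integration) of the sort that could in principle
   be baked into hardware the way T&L was. All names are hypothetical. */
typedef struct {
    float px, py, pz;   /* position */
    float vx, vy, vz;   /* velocity */
    float ax, ay, az;   /* acceleration (forces/mass, gravity, etc.) */
} Body;

void integrate(Body *b, int count, float dt)
{
    for (int i = 0; i < count; i++) {
        /* update velocity first, then position (semi-implicit Euler) */
        b[i].vx += b[i].ax * dt;
        b[i].vy += b[i].ay * dt;
        b[i].vz += b[i].az * dt;
        b[i].px += b[i].vx * dt;
        b[i].py += b[i].vy * dt;
        b[i].pz += b[i].vz * dt;
    }
}
```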

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the ULTIMATE PC processor
 

IIB

Distinguished
Dec 2, 2001
417
0
18,780
However - a PC is driven by a native instruction set that takes care of the PC architecture (I/O, memory, etc.) and controls and manages all of your PC devices (including the graphics card).

This is why the CPU is the center of the PC - the CPU and the chipset are the only components that can manage the computer.

You would need to embed such features into a GPU in order to make it the "center" of your computer - but then it would no longer be a GPU... it would be a CPU + GPU...

This post is best viewed with common sense enabled
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
I was reading about the new Video card architectures and specs when it dawned on me that, perhaps, today's personal computer should be built around a GPU rather than a CPU!
This is exactly what nVidia's marketing department is attempting to push on the industry. They would love to be the center of the universe.


After all, both the GPU and the CPU are big 25+ million transistor devices fabricated using the latest sub-0.25 micron processes.
The current Pentium 4 uses a 0.13 micron process. Next year we move to a 90nm (0.09 micron) process.


(1) Think about it, the kind of application that a typical home or office user runs does heavy-duty processing only when dealing with graphics and sound processing. I mean if you take out 3D games, photo-editing, playing back compressed movies or sound files, and the like, all you are left with is the word processor, the spreadsheet and perhaps the web browser.
3D games can be processor-intensive due to AI routines, input device processing, sound processing, etc. Photo-editing only uses the video card for displaying the end results. The actual transformations and other features are all done through the CPU. Playing compressed movies requires the CPU to decompress the stream at a high rate of speed. The video card only does one thing, but it does it well. It renders and fills polygons. The CPU does everything else in the system. The CPU certainly handles a great deal more than the video card on a regular basis.


(2) Hence, I was forced to question why the computer is still built around the CPU, with a GPU on some graphics card sitting multiple bridges away on the slow PCI bus to support it. Why not the other way around? Why not have a GPU in the GeForce4 class, build the whole architecture around it, and throw the CPU out?
Because then your computer would not be able to do anything. ;) Sure the video card knows how to push polygons to the screen. But it is the CPU that tells it which polygons and of what shape. The video card is the servant (albeit a fast servant.) The CPU is the master.


(3) Instead of having a big, complex, general-purpose CPU powerhouse like the Itanium or Hammer, why not have a powerful GPU as the core of the system and give the general-purpose instructions a paltry 1 million transistors' worth of attention (one instruction pipe, non-superscalar) as a supporting part of the graphics chip?
I do not know about you, but I certainly do not want to play games that are all eye-candy and no content. I want real processing power put into my games with real AI and complex interaction. Eye candy is just icing on the cake.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

tankaholic

Distinguished
Mar 8, 2002
7
0
18,510
You can also spin that around... why not have the GPU become part of the CPU? Last I checked, CPU speeds were much higher than GPU speeds currently are. Imagine a Radeon 8500 or a Ti4600 running at 2 GHz!

It would be similar to when (way back) CPUs used to have separate math co-processors, which are now completely integrated. Sure, the chip would be twice as big, require much more cooling and cost a lot to design, but once it hit the market it would be unstoppable.

I guess that's just wishful thinking...
 

dwight_looi

Distinguished
Mar 8, 2002
5
0
18,510
Just hypothetically, consider this. The GPU sits atop the north bridge. The GPU has access to however many sticks of DIMMs or RIMMs you have as main memory. The GPU writes directly to the video output rasterizer. The rest of the stuff -- PCI masters, IDE, basic I/O, etc. -- sits on the south bridge. AND there is a 386DX sitting on the PCI bus with 64MB of "instruction memory" to do the low-performance, mundane stuff that occasionally needs doing. Basically, we swap the position and importance of the CPU and the GPU. Now, of course, we could also move a low-performance "386DX" in as part of the motherboard chipset or build it as a SMALL part of the GPU. Won't that work? Won't that put the emphasis of power where it counts more?

The general purpose co-processor doesn't have to be really powerful because, once you distill all the applications down, the hardcore processing isn't in deciding the position of the justified text in a word processor, or the positions of the entities in a real-time strategy game, or the xyz co-ordinates of the objects in a flight sim. The hardcore processing is in transforming large polygonal models, textures and sound to make a lifelike scene out of the simple data structures representing the virtual world. For all intents and purposes, if you remove the fancy graphics and sound, today's applications aren't very much more computationally demanding than those of the 68000/80286 era. Need a real-life example? If this weren't the case, game consoles like the PS2, with a paltry 300MHz processor and 32MB of memory, wouldn't be able to make great physics models like those in Gran Turismo 3 work!
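To illustrate where that "hardcore processing" actually goes, here is a minimal C sketch (hypothetical names, not any real engine) of the per-vertex transform work a GeForce-class chip does in hardware; the game logic that decides where the objects go is cheap by comparison:

```c
/* Illustrative sketch only: transforming every vertex of a large model by a
   4x4 matrix, tens of thousands of times per frame. This inner loop is the
   kind of work a GeForce-class chip does in hardware. Names are hypothetical. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[4][4]; } Mat4;

void transform_vertices(const Mat4 *m, const Vec4 *in, Vec4 *out, int count)
{
    for (int i = 0; i < count; i++) {
        out[i].x = m->m[0][0]*in[i].x + m->m[0][1]*in[i].y + m->m[0][2]*in[i].z + m->m[0][3]*in[i].w;
        out[i].y = m->m[1][0]*in[i].x + m->m[1][1]*in[i].y + m->m[1][2]*in[i].z + m->m[1][3]*in[i].w;
        out[i].z = m->m[2][0]*in[i].x + m->m[2][1]*in[i].y + m->m[2][2]*in[i].z + m->m[2][3]*in[i].w;
        out[i].w = m->m[3][0]*in[i].x + m->m[3][1]*in[i].y + m->m[3][2]*in[i].z + m->m[3][3]*in[i].w;
    }
}
```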
 

Schmide

Distinguished
Aug 2, 2001
1,442
0
19,280
Physics on the GPU, or a separate PPU? (Now that has some French marketability) You definitely have a good point. I can see how certain calculations like collision detection could be grafted into a GPU. I would be equally amazed if it came without a price on the GPU. It all comes down to the jack-of-all-trades, master of none argument.

All errors are undocumented features waiting to be discovered.
 

somerandomguy

Distinguished
Jun 1, 2001
577
0
18,980
A physics engine will come along eventually, but you need algorithms that do it before we can have hardware that makes it fast. 3D models will still sink into walls in-game.

"Ignorance is bliss, but I tend to get screwed over."
 

AMD_Man

Splendid
Jul 3, 2001
7,376
2
25,780
As long as all the games are based on the same physics, developing standard algorithms that would be accelerated by the GPU could work. Now, I don't agree with what one person said that all you need is a 386DX because a GPU wouldn't be able to accelerate floating point and integer operations of non-graphical programs. However, processing physics through the GPU will remove the constant need for faster processors for a while.

AMD technology + Intel technology = Intel/AMD Pentathlon IV; the ULTIMATE PC processor
 

kief

Distinguished
Aug 27, 2001
709
0
18,980
There is and has been work on a "computer on a chip" which would do everything. This is about as close to what you're talking about as it comes! One day we will have one chip and one PCB...

Jesus saves, but Mario scores!!!
 

Schmide

Distinguished
Aug 2, 2001
1,442
0
19,280
It would seem efficient to have an area of the GPU where a few vectors could be stored. These vectors would be collision-detected against any geometry thrown into the system. Of course, geometry could easily be excluded from this collision detection based on a state flag. One foreseeable problem is that there is often projection information stored in the matrices driving the geometry engine. However, this could easily be compensated for by separately calculated matrices that do not contain this information; consequently, these matrices would have to be created on the CPU, adding another reason for a powerful CPU.
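As a rough sketch of how that looks on the CPU side today (purely illustrative, hypothetical names): the GPU is fed world*view*projection, but a collision test wants plain world-space positions, so a world matrix with no projection baked in is kept separately and used for the test:

```c
/* Illustrative sketch only (hypothetical names): collision tests use a world
   matrix with no projection in it, unlike the combined matrix fed to the GPU. */
typedef struct { float x, y, z; } Vec3;
typedef struct { float m[4][4]; } Mat4;

/* Transform a model-space point into world space using the world matrix only. */
Vec3 to_world(const Mat4 *world, Vec3 p)
{
    Vec3 r;
    r.x = world->m[0][0]*p.x + world->m[0][1]*p.y + world->m[0][2]*p.z + world->m[0][3];
    r.y = world->m[1][0]*p.x + world->m[1][1]*p.y + world->m[1][2]*p.z + world->m[1][3];
    r.z = world->m[2][0]*p.x + world->m[2][1]*p.y + world->m[2][2]*p.z + world->m[2][3];
    return r;
}

/* Simple sphere-sphere overlap test in world space. */
int spheres_collide(Vec3 a, float ra, Vec3 b, float rb)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = ra + rb;
    return dx*dx + dy*dy + dz*dz <= r*r;
}
```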

All errors are undocumented features waiting to be discovered.
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
If the games you play are not computationally expensive, then I question your taste in games. ;) I believe most of us got bored of Quake 3 Arena after two weeks. It was all eye candy and no content. The same could be said about any game that uses very little CPU to perform realistic AI, alterable content, and a great plot. I actually had much more fun playing the text-based games of yesterday than I do playing most games released today.

One of the main reasons most games today are so similar is they all use the same hardware features on the video card. These features tend to lock you into a similar template. There is simply no way to "think outside the box" if you need to stay inside the box that is your video card in order to get the game to run at reasonable speeds. A real CPU offers much more flexibility and power to do whatever your imagination dreams up.

If you start adding more and more features to your video card, pretty soon it becomes a general purpose CPU. The graphics chip companies are more than welcome to try to compete in the CPU market, but they have a long learning curve ahead of them. Simply defining the graphics chipset as the brain is not going to actually change which component really is the brain.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
What I would prefer to see is memory bandwidth to the processor increased sufficiently to allow the actual CPU to take over the 3D processing functionality of the video card.

The reason 3D accelerators came into existence is that the processor simply could not pump enough pixels from memory to the video framebuffer on the video card. The main reason for this was low memory bandwidth and low bandwidth to the video card. Now that we have increased AGP transfer speeds, all we need is higher memory bandwidth to the processor and we can actually get rid of the 3D accelerator portion of the video card entirely. This would free us from the constraints of the various 3D APIs such as DirectX and OpenGL, which constrict us to various depths and clipping planes, forcing games on us that look almost exactly like one another, except for minor graphical differences.

Imagine game worlds where you can break a branch off a tree and use it to knock someone over. Video cards have an extremely difficult time with any scene that actually changes. They completely rely on models being predefined and stored in the video card's memory. Move the processing back from the video card over to the CPU and we would eliminate these problems.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Matisaro

Splendid
Mar 23, 2001
6,737
0
25,780
What I would prefer to see is memory bandwidth to the processor increased sufficiently to allow the actual CPU to take over the 3D processing functionality of the video card.

The 3D graphics chip is hardwired to do graphics; if you had to emulate that in software, even with huge bandwidth, you would still not get the same performance.


Try running Quake 3 in hardware mode, then run it in software; you're probably seeing a 1000% drop in speed, if not more. You would have to increase bandwidth and CPU speeds to match that, not to mention the other things the CPU does during a game. I just don't see it happening.


3D APIs such as DirectX and OpenGL, which constrict us to various depths and clipping planes, forcing games on us that look almost exactly like one another, except for minor graphical differences.

????

You seem to be confusing a static T&L system with an API.


Imagine game worlds where you can break a branch off a tree and use it to knock someone over. Video cards have an extremely difficult time with any scene that actually changes. They completely rely on models being predefined and stored in the video card's memory. Move the processing back from the video card over to the CPU and we would eliminate these problems.

A good model is made up of many parts, which can be used separately; there is a game (I forget the title) where you can blow off an enemy's arm and bash him with it.

The API and graphics processor have nothing to do with the features you mentioned.

"The Cash Left In My Pocket,The BEST Benchmark"
No Overclock+stock hsf=GOOD!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
Actually it does. I want to be able to break apart that tree in any fashion I desire, not just where some 3D modeler decided it would create a place for me to do so. The ability to actually morph objects is important to the next generation of games. 3D cards do not have this ability. In fact they slow down the whole process by requiring you to transfer the model over to the memory on the video card each time you change one little thing. The current form of video card is simply not designed with this type of thing in mind. They are simple polygon pushers; not all that advanced in terms of algorithms.

The last great innovations in the industry came about with Quake. Carmack did some incredible optimizations and implemented the entire thing in software. Since that date DirectX and OpenGL have taken over much of the work, pushing the triangle work onto the video card. I have honestly seen very little innovation since that time.

DirectX started out as an API implemented mostly in software, full of good ideas that had been innovated in software. Gradually the video cards filled in the functionality and took over implementing these functions. Since then I have not seen any algorithm innovations in DirectX beyond what the video card companies have decided to add, which is not much. The gaming industry now sits there waiting for the video card industry to come up with the next best thing rather than innovating themselves. This is a huge loss for gamers.

Either software developers need to start innovating themselves, working without the benefit of a 3D accelerator, or Microsoft needs to start coming up with some new innovations and implementing them in Direct3D in the HEL. Hardware support can come later if companies such as nVidia decide to implement these features, but new features should not require video card hardware support as a prerequisite. This stifles creativity.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Crashman

Polypheme
Former Staff
The Cyrix MediaGX tried that and stunk at it. The processor was neither good at traditional calculations nor games, it was a compromise that barely worked.

What's the frequency, Kenneth?
 

Matisaro

Splendid
Mar 23, 2001
6,737
0
25,780
Actually it does. I want to be able to break apart that tree in any fashion I desire, not just where some 3D modeler decided it would create a place for me to do so. The ability to actually morph objects is important to the next generation of games. 3D cards do not have this ability. In fact they slow down the whole process by requiring you to transfer the model over to the memory on the video card each time you change one little thing. The current form of video card is simply not designed with this type of thing in mind. They are simple polygon pushers; not all that advanced in terms of algorithms.

It's a good thing the CPU sets up all the models then, isn't it? The GPU ONLY draws what the CPU tells it to; there is no limit that the GPU places on games which would not be there if the CPU did everything.

Also, the models would all be there in the first place; you wouldn't need to resend a model if you changed it.

The last great innovations in the industry came about with Quake. Carmack did some incredible optimizations and implemented the entire thing in software. Since that date DirectX and OpenGL have taken over much of the work, pushing the triangle work onto the video card. I have honestly seen very little innovation since that time.

You're not looking very hard. Have you seen the Nature demo in 3DMark2001? That is NOT possible without a good DX8 video card, and doing it in software would ensure less than 1 fps.


DirectX started out as an API implemented mostly in software, full of good ideas that had been innovated in software. Gradually the video cards filled in the functionality and took over implementing these functions. Since then I have not seen any algorithm innovations in DirectX beyond what the video card companies have decided to add, which is not much. The gaming industry now sits there waiting for the video card industry to come up with the next best thing rather than innovating themselves. This is a huge loss for gamers.

I guess that's why, when programmable GPU engines were released on the GF3 and 8500, all the games are now designed specifically for them... oh wait, they aren't.

Computer power as a whole is what has limited game design, and high-end GPUs like ATI's and the GF3 are now finally allowing game designers to make groundbreaking games.

Moving everything to the CPU does not work, and the current designs of CPUs would make any move to do so foolhardy. Redesigning a CPU to do graphics anywhere near as well as a GPU would turn the CPU into a GPU in itself!


Either software developers need to start innovating themselves, working without the benefit of a 3D accelerator, or Microsoft needs to start coming up with some new innovations and implementing them in Direct3D in the HEL. Hardware support can come later if companies such as nVidia decide to implement these features, but new features should not require video card hardware support as a prerequisite. This stifles creativity.

You can emulate programmable pixel shaders, at 0.01 fps!!!

If you want all the eye candy to be done in software, you should get on Intel to make that 10GHz P4, because there is FAR from enough CPU power to come even close to anything currently available on the market today (done via GPU).

"The Cash Left In My Pocket,The BEST Benchmark"
No Overclock+stock hsf=GOOD!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
... there is no limit that the GPU places on games which would not be there if the CPU did everything.
Actually there is. Video cards have a very specific implementation for everything. For example, if you want to use 16-bit color you are usually restricted to a 16-bit Z-buffer as well. This offers a tiny 65,536 different levels of depth for every pixel. If objects get too close to each other, the video card will have no way to tell which one should be in front and thus should be drawn. This leads to all kinds of artifacts. Thus, programmers are forced to keep models a certain distance from each other. Reasons such as this keep you from actually being able to see someone 'touch' an object. If we let you (the viewer) watch such an activity, you would see a mingling of the object and the hand drawn in an erratic fashion, most likely making it look like the item in the person's hand just swallowed the hand.

A solution to this is to have an extremely small world (perhaps split the world up into zones), so that 65,536 levels of depth are adequate. This is why you generally do not see one huge continuous world in any 3D game. The only game that does have a massive continuous world (Asheron's Call) uses a software engine.

Problems like this keep realism very limited in games. There are a whole host of these kinds of issues.
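To put a number on the 16-bit Z-buffer point, here is a tiny C sketch (illustrative only; real Z-buffers also store depth non-linearly, which makes far-range precision even worse than this linear simplification):

```c
/* Illustrative sketch only: how coarse a 16-bit Z-buffer is. With only 65,536
   depth values covering the whole view range, two surfaces closer together
   than one "step" can quantize to the same value, and the card can no longer
   tell which is in front (Z-fighting). */
#include <stdio.h>

int main(void)
{
    float z_near = 1.0f, z_far = 1000.0f;       /* a 1 km view range */
    float step   = (z_far - z_near) / 65536.0f; /* depth covered per 16-bit value */

    printf("16-bit Z-buffer step over %gm..%gm view range: %.4f m\n",
           z_near, z_far, step);
    /* roughly 1.5 cm here: a hand touching an object can land in the same
       depth bucket as the object, so draw order becomes arbitrary. */
    return 0;
}
```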


Also, the models would all be there in the first place; you wouldn't need to resend a model if you changed it.
Any time you change a model to one with more vertices you must send the entire model over to the video card once again. Models cannot be modified once locked into video memory. They can only be deleted and recreated (which resends it.)


You're not looking very hard. Have you seen the Nature demo in 3DMark2001? That is NOT possible without a good DX8 video card, and doing it in software would ensure less than 1 fps.
The demo uses no new hardware algorithms. It might produce nice eye-candy, but it still uses the same old technology with the same limitations.


I guess that's why, when programmable GPU engines were released on the GF3 and 8500, all the games are now designed specifically for them... oh wait, they aren't.
Oh gee, pixel shaders. We can now determine how a pixel gets shaded. How exciting. And it only requires a few months worth of man-hours to get it working. Joy. This is not what I would call that great of an innovation. In addition, this is a software solution. Programmable means software.


Computer power as a whole is what has limited game design, and high-end GPUs like ATI's and the GF3 are now finally allowing game designers to make groundbreaking games.
Up until now I would have agreed with you. However, today CPUs are more than powerful enough to handle these tasks on their own. Asheron's Call does it.


Moving everything to the CPU does not work, and the current designs of CPUs would make any move to do so foolhardy
See Asheron's Call.


Redesigning a CPU to do graphics anywhere near as well as a GPU would turn the CPU into a GPU in itself!
A CPU is a general purpose processor. It can do graphics without having to be renamed.


You can emulate programmable pixel shaders, at 0.01 fps!!!
Programmable pixel shaders are just small software applications. You can perform these tasks just as easily on a standard CPU. The only benefit to having them on the video card is being able to continue using the 3D accelerator for everything else as well.


If you want all the eye candy to be done in software, you should get on Intel to make that 10GHz P4, because there is FAR from enough CPU power to come even close to anything currently available on the market today (done via GPU).
Not so. Asheron's Call does a fine job. As far as the 10GHz CPU, wait about 3 years or so. ;)

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Matisaro

Splendid
Mar 23, 2001
6,737
0
25,780
While past games relied on simple textures painted onto objects to provide their color and shading, AC2 boasts far more dynamic tools. The engine runs a scriptable shading language and supports pixel and vertex shaders, providing artists with an easy-to-use system to describe the surface of an object. These shaders work in conjunction with the appearance system, allowing your characters to get covered in blood or the dents in their armor to properly reflect the light and environment around them.


If Asheron's Call does it so well without GPU support, then why are they abandoning it for the next Asheron's Call engine?

Perhaps it's, say, limited?

http://zone.msn.com/inviso1/articles/articlesengine.asp

"The Cash Left In My Pocket,The BEST Benchmark"
No Overclock+stock hsf=GOOD!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
They are not abandoning the software engine. They have in fact made their engine in a modular fashion so that those who wish to use hardware acceleration can do so. The fact remains that most people do not own a 2GHz processor yet. In order for those folks to play, they will need a 3D accelerator. The general goal is to allow as many people to play as possible.

Eventually 99% of the public will some day own computers with fast processors. When this happens, a 3D accelerator will generally not be required for this level of game. The requirement will be a fast processor and a video card that supports 8X AGP to allow for fast blitting of the scene to the framebuffer.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Matisaro

Splendid
Mar 23, 2001
6,737
0
25,780
Actually there is. Video cards have a very specific implementation for everything. For example, if you want to use 16-bit color you are usually restricted to a 16-bit Z-buffer as well. This offers a tiny 65,536 different levels of depth for every pixel. If objects get too close to each other, the video card will have no way to tell which one should be in front and thus should be drawn. This leads to all kinds of artifacts. Thus, programmers are forced to keep models a certain distance from each other. Reasons such as this keep you from actually being able to see someone 'touch' an object. If we let you (the viewer) watch such an activity, you would see a mingling of the object and the hand drawn in an erratic fashion, most likely making it look like the item in the person's hand just swallowed the hand.

Hence the 32-bit color depth and 24-bit Z-buffer of the modern video card.
And you have yet to tell us how the all-purpose CPU is going to do all this magic and maintain an acceptable framerate for gameplay.


A solution to this is to have an extremely small world (perhaps split the world up into zones), so that 65,536 levels of depth are adequate. This is why you generally do not see one huge continuous world in any 3D game. The only game that does have a massive continuous world (Asheron's Call) uses a software engine.
This is more due to bandwidth limitations. EverQuest has large continuous worlds, and (I haven't played it) its zones are for bandwidth, not because there's only a certain number of degrees of separation.

Furthermore, these degrees would emanate from the camera point, meaning the only limit to world size is the amount of textures and polys you can have in memory (and draw acceptably). This would be a limit on a CPU as well as a GPU.


Any time you change a model to one with more vertices you must send the entire model over to the video card once again. Models cannot be modified once locked into video memory. They can only be deleted and recreated (which resends it.)
Perhaps you misunderstood me, the arms vertice is already loaded with the model, and the breaking off of said arm is already planned for.

The demo uses no new hardware algorithms. It might produce nice eye-candy, but it still uses the same old technology with the same limitations.
The demo uses pixel shaders, which are hardware advancements; the technology is new, and if what you have listed as limitations are what you mean again here, I don't see them as graphics-card-specific limitations.


Oh gee, pixel shaders. We can now determine how a pixel gets shaded. How exciting. And it only requires a few months worth of man-hours to get it working. Joy. This is not what I would call that great of an innovation. In addition, this is a software solution. Programmable means software.
Aside from how this statement shows how little you know about what pixel shaders actually are, I will rebut it.

A: The "programmable" in this case is hardware-programmable, more like SSE or SSE2 than an actual software app. They have, as I stated, pixel shader emulators, which run at 1/1000th the speed of the hardware pixel shader. Perhaps you can take those apps and "optimise" them; if you can get them to run at even 50% speed I will concede the point - you are a software engineer, after all.


Up until now I would have agreed with you. However, today CPUs are more than powerful enough to handle these tasks on their own. Asheron's Call does it.
Asheron's Call isn't all that spectacular from the screenshots I have seen. Also remember the game was started in '95; hardware T&L engines are very new (GF1), so of course an old game like this is software - Quake 1 is software too.





Programmable pixel shaders are just small software applications. You can perform these tasks just as easily on a standard CPU. The only benefit to having them on the video card is being able to continue using the 3D accelerator for everything else as well.
Prove this incorrect and audacious statement, Ray.

Linkage.

You don't know how pixel shaders even work (as shown by the statement "oooh, it can shade pixels").



Not so. Asheron's Call does a fine job. As far as the 10GHz CPU, wait about 3 years or so. ;)

I just saw some screenshots of Asheron's Call, and if you think that's graphically impressive then you need to play more modern games.


"The Cash Left In My Pocket,The BEST Benchmark"
No Overclock+stock hsf=GOOD!
 

Raystonn

Distinguished
Apr 12, 2001
2,273
0
19,780
Hence the 32-bit color depth and 24-bit Z-buffer of the modern video card.
This was simply one example of the many restrictions present in the use of hardware APIs. Not everyone has a state of the art video card either.


And you have yet to tell us how the all-purpose CPU is going to do all this magic and maintain an acceptable framerate for gameplay.
For the most part in our high end processors, the CPU is generally idle most of the time during today's games waiting on the video card to perform tasks. The pegging of the CPU in such monitoring applications as Task Manager is due to the fact that Windows' Idle task is not being executed while the processor sits waiting. The requirement for Asheron's Call is a 333MHz processor. Plug in a 2GHz processor and it will be sitting there most of the time doing nothing. There exists plenty of processing power to take over the 3D processing.
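As an aside, the "pegged but really idle" effect usually comes from the game spinning in a busy-wait loop rather than yielding while the video card works. A minimal illustrative sketch (hypothetical function name, not any real driver API):

```c
/* Illustrative sketch only (hypothetical names): why Task Manager can show
   100% CPU while the game is really just waiting on the video card. */
#include <stdbool.h>

bool gpu_frame_finished(void);   /* hypothetical driver/API query */

void wait_for_gpu(void)
{
    /* Busy-wait: the CPU spins flat out doing no useful work, so monitoring
       tools report full utilisation even though the GPU is the bottleneck. */
    while (!gpu_frame_finished())
        ;  /* spin */
}
```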


This is more due to bandwidth limitations. EverQuest has large continuous worlds, and (I haven't played it) its zones are for bandwidth, not because there's only a certain number of degrees of separation.
And Asheron's Call need not worry about bandwidth? There are plenty of ways to deal with bandwidth. It is not difficult to restrict updates to players and creatures within a certain distance of yourself. This is how Asheron's Call does it.


Furthermore, these degrees would emanate from the camera point, meaning the only limit to world size is the amount of textures and polys you can have in memory (and draw acceptably). This would be a limit on a CPU as well as a GPU.
Games today cache textures and models. RPGs have far more textures and models than can fit in the memory on your video card. Video card memory does not limit the size of the world or zone at all.


the arms vertice is already loaded with the model, and the breaking off of said arm is already planned for.
Singular is "vertex." Plural is "vertices." This is exactly my point. The only way to allow people to break any object in any fashion on today's video cards is to make every object out of hundreds of thousands of polygons. This is not workable. You should not have to plan for how something will break.

We need new algorithms for the morphing of a single object into two when the sheer stress of an object exceeds its tolerances and the physics engine determines it should break. When this happens, the objects should dynamically be transformed into at least two separate parts. This kind of creativity is not possible with today's video cards. It, and many other innovative things not yet present in today's games, is very possible using the CPU.


Aside from how this statement shows how little you know about what pixel shaders actually are, I will rebut it.

A: The "programmable" in this case is hardware-programmable, more like SSE or SSE2 than an actual software app. They have, as I stated, pixel shader emulators, which run at 1/1000th the speed of the hardware pixel shader...
On the contrary, I know exactly how pixel shaders work. I used to write games for a living. Software engineering is what I do best. Have you taken a look at the programmable pixel shaders? They require you to write in a form of assembly language specific to the video card. This is clearly software. The reason this is required in the video card is because only the video card has access to the stage at which pixel shading must be done. This is because you are using the video card to do the rendering. If you were using the CPU to do the rendering you would simply use the same code (in C or x86 assembly instead) during your own rendering phase.
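For the curious, here is roughly what the classic "modulate texture by vertex color" pixel operation looks like as plain C inside a software rasterizer's inner loop - an illustrative sketch with hypothetical names, not code from any shipping engine:

```c
/* Illustrative sketch only: the CPU-side equivalent of a trivial pixel-shader
   operation that modulates a texture sample by the interpolated vertex color.
   In a software renderer this is just a few multiplies per pixel. */
typedef struct { float r, g, b, a; } Color;

Color shade_pixel(Color tex_sample, Color vertex_color)
{
    Color out;
    out.r = tex_sample.r * vertex_color.r;
    out.g = tex_sample.g * vertex_color.g;
    out.b = tex_sample.b * vertex_color.b;
    out.a = tex_sample.a * vertex_color.a;
    return out;
}
```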


Asheron's Call isn't all that spectacular from the screenshots I have seen. Also remember the game was started in '95...
And it only requires a 333MHz processor to run. Now imagine what you could do if you upped the requirements to a 2GHz processor. Dynamic lighting and other features would easily be a reality, with plenty of processing power left over.


You don't know how pixel shaders even work
Matisaro, you argue with me over a great many things. You should not be arguing with me on this one. I know exactly how they work. Software engineering is my life. Do a simple search on any engine for this information. In fact, go to nVidia's website and look it up. They even have their own assembler called NVASM. Check http://developer.nvidia.com.

-Raystonn


= The views stated herein are my personal views, and not necessarily the views of my employer. =
 

Matisaro

Splendid
Mar 23, 2001
6,737
0
25,780
For the most part in our high end processors, the CPU is generally idle most of the time during today's games waiting on the video card to perform tasks. The pegging of the CPU in such monitoring applications as Task Manager is due to the fact that Windows' Idle task is not being executed while the processor sits waiting. The requirement for Asheron's Call is a 333MHz processor. Plug in a 2GHz processor and it will be sitting there most of the time doing nothing. There exists plenty of processing power to take over the 3D processing.


If that were true, then most games would see little to no benefit from a faster processor, which is very untrue.

Again, software mode in modern games would also be quite fast, and yet it isn't; something about your premise is not adding up, Ray.


Run 3DMark2001 in software mode, and then tell me modern CPUs can do the job of a GPU.

"The Cash Left In My Pocket,The BEST Benchmark"
No Overclock+stock hsf=GOOD!