Chip architecture and software performance

Guest
GeForce versus Radeon, Athlon versus Pentium.... Different chip architectures, but in the end, performance seems to revolve around the software. If the code is optimized for the instructions available in the chip, the code executes faster. Intel looks really fast if popular applications rally behind Intel's instruction sets. The same can be said of AMD, if popular applications support the AMD instruction sets.

What's faster, GeForce or Radeon? Again, it depends on how the software makes use of the available instructions. This is why the drivers are so critical to performance.

Before you bombard me with flames regarding benchmark scores, please note that I am well aware of the benchmarks in Tom's many reviews. I understand that GeForce tends to beat Radeon in the benchmarks, and that Athlon tends to beat Pentium. I am simply stating that I believe the primary reason for the better performance is that popular applications are written for the instruction sets available in the Athlon. Likewise, I believe the instruction sets used by GeForce chips and nVidia drivers are better adapted to popular applications, thus producing faster task completion.

The future of chips may very well depend upon manufacturers' ability to adapt chip architecture to take advantage of popular compiling methods, but performance will also hinge on compilers' willingness and ability to take advantage of enhanced instructions. Who knows, perhaps soon we will have a whole new instruction set and abandon the now-ancient x86 set entirely.

<A HREF="http://bible.gospelcom.net/cgi-bin/bible?passage=PS+17:7-9" target="_new"> PS 17 </A>
 
Guest
I should say right away that I think architectures should drive compiler designs and not the other way around. All is well in that respect.

I think it is less the existence or non-existence of a certain instruction than the method in which the instructions are handled that makes the difference to performance. For example: FPUs ideally perform the same function, doing floating-point math. Give identical problems to the Athlon's and the Pentium's FPUs, and the Athlon has the answer sooner. In my opinion that is probably the single largest advantage the Athlon has in today's apps. It's all due to cleverness of implementation in the design.
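To make that concrete, here is a small C sketch of my own (not from any benchmark). A kernel like this compiles to essentially the same floating-point instruction stream for either chip, so any difference in how soon the answer arrives comes purely from how each FPU implements those instructions:

```c
#include <stddef.h>

/* This compiles to the same multiply/add instruction sequence on any
 * x86 chip, yet it completes sooner on an FPU with lower latency and
 * better pipelining, even though the instructions are identical.  The
 * source code cannot tell the difference; the implementation can. */
double dot(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];   /* one multiply + one add per element */
    return sum;
}
```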

As AMD has proved, good caching techniques are another important angle.

"perhaps soon we will have a whole new instruction set, and abandon the now-ancient x86 set entirely."

That would certainly make the life of a high-performance PC developer much simpler, and it is certainly why people are getting all worked up about IA-64 and Itanium. The problem is getting all the software guys to either redo or port existing software. Also, who gets to be the one to choose the new instructions?
 

girish
I was really amused to see the "Designed for Windows 95" logo on some PC processors a few years ago. I always thought it was the hardware that directed software designs. Here I was seeing a totally different thing - a hardware component as important as the processor was designed for a particular piece of software!

While it might be easier to develop a compiler than to develop a processor, the task is still not easy. Consider an Itanium compiler, which needs to know what is being executed at <i>every clock cycle</i> of the processor and generate code accordingly. It has to virtually run the code before it generates code blocks. It has to actually <i>know</i> the processor architecture well. Implementing code optimisation for a particular architecture, especially from a higher-level language, is quite difficult. Compiler design and processor design should be symbiotic, each favouring the other.
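The "compiler must know every clock cycle" point can be sketched as a toy in C. This is a big simplification of my own making (real IA-64 bundles carry templates, stop bits and predication), but it shows why the compiler, not the CPU, has to work out which operations can issue together:

```c
/* Toy model of EPIC/VLIW static scheduling: the compiler groups
 * independent operations into fixed-width bundles.  Each op names the
 * register it writes and the registers it reads; an op that reads a
 * register written earlier in the same bundle must wait for the next
 * one.  Entirely hypothetical encoding, for illustration only. */
#define WIDTH 3   /* issue slots per bundle */

struct op { int dst, src1, src2; };   /* 0 means "no register" */

/* Returns the number of bundles needed to issue n ops in order. */
int schedule(const struct op *ops, int n)
{
    int bundles = 0, slot = 0;
    int written[WIDTH];               /* regs written in current bundle */
    for (int i = 0; i < n; i++) {
        int hazard = 0;
        for (int j = 0; j < slot; j++)
            if (written[j] == ops[i].src1 || written[j] == ops[i].src2)
                hazard = 1;           /* depends on this bundle's result */
        if (slot == WIDTH || hazard) {
            bundles++;                /* close bundle, start a new one */
            slot = 0;
        }
        written[slot++] = ops[i].dst;
    }
    return bundles + (slot > 0);
}
```

Three independent ops fit in one bundle; a three-op dependency chain needs three bundles even though the instructions are the same. That scheduling decision is made entirely at compile time.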

It's a different thing <i>how</i> different processors work through an instruction, which makes one get the answer faster than the other. Actually, that does not play any significant role in deciding the code a compiler generates, especially if it is targeted at all processors in general. I remember Michael Abrash's great book <b>Zen of Code Optimisation</b> (unfortunately out of print; luckily I bought it well back in 1993, just before it went out), in which he dealt with different methods of optimising code for the 386, the 486 and the then just-released Pentium. In general, all code generated for the Pentium would work fine on a 486, but the reverse was not true. That is, older code optimised for the 486 would perform badly on a Pentium, thanks to its dual execution pipes. The 386 was worse.
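A little C illustration of the dual-pipe point (my own sketch, not an example from the book): the Pentium can issue two simple instructions per cycle down its U and V pipes, but only if they are independent, so 486-style code built around one long dependency chain leaves the second pipe idle.

```c
#include <stddef.h>

/* 486-style: one long dependency chain.  Fine on a single-pipe 486,
 * but on a Pentium the V pipe sits idle because every add depends on
 * the previous one. */
int sum_chained(const int *a, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pentium-friendly: two independent accumulators give the pairing
 * rules something to send down both pipes.  Same answer, and it still
 * runs correctly (just without the benefit) on a 486 or 386. */
int sum_paired(const int *a, size_t n)
{
    int s0 = 0, s1 = 0;
    size_t i = 0;
    for (; i + 1 < n; i += 2) {
        s0 += a[i];       /* independent of the next line ...        */
        s1 += a[i + 1];   /* ... so both adds can issue in one cycle */
    }
    if (i < n)
        s0 += a[i];       /* odd leftover element */
    return s0 + s1;
}
```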

One method to get the best performance out of a system is to have a number of code segments, maybe DLLs, each optimised for a particular processor, a particular operating system and, if needed, a particular graphics library. That would be added effort on the part of the programmer, but the application would perform well on all systems.
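The same idea can be shown with function pointers instead of separate DLLs. This is a sketch under assumptions: detect-at-startup here is faked with a feature string standing in for a real CPUID probe (or a LoadLibrary of the right DLL), and the names scale_generic/scale_simd are hypothetical.

```c
#include <string.h>

typedef void (*scale_fn)(float *v, int n, float k);

/* Portable fallback path that works on any processor. */
static void scale_generic(float *v, int n, float k)
{
    for (int i = 0; i < n; i++)
        v[i] *= k;
}

/* Stand-in for a processor-specific build of the same routine;
 * imagine an SSE or 3DNow! version compiled into its own DLL. */
static void scale_simd(float *v, int n, float k)
{
    for (int i = 0; i < n; i++)
        v[i] *= k;
}

/* Pick the best implementation once, at startup, based on what the
 * processor reports.  The feature string is a toy substitute for a
 * real CPUID check. */
static scale_fn pick_scale(const char *cpu_features)
{
    if (strstr(cpu_features, "sse"))
        return scale_simd;
    return scale_generic;
}
```

The rest of the program only ever calls through the pointer, so one binary performs well everywhere while carrying a tuned path for each chip.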

Abandoning the instruction set altogether is certainly not a viable solution, since it would require a mass rewrite of the majority of software. As it stands, every new processor that comes around expects code to be optimised in a specific manner. It's no news to Itanium or Hammer programmers that they should port all their code to suit the newcomer. If they don't, the show will still go on, but rather slowly.

The x86 instruction set is already very complex and supports a large variety of instructions, simple to complex. I don't think it should change. Instead, add still more instructions, maybe RISC-like, and implement legacy instructions as sequences of these. In fact, this is what is done in most modern processors.
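To illustrate "legacy instructions as sequences" with a toy decoder of my own (the micro-op names are made up, and real decoders are far more involved): a CISC read-modify-write instruction like "add [mem], reg" is cracked into simple RISC-like micro-ops inside the core, so the complex legacy encoding survives while the execution hardware stays simple.

```c
/* Micro-ops a toy core understands; purely illustrative names. */
enum uop { UOP_LOAD, UOP_ADD, UOP_STORE };

/* Cracks a memory-destination add ("add [mem], reg") into its
 * micro-op sequence, writing it into out[] and returning the count. */
int crack_add_mem_reg(enum uop *out)
{
    int n = 0;
    out[n++] = UOP_LOAD;   /* tmp   <- [mem]     */
    out[n++] = UOP_ADD;    /* tmp   <- tmp + reg */
    out[n++] = UOP_STORE;  /* [mem] <- tmp       */
    return n;
}
```

Old binaries keep working unchanged, while the scheduler only ever sees the simple micro-ops.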

Legacy support is as important as performance, perhaps more than that. What it needs is hardware designed for software as well as software designed for hardware. Compiler designers should join the team of silicon designers and come up with a better optimum solution.

girish

<font color=red>No system is fool-proof. Fools are Ingenious!</font color=red>
 
Guest
"I was really amused to see "Designed for Windows 95" logo on some PC processors a few years ago. I always though it was the hardware that directed software designs. Here I was seeing a totally different thing - a hardware component as important as the processor was designed for a particular software!"

Wasn't that more of a marketing gimmick than any kind of groundbreaking hardware advance?

Point well taken about hardware designs needing to accommodate potential compiling techniques. True to a point, but it makes sense to me that there is a point of diminishing returns here. Once a particular CPU architecture is settled upon, it would be nuts to alter the fundamental design to accommodate each new compiler breakthrough. New hardware development costs a fortune, and the line must be drawn at some point. Not to mention we would surely be pissing off all of the chipset and motherboard makers every time we threw out a new design that required a redesign on their part.

Again, points about maintaining legacy support are well taken. My aim was merely to point out, to use an analogy: to build the best-performing car in the world using current technology, it's probably not the best idea to modify an existing model, but to start anew and take advantage of the new, better stuff that has become available since the last best-performing car was developed.

I'm not an experienced code developer or microprocessor designer, and probably need a bit of schooling in code optimization and the multi-level pipelining techniques used in the x86 CPUs available. So what I'm saying is, if I'm not seeing some fundamental truth in your messages, that may be partly why.

Kevin

PS. Please don't take my responses as flaming; it just so happens I tend to enjoy this type of discussion. You seem fairly knowledgeable and a good source to leech info from. Sorry if this isn't how you take it. I'm practically a wild heathen when it comes to manners or etiquette.
 
"I was really amused to see "Designed for Windows 95" logo on some PC processors a few years ago. I always though it was the hardware that directed software designs. Here I was seeing a totally different thing - a hardware component as important as the processor was designed for a particular software!"

Marketing.

Secondly, hardware will always drive compilers, not the other way around. Implementation also matters; NVIDIA and AMD may come up with the same answers for the same functions, but they get there in completely different ways, and this affects how the following instructions are executed (different hardware resources used, etc.). It's a complicated mess under the hood, and you're never going to get optimal performance as a result.