It is about time x86 was allowed to die

February 23, 2003 6:08:43 PM

I shall first set the scene:

I am an Electronics Engineering student at Southampton University, currently in my second year and taking a computer architecture course in which we are "designing on paper" a MIPS RISC processor.

Anyway, yes... We had a rather in-depth discussion about the x86 instruction set, and the lecturer explained his point; it all makes a lot of sense. Intel and AMD still support the x86 instruction set. The curious thing about this is that x86 is difficult to design for, especially when you are designing new processors with more instructions in them, yet most users will only use 10% of the instructions in the set at any one time. It comes from the fact that these processors still include very old instructions that are no longer used, just to be "backwards compatible".

Now if you bear with me, I may not explain what I have said or what I am about to say well, so allow me to reply to any other replies later to clarify this.....

The lecturer continued to explain:

In fact the Pentium core is actually a RISC core; the processor has a decoder module on its front end that decodes the complex instructions into more basic instructions, to allow for easier design and modification of the core. This is one reason the Itanium was created: to remove the need for this decoder so the processor is easier to design. The fact is, about half the core of a processor is made up of logic to make sure the clock is supplied at exactly the correct time throughout the processor, and this decoder logic makes up a huge amount of space in the core; once you remove it, power consumption can be reduced as well. So I must ask, why do companies like AMD and Intel continue to flog the old x86 horse when RISC processors actually run faster, because each instruction is kept as small as possible so that most instructions take only one, at most two, clock cycles to execute?
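To make the lecturer's point a bit more concrete, here is a toy model of that decode step in C. It is a sketch only: the real internal micro-op format is undocumented, so the split and all the names below are purely illustrative.

#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ALU_ADD } uop_kind;

typedef struct {
    uop_kind kind;
    const char *dst;   /* destination register        */
    const char *src;   /* source register or memory   */
} uop;

/* "Decode" the CISC instruction  add eax, [ebx+8]  into micro-ops. */
static int decode_add_reg_mem(uop out[])
{
    out[0] = (uop){ UOP_LOAD,    "tmp0", "[ebx+8]" };  /* memory access split out      */
    out[1] = (uop){ UOP_ALU_ADD, "eax",  "tmp0"    };  /* simple register-register add */
    return 2;                                          /* one CISC op -> two micro-ops */
}

int main(void)
{
    uop uops[2];
    int n = decode_add_reg_mem(uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: dst=%s src=%s\n", i, uops[i].dst, uops[i].src);
    return 0;
}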


So I have to ask: Intel has already shown a move towards a new instruction set (only for server use at the moment, I think) and hopefully will soon produce a version that lets you use it in workstations. I can understand AMD's continued support, as they need to build up a market share before producing their own instruction set. Now, x86-64 is a great idea, AMD moving out on their own, but it is still fundamentally based on the x86 set. When do you think this old set will be dropped?

Lots of questions, what do you think?


February 23, 2003 6:37:27 PM

Well you answered it yourself really - backwards compatibility is the main reason.
February 23, 2003 7:47:21 PM

Programs on CISC processors have always used a minority of the instruction set, even in the days of the 386 and before. True, the Pentium was the first of the hybrid processors, but in the real world you cannot just drop the mass following to go and do your own thing.

It's easier to make the transition when a big change in the technology comes along, but it's still not viable in some cases. This is why Intel chose to drop the x86 architecture in its jump from 32 to 64 bits. It hasn't gone down too well, as they've had to jump through hoops to emulate a 32-bit processor to get some of the current 32-bit stuff to run. Not been very good for them.

AMD is going for a simpler change, which seems to be more easily welcomed by businesses. It would appear that Intel isn't as good as Microsoft at persuading customers to chuck away their current (not old, mind you) stuff and go for something new.

For RISC processors, it's not always nice and sweet. Look at Apple, stuck with their G4 and not exactly telling anyone how much further it will scale. Their hardware and software compatibility seems diabolical compared to the x86 world. It may not always come down to CPU architecture, but it certainly does not give a good impression. And I may be wrong, but the G4 isn't simple RISC anymore, is it?

We put a ring of tanks around Heathrow,
and a guy brings a grenade through Gatwick...
February 23, 2003 8:19:46 PM

Probably the biggest argument against ditching the x86 architecture and instruction sets, despite their flaws, is that there are literally tens of thousands of applications that would have to be modified or re-written to run on whatever new system profile was created. It would also be asking millions of computer users to go out and buy new hardware that will be essentially unsupported until the software industry catches up. We would essentially be asking people to abandon a well-supported and fully serviceable platform on the promise of future glories.

Take a parallel. HD-TV is coming. The switch date is set. They want us all to buy new televisions, or some kind of converter for our existing sets. There's a lot of money involved. The penalty for not playing along will be that, at first, we are going to watch these narrow bars across the center of our existing sets. Later the penalty becomes punishment as the compatibility broadcast is dropped. But even then there will still be a lot of completely serviceable televisions based on the old standard floating around... and people will continue using them until they are no longer able to receive their favorite shows.

Switching computer architectures is a lot less immediate than switching television formats. Software will continue to be used and traded; people will stay with the older architecture until their systems fail and they can no longer get parts. You won't be able to sell a whole new platform to someone who does not find anything wrong with their current system. There are still people out there using 486 boxes with DOS 6 and Win 3 simply because they still do everything they need.

Switching formats is not cheap for consumers. Those new widescreen televisions are <b>expensive</b>. It will be several years before price reductions make them competitive with today's sets. A lot of people either can't afford them or won't spend the money so long as their current sets continue to work.

The same will likely be true of computers: the new "super-64" architecture will amount to an industry restart, and a lot of the price reductions we enjoy today will be lost in the shuffle. Many computer users, especially the home users on limited budgets, simply won't buy into it. x86 will linger, far beyond the switch date, just as the current television sets will still be in use after the HD-TV deadline, purely as a matter of economics. (I don't know about anyone else's budget, but a widescreen TV would make one huge hole in mine, and an expensive new computer is simply out of the question for the time being.)

While it is a technically sound idea and it seems likely that a lot of the inadequacies of x86 could be eliminated we do have to ask how many billions of dollars would have to change hands to make it work.


<b>(</b>It ain't better if it don't work.<b>)</b>
February 23, 2003 9:12:15 PM

As microprocessors become more complex, the x86 decoders in them will account for less and less of the die and hence add less and less "unnecessary complexity". Take Itanium, for example. It still has a fully dedicated x86 processor on it. It just runs at the speed of a 450MHz Pentium II. It's not a lot of die space at all. Why is the Pentium 4 so much bigger? The x86 decoder, believe it or not, is actually <b>smaller</b> on the P4 than on the P2. Most of the tremendous die size is for cache and for execution logic. I think people treat this whole "added complexity of x86" business way too harshly. Processors nowadays devote way more to out-of-order execution efficiency than to x86 decoders. Of course, there is still a viable reason to move away from x86: it has very limited parallel processing capabilities. However, that's not really enough of a reason nowadays for companies to just drop it, as you can always up the clock rate on processors. Eventually, as processors get really, really big, we may see that the decoder stage no longer plays a significant role in processor complexity, and we would get processors with multiple types of decoders (capable of decoding both x86 instructions and IA-64 instructions). But we're not quite there yet. Prescott is said to be 100 million transistors. I'm guessing it won't be till Teja (which, coincidentally, is rumored to have native IA-64 support) before we see a processor that has multiple dedicated decoders (that work well).

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
February 23, 2003 9:20:07 PM

Excellent post Teq, I think you've outlined a very good comparison and really nailed it.

However, I have to ask, what is so wrong with creating a top-performing emulator processor? The Itanium's 32-bit performance is dismal at best (though I don't know how the Itanium 2 does with it), but then 32-bit performance is not even needed where it's targeted. Intel wants IA-64 to develop, and I do not blame them, as it seems like a powerful platform. So why not just try to create a powerful emulator that can do today's application performance justice, like x86 CPUs do? Currently, there is no difference between opening and working with Word XP on a 1.4GHz processor and a 2.6GHz one. If they can match the 1.4GHz CPU's performance in Office applications, specifically the most recent ones so as to make sure we're fighting all the bloatware performance killers in them, then we should have a much easier transition to IA-64, or anything non-x86, than a straight cut-over.

--
This post is brought to you by Eden, on a Via Eden, in the garden of Eden. :smile:
February 23, 2003 9:30:07 PM

Why doesn't the automotive industry scrap the gasoline-powered engine in favor of alternative fuels like hydrogen power? The benefits of doing so far outweigh the negatives. Petroleum is a pollutant, is in limited supply, and obviously creates political conflicts.

Well, scrapping gasoline engines sounds great in theory, but what about all the tens-of-millions of gasoline and diesel engines already in service? Can the automotive industry reasonably expect everyone who owns a car to scrap the vehicle and buy a costly, alternative fuel vehicle? And what will the new standard in automobility be? Hydrogen? Electric cells? Hovercraft? Who sets the standard?

Because of the socioeconomic considerations, the automotive industry must gradually make the transition to alternative fuels. The computer industry is the same way.

Now, expanding a little further on the automotive analogy, educating the public may, in fact, help to expedite the process. That is, making people aware that a better alternative exists, and informing them of the benefits. The same goes for computer technology. However, like the automotive industry, the computer industry cannot simply abandon a standard that already powers tens of millions of computers around the world in favor of something else, even if that something is better in every way.

AMD would probably prefer to build a 64-bit processor without having to waste transistors on 32-bit compatibility, but, alas, AMD realizes that most people want their 32-bit software to function, and would not buy an AMD 64-bit processor if it cannot support current 32-bit apps. So, even though 32-bit compatibility means their x86-64 will not be truly optimized for 64-bit apps, they are doing what they know they have to do.

I think the computer industry is pretty good at understanding the marketplace and making realistic judgements as to what will and will not work.

I want to move to space, so I can overclock processors cooled to absolute zero.

<P ID="edit"><FONT SIZE=-1><EM>Edited by Twitch on 02/23/03 06:33 PM.</EM></FONT></P>
February 23, 2003 10:47:54 PM

Hi Eden,
Glad you liked my post.

In response to your suggestion of an emulator, I see no reason why that can't be done as an interim step. Might even be a good item to sell on the new platform as a "backward compatibility" tool.

But social inertia is always a driving force in market decisions. I think we're going to see a very slow transition to anything new after all these years of preaching about compatibility and support. Like anything different, forces of resistance will emerge... just as they have to HD-TV.




<b>(</b>It ain't better if it don't work.<b>)</b>
February 24, 2003 2:47:55 AM

Ah, but you brought up the wrong subject for a comparison. Hydrogen power is impossible right now. If you could snap your finger and magically turn all cars into hydrogen and all fuel stations to hydrogen, it still wouldn't work. Why? We can't produce that much hydrogen yet.

So you put electric companies to the task of producing electricity for electrolysis. Wrong answer: to begin with, we already have too much load on our power grid, and on top of that most power plants are COAL FIRED! Burning coal for electricity is a much dirtier process than burning gasoline for direct power, watt for watt. So you switch to hydrogen, and pollution goes up! Followed by what, a COAL shortage?

So then you make new power plants. What to use, the sun? Sorry, solar plants consume more energy for manufacturing than they can produce for nearly 20 years! And the size of a solar plant large enough to support all our hydrogen needs? At least as large as Arizona, probably closer to the size of Texas.

So you only have one choice for hydrogen, and that's NUCLEAR power! Ouch, try getting a new plant built in the U.S.

There is no way to get around these issues at this time. Bush would have you think he's trying, but that's another one of his scams.

Oh, and until we get better ways to store it, the most practical way to store hydrogen is... in methanol. Why don't you see car companies experimenting with methanol fuel cells more frequently? Because they are committed to the "cleaner" hydrogen alternative. This is called "passing the buck": they can make perfectly clean cars fueled by pure hydrogen and blame electric companies for the pollution, rather than make an ultra-low-emissions methanol fuel cell car. End result: more pollution, no blame on the auto industry. You see, the auto industry doesn't care about pollution as long as it can't be held responsible for it.

Watts mean squat if you don't have quality!
February 25, 2003 1:20:00 AM

I thought the new fuel cell cars coming out (around 2010) would be using methanol? Last I heard (about 2 or 3 years ago) that was the plan. They've switched to pure H2 now? Are they still using Ballard fuel cells?

What about using water and a solar panel roof to power the electrolysis that dissociates it into O2 and H2? Admittedly you may need a backup plan (such as a plug) for cloudy days or snow on the roof. You also stated that it takes nearly 20 years to recoup the energy investment from manufacturing the solar panels. I won't argue with you since I never researched that, but wouldn't mass production reduce that cost over time? Even considering that fuel cell cars will be much more durable (very few moving parts), 20 years is still a long time.

And a highly durable car that runs on water and sunlight will sell particularly well in third world markets.

Of course, then you're putting a lot of gas station attendants out of work.

I'm kinda off topic here now, aren't I?

--------------
Knowan likes you. Knowan is your friend. Knowan thinks you're great.
February 25, 2003 3:23:37 AM

The problem is, I don't see too many sunny states the size of Texas for us to produce enough energy to fuel all this stuff. Solar energy takes a lot of SPACE.

I got a "B" on my independant study because my professor lost that argument. I should have gotten an A. I argued "We can't get enough solar energy for all our energy needs". He'd argue back "but it's the only CLEAN way to do it". Fine, but clean doesn't cut it when it doesn't get the job done!

Nuclear is the only way to go if we want to make an expedient change to Hydrogen without increasing polution. Personally, I favor methanol over Hydrogen because it's easier to store, ultra low emissions in a fuel cell, and cleaner to produce than electricity (by current methods).

Watts mean squat if you don't have quality!
February 25, 2003 3:50:58 AM

I'm sorry, but the fuel cell example is a bad one. The real thing holding back fuel cells is that they use platinum as a catalyst, and there simply isn't enough platinum to supply the number of cars necessary. I think there is barely enough, or maybe a little too little, to supply 10% of today's cars.

Hilbert space is a big place.
February 25, 2003 4:13:29 AM

I don't design chips, so my take on this is from the outside looking in. You can place your complexity in software or hardware (or both, if you count EPIC). I don't understand why people insist that RISC is faster. It's a tradeoff. Sure, some instructions execute faster, but you have to use more instructions to get the same job done (sometimes, but not always). What can be done with a single CISC instruction may take 3 RISC instructions. Then you have the tradeoff of more registers versus immediate addressing. This is all before you get into multimedia instructions. Cache footprints vary, as do power usage, heat dissipation and execution speed. I think it's worth it to have some extra microcode tables and a more complex instruction decoder and keep CISC alive, because it isn't broke yet. Hell, I think the complexity has just begun.
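To put a rough shape on that tradeoff, here is a one-line C statement with, in the comments, the general form it might take as a single x86 register-memory instruction versus a three-instruction load/store (RISC-style) sequence. The assembly is illustrative only; real compiler output will differ.

#include <stdio.h>

/* Sum an array: the one-line statement in the loop is the interesting part. */
int sum(const int *prices, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += prices[i];
        /* x86, one register-memory instruction:   add  eax, [esi+ecx*4]   */
        /* load/store RISC, three instructions:    lw   t0, 0(a1)          */
        /*                                         addu t2, t2, t0         */
        /*                                         addiu a1, a1, 4         */
    return total;
}

int main(void)
{
    int prices[4] = { 10, 20, 30, 40 };
    printf("%d\n", sum(prices, 4));
    return 0;
}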

Dichromatic for your viewing plesure...
February 25, 2003 2:40:43 PM

Just by not having a fixed instruction size, it drains a lot of time and resources.

Just next to the lab and the bunker you will find the marketing departement.
February 25, 2003 2:52:10 PM

Quote:
As microprocessors become more complex, the x86 decoders in them will account for less and less of the die and hence add less and less "unnecessary complexity". Take Itanium, for example. It still has a fully dedicated x86 processor on it. It just runs at the speed of a 450MHz Pentium II. It's not a lot of die space at all. Why is the Pentium 4 so much bigger? The x86 decoder, believe it or not, is actually smaller on the P4 than on the P2. Most of the tremendous die size is for cache and for execution logic. I think people treat this whole "added complexity of x86" business way too harshly. Processors nowadays devote way more to out-of-order execution efficiency than to x86 decoders. Of course, there is still a viable reason to move away from x86: it has very limited parallel processing capabilities. However, that's not really enough of a reason nowadays for companies to just drop it, as you can always up the clock rate on processors. Eventually, as processors get really, really big, we may see that the decoder stage no longer plays a significant role in processor complexity, and we would get processors with multiple types of decoders (capable of decoding both x86 instructions and IA-64 instructions). But we're not quite there yet. Prescott is said to be 100 million transistors. I'm guessing it won't be till Teja (which, coincidentally, is rumored to have native IA-64 support) before we see a processor that has multiple dedicated decoders (that work well).


It also has only one decoder, which is much smaller. If we go to a wider design, like the K7, more decoders are needed, more out-of-order resources are needed, more logic is needed, and the scheduler becomes very complex. On a wide RISC-based architecture like the POWER4, look at how they have scrapped out-of-order granularity to keep it simpler. You cannot go wide on RISC or CISC without massive resources, 100+ million transistors. Also, cache eats a lot of space, but it doesn't use much power and is easy to produce, since redundancy keeps the cache yields OK.

RISC is starting to grow old as well.


Prescott: 100 million transistors, 1.1 MB of cache.
McKinley: 225 million transistors, 3.3 MB of cache, about three times the execution resources, and an IA-32 decoder that takes up more than 10% of the die.

That's a rough theory, but there would be at least a 50 to 90 million transistor addition if we went to an out-of-order RISC design.

Just next to the lab and the bunker you will find the marketing departement.
February 25, 2003 3:57:09 PM

Quote:
So you only have one choice for hydrogen, and that's NUCLEAR power! Ouch, try getting a new plant built in the U.S.

There are currently over 100 nuclear power plants (http://www.nei.org/doc.asp?catnum=2&catid=93) in the US, providing about 20% of the nation's power. I haven't heard about any new plants being built, though... does that mean that all the plants in the US are old? Or does that mean that they are all built discreetly? I know that public opinion about nuclear power is not the best, but that is because so few people are actually informed about it. I did a research paper on it in school once, and even after giving people the facts, they still couldn't get over their prejudices. No one said they would knowingly live in the same city as a nuclear plant, and they were surprised when I said that I would.

I support nuclear power for most of the nation's power... I think that solar power should play a larger role in private homes. You can make an economical home with solar water heating, and enough power to run a few basic things in case of a power outage. But never enough to do electrolysis.

Anyway, about x86 architecture. I am not that informed about them, but a comment was made in the original post that many instructions of x86 aren't used any more except for old programs. Would it be possible to tell software companies that a switch would be made where those instructions would no longer be supported? It just seems to me that it would theoretically be possible to write software that would use basic instructions and could be run natively on x86, and also natively on a RISC processor. (I am not talking about a switch to 64 bit). Then when all the software had made the change, people could switch to a new processor and have no troubles. Then when the switch to the new CPU was made, software could then be written to take advantage of any additional capabilities of the new processor that wasn't available with x86. This would still take many years... probably 5 years to change most software, then another 10 years before software would really take advantage of the new CPU... I just doubt that you would ever convince more than 1% of software companies to restrict their code with the promise that it would be better for them in 15-20 years. Many companies aren't even around that long. I am probably showing my ignorance with this post because I am ignorant when it comes to CPU instruction sets. I just thought I would do some speculating.

'It's easy to sit there and say you'd like to have more money. And I guess that's what I like about it. It's easy. Just sitting there, rocking back and forth, wanting that money.'
February 25, 2003 3:57:39 PM

Transferring Linus's words (http://www.theinquirer.net/?article=7966) from the Inquirer to this context.

Quote:
In a discussion on the merits of various processors, Torvalds wrote that Intel had made the same mistakes "that everybody else did 15 years ago" when RISC architecture was first appearing. Itanium tries to introduce an architecture that is clean and technically pure, something that just doesn't seem to work in the real world. He claims that Intel "threw out all the good parts of the x86 because people thought those parts were ugly. They aren't ugly, they're the 'charming oddity' that makes it do well."

Sometimes Ugly is a beautiful word.

Dichromatic for your viewing plesure...
February 25, 2003 5:03:52 PM

You bring up a really interesting point with the G4. The G4 is amazingly fast and runs circles around PCs in areas such as DV editing and animation rendering, but it is horrifically slow at other stuff such as scrolling 2D images, websites, and PDF files. This is because most Mac software is converted to the Mac from the x86 architecture. If you run an app that is designed for the Mac, it is blazingly fast! This is exactly the problem/advantage that PCs would run into if the x86 architecture were to be changed. There would be a select group of native apps that would be super fast, but older apps that were ported from the x86 platform would run slower than before.

Essentially, what the PC market needs to do in order to have a real gain in speed is to take a chance and to change the architecture, dragging the software companies with them. But both AMD and Intel need to do this in order for it to work. Maybe Microsoft should make them do it by releasing a 64-bit only windows version... (they probably won't though)

<Brendini>
February 25, 2003 5:12:27 PM

PDFs are darn slow on a PC too, and I think Adobe is (or was) primarily a Mac developer anyway.

Dichromatic for your viewing plesure...
February 25, 2003 7:04:37 PM

I believe the newest U.S. nuclear plant that's still operational is 45 years old! Plants that were built in the 1970's have never been fired up (in fact, I don't think construction was even finished)!

Watts mean squat if you don't have quality!
February 25, 2003 7:05:05 PM

Quote:
You bring up a really interesting point with the G4. The G4 is amazingly fast and runs circles around PCs in areas such as DV editing and animation rendering, but it is horrifically slow at other stuff such as scrolling 2D images, websites, and PDF files.

I don't think this is true at all, looking at these benchmarks:
<A HREF="http://www.digitalvideoediting.com/2002/05_may/features..." target="_new">Part 1</A>
<A HREF="http://www.digitalvideoediting.com/2002/07_jul/features..." target="_new">Part 2</A>
<A HREF="http://www.digitalvideoediting.com/2002/11_nov/reviews/..." target="_new">Part 3</A>

While software optimization does play a key role, the G4s simply don't have the architectural advantage in terms of processing power. The only thing I'm aware of the PowerPC G4s being "blazingly fast" in is RC-5. And that is very eccentric code that plays on one of the few advantages the G4 has (bit slicing) and also scales almost perfectly linearly with SMP (i.e. two processors nearly double your performance).
The line between RISC and CISC has become very blurred over the years. It used to be that RISC meant a reduced, simple instruction set that wasn't superscalar, was in-order, and had an abundance of registers and sheer clock frequency to make up for it (part of the reason to go simple was higher clock rates). The idea behind CISC was to allow each instruction to do more, make the die incredibly complex to figure out each instruction on the fly, and process those as fast as possible.
The PowerPC G4 design does not follow those features of RISC. In fact, no modern "RISC" processor that I'm aware of does.
Hence, RISC has taken on a new meaning. Nowadays, it's not about simple instructions nor is it about die complexity. Modern RISC entails a few things:
1. fixed instruction size (not necessarily simple, just fixed)
2. load/store architecture
3. abundant registers

I'm sure others can come up with more, but that's the basic gist of it. x86 does not follow a load/store architecture, nor does it have a fixed instruction length (although that does not stop any of its extensions from having those properties, SSE for example). It has a measly count of registers as well.
Arguing that a processor is necessarily better because it's RISC and not CISC is no longer such an easy argument to make, as modern RISC processors have lost many of the properties originally associated with the idea and modern CISC processors include those properties anyway.
The problem nowadays isn't with how complex the instruction set is. As stated in my previous post, as processors grow, the added complexity that a variable-length (and therefore more complex) instruction set brings gets smaller and smaller. The problem nowadays is with extracting parallelism. RISC or CISC, x86 or PPC, they all have the same basic problem: code is being introduced in-order and in a serial nature. No modern programming language that I'm aware of allows a parallel programming model. So it's an inherent problem of how to make processors figure out what can be run in parallel. That's really where VLIW/SIMD designs like IA-64 and the SSE/SSE2 extensions come into play.
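For what it's worth, here is a minimal sketch in C of what handing that parallelism to the programmer looks like with the SSE extensions. The function and names are made up for illustration, and it assumes a compiler that provides the <xmmintrin.h> intrinsics with SSE enabled.

#include <stdio.h>
#include <xmmintrin.h>

/* Add two float arrays four elements at a time; n is assumed to be a
   multiple of 4. Each _mm_add_ps does 4 additions in one instruction. */
static void add_arrays_sse(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   /* load 4 floats (unaligned)  */
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vs = _mm_add_ps(va, vb);    /* 4 parallel adds            */
        _mm_storeu_ps(out + i, vs);        /* store 4 results            */
    }
}

int main(void)
{
    float a[4] = { 1, 2, 3, 4 }, b[4] = { 10, 20, 30, 40 }, out[4];
    add_arrays_sse(a, b, out, 4);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}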

Quote:
Anyway, about x86 architecture. I am not that informed about them, but a comment was made in the original post that many instructions of x86 aren't used any more except for old programs. Would it be possible to tell software companies that a switch would be made where those instructions would no longer be supported? It just seems to me that it would theoretically be possible to write software that would use basic instructions and could be run natively on x86, and also natively on a RISC processor. (I am not talking about a switch to 64 bit). Then when all the software had made the change, people could switch to a new processor and have no troubles. Then when the switch to the new CPU was made, software could then be written to take advantage of any additional capabilities of the new processor that wasn't available with x86. This would still take many years... probably 5 years to change most software, then another 10 years before software would really take advantage of the new CPU... I just doubt that you would ever convince more than 1% of software companies to restrict their code with the promise that it would be better for them in 15-20 years. Many companies aren't even around that long. I am probably showing my ignorance with this post because I am ignorant when it comes to CPU instruction sets. I just thought I would do some speculating.


This is certainly possible. But you wouldn't be able to call the processor "x86" anymore. "x86" entails that you are compatible with the whole x86 ISA, each and every instruction. The question would be: if you were going to change the ISA anyway, why not just change it dramatically? The software companies would have to change their code anyway. Either way, your description of adding new capabilities to x86 without tossing any compatibility support has been done many, many times. MMX, MMX+, 3DNow!, SSE/SSE2 and soon x86-64 are all extensions that allow software companies to use more advanced instructions without sacrificing x86 compatibility. The problem is, you still have the basic functions of x86 there being used, and unless you yank that out, people won't switch.
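As a rough sketch of how that compatibility-plus-extensions model works in practice: software probes for the extensions at run time and falls back to plain x86 when they are absent. This assumes a GCC-style compiler that provides <cpuid.h>; other compilers have their own equivalents.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: feature flags live in EDX (bit 23 MMX, 25 SSE, 26 SSE2). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("MMX:  %s\n", (edx & (1u << 23)) ? "yes" : "no");
        printf("SSE:  %s\n", (edx & (1u << 25)) ? "yes" : "no");
        printf("SSE2: %s\n", (edx & (1u << 26)) ? "yes" : "no");
    }
    return 0;
}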

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
February 25, 2003 7:10:56 PM

Yeah, PDFs are slow on almost anything (they're a really awful format that needs to be changed; they're only a good format for really high-quality, print-ready publications).

But on my Mac, PDFs take about 10 seconds to begin scrolling the document or page. The whole PDF thing is more Adobe's fault than Apple's, but there still are other things that load really slowly, even on a good G4. Being both a Mac and a PC person for various reasons, I hope that Apple can manage to incorporate the good parts of the PC world into their products while still staying innovative and technically advanced. The only question is when the G5 will finally come out. (It's been a looooooooooonnnnnngg time!)

<Brendini>
February 25, 2003 7:32:38 PM

I stand partially corrected, but it still depends largely on both the type of test and the application. They both have their ups and downs.

<Brendini>
February 25, 2003 8:06:08 PM

I showed those tests specifically because you mentioned DV and 3D animation. Those are, by far, not the G4's strengths. As I mentioned, the G4's strength nowadays seems to be in eccentric software like RC-5 that the x86 designers like Intel and AMD have not focused on because, well, consumers don't use that kinda software.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
February 26, 2003 1:11:00 PM

Wow. What a thread to just jump into. Several times I was tempted to make long replies to posts. In the end though, I think I'll try to just keep a nice general reply.

The first thing that came to my mind was the old phrase:

Those who can, do. Those who can't, teach.

It brings a smile to my face to know that the phrase is still very applicable in today's world. Honestly, most of them are certainly not there because of their expertise in the real world. If they <i>had</i> expertise in the <i>real</i> world, then they would be using it to make <i>real</i> money at a <i>real</i> job.

This is a perfect example. RISC, CISC, who bloody really cares? If the x86 platform wasn't able to compete against a supposedly superior architecture, then don't you think that it would have, oh, I don't know ... <b>failed by now?</b> Gee, why hasn't it then?

It's not <i>just</i> about backwards compatibility. (Although that <i>is</i> a considerable merit in and of itself.) It is simply that if it didn't work, it would have failed. If it hasn't failed, then it obviously works.

If we trashed perfectly good solutions every time some over-zealous professor whined about impurities and inferiorities, we'd have <b>nothing</b> left. The reality is that this isn't a perfect world. To suit, we use imperfect solutions. Those imperfect and impure solutions work damned well, and the world still turns. We all live happily ever after.

And if we don't, then we make changes until we do. Cars, power plants, computer chip architectures, heads of state, coffee or tea, cream or sugar, boxers or briefs, it's all good until it's not, and when it's not good it's replaced by something that is ... until that something isn't good anymore, and then it's replaced again, and so on. Such is the long rambling cycle of life on this planet.

You can debate superiorities all you want, but the simple fact is that the only time changes are made is when the old ways don't work so well anymore. It's nothing that you have to debate about or worry about. It'll happen when it is needed, and the transition will be as smooth as every other transition that has ever happened. It's really just as simple as that.


PC Repair-Vol 1:Getting To Know Your PC.
PC Repair-Vol 2:Troubleshooting Your PC.
PC Repair-Vol 3:Having Trouble Troubleshooting Your PC?
PC Repair-Vol 4:Having Trouble Shooting Your PC?
February 26, 2003 1:21:49 PM

Since this is a separate topic, I'm throwing in my two cents in a separate thread. :)

I'm all for nuclear power. Build the plants and keep-em coming.

I'm also all for natural power. And no, I don't mean a solar array or a farm of windmills. I mean a windmill in your back yard and a solar panel on your roof, charging a battery. That battery gets used until the charge drops to a certain point, and then your power switches over to an actual power plant. It'd be simple, and it'd save on needing more power plants.

I'm also for conservation of energy. If anything, <i>that</i> needs to be pushed more than anything.

We have better lightbulbs, yet there is no incentive for people to buy and use them. We could be making common everyday appliances use less power than they do, but there is no incentive for <i>most</i> manufacturers to spend more money to produce a cleaner product.

And don't even get me started on the energy wasting company buildings. It's just sick how much energy corporations waste simply through not turning off the office lights and computers when everyone has gone home.

Humanity <i>has</i> solutions. We just choose to ignore them so long as they're not necessary. One day they'll probably become necessary at the rate that we're going. Perhaps in 50 years we'll be stunned to see how our grandchildren live so much more intelligently than we did.

Then again, maybe not.


PC Repair-Vol 1:Getting To Know Your PC.
PC Repair-Vol 2:Troubleshooting Your PC.
PC Repair-Vol 3:Having Trouble Troubleshooting Your PC?
PC Repair-Vol 4:Having Trouble Shooting Your PC?
February 28, 2003 12:42:23 AM

Well, there are many good points here, but:

The professor teaching us does build, not just teach, and does a lot of research into processors.

Anyway my point is:

CISC was designed because computer memory was expensive, so each instruction does multiple, more complex tasks, and it therefore takes fewer instructions to write a program.

RISC was designed later, after memory prices fell. The idea was that if the design was simpler, then it was easier to make the device clock faster. So what happened was that RISC machines clocked faster. This could be seen about as far back as the Pentium 200, when some RISC processors were available at 500MHz - 600MHz. But things have changed, and now they can be clocked as fast as each other.

My point about the decoder on the chip was that on a Pentium 4 (for example) the front end is a decoder that decodes the x86 instructions into RISC-like operations that the processor actually works on. I am not sure if this is exactly true for the Athlon, but I am sure they are similar.

The second thing is that, in general, benchmarks are a pile of rubbish. Why? Well, synthetic benchmarks are rubbish because processors can be optimised to game these benchmarks (i.e. to give slightly higher results), depending on many factors.

I will not go into depth on the benchmark thing, if you want to know more, I can give you a list of books.

Now, the next point is that the Itanium is a great idea in principle, but maybe badly implemented. The fact is, while CISC may take one instruction to do a certain job and RISC may take three to do the same job, because of the way RISC is designed those three may take only 3 clock cycles to execute, while the one CISC instruction could take 3, 4 or even 5, so which is faster? CISC is there to simplify the code so programs are smaller (contain fewer instructions), but this does not mean they will run faster, because CISC instructions take a variable number of clock cycles to complete!


Okay now, most processors are a cross between RISC and CISC to try to gain as many of the advantages of each processor type as possible, but there are many factors, and increasing one factor means you have to decrease another, so it comes down to personal taste as to which factors you think give better performance.


More later............
February 28, 2003 2:44:09 AM

That's not really true anymore. Modern "RISC" processors have instructions just as complex as x86 does. The only thing they don't have is a weird memory/register access model. As I said in a previous post, nowadays there are three main criteria for being RISC:

1. Fixed instruction length.
2. Load/store architecture.
3. Abundant registers.

The instructions can be as complex as they come; they merely have to all be of the same size. It also has to be a load/store architecture and expose an abundant register set to the ISA.
IA-64, in many ways, can be considered RISC, as its instruction size is fixed and it is a load/store architecture with a huge amount of registers. However, it goes beyond even the RISC concept into the concept of VLIW (each instruction is VERY complex and does a lot of things). But it is still RISC because it fits the criteria I've listed above.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
February 28, 2003 6:12:43 AM

Segmented-register-base-scaled-index-plus-displacement ( es:[ebx+eax*8+16] ) isn't that odd. Especially when the PAE ( Physical Address Extension ) is enabled.

Dichromatic for your viewing plesure...
February 28, 2003 6:57:54 AM

Oh, by the way, depending on what dependencies it has, the instruction ( mov edx, [ebx+eax*8+16] ) will execute in 2-3 cycles. That's 3 adds, a shift, a load and an assignment.
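Spelled out in C, ignoring the segment base, the work packed into that one instruction looks roughly like this (the function and names are made up purely for illustration):

#include <stdint.h>
#include <string.h>

/* What mov edx, [ebx+eax*8+16] computes, written out step by step. */
uint32_t mov_scaled_index(const uint8_t *mem, uint32_t ebx, uint32_t eax)
{
    uint32_t scaled = eax << 3;            /* the shift (eax*8)      */
    uint32_t addr   = ebx + scaled + 16;   /* the adds               */
    uint32_t edx;
    memcpy(&edx, mem + addr, sizeof edx);  /* the load               */
    return edx;                            /* the assignment to edx  */
}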

Dichromatic for your viewing plesure...
February 28, 2003 8:09:41 PM

Yeah, I would agree; it all depends on how the function has been implemented.

And yes, there are quite complex instructions in RISC, and that is why I said the RISC/CISC distinction isn't so true anymore and you get complex fixed-length instructions... somewhere in the middle!