enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
I haven't seen real CPU progress for over 5 years (except for MHz and the Centrino).
There was the 486 -> one instruction per clock cycle.
Pentium -> two integer and two floating-point pipelines.
Pentium Pro/2/3/4 -> three integer and two floating-point pipelines. (the K7/K8 has 3 integer and 3 float units, I think)

Why don't x86 CPUs have 16 pipelines like Nvidia & ATI video cards?? (I believe they are 10x faster in their own special way)
The dual-core CPUs seem like a lame way to add performance, like putting two Pentium 4s on one die and adding some cache. Double the transistors to gain 30% performance??

If the x86 architecture has reached its limit, someone should do the world a favor and put it out of its misery. With x86 out of the way, make a MODERN CPU in no way compatible with x86 (and without other CPU/bus problems, perhaps something like the long-instruction Itanium or the Cell??) and make an x86 emulator (the new CPU will be so much faster that the emulator doesn't need to be 100% efficient). I want to see the same approach done to MS Windows. But that's a different story.

I think something similar was done on Mac OS X to run OS 9/8/7 software. The emulator works well as far as I know. I also don't understand why the G5 isn't much faster than it is. The G5 doesn't have many of the x86 limitations (as Mac lovers tell me).

I don't believe x86 is the only general-purpose CPU architecture. Remember the 64-bit DEC Alpha? That was 10 years ago!

PLEASE correct me if I am badly mistaken. This has been bothering me for a long time.

Thanks!
Erric

erric_usa@hotmail.com
 

dhlucke

Polypheme
It's business. They have to make a profit, and I don't think they want to throw everything away and spend all their money just to end up making the same amount of money they were making before.

Kinda like the combustion engine. Why don't they replace it?

<i><font color=red>"I think I'd rather let the <A HREF="http://www.daveclarke.com/game.cfm" target="_new">monkey</A> have its way with me" - Ksoth</font color=red></i>
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
I have some idea why x86 has been around forever. That's why I came up with the emulator idea, so businesses can keep their old software without slowing the rest of the world down.
 

jmwpom3

Distinguished
Mar 3, 2005
329
0
18,780
And also, big companies like Intel will pay big money for new technologies. They just don't necessarily do anything with them. Most of the time, new and innovative ideas that go against the status quo get bought and SNUFFED.
I'm guessing that's why Linux is the only answer to Windows (and not really a full one either). If those ideas were for sale, I'm sure we wouldn't be able to get them (unless of course they came from Microsoft).


PLEASE NOTE: Some quantum physics theories suggest that when the consumer is not directly observing this product, it may cease to exist or will exist only in a vague and undetermined state.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
jmwpom3:

So you think many companies are capable of making modern CPU & chipset designs. I wonder why one of them (such as IBM) doesn't go for the kill against Intel?
Getting closer to understanding.

thanks.

Erric
 

AMD101

Distinguished
Oct 25, 2004
59
0
18,630
The Cell proc is exactly the GPU-like CPU you are talking about. Time will tell if it is as powerful as Sony claims.
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>Why don't x86 CPUs have 16 pipelines like Nvidia & ATI video
>cards??

Because CPUs are general-purpose processors, and it just isn't feasible to extract that level of instruction parallelism from general-purpose code. Even the Athlon's 3 integer units, for instance, are rarely all used at the same time. Sure, Intel or AMD could add more, but most likely it would give no benefit; on the contrary, clock speed would suffer. It's simply a trade-off, not a conscious decision not to build the fastest possible CPU.

GPUs are entirely different beasts: they only need to process an extremely limited set of instructions, and they can do so in parallel with no trade-off. Now try running an OS on a GPU :)

>The dual-core CPUs seem like a lame way to add
>performance, like putting two Pentium 4s on one die and
>adding some cache. Double the transistors to gain 30% performance??

That would be a very worthwhile trade-off. Just for reference, the Pentium I had 3M transistors, the Pentium II had 20M, and the Pentium 4E now has 125M. A P4E is roughly ten times as fast as a Pentium MMX, but needs >40x as many transistors. Furthermore, dual-core chips will be ~80% faster on highly parallel workloads, making the trade-off even more worthwhile. Transistors are cheap, and get dramatically cheaper with each process shrink; don't forget that!
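That ~80% figure lines up with a quick Amdahl's-law sketch (my own back-of-the-envelope, not P4Man's math; the 0.9 parallel fraction is an assumed number):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the workload that can run in parallel and n is the core count.
def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

# An assumed "highly parallel" workload (p = 0.9) on a dual-core chip:
print(round(amdahl_speedup(0.9, 2), 2))  # 1.82, i.e. ~80% faster
```

With p = 0.5 the same formula gives only ~1.33x, which is why a second core helps some workloads far more than others.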

>With x86 out of the way, make a MODERN CPU in no way
>compatible with x86

The ISA itself plays only a very minor role in the performance you get. All modern x86 CPUs are RISC at heart, and the overhead involved in breaking x86 instructions into micro-ops is so small it's entirely negligible. It's the implementation that matters, not the ISA. If the Alpha was faster, it was because it was an extreme (insane?) implementation, not so much because of a superior ISA.

>and make an x86 emulator (the new CPU will be so much faster,
>the emulator doesn't need to be 100% efficient)

Itanium has a software x86 emulation layer. Its performance is around 10-50% of native code. Not my idea of progress.

>I don't believe x86 is the only general-purpose CPU
>architecture. Remember the 64-bit DEC Alpha? That was
>10 years ago!

So? A ten-year-old Alpha wouldn't hold a candle to current x86 chips. And yes, the Alpha was VERY fast, but not because it got rid of x86 legacy; it was designed and implemented with no compromises on performance. The result was speed, but at insanely high prices (and insane >200W power consumption as well, btw).

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

Crashman

Polypheme
Former Staff
Not a good comparison; he's asking why they don't come out with a far more powerful CPU architecture. Internal combustion engines are already far more powerful than the things meant to replace them.

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
 

dhlucke

Polypheme
Possibly, but they aren't efficient, and given time and money the replacements would be more powerful.

I'm fine with a car that only goes 80 mph if I get 200 mpg and zero emissions.

<i><b><font color=red>"I think I'd rather let the monkey have its way with me" - Ksoth</font color=red></b></i>
 

SoDNighthawk

Splendid
Nov 6, 2003
3,542
0
22,780
I would like a look into the vault at Intel where they purchase and hide any new technology and keep it from anyone's use.

I have worked with optical components for years in Lab settings and let me tell you the amount of bullshit surrounding the manufacture of these devices is massive.

The one thing about this type of current technology is that they don't even know if the dies on a particular wafer work until they are tested after assembly. They see an 80% failure rate on dies already in production, and it takes 20 weeks to grow a wafer with 100 dies on it.

All in all, time is money in the design and fabrication of CPUs. An engineer can't make many changes to the design, because they need to wait 20 weeks just to make one, plus another manufacturing time period on top of that, before they even get to test one.

A graphics card's GPU is a different type of beast: all it does is pipeline information; it doesn't compute it the way a CPU does. The CPU builds the instruction stream from the game engine and fills the GPU with instructions that it turns into a visual picture. The more powerful the GPU, the more information you can flow through it.

Like a larger pipe to handle more water per minute.

<font color=red>GOD</font color=red> <font color=orange>LOVES</font color=orange> <font color=red>CANADA</font color=red>
 

Crashman

Polypheme
Former Staff
Zero emissions is a lie; these people are getting hydrogen from electrolysis at refueling stations, and that electricity comes mostly from coal. Coal pollution is a more serious problem than emissions from gasoline-powered vehicles; all using hydrogen does is move a small pollution problem from one site and make it a larger pollution problem at another (the power plant). Californians don't listen to such logic because they like the fact that they're upwind of those coal-fired plants and don't care about other states.

Hydrogen fuel cells are comparatively weak and heavy, especially when you consider fuel storage and the electric motors; the power-to-weight ratio will keep such cars accelerating slowly. Hydrogen-powered internal combustion engines offer better power-to-weight, but they do have some emissions, including the worst of what's attributed to gasoline engines: oxides of nitrogen.

Clean electric power could lead to truly zero-emissions vehicles, but solar power doesn't cut it (too little power for the amount of space used, terrible cost-to-benefit, petrochemicals used to produce the cells, etc.), wind power kills birds, water turbines disrupt fisheries, etc. And nuclear power should have been perfected by now, except that development has been nearly halted for the last 50 years.

That leaves us with the practical, rather than high-tech, solution of hydrocarbons and internal combustion engines. Now if you could convince people that biodiesel is better than hydrogen given current technology, you'd sell me, but they won't listen. And you'd still have internal combustion engines burning the biodiesel.

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
 

jammydodger

Distinguished
Sep 12, 2001
2,416
0
19,780
The problem with moving to a completely different architecture is that the amount of money and cooperation needed to do it would be huge. You would need Microsoft to back a new operating system (bear in mind it's taken about 3 years to upgrade an existing operating system to 64-bit, and it's still not done yet), and you would need a big CPU manufacturer to design the CPU (how long was the Prescott in development? And that was only an advancement of the NetBurst architecture; a non-x86 CPU would have to be built from scratch!). Plus, motherboard and RAM manufacturers would have to be in on it. The whole project could take maybe 6-7 years, not to mention that when you have the platform, nothing will run on it (without emulation software, which will negate any performance benefits).

I think eventually x86 is going to have to be thrown out the window, but right now there just isn't enough reason to.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
Hi Guys.
I appreciate your answers; they're helping me understand.
One thing I still don't understand is why more than 3 pipelines on x86 would be "rarely used"?

I understand the GPUs in graphics cards have very few instructions and run very fast (with 16+ pipelines). Why is that bad for a general-purpose CPU? Sure, a CPU built like that will run slower when a complex instruction is needed, but it will FLY otherwise. I understand making an OS for such a chip will cost a fortune and take years to make, etc.

Again, I am not a hardware engineer, but please correct me if I am just blowing hot air.

thanks!

(About the DEC Alpha: I expected a Pentium 4 to run faster than any 10-year-old CPU. But I also believe the Alpha totally waxed the 486 made around the same time, while addressing 64 bits!) I just wanted to understand why the Alpha was so fast.
 

ChipDeath

Splendid
May 16, 2002
4,307
0
22,790
Sure, a CPU built like that will run slower when a complex instruction is needed, but it will FLY otherwise.
That's kinda the wrong way around. It'll only fly when running code that's <i>designed specifically</i> to benefit from that design.

GPUs benefit from the extra pipes because the developers know <i>exactly</i> what sort of operations the chip will need to do, and a lot of <i>those</i> sorts of operations are along the lines of "apply this operation to this group of 500 pixels". In that situation, each one of those operations is independent of the others, so being able to do 16 at once is a huge boon for performance.

There's no telling what CPUs will be asked to do, so they have to be 'jacks of all trades'. The vast majority of operations a CPU carries out have to be sequential (rather than parallel), as in:
Perform operation X on A and B;
take that result and perform operation Y on it;
take that result and do something else;
etc.
So although you could in theory do all the operations at once, you can't in practice, because each one has to wait for the previous one.
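A toy sketch of that difference in Python (the operations are hypothetical, just to illustrate the dependency chain):

```python
# Sequential CPU-style work: each step consumes the previous result,
# so no amount of extra pipelines lets the steps run side by side.
def dependent_chain(a, b):
    x = a + b      # perform operation X on A and B
    y = x * 2      # operation Y needs x before it can start
    z = y - 3      # and this needs y, and so on
    return z

# GPU-style work: one operation applied to many independent pixels;
# every element could, in principle, be processed at the same time.
def independent_pixels(pixels):
    return [p * 2 for p in pixels]

print(dependent_chain(1, 2))          # 3
print(independent_pixels([1, 2, 3]))  # [2, 4, 6]
```

The second function is the shape of work 16 pipelines are good for; the first is what most general-purpose code looks like.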

A very simple analogy: if you design a robot arm for welding a particular door panel onto a car, it can be made to do so very efficiently and quickly. A more general-purpose robot arm could probably be programmed to do the same job, but would <i>never</i> do it as well as the specialised one. By the same token, of course, the specialised one would be hopeless at anything apart from that one welding operation. Same kind of thing with GPU vs. CPU.


---
A64 3200+ Winchester @ 250x10= ~2.5Ghz, ~1.41 Vcore
1Gb @ 209Mhz, 2T, 3-5-5-10
Voltmodded Sapphire 9800Pro @ 450/350 w/ modded VGA silencer 3.
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>One thing I still don't understand is why more than 3
>pipelines on x86 would be "rarely used"?

There is not much point in executing code in one of the pipelines that uses data depending upon the result of code in the others. There is only so much you can do in parallel. Now, CPUs can (and do) guess, and speculatively execute some code, but if you end up having to toss away the result each time because you guessed wrong, you're just wasting cycles and power.

As ChipDeath (I think) pointed out above, CPUs run very much linear code, where you build upon the results of previous operations. Such code just doesn't lend itself to parallelization. The exceptions are workloads that can be parallelized, like rendering, encoding and such, where we do see significant advantages from more than one CPU/core/thread.
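A toy model of the guessing described above (the cycle counts are made-up numbers, purely illustrative):

```python
# Speculative execution sketch: the CPU guesses each branch outcome and
# starts work early. A wrong guess means the speculative result is
# tossed, so the cycles spent on it were wasted.
def wasted_cycles(guesses, outcomes, cycles_per_branch=3):
    wasted = 0
    for guess, actual in zip(guesses, outcomes):
        if guess != actual:
            wasted += cycles_per_branch  # discard mis-speculated work
    return wasted

# One wrong guess out of three branches wastes 3 cycles:
print(wasted_cycles([True, True, False], [True, False, False]))  # 3
```

Real branch predictors are right far more often than not, which is the only reason speculation pays off at all.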

>I understand the GPUs in graphics cards have very few
>instructions and run very fast (with 16+ pipelines). Why is
>that bad for a general-purpose CPU?

It would make no sense on a CPU, because at least 14 of the 16 pipelines would be idle all the time, waiting for the results of the other one or two. Also, such a CPU would clock much slower and generate considerably more heat. As a data point, consider that current high-end video cards only run in the 500 MHz range, in spite of using an advanced process. They also consume about as much as a high-end CPU.

>I understand making an OS for such a chip will cost a fortune
>and take years to make, etc.

You can't make an OS for such chips! These chips don't support all the functionality an OS requires. They can't even hold a candle to the programmability of a 20-year-old 8086 processor, which itself is utterly unable to run a modern, memory-protected, 32-bit OS.

>Again, I am not a hardware engineer

Neither am I, but trust me, there are plenty of them at AMD and Intel, and they know what they are doing.

>(About the DEC Alpha: I expected a Pentium 4 to run faster
>than any 10-year-old CPU. But I also believe the Alpha totally
>waxed the 486 made around the same time, while addressing 64
>bits!) I just wanted to understand why the Alpha was so
>fast.

It was so fast because it was by far the most advanced design at the time, created by a team of "all stars". Highly superscalar (4-issue, if I'm not mistaken), pioneering technologies like SMT ("hyperthreading"), no compromises anywhere; everything was tuned for speed. However, don't be fooled by comparing it to desktop chips: the Alpha was HUGE (~400-500 mm², which is 4x the size of a typical desktop x86 processor), hot, and had incredible memory bandwidth (not economically achievable on the desktop), etc. It should be compared to IBM's POWER, Sun's SPARC, HP's PA-RISC, etc.

As for the "64-bitness", that is not much of an achievement, just a choice reflecting the intended markets. It's not difficult to make a 64-bit chip, especially not when you start with a clean sheet of paper like they did. Like most things, it's a compromise: 64-bit chips are actually slower at certain things than 32-bit chips, so if you don't need the addressing capability (and no desktop chip did back then), there is no point in implementing it.
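To put the addressing point in numbers (simple arithmetic, not from the thread):

```python
# Bytes a flat pointer can address: 2**bits, expressed here in GiB.
def address_space_gib(bits):
    return 2**bits / 2**30

print(address_space_gib(32))  # 4.0 GiB -- plenty for a mid-90s desktop
print(address_space_gib(64))  # 17179869184.0 GiB -- far beyond any need then
```

The 32-bit ceiling only started to pinch desktops once RAM sizes approached 4 GiB, which is exactly why the capability wasn't worth its cost earlier.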

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

SoDNighthawk

Splendid
Nov 6, 2003
3,542
0
22,780
My God, the kids so soon forget their history. If they took a look at 1800s England, all they would see is black coal dust on everything: in your food, in your water, on every square inch of real estate.

Smog ten times thicker than the air over current L.A.

<font color=red>GOD</font color=red> <font color=orange>LOVES</font color=orange> <font color=red>CANADA</font color=red>
 

Xeon

Distinguished
Feb 21, 2004
1,304
0
19,280
There is not much point in executing code in one of the pipelines that uses data depending upon the result of code in the others. There is only so much you can do in parallel. Now, CPUs can (and do) guess, and speculatively execute some code, but if you end up having to toss away the result each time because you guessed wrong, you're just wasting cycles and power.
The same principle is what lets the G5 and Itanium chew through FP operands; with smart core logic and a good compiler (oddly enough, exactly what the Itanium needs) you can get much higher IPC, and if the programmer is lazy I guess 100% won't get used.

It's the type of technology that we will need to allow AI to be developed.

But that isn't the reason x86 is stuck. For example, an Athlon 64 has 3 x86 decoders, 3 floating-point pipelines, and 3 integer pipelines. Compare that with Intel's Pentium 4, which has only one x86 decoder, 2 floating-point pipelines, and 1 more integer pipeline than AMD's Athlon 64.

Now, the limits implied by x86: the logic has been built for smart(er) code execution. There are table-size limits, shifting limits, and cache-line limits, but for this discussion this will do. Say you add 3 more pipelines to an Athlon: you're going to have to make some significant changes to the prefetch engines and instruction ordering to accommodate the additional registers, since in the case of 3 extra pipelines you'd most likely have 1 general-purpose pipeline and 2 floating-point pipelines, at least to retain a significant performance advantage in the Athlon 64's target software.

As well, more pipelines kill clock scaling; the evidence is in the G5 and Itanium, neither of which has seen much of a clock-speed increase, due to the complexity of their pipelines and various other silicon/transistor reasons.

It would make no sense on a CPU, because at least 14 of the 16 pipelines would be idle all the time, waiting for the results of the other one or two. Also, such a CPU would clock much slower and generate considerably more heat. As a data point, consider that current high-end video cards only run in the 500 MHz range, in spite of using an advanced process. They also consume about as much as a high-end CPU.
If the programmer(s) were retarded, 14 out of 16 surely would be idle then.

It was so fast because it was by far the most advanced design at the time, created by a team of "all stars". Highly superscalar (4-issue, if I'm not mistaken), pioneering technologies like SMT ("hyperthreading"), no compromises anywhere; everything was tuned for speed. However, don't be fooled by comparing it to desktop chips: the Alpha was HUGE (~400-500 mm², which is 4x the size of a typical desktop x86 processor), hot, and had incredible memory bandwidth (not economically achievable on the desktop), etc. It should be compared to IBM's POWER, Sun's SPARC, HP's PA-RISC, etc.
Hot? No, it was not: upwards of 74 watts, if my memory serves me right. As well, you forgot to mention that the Alpha had significantly higher clock speeds than Intel and AMD on equal micro-processes, even with the 4x-6x size difference between the processors.

As well, technologies like hyper-threading, the on-die memory controller, VT and such were only conceived by the Alpha team; they did not get to build them. Intel and AMD had to pay to design and implement them.

As for the "64 bitness", that is not much of an achievement
Wow, 6-8 months ago you thought it was better than sliced bread.

-Jeremy Dach
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
Thanks for all the replies, guys. Things are starting to make sense.
I can't help but wonder: what would it take to make a chip now that would have the same big lead the Alpha had over the 486 10 years ago? Should anyone care?
Since the Alpha was a large, expensive chip, I don't care about price, how large the new chip is, or whether it's hot enough to fry an egg. Remember, though, the Alpha did have a huge lead in MHz as well.

It seems "impossible" to make a very-low-instruction-set CPU, so making a GPU-type CPU seems unlikely.

I can still run Word for DOS in Windows XP without an emulator.
There will be progress in some direction. I just hope it's not x86 and DOS forever!
 

Johanthegnarler

Distinguished
Nov 24, 2003
895
0
18,980
In 2026 most of our technology is lost in the civil war/WW3. After we get nuked by China, the urbanites are forced to surrender and the rural people are victorious. The war ends fairly fast, only a 3-year war I believe, and then we are stuck with only machines that run Unix. The base Unix is lost and it only survives in Linux-type OSes that aren't even complete enough to run. Since nearly all Unix is lost and Windows will not exist anymore, we hit a dud, almost a dark age where localized farming and "towns" are actually committed to each other. People get voted in to move to other towns and it takes years to rebuild what we lost. We lose most of our technology and have to rely on our basic human nature to survive. Government was our enemy; we had to establish a new one after they completely destroyed the constitution.

Rubber bullets, salt packs and gasses that just make you sick.. think they are developing non-lethal weapons to fight in Iraq? The real enemy is YOU!

Oh yeah... sorry. Dreaming of John Titor-type stuff again.

<A HREF="http://arc.aquamark3.com/arc/arc_view.php?run=277124623" target="_new">http://arc.aquamark3.com/arc/arc_view.php?run=277124623</A>
46,510 , movin on up. 48k new goal. Maybe not.. :/
 

RX8

Distinguished
Dec 5, 2004
848
0
18,980
errrrrrrrrhhhhhhhhh... hhhhhhhmmmmmmmmm

Sony/Toshiba.

The Cell processor springs to mind.

A supercomputer chip which kicks ass and beats AMD and Intel.

Why are we stuck on old designs like x86? The ability for software to stay compatible.

Why does Shader 3.0 only work on a 6600/6800 and not on an X800 card, even though the X800 has the performance? Because it doesn't have the hardware to do so.

If the hardware is changed too much from the old CPU architecture, current programs will not run, and software makers will revolt and move from Intel to AMD.

Like an Xbox owner moving to the PS2 because the Xbox doesn't have GT4 or GTA:SA.

If I were the head of Intel or AMD, I wouldn't want to put something too radical out there, because the general public will say it's another marketing ploy to make us all upgrade again, just when I bought an FX-55.

Even we would complain. Too much change NOW would really destabilize the market: if Intel/AMD released a processor with 10x the performance of an FX-55, it would render most PCs useless for gaming within a year, as new games would be made to take advantage of such a powerful processor.

And then we would really complain.


<font color=purple> MY FINGER IS ON THE BUTTON! </font color=purple>
 

Xeon

Distinguished
Feb 21, 2004
1,304
0
19,280
I can't help but wonder: what would it take to make a chip now that would have the same big lead the Alpha had over the 486 10 years ago? Should anyone care?
Most likely that could happen when AI is finalized; the silicon that powers it will be the building block of tomorrow's semiconductors. But as things are now, it's tough to say whether or not we should care. The performance we are seeing on x86 is fine, and there are very viable options waiting in the wings.

If I was you I would just sit back and let the industry giants duke it out over the next generation IA and enjoy the wonderful software we have access to now.

It seems "impossible" to make a very-low-instruction-set CPU, so making a GPU-type CPU seems unlikely.
Not necessarily. NVIDIA has changed gears and is moving towards a more CPU-like GPU; GPUs are easier to tune, with constant new compilers, and rock-solid development platforms (OpenGL and DirectX) will give future developers nearly limitless possibilities.

As well, there are plenty of RISC CPUs around; they just end up in specialized devices. In my honest opinion, it's too hard to take C and C++, which were made to represent everything via instructions, and cut 30-70% of the existing instruction set to accommodate such a RISC processor. We would be going backwards in every known form, as far as I'm concerned.

Truth be known, I suppose we could make some clever IC logic to run such instruction sets, but would that really help us in the end?

I can still run Word for DOS in Windows XP without an emulator. There will be progress in some direction. I just hope it's not x86 and DOS forever!
Well, 2K and XP don't actually support DOS natively; they emulate it. But rest assured, Longhorn will change that grim outlook.

Oh the true god will come from the heavens, and share his wisdom and love.

Cell Processer springs to mind.
Cell is highly overrated; it's a specialized processor. From what I have read in the development briefs, the Cell processor would fall over and die if introduced into the software environments we work, play, and live in.

Why is Shader 3.0 only working on a 6600/6800 and not on an X800 card, even though the X800 has the performance but not the hardware to do so?
I would have to say it was a case of "we're on top, why worry?" syndrome. Reality has set in, and ATI took a hit for not pushing the envelope.


-Jeremy Dach
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
With x86 out of the way, make a MODERN CPU in no way compatible with x86 (and without other CPU/bus problems, perhaps something like the long-instruction Itanium or the Cell??) and make an x86 emulator (the new CPU will be so much faster that the emulator doesn't need to be 100% efficient).

...

I don't believe x86 is the only general-purpose CPU architecture. Remember the 64-bit DEC Alpha? That was 10 years ago!
**ROFL** I still have a DEC Alpha box sitting under my desk at work. Of course I never turn it on anymore, but it's still there.

The DEC Alpha even had an x86 emulator, FX!32. So in theory it should have done exactly what you're saying.

The reality, however, went an entirely different way. The x86 emulation crashed on way too many things. I couldn't even use half of my development software because it would crash on the <i>install</i>.

And then there was the actual issue of software support. While I was <i>trying</i> to compile and build an Alpha version of the x86 software that my company released, the sad fact was that <i>every single</i> author of a 3rd-party software library we had used in the x86 software refused to work with my company in any way to develop an Alpha version. And nearly every time, said author would say virtually the same thing: when Microsoft supports the Alpha, then I will too. The funny thing was, I was compiling this with Dev Studio using an Alpha patch made available by Microsoft, on an Alpha-patched version of NT4, again made available by guess who.

So not only did the Alpha suck at x86 emulation, but it also had no support from the software industry. You couldn't have chosen a more self-defeating argument that explains exactly why x86 isn't being replaced any time soon if you had tried.

But while we're at it, what do you suppose PowerPC CPUs are? They're certainly not x86. The G4s and G5s aren't Intel or AMD products. They don't have the same architecture. Yet they sure aren't doing much to crush the supposedly weak x86, are they? I'd take an Athlon 64 over a G5 any day.

So it isn't even just that replacing x86 is somewhere between difficult and impossible, but also that x86 in actuality isn't nearly as weak as you think and in reality doesn't need replacing at all.

<pre>Antec Sonata 2x120mm
P4C 2.6
Asus P4P800Dlx
2x512MB CorsairXMS3200C2
Leadtek A6600GT TDH
RAID1 2xHitachi 60GB
BENQ 16X DVD+/-RW
Altec Lansing 251
NEC FE990 19"CRT</pre><p>
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
Zero emissions is a lie; these people are getting hydrogen from electrolysis at refueling stations, and that electricity comes mostly from coal.
Except in Iceland. But then that is an extremely rare case. ;)

But maybe hydrogen fuel cells could be exported from Iceland on hydrogen fuel cell powered boats.

How about sail-cars? A wind-powered land vehicle wouldn't even have an engine! :O

Back to being serious though. Crashman, what's your opinion on ethanol?

<pre>Antec Sonata 2x120mm
P4C 2.6
Asus P4P800Dlx
2x512MB CorsairXMS3200C2
Leadtek A6600GT TDH
RAID1 2xHitachi 60GB
BENQ 16X DVD+/-RW
Altec Lansing 251
NEC FE990 19"CRT</pre><p>
 

RichPLS

Champion
CPUs don't make sense !
They are supposed to make BUCKS, BIG BUCKS! :)

<pre><font color=red>°¤o,¸¸¸,o¤°`°¤o \\// o¤°`°¤o,¸¸¸,o¤°
And the sign says "You got to have a membership card to get inside" Huh
So I got me a pen and paper And I made up my own little sign</pre><p></font color=red>
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>I can't help but wonder: what would it take to make a chip
>now that would have the same big lead the Alpha had over the
>486 10 years ago.

Cash, that's all, just like you needed for the Alpha 10 years ago. Pick up a POWER5-based system and it will eat any x86 box for lunch.

However, as others pointed out, unless you fancy running SAP and DB2 on your desktop, there won't be all that much software for it. Guess what? Just like for the Alpha.

These days, the ISA barely matters anymore. Chips have become so complex that they don't even use their own instruction set internally anymore: they break instructions down into micro-ops, execute them out of order, guess, etc. Whether the chip is based on RISC, CISC or VLIW doesn't make much impact, let alone the specific ISA. The implementation is what matters; therefore, there is no point in designing an all-new chip that won't be any better and won't run today's software. And making any chip, whether PPC, x86 or ARM based, that performs in the same range as today's x86 with a comparable price tag is just not feasible, let alone desirable.

= The views stated herein are my personal views, and not necessarily the views of my wife. =