New CPU! Win XP at 1.91MHz!!! This is not a misprint!

November 2, 2006 2:16:18 PM

This is not a misprint, but there have been rumors that Intel has produced a new microprocessor that utilises 32 micro-CPUs running at 1.91MHz!!! It's operating at 95% load, but at 2MHz!?! There have been no screenshots, but there have been articles on the internet (just google it, there are loads about)

WTF? Intel has produced some good CPUs recently; very tempted to upgrade to an Intel CPU... :p  :p  :p  :p  :p  :p  :p  :p  :p  :p  :p  :p  :p 


November 2, 2006 4:24:33 PM

What. The. Hell
November 2, 2006 4:31:36 PM

... That's what I thought too; still can't decide what the point of this would be...
November 2, 2006 4:43:24 PM

calculate 1+1?
November 2, 2006 4:44:50 PM

I got a 10 year old solar calculator that can do that just fine...

Well... it used to be able to do that, but the 1 button doesn't work anymore... so I guess it can't even do that anymore =(
November 2, 2006 4:48:50 PM

So this is perfect for you... :lol:  :wink:
November 2, 2006 4:56:36 PM

1.91MHz?
November 2, 2006 4:59:00 PM

Well, I've got to hand Intel one thing: they're on the right track by not just making things go faster and hotter...

However, the track this thing is on is straight to the looney bin.
November 2, 2006 5:00:47 PM

I thought that was IBM.

I read that the "CPU" runs at 1.5-2MHz, but there's enough of them on one die to crank out about a couple of teraflops at room temp, going up to 5 or so when cooled with liquid nitrogen.
November 2, 2006 5:03:06 PM

1.91mhz... i wonder if we can overclock it hehehehe
November 2, 2006 5:15:21 PM

Quote:
calculate 1+1?


You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.
November 2, 2006 5:28:03 PM

Sweet. I'm all set up to take advantage of this. It's nice to see Intel hasn't stopped developing new products for Socket 7. ALL HAIL THE LONGEST RUNNING SOCKET PLATFORM. :?

What do you think Intel will call it: Intel Overdrive D for Socket 7? Or maybe they'll call it Core32-OH for Socket 7? :lol:  Doesn't matter what they call it, I'm buying it.

The big question is: What will AMD do as a counter-manoeuvre? AMD has held the Socket 7 performance crown since 1998 and has become a little too comfortable with their commanding lead over Intel in the Socket 7 arena. Intel may have caught AMD with their pants down on this one. It might be years until we see a native 32-core AMD K6 CPU. In the meantime, AMD is rumored to be brewing up a new mobo technology using existing K6-2s. This new motherboard, called 32x2, will accept 32 K6-2 processors, have two banks of PC100 SDRAM per CPU, and allow for two PCI graphics cards in SLI mode. There will be 1MB of on-motherboard L2 cache shared between the 32 K6-2s. Unfortunately the K5 and K6 will not be supported. "K5 and K6 do not have the 3DNow! technology necessary for core-to-core communication," AMD stated in the press release.
November 2, 2006 5:35:34 PM

I wonder how well it overclocks? :p 
November 2, 2006 5:39:41 PM

It may be experimental, in the sense that they're going to slowly raise the frequency etc. to see what kind of performance they can get; maybe a preview of a future 32-processor chip...
November 2, 2006 5:45:32 PM

As for the original post, I'm pretty sure this item is for research and development, not to be marketed. I'm sure we all read the articles that told us that 80 cores is not unimaginable.

Quote:

You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.


Uhhh... no it's not.

You made the claim... now back it up or take it back.
November 2, 2006 5:46:53 PM

Heh, they probably had quite a few old processors around and decided to play around with them to see what they could get from them...

Don't know if anything will really come from it.
November 2, 2006 5:51:37 PM

Wow... that's cool. I just hope Jack picks up on this and comes over to explain it to mere mortals like me.

Obvious points though...

Very scalable. (add more cheap cores to add performance)
Very Low Clock (Cool running and lots of room to grow)
Simple design (they implemented a working version on an FPGA)

If anybody does know what they are doing here please do shed some light on it...
November 2, 2006 6:04:41 PM

Quote:
I got a 10 year old solar calculator that can do that just fine...

Well... it used to be able to do that, but the 1 button doesn't work anymore... so I guess it can't even do that anymore =(


Yeah, it can do 1+1, but can it run Windows XP?
HELL no.

Most of you guys are missing the point here. They have 4 little 1.5MHz cores together running Windows XP and some moving graphics, something that today takes around 500MHz to even begin to run "well".

The possibilities are limitless!! If they get them down to microscale, and of course they will, figure 1,000 or 10,000 of these things on a board. If four can run WinXP, how about one small machine that can run 1,000 XP virtual machines at the same time?

But as I said, the possibilities are limitless.

Valis
November 2, 2006 6:05:33 PM

Quote:
As for the original post, I'm pretty sure this item is for research and development, not to be marketed. I'm sure we all read the articles that told us that 80 cores is not unimaginable.


You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.


Uhhh... no it's not.

You made the claim... now back it up or take it back.

Okay, i'll go get the proof tomorrow. It'll be posted up by about 5pm GMT!
November 2, 2006 6:05:45 PM

Quote:
Don't know if anything will really come from it.

I guess the deciding factor is how much processing power is needed to do the job...
Everything is computerized...
Hell, you can get a toaster that will tell you your toast is ready...

But I think the big thing is what they can do with 1.91MHz.
Apply that technology to their newest processor running at 3GHz...
November 2, 2006 6:16:27 PM

Quote:
As for the original post, I'm pretty sure this item is for research and development, not to be marketed. I'm sure we all read the articles that told us that 80 cores is not unimaginable.


You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.


Uhhh... no it's not.

You made the claim... now back it up or take it back.

Okay, i'll go get the proof tomorrow. It'll be posted up by about 5pm GMT!


I don't know if I've seen this exact one, but I've seen other similar ones. They generally divide by 0, which isn't actually allowed, but they are still pretty neat.
November 2, 2006 6:25:50 PM

Oh... if they were planning on those kinds of cheats, I've seen plenty... but I can usually spot where they cheat after the first pass...

But once we start doing things in non-Euclidean spaces, I have a much harder time finding the trick...
November 2, 2006 6:26:51 PM

1+2 does not compute. LOL. I can only add 0s and 1s!!

Seriously though, I could see this helping out with some applications, and having a "real" CPU that controls this thing's input and output paths while running an OS wouldn't be too difficult.
November 2, 2006 6:29:47 PM

Quote:
As for the original post, I'm pretty sure this item is for research and development, not to be marketed. I'm sure we all read the articles that told us that 80 cores is not unimaginable.


You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.


Uhhh... no it's not.

You made the claim... now back it up or take it back.

Okay, i'll go get the proof tomorrow. It'll be posted up by about 5pm GMT!


I don't know if I've seen this exact one, but I've seen other similar ones. They generally divide by 0, which isn't actually allowed, but they are still pretty neat.

Not necessarily; once you get into advanced linear algebra you discover warped addition. I can't remember how it actually works (haven't done it in several years), but you can actually get 1+2 to equal 4.
November 2, 2006 6:32:46 PM

hmmmm... advanced linear algebra... I only took linear algebra, guess they skipped the cool parts where they break the laws of addition.. shrug...
November 2, 2006 6:38:41 PM

I think I can put a whopping 50% overclock on this thing without any danger of instability... 3MHz here I come!

Seriously, though, WinXP? NO. FREAKIN. WAY.
And I can probably do 1+1 faster in my head.
November 2, 2006 6:45:54 PM

Quote:
hmmmm... advanced linear algebra... I only took linear algebra, guess they skipped the cool parts where they break the laws of addition.. shrug...


Technically you're not breaking the laws, you're "warping" them. This happens accidentally, but is unavoidable in some instances of algebra. It's a shame I didn't take Maths on with me to uni; I found it very interesting. As it is I'm doing a Politics/History/Economics cross degree, which is interesting in its own right.
November 2, 2006 6:54:48 PM

If this is correct, then the only viable application would be spaceships? 8O
They will be small, powerful, and have low thermal output.

Maybe????
November 2, 2006 7:45:14 PM

I'm trying to dig up that proof. I know it's in my notebook somewhere but I can't seem to find anything on the internet about it.
November 2, 2006 7:51:47 PM

Here is the original link to where this topic was discussed.

To restate what I mentioned in the other post.

The point is that an FPGA can run Windows. FPGAs have been known to be very high in power consumption and unable to process information all that quickly. This is just showing the progression in FPGAs, that one can include all the needed x86 instructions and run a traditional application.

If you don't know what an FPGA is, here is a quick explanation.
There are a lot of FETs on a piece of silicon, and applying voltage to some of them to turn them on or off will change the internal logic of the circuit. That is a very basic view of them.

More info here.
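
To picture what "changing the internal logic" means, here's a minimal toy sketch in Python (purely illustrative, not real HDL or any actual FPGA toolchain) of a single lookup table: the configuration is just a truth table, and loading different bits turns the same generic block into different logic.

# Toy model of one FPGA lookup table (LUT): the "configuration" is just
# a truth table; changing the bits changes what logic the block computes.
def make_lut(truth_table):
    def lut(*inputs):
        index = 0
        for bit in inputs:              # pack input bits into a table index
            index = (index << 1) | (1 if bit else 0)
        return truth_table[index]
    return lut

and_gate = make_lut([0, 0, 0, 1])       # outputs for inputs 00, 01, 10, 11
xor_gate = make_lut([0, 1, 1, 0])       # same block, reconfigured as XOR

print(and_gate(1, 1))                   # -> 1
print(xor_gate(1, 1))                   # -> 0

A real FPGA is thousands of such configurable elements plus programmable routing between them, but the principle is the same.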
November 2, 2006 8:02:56 PM

That's kinda spiffy. Should be good for cell phones, PDAs and UMPCs. Each little process of the OS would basically get its own core and never have to wait (as long as no processes caused a bandwidth bottleneck), which is good when some things you're doing require real-time responses and you're usually in a hurry when trying to do something "on the go". While it wouldn't turn your cell phone into a supercomputer, it looks like it might help it run generic software more smoothly without catching your pants on fire or having to be recharged every couple of hours. This is good.

It doesn't matter how many cores you have, though; you're not going to be able to do much computational work at 1.91MHz no matter how advanced the architecture is. "Parallelism" is NOT the future; CPU makers just want you to think that because they can't figure out how to actually make their CPUs twice as fast, so they are making them twice as big instead. Many processes simply do not benefit from being multi-threaded. Even for those that do, the gains are sometimes not big (e.g. if you made an application that split a video encoding project into 80 different parts for your 80 cores, the overhead would nerf your performance down to nothing, and you would need 80 hard drives with 80 dedicated interfaces, 80 times as much RAM that's 80 times as fast, and 80 times as much FSB bandwidth... eh, not exactly, but you get the idea).

Multi-CPU systems work best for processes that are low-bandwidth and high-computation, and this prototype is obviously lacking in the "computation" area. If they get ~80-core CPUs up to speed this will be the way to go for supercomputers, but here's a hint: this is what supercomputers already are doing and have been doing for decades. Does your average user need a supercomputer architecture on their desktop? No. The average gamer? No. Enthusiasts? Um... no.

If Intel gave you an 80-core Conroe system running at 3GHz right now, what would you even do with it? With only 4GB of RAM, how many cores do you think you could use even if you tried? You could run like 27 instances (they use about 150MB each) of Folding@Home and a web browser. That would use 28 cores, and you'd better hope that not too many of them try to write a save point at the same time or you'll be waiting a second or two to get access to your HD. And if they all tried to upload their results at the same time, you would have to wait about 48 minutes before you can surf without lag, assuming a typical home broadband package with 384kb upload speed. In that perspective you can see that this is not a desktop solution anytime soon.
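
To put a rough number on the "many processes don't benefit" point, here's a quick back-of-envelope Amdahl's-law calculation (my own illustration, not from any of the linked articles). Even if 90% of a job parallelizes perfectly, 80 cores buy you less than a 10x speedup:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that can run in parallel, n = number of cores
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 32, 80):
    print(cores, round(speedup(0.90, cores), 1))
# roughly: 2 -> 1.8x, 4 -> 3.1x, 8 -> 4.7x, 32 -> 7.8x, 80 -> 9.0x

And that ignores all the disk, RAM and bandwidth contention described above.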

Multiple processing units in a single system is a solution that has been around for a LONG, LONG time. It works well for some things, but it's nothing to get all excited about. Dual core on the desktop is largely a gimmick. If you had really needed it you could have done it 5 years ago by buying a dual-CPU server board (you'd have spent ~3x as much for 2 CPUs and a server mobo vs. 1 CPU and a desktop mobo, but it wasn't that much more expensive for a complete system; the reason people didn't do it was because there wasn't much of a performance advantage). The only major difference now is that it's cheaper (but you're still paying more than you would for a single core of the same architecture, and before some dumbass flames me I'd like to point out: not being able to buy Core2 Solo* does not make Core2 Duo cheaper, it just means Intel didn't want to give you a choice).

Desktop systems don't need more cores, we need faster networking and HDs first, and then some sort of program that can actually benefit from all that. I try to encourage people not to get too excited about dual core, but hey, I bought one, they're kinda spiffy and all. But quadcore? 80core? Smart desktop users will spend their money elsewhere.

*I found a place selling a Core1 Solo CPU here and another here (http://store.sunwilltech.com/zz00-4301431.html) for outrageously high prices, and that's it. They're supposed to be for "mobile" solutions only. People talk about Intel's "budget" $200+ E6400... $200 doesn't fit my definition of a budget CPU, but Intel still needs to offload a bunch of Pentium Ds, so the customers get screwed.

Back on topic though: A super-low-power multi-core processor could certainly have some nice niche applications.
November 2, 2006 8:11:54 PM

Quote:
The point is that an FPGA can run Windows. FPGAs have been known to be very high in power consumption and unable to process information all that quickly. This is just showing the progression in FPGAs, that one can include all the needed x86 instructions and run a traditional application.


Nicely worded.

If people want more info, I suggest:
Quote:
Today’s multi-core chips use package-level interconnects to connect the cores. The on-chip core-to-core communication which this prototype explores will have significantly reduced latencies and more bandwidth, and it will be orders of magnitude better than the chip-to-chip multiprocessing systems used today for parallel computing. Here are some of the advantages of on-chip interconnects:
• On-chip metal layers are cheap and plentiful compared to off-chip networks in printed circuit boards.
• On-chip interconnects can often be routed on top of some types of circuit structures (such as caches), so that they consume little space on the die.
• Given the smaller distances on-die interconnects must span, such interconnects are generally more responsive and power efficient than off-die interconnects.
• On-die bus widths can be much wider than those on printed circuit boards, allowing for more efficient signaling rates. In other words, bandwidths can be increased by widening the bus and actually slowing the signaling speed to save power.
In addition, the prototype uses low power techniques combined with fine-grain power management. The prototype uses both static and dynamic techniques, allowing researchers to experiment over a wide range of power and performance.

ftp://download.intel.com/research/platform/terascale/tera-scaleresearchprototypebackgrounder.pdf

Quote:
He started by revealing the first details of Intel’s tera-scale research prototype silicon, the world’s first programmable TeraFLOP processor. Containing 80 simple cores and operating at 3.1 GHz, the goal of this experimental chip is to test interconnect strategies for rapidly moving terabytes of data from core to core and between cores and memory.

“When combined with our recent breakthroughs in silicon photonics, these experimental chips address the three major requirements for tera-scale computing – teraOPS of performance, terabytes-per-second of memory bandwidth, and terabits-per-second of I/O capacity,” said Rattner. “While any commercial application of these technologies is years away, it is an exciting first step in bringing tera-scale performance to PCs and servers.”

Unlike existing chip designs where hundreds of millions of transistors are uniquely arranged, this chip’s design consists of 80 tiles laid out in an 8x10 block array. Each tile includes a small core, or compute element, with a simple instruction set for processing floating-point data, but is not Intel Architecture compatible. The tile also includes a router connecting the core to an on-chip network that links all the cores to each other and gives them access to memory.

http://www.intel.com/pressroom/archive/releases/20060926corp_b.htm
November 2, 2006 8:35:15 PM

Quote:
I'm trying to dig up that proof. I know it's in my notebook somewhere but I can't seem to find anything on the internet about it.


I just have the problem of never having written it down!
November 2, 2006 8:49:24 PM

Quote:
"Parralism" is NOT the future, CPU makers just want you to think that because they can't figure out how to actually make their CPUs twice as fast, so they are making them twice as big instead.

Yeah, that whole physics thing is tough to beat. I'd love to hear your ideas on how to do it.
November 2, 2006 9:21:48 PM

Well... not even really a size increase either, unless I'm wrong... because with a process size reduction, you can fit more physically in the same area, I believe anyhow...

So, maybe you can only physically fit 2 cores on a 90nm process, and 4 cores at a 65nm process... but maybe at a 9nm process, you can fit 20 cores (or more) and only take up the same room as a 90nm process would, so you're actually taking up no more room than you are now, even by massively parallelizing everything...

Someone really should correct me though if that's wrong :) , cuz I'm not entirely sure myself lol
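
Rough back-of-envelope in the same spirit, assuming core area shrinks with the square of the feature size (which ignores a lot of real-world layout detail):

# How many cores fit in the area that held 2 cores at 90nm,
# if area per core scales with the square of the process node?
base_node, base_cores = 90.0, 2
for node in (65.0, 45.0, 9.0):
    density_gain = (base_node / node) ** 2
    print(node, "nm ->", int(base_cores * density_gain), "cores in the same area")
# 65nm -> 3, 45nm -> 8, 9nm -> 200

So the intuition above is roughly right: smaller processes buy a lot of cores per unit area, though caches, I/O and interconnect eat into it in practice.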
November 2, 2006 9:26:06 PM

Ohhhh man....time to break out that old socket 7 board :) 

Sweet....
November 2, 2006 9:28:28 PM

You are right that as the process shrinks you can fit more on a die, though when increasing the core count, the die size generally increases some. Depending on the size of the die and the package dimensions, it may not be possible to double the number of cores on a particular process.

This can be seen on older processes with the L2 cache. The processor was too large, so they needed to add an off-chip L2 cache, which used the BSB (Back Side Bus) to access the cache. This was much faster than going to memory because it had its own dedicated fast bus. The same concept applies to doubling of cores.
November 2, 2006 9:32:10 PM

Okay, cool :) , hopefully by the time they get down anywhere near there, they'll implement shared caches and stuff, like they're doing now with Core 2, to use the space more efficiently... or something similar to that anyhow, for other functions on a CPU as well
November 2, 2006 9:35:49 PM

To the first part of your post, about adjusting the processor in real-time and power consumption:

The FPGA adjusting in real-time is quite a ways off. The reason is FPGAs need a specific set of codes to adjust to the design you want. This takes multiple steps to verify the processor will do the correct thing. Here's an explanation:
Quote:
In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to stimulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.


None of that simulation and verification could be done on the cell phone. The cell phone could have its own storage where preset codes for different FPGAs were stored. This could work, depending on how large the codes are. The main problem with an FPGA in a handheld device is that the power consumption is much greater than a standard configuration. For every switch of the FPGA, a constant supply of power is needed to keep the switch in position. This leads to constant power usage.

As to the second part of your post, I would agree with you that parallelism is difficult to implement, but I would have to disagree and say that parallelism is the future. Parallelism needs to be implemented at every layer though: at the programming, compiler, and thread level. As is usually the case, the software isn't yet demanding the current level of hardware. (Good thing, or else the company could never sell its software.) I'm just saying that the transition to parallelism will take time, but it will happen. Some parallelism is already being exploited at the thread level, and compilers are being reworked to take more advantage of multi-threading. The transition will take time, just like the move to 64-bit will happen but will take time.
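
As a tiny, generic illustration of what exploiting thread-level parallelism looks like in code (nothing specific to the prototype, just a minimal Python sketch of splitting an independent workload across workers):

# Split an embarrassingly parallel job across worker processes.
from multiprocessing import Pool

def work(chunk):
    return sum(x * x for x in chunk)          # stand-in for any independent task

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]   # one slice of the data per worker
    with Pool(processes=8) as pool:
        total = sum(pool.map(work, chunks))   # chunks run in parallel
    print(total)

The hard part, as noted above, is that most real programs aren't this neatly divisible.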

I actually would like to see more research done with automatic multi-threading, as NEC did. They are using a custom instruction set, but the concept should be transferable to the x86 instruction set.
November 2, 2006 9:46:29 PM

Quote:
As for the original post, I'm pretty sure this item is for research and development, not to be marketed. I'm sure we all read the articles that told us that 80 cores is not unimaginable.


You do realize it's possible to add 1+2 and get 4?

I can't recall the mathematical proof, my Tutor taught me it while doing A level maths.


Uhhh... no it's not.

You made the claim... now back it up or take it back.

Okay, i'll go get the proof tomorrow. It'll be posted up by about 5pm GMT!


Look for something like 0/0. I can "prove" 1=2 (and you can see the proof in Spivak's Calculus), but it's just a cheat. If you use 0/0 it's outside the scope of maths (anything/0 doesn't exist).
November 2, 2006 10:35:30 PM

This kinda reminds me of RISC architecture. Rather than have one core with set pipelines, as I understand it, a RISC processor could perform many more calculations at lower speeds than a standard x86 chip. Take an old SPARC or SuperSPARC (I forget which); one of those runs at only 400MHz, and if it could run Windows, it could theoretically run it hundreds of times faster than a 400MHz Pentium...

Same thing, with the instructions and calculations split between 32 different cores with however many pipes, you wouldn't need massive clock speed to perform your applications. Parallel processing. Sweet.

Check this out for example:
if
1 core runs WinXP at 1000MHz
2 cores run it at 500
4 at 250
8 at 125
16 at 62.5
32 at 31.25MHz

It doesn't really work this way, because 32 cores running XP at 1.91MHz only scales up to 1 core running XP at about 61MHz, but it's a neat thing to think about when parallel processing enters the discussion. I would have been amazed even if it was only a 31MHz WinXP!
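
For what it's worth, here is the same naive best-case arithmetic spelled out; it assumes perfect scaling, which the dependency point in the next post explains is unrealistic:

# Naive best case: total throughput = number of cores * clock speed
cores, clock_mhz = 32, 1.91
print(cores * clock_mhz)        # ~61.1 "equivalent" single-core MHz

for n in (1, 2, 4, 8, 16, 32):  # the ideal halving table from above
    print(n, "cores ->", 1000 / n, "MHz each")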
November 2, 2006 10:55:37 PM

The problem with your logic is that Windows also has dependencies, so you can't just double the number of cores and have Windows take advantage of all of them (Windows is very threaded, though).

The difference between RISC and CISC (while x86 is a combination of the two) is that RISC uses many smaller instructions (all the same length), which makes processing faster per instruction (1 instruction per cycle).
CISC uses longer and more complex instructions (variable-length instructions), but they take several clock cycles to process.

The point is that CISC and RISC each have a specific type of work they are good at. x86 is the general purpose architecture that decided to employ both CISC and RISC at different levels to speed up computing. If you want to look at a parallel instruction set, look at IA-64, the EPIC architecture. There are really interesting attributes to it. The IA-64 will be interesting to follow into the future as it gains more market share. (I know most people don't care for non-x86 instruction sets, but I find them all interesting. I'm currently learning the MIPS architecture in my free time.)
November 2, 2006 11:01:59 PM

Quote:
1.91mhz... i wonder if we can overclock it hehehehe


hah
November 2, 2006 11:20:54 PM

It would be neat if instead of a shared cache, they could find some way to make the separate caches run in dual channel mode. It might provide the same benefit as running RAM in dual channel mode!
November 2, 2006 11:25:55 PM

Well, I have a few random proofs...
0.99 repeating = 1
Simple: ask yourself, what's 1 - 0.9999 repeating? That's right, 0.
......

Another, 0 = 1
0 = (1-1) + (1-1) + (1-1)...etc
0 = 1 + (-1+1) + (-1+1)...etc. This is simply moving the brackets, which is mathematically allowed when only adding numbers.
0 = 1 + 0 + 0 + 0...on to infinity. So 0=1. Tentative, but if it goes on to infinity, this is true.
......

And a weird physics one.
Potential energy increases as you move away from the earth. So at distance infinity away from the Earth, you have infinite potential energy.

But...gravity decreases over distance, so at distance infinity, there's 0 gravitational force. So somehow, there's an infinite amount of potential energy there, with no force creating it.

Potential energy = mass * gravitational force * height
Infinite energy = mass * 0 * infinity (0 times anything is zero right?)
Infinite = 0
November 2, 2006 11:32:25 PM

Quote:
Another, 0 = 1
0 = (1-1) + (1-1) + (1-1)...etc
0 = 1 + (-1+1) + (-1+1)...etc. This is simply moving the brackets, which is mathematically allowed when only adding numbers.
0 = 1 + 0 + 0 + 0...on to infinity. So 0=1. Tentative, but if it goes on to infinity, this is true.
......

No, the order of addition does matter when computing infinite sums.
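
To spell that out (a standard fact about series, not from the thread itself): the series being manipulated is Grandi's series, whose partial sums never settle down, so it has no sum that regrouping could preserve:

S_N = \sum_{n=0}^{N} (-1)^n =
\begin{cases} 1, & N \text{ even} \\ 0, & N \text{ odd} \end{cases}
\qquad \Longrightarrow \qquad \lim_{N \to \infty} S_N \text{ does not exist.}

Regrouping or rearranging terms is only guaranteed to preserve the sum for absolutely convergent series, which this is not.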

Quote:
Potential energy increases as you move away from the earth. So at distance infinity away from the Earth, you have infinite potential energy.

No. Gravity is proportional to 1/r^2. Calculate the integral of 1/r^2 dr from any finite positive radius out to infinity, and you get a finite value.
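
Written out (the standard textbook result), the work needed to move a mass m from radius R out to infinity against Earth's gravity is finite:

W = \int_{R}^{\infty} \frac{G M m}{r^{2}} \, dr
  = \left[ -\frac{G M m}{r} \right]_{R}^{\infty}
  = \frac{G M m}{R}

which is exactly why the potential energy is conventionally set to zero at infinity and equals -GMm/R at radius R, as the post below explains.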
November 2, 2006 11:36:05 PM

Wow, shut down so quickly :(  .

Forgive my high school math skills, I haven't gone onto integrals yet. And my infinity calculation knowledge is also limited. This is just some stuff I was told by teachers. Now I can go prove them wrong! :lol: 
November 3, 2006 12:07:50 AM

Quote:
And a weird physics one.
Potential energy increases as you move away from the earth. So at distance infinity away from the Earth, you have infinite potential energy.

But...gravity decreases over distance, so at distance infinity, there's 0 gravitational force. So somehow, there's an infinite amount of potential energy there, with no force creating it.

Potential energy = mass * gravitational force * height
Infinite energy = mass * 0 * infinity (0 times anything is zero right?)
Infinite = 0


True gravitational potential energy is assigned a value of zero AT infinity. In reality, you can only have a negative gravitational potential energy, because gravitational force is attractive, so the work done by you in moving an object from infinity closer to earth is not actually done by you (when the surface of the earth is taken as GPE = 0, going DOWN decreases potential energy - from infinity, you can only go "down", so potential energy can only decrease from zero).

What is actually measured when the surface of the earth is taken as GPE = 0 is the DIFFERENCE between the potential energy at the earth's surface and at whatever height h, both taken with respect to infinity.