MatTheMurdera

Distinguished
Mar 19, 2006
366
0
18,780
Intel has been very open about scaling down processors to new transistor sizes, but when will it end? Intel has talked about 32nm but nothing past that. There is a physical limit to how small it can get before a processor leaks too much to be useful. Does anyone have an educated guess as to how small they can get before they need to switch over to something more efficient and scalable? Also, how will Intel deal with the memory bandwidth issues of multiple cores? I mean in the sense that sooner or later they will also hit a wall with processor pins; how will they deal with that? Lucky for us, both companies won't run into these problems for a while, but they are becoming somewhat of a puzzle to me.
 
There's a limit coming up to how small they can go with their current process, but they can switch materials (photons, single electrons, flux capacitor, whatever), etc... and perhaps go even smaller. But yes, no matter what they do they will hit a limit, the building blocks of our universe are only so big.
 

-Mithridate-

Distinguished
Apr 28, 2006
20
0
18,510
This might help with what you're looking for. I can't tell you much technical detail on the information you're asking about, so wait till someone comes around with their info on that, but some of it might be answered in the link provided. (I would try to elaborate on some of the info in the link, but I'm about to go to bed and quite tired.)

22nm is the next one down according to that article


http://www.dailytech.com/article.aspx?newsid=2649

hope that helps

EDIT: JumpingJ beat me to it :p "I'll get you next time, He-Man"
 

ltcommander_data

Distinguished
Dec 16, 2004
997
0
18,980
Here is the official schedule given at the Fall 2005 IDF:

http://www.legitreviews.com/article.php?aid=234

2012 – 22nm – 32 billion transistors
2014 – 16nm – 64 billion transistors
2016 – 11nm – 128 billion transistors
2018 – 8nm – 256 billion transistors
There's also a nice little PPT for embedded technologies that shows their process technology on slide 4.

http://www.emea-distributor.com/downloads/paris/Intel-Ts-Technologies.ppt

This roadmap is a little bit more aggressive than the Fall IDF numbers with all the dates pushed 1 year ahead. It's possible the PPT quotes when the process is online while IDF quotes projected product availability. The PPT also gives their prototype schedule, which is that they always plan on having prototypes at least 2 processes ahead.
 

qurious69ss

Distinguished
Mar 4, 2006
474
0
18,780
There's a limit coming up to how small they can go with their current process, but they can switch materials (photons, single electrons, flux capacitor, whatever), etc... and perhaps go even smaller. But yes, no matter what they do they will hit a limit, the building blocks of our universe are only so big.

Ahh... I always knew flux capacitors were the way to go. :)
 

BGP_Spook

Distinguished
Mar 20, 2006
150
0
18,680
They already have universities running quantum computers.
They haven't gotten it to work in 8-bit yet, and further development is hampered somewhat because we haven't figured out everything there is to know about quantum physics, although great strides have been made.
http://computer.howstuffworks.com/quantum-computer.htm

Photonic computing is actually very similar to standard electronic computing but has been held back by the necessity of being able to cancel a ray of light with another ray of light (like stopping a bullet with another bullet). It has been done but is hard to reproduce consistently and mass-produce, hence it is not yet practical. Strides are being made, though.

EDIT:
Sorry, forgot why I had originally started to post. The physical size limit of a transistor is reached when only 1 electron is controlled by the transistor. Safe to say that is a bit off yet.

However, starting around 32nm (roughly) we will have to start worrying about "sympathetic" electron flow, so the distance between transistors could become as much of an issue as the actual size.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
I wanted to know about the quantum computers (the ones that multiply q-bits).
I've heard about them for years. But is that STILL just theory?
Or has SOMEONE made SOMETHING of SOMEPART that did SOMETHING SOMETIME in SOME lab SOMEPLACE?
BGP_Spook, thanks for the link. I'll be impressed to see ANY q-computer calculate ANYTHING.
 

JonathanDeane

Distinguished
Mar 28, 2006
1,469
0
19,310
My way-out theory is chips will switch to single-atom vibrations, powered and read by different wavelengths of light! Kind of like a two-sided laser-powered abacus!!! lol, ok, so I'm nuts :)

Edit: On thinking about it, maybe it's not such a bad idea; switch to microwaves on one side, light to read... hmmm, THz chip!!! Fun to think about :)
 

Multiplectic

Distinguished
Apr 17, 2006
1,029
0
19,280
Besides quantum computing (already mentioned by BGP) there is another possible alternative: carbon nanotubes. Here's a link.

The interesting thing about carbon nanotubes is that they can act as semiconductors, carrying signals due to "quantum tunneling" (link). They can also modify their electrical and mechanical behaviour according to the geometry they are given.
 

ethernalite

Distinguished
May 24, 2006
215
1
18,680
Here is the official schedule given at the Fall 2005 IDF:

http://www.legitreviews.com/article.php?aid=234

2012 – 22nm – 32 billion transistors
2014 – 16nm – 64 billion transistors
2016 – 11nm – 128 billion transistors
2018 – 8nm – 256 billion transistors

First of all, those numbers can't be right. The Core 2 chip has about 300M transistors. So, assuming a constant die size of about 150mm^2, you're looking at roughly 600M transistors at 45nm, 1.2 billion at 32nm, and 2.4 billion at 22nm. A 32-billion-transistor die at the 22nm node would need to be about 1880mm^2!
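The arithmetic above can be sketched in a few lines (a rough check under my own assumptions: ~300M transistors for Core 2 at 65nm, a constant ~150mm^2 die, and density scaling with the square of the feature size; none of these figures are official):

```python
# Back-of-the-envelope transistor scaling check.
# Assumed: ~300M transistors at 65nm, fixed 150 mm^2 die,
# density scaling as (old_node / new_node)^2.

CORE2_TRANSISTORS = 300e6
DIE_AREA_MM2 = 150.0

for node in (65, 45, 32, 22):
    count = CORE2_TRANSISTORS * (65 / node) ** 2
    print(f"{node}nm: ~{count / 1e9:.2f}B transistors on {DIE_AREA_MM2:.0f} mm^2")

# Die area a 32-billion-transistor chip would need at 22nm density:
count_22nm = CORE2_TRANSISTORS * (65 / 22) ** 2
area = DIE_AREA_MM2 * 32e9 / count_22nm
print(f"32B transistors at 22nm density: ~{area:.0f} mm^2")
```

With these assumptions the 22nm die comes out in the 1800-1900mm^2 range, the same ballpark as the ~1880mm^2 figure in the post.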

Second, I can see the "two-year cycle" continuing through 2010 with 32nm, maybe even 2012 with 22nm. But after that, things are going to slow down. I know people have been prophesying this for decades, but at sizes that small, single atoms become huge chunks of the transistor. Even at 65nm this is the case, as certain parts of the transistor are simply not shrinkable anymore, since they consist of only two or three atoms. Below 32 or 22nm (I would guesstimate), it is no longer an issue of simply upgrading the lithographic process and slightly modifying the silicon wafers. Continuing to shrink would require radically new materials and concepts.

Then again, I could be wrong.
 

Multiplectic

Distinguished
Apr 17, 2006
1,029
0
19,280
I wanted to know about the quantum computers (the ones that multiply q-bits).
I've heard about them for years. But is that STILL just theory?
Or has SOMEONE made SOMETHING of SOMEPART that did SOMETHING SOMETIME in SOME lab SOMEPLACE?
BGP_Spook, thanks for the link. I'll be impressed to see ANY q-computer calculate ANYTHING.

IBM has been playing with those for some time.
They have managed to build a 5-qubit quantum computer.
A couple of links (they're kinda old, but useful): Here and here.
 

m25

Distinguished
May 23, 2006
2,363
0
19,780
We're currently at 65nm and not yet at a stable 45nm. Nobody really knows the effects of EM or temperature at these steps, so the limit may come even before 32nm.
 

gman01

Distinguished
Jun 25, 2006
272
0
18,780
We're currently at 65nm and not yet at a stable 45nm. Nobody really knows the effects of EM or temperature at these steps, so the limit may come even before 32nm.

I have read that IBM has already created prototype 32nm chips for its next series of Cell CPUs... but if you follow Cell, you know they are having problems with current manufacturing... they throw out about 4 out of every 5 they make...
 

m25

Distinguished
May 23, 2006
2,363
0
19,780
We're currently at 65nm and not yet at a stable 45nm. Nobody really knows the effects of EM or temperature at these steps, so the limit may come even before 32nm.

I have read that IBM has already created prototype 32nm chips for its next series of Cell CPUs... but if you follow Cell, you know they are having problems with current manufacturing... they throw out about 4 out of every 5 they make...

Exactly, and in PC CPUs it's exponentially worse because they run much faster and much hotter. We haven't even tested the long-term stability of 65nm, since it's been around for only ~1 year. 32nm is an exciting figure, but you have to think of the many connections made of just one atom; it's very easy for that atom to be kicked off, even in normal operation.
 

S7A88Y

Distinguished
Jan 22, 2006
136
0
18,680
I'd say by the time Intel's ready to jump off the 32nm boat it will be about time for silicon-germanium chips, or even better, carbon nanotubes; both have better performance-per-price ratios than silicon and are cheaper.
 

thematrixhazuneo

Distinguished
Jun 17, 2005
84
0
18,630
BTW - if anyone has read the book or is interested, pick up a copy of "Visions" by Michio Kaku; he explains a lot of ideas that could be possible in the future with computers, amongst other interesting topics.

Mike
 

G

Guest

Guest
The Itanium Montecito dual-core with 24 MB of cache is 1 billion transistors on 65nm.

so
2005 - 65nm - 1 billion
2007-8 - 45nm - 2 billion
2010 - 32nm - 4 billion
OOOPs
2012 – 22nm – 32 billion transistors
2014 – 16nm – 64 billion transistors
2016 – 11nm – 128 billion transistors
2018 – 8nm – 256 billion transistors

Yeah, looks like a missing link somewhere... lt_com can probably enlighten us.
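The gap being pointed at can be made explicit with a quick sketch (starting from Montecito's ~1B transistors at 65nm and doubling per node; the IDF figures are the ones quoted in this thread, not anything official):

```python
# Compare simple per-node doubling against the quoted IDF figures.

idf_22nm = 32e9          # transistor count claimed for 22nm in the IDF roadmap

count = 1e9              # Montecito at 65nm
for node in (65, 45, 32, 22):
    print(f"{node}nm: ~{count / 1e9:.0f}B by simple doubling")
    count *= 2

doubling_22nm = 1e9 * 2 ** 3   # three shrinks: 65 -> 45 -> 32 -> 22
gap = idf_22nm / doubling_22nm
print(f"gap at 22nm: {gap:.0f}x")  # doubling gives 8B, IDF claims 32B
```

Simple doubling lands at 8 billion transistors at 22nm, a factor of 4 short of the 32 billion in the IDF slide, which is exactly the "missing link" in the numbers.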
 

syn1kk

Distinguished
May 19, 2006
113
0
18,680
From all my work and research in academia pertaining to transistor design and transistor leakage: most professors expect transistors to keep advancing for another 5 to 10 years.

BUT (and this is a big but) each new improvement in transistors is exponentially more expensive. So I predict traditional improvements through smaller transistors for 65nm, 45nm, and 32nm.

After 32nm, anything more advanced will be PROHIBITIVELY expensive to research and manufacture, assuming you even get a solid manufacturing process. They will not be stable: at such small sizes the transistor reacts unstably to small EM disturbances.

But you will get better transistors after 32nm... I just believe they will start hating transistors due to the HUGE cost to research and manufacture.

------

That is when quantum computing will seem CHEAP in comparison, and the transition will occur.

Or they will use organic computers.

Or they will keep using the same transistors but with async circuit designs instead. For all of you NOT in the know: all current processors are clocked circuits; async circuits do not have any clock to drive the components. Here's some stuff on that: http://en.wikipedia.org/wiki/Asynchronous_circuit

i wrote about it before:

Power will always increase as performance increases. It's that simple.

The reason this is true is that current technology requires more transistors to get higher performance or a higher clock.

---------------

What you can hope for is to have more efficient designs that don't waste as much power with sloppy brute force hardware design.

---------------

The only way to get lower power I know of with transistors is to use Asynchronous circuit design instead of Clocked/Sequential/Synchronous circuits. But that'll never happen in the desktop realm =P.

p.s. for those of you interested: http://en.wikipedia.org/wiki/Asynchronous_circuit . "An asynchronous circuit is a circuit in which the parts are largely autonomous. They are not governed by a clock circuit or global clock signal... Lower power consumption due to the fact that no transistor ever transitions unless it is performing useful computation (clock gating in synchronous designs is an imperfect approximation of this ideal)"

The original statement that I made, "Power will always increase as performance increases. It's that simple.", is a general rule much like Moore's law.

And much like Moore's law, you will see exceptions to it, like the Conroe. Like when a new processor comes out early and breaks Moore's law, or one comes out late and breaks Moore's law.

---------

Even with new architectures like Conroe, as performance increases they will eventually use more power. Take for example a Conroe clocked at 2.6GHz and another at 3.2GHz. The 3.2GHz will use more power than the 2.6GHz.

Then power consumption will keep increasing until Conroe is using as much power as the old NetBurst architecture. This will happen a couple of years down the road, but at that point Conroe will have better performance than NetBurst at the same power. It still follows that as performance increases, so does power, in this Conroe example.

--------
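The "higher clock means more power" point above follows from the standard dynamic-power relation for CMOS, P ≈ C·V²·f. A minimal sketch (the capacitance and voltage values here are made-up placeholders, just to show the trend, not real Conroe figures):

```python
# Dynamic switching power of a CMOS chip: P ~ C * V^2 * f.
# C_EFF and the 1.3V supply are placeholder values for illustration.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic (switching) power in watts."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (assumed)

p_26 = dynamic_power(C_EFF, 1.3, 2.6e9)
p_32 = dynamic_power(C_EFF, 1.3, 3.2e9)
print(f"2.6GHz: {p_26:.1f} W, 3.2GHz: {p_32:.1f} W")
```

Even holding voltage constant, the 3.2GHz part draws about 23% more power (3.2/2.6); in practice a higher clock usually also needs a higher voltage, and the V² term makes that much worse.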
 
G

Guest

Guest
Interesting, thanks. Also on Wikipedia (the rest, I'm pretty sure you know already) you can read about extreme ultraviolet and immersion lithography. They clearly state, just like you, that around 22nm the cost will probably go up a lot: the mirrors will have to be smooth at the atomic level, the masks in use will have to be much more complex, etc.

On the other hand, that's to be expected; these newer lithography processes are much more complex and much more evolved than what we have today...
 

m25

Distinguished
May 23, 2006
2,363
0
19,780
BTW - if anyone has read the book or is interested, pick up a copy of "Visions" by Michio Kaku; he explains a lot of ideas that could be possible in the future with computers, amongst other interesting topics.

Mike

Looks interesting, but nobody will put in a cent of investment until they have squeezed the last drop out of silicon.
 

m25

Distinguished
May 23, 2006
2,363
0
19,780
Interesting, thanks. Also on Wikipedia (the rest, I'm pretty sure you know already) you can read about extreme ultraviolet and immersion lithography. They clearly state, just like you, that around 22nm the cost will probably go up a lot: the mirrors will have to be smooth at the atomic level, the masks in use will have to be much more complex, etc.

On the other hand, that's to be expected; these newer lithography processes are much more complex and much more evolved than what we have today...

The real question is: will the 32 or 22nm chips withstand thermal and electrical stress well enough? Serious problems have been arising since 90nm and got worse at 65nm... The problem is with the structure itself, not the way it's manufactured.
 

syn1kk

Distinguished
May 19, 2006
113
0
18,680
You are also missing something here, even assuming they can solve the heat/electrical issues in a controlled test.

At this size the signals are easily affected by minor electromagnetic interference... which is even more of a problem.
 

RawStorm

Distinguished
May 17, 2006
9
0
18,510
I heard that someone was trying to make diamond CPUs that can resist extreme temperatures and can be clocked at much higher speeds than today's silicon processors.