
Very Embarrassing Question about nm's

March 1, 2006 1:56:21 AM

Can someone please explain the importance and definition of the nanometres in a CPU? All I know is that smaller is better.

I know this is a dumb question, but I don't know the answer and want to know it (instead of pretending). That is why I come here, to get my questions answered. I am embarrassed and I know I am an idiot.


March 1, 2006 2:00:43 AM

Quote:
Can someone please explain the importance and definition of the nanometres in a CPU? All I know is that smaller is better.

I know this is a dumb question, but I don't know the answer and want to know it (instead of pretending). That is why I come here, to get my questions answered. I am embarrassed and I know I am an idiot.


man, if people would just use Google...

and no, I won't answer your question, cuz I'll explain it weird and then go off-topic :) 

don't worry, a nice guy will come by in a few mins
March 1, 2006 2:19:09 AM

nm is a size rating (nanometre - one billionth of a metre), and every year or so Intel scales its transistors down (eg 90nm to 65nm) to pack more transistors into the same size chip. Each time it's scaled down it (should) lower heat output and power consumption, make production cheaper (depending), and so on, as well as allowing higher clock speeds.

Size examples - a P2 core (250nm Deschutes, no integrated cache even) is twice as big as a P3 core (180nm Coppermine, 256k cache), and the Coppermine has nearly twice the transistor count!

Heat and power examples - the 350nm P2 Klamath core (2.8V) put out 40+W of heat, whereas a P3 Tualatin running at 1400MHz with 512k cache puts out a mere 27W (130nm, 1.5V).

Intel and AMD sometimes name cores because of the nm sizes - the two P2 cores are essentially the same design, BUT they're 250nm and 350nm, named Deschutes and Klamath respectively.
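
To put rough numbers on the "pack more transistors into the same size chip" point: in idealized scaling, the area of a transistor shrinks with the square of the feature size. A minimal Python sketch (the roughly-2x-per-node figures are idealized ratios, not measured die data):

```python
# Idealized process-shrink sketch: transistor area scales with the square of
# the feature size, so each full node shrink packs roughly 2x the transistors
# into the same die area. These are idealized ratios, not measured die data.

def density_gain(old_nm: float, new_nm: float) -> float:
    """How many times more transistors fit per unit area after a shrink."""
    return (old_nm / new_nm) ** 2

for old, new in [(250, 180), (180, 130), (90, 65)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.1f}x transistors per unit area")
```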
March 1, 2006 2:19:22 AM

i'm not really a "nice guy", but i'm pretty sure nm has to do with how close the manufacturer can place the transistors together on the same piece of silicon, and therefore how much information can be passed through and "processed" in the processor.

i think 8)
March 1, 2006 3:13:18 AM

Quote:
Can someone please explain the importance and definition of the nanometres in a CPU? All I know is that smaller is better.

I know this is a dumb question, but I don't know the answer and want to know it (instead of pretending). That is why I come here, to get my questions answered. I am embarrassed and I know I am an idiot.


You're not an idiot. The only dumb questions are the ones that aren't asked. Don't worry about not knowing something and don't be upset if someone flames you for asking. For the most part, people that do that are those that have no self-esteem and little confidence. The only way they can feel better about themselves is to tear someone else down. Just ignore them.

Pengwin: sure, a google search might work, but sometimes you get too many hits to wade through and/or you may not understand the ones you get. There's no harm in asking a question.
March 1, 2006 3:49:17 AM

It's not a stupid question, it's just a term used to describe the size of the space between transistors on a processor. A nm (nanometer/nanometre depending on your regional spelling preference) is actually one billionth of a meter. Until recently this was measured in micrometers (when electronic components are concerned, the term micron is typically used in place of micrometer), but it's a little more straightforward nowadays to refer to a technology as "90 nm" as opposed to "0.09 micron."
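
The micron/nanometre conversion is just a factor of 1000 either way; a trivial sketch:

```python
# Unit conversion sketch: 1 nm = 1e-9 m and 1 micron = 1e-6 m,
# so 1 micron = 1000 nm. "90 nm" and "0.09 micron" are the same size.

NM_PER_MICRON = 1000

def nm_to_micron(nm: float) -> float:
    return nm / NM_PER_MICRON

def micron_to_nm(um: float) -> float:
    return um * NM_PER_MICRON

print(nm_to_micron(90))    # 0.09  (the 90nm process, in microns)
print(micron_to_nm(0.25))  # 250.0 (the old 0.25 micron process, in nm)
```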
March 1, 2006 4:35:36 AM

i thought it was the actual size of the transistors (which therefore makes a smaller, more efficient chip)?
March 1, 2006 6:12:57 AM

It's not the actual size of the chip, it's the diameter of the transistors embedded inside the die.
March 1, 2006 6:17:14 AM

In a CPU/chip, smaller is better because the chip can have more transistors, and there are millions of them in a single chip. So it's possible to cram more cores into a single die/chip, hence the dual-core and multi-core processors, like the quad cores that are coming out soon. Smaller also means shorter, so it means faster, and it also uses less power/voltage, so it generates less heat and makes it more efficient. :D 
March 1, 2006 6:19:12 AM

nm = 10 to the -9 metres.

that's a thousandth of a millionth.

Just to make that clear.
March 1, 2006 8:51:33 AM

i was reading some random hardware article and it said closer together, so there you go, learn somethin' new every day - it's both smaller and tighter packed
March 1, 2006 1:25:35 PM

hey peng, nice sig, where did you get it?
March 1, 2006 9:18:38 PM

dvdpiddy - your sig is gonna cause some serious siht man, either that or anyone with an Intel system won't listen to your advice LOL.
March 1, 2006 9:23:50 PM

Quote:
dvdpiddy - your sig is gonna cause some serious siht man, either that or anyone with an Intel system won't listen to your advice LOL.


i got an intel and i agree with him, i hate my intel

i made my sig, dvd
March 1, 2006 9:40:45 PM

really nice man really nice
March 1, 2006 10:10:14 PM

ty
March 1, 2006 11:42:27 PM

Hello!

Mind this:
Ingenuous [from the Latin ingenuus: honest, freeborn] is someone who doesn't know he/she doesn't know;
Ignorant is someone who knows he/she doesn't know;
Stupid is someone who knows what an Ingenuous and an Ignorant are, but never recognizes himself/herself as either.

Now, compare the above with this: a nanometre (nm) is a millionth of a millimetre (mm), about 100,000 times smaller than the diameter of an average human hair. The next step down is the picometre (pm), about 100,000,000x smaller than the diameter of a human hair; then the femtometre (fm), about 100,000,000,000x smaller than the diameter of a human hair... and so on.
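
That ladder of factors is just repeated division by 1000; a small sketch, assuming a 0.1mm (100,000nm) hair:

```python
# Metric scale-ladder sketch: each prefix step down (nano -> pico -> femto)
# is a factor of 1000. The 0.1 mm hair diameter is an assumed round figure.

HAIR_NM = 100_000  # assumed human hair diameter: 0.1 mm = 100,000 nm

for name, size_nm in [("nanometre", 1.0), ("picometre", 1e-3), ("femtometre", 1e-6)]:
    print(f"1 {name} is ~{HAIR_NM / size_nm:,.0f}x smaller than a hair")
```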

What were you before knowing all this?
What are you now that you know all this?
What will you be after knowing the previous answers? :D 


Cheers!
March 2, 2006 12:09:34 AM

Real simple. You don't need to know what nanometers mean. Unless you plan on being a chip engineer.
March 2, 2006 12:30:17 AM

Quote:
Can someone please explain the importance and definition of the nanometres in a CPU? All I know is that smaller is better.

I know this is a dumb question, but I don't know the answer and want to know it (instead of pretending). That is why I come here, to get my questions answered. I am embarrassed and I know I am an idiot.


Quote:
You may find this interesting :)  http://www.tweak3d.net/articles/howcpusaremade/


abeck_23: If you read the article at the link posted by Facey, you will see a diagram of a transistor. The nm measurement for transistors refers to the "minimum feature size", which is generally (but not always) the width of the area labelled "gate".

As others have said, the smaller the feature size, the more transistors you can pack into a circuit of a fixed size.

Hope this clears some things up!

-WBSpring
March 2, 2006 1:05:21 AM

does anyone know the theoretical limit to how small it can get? i heard that 20nm is where the electrons can't be contained anymore and begin to leak, thereby ending any performance gains
March 2, 2006 1:07:05 AM

I'm not sure if this is right, but here it goes. The smaller the nm, the less power leakage there is, and also smaller dies, so more CPUs on one wafer, which is more cost effective. Again I am not sure, don't quote me on it.
March 2, 2006 2:25:27 AM

I don't think anyone's calling you an idiot, but there are better ways to get the answer you're looking for. Like pengwin suggested, try using Google or Wikipedia. You'll get more accurate and unbiased answers that way. (in my opinion anyways)

;-)

-mpjesse
March 2, 2006 3:29:56 AM

> does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending any performance gains

I used to work in the dielectrics industry, but I'm no expert. If I remember correctly, 20nm was the approximate limit a few years ago when polymeric dielectrics were just beginning to replace vapor-deposited SiO2. There is an effort to make nanoporous polymeric dielectrics work in order to achieve a significantly lower dielectric constant - meaning that microcircuits can be made smaller, because if you use an insulator with a lower dielectric constant, then the conducting and semiconducting layers can be made thinner/narrower without excessive leakage or crosstalk.

Also: smaller means more transistors per unit area, as was mentioned earlier, but equally (or maybe even more) importantly, it means that the signal paths are shorter, which results in potentially higher processor speeds. This brings up the ultimate dilemma: cramming more circuits into smaller spaces leads to increased current density, and that means more heat production. For sure, we think we're getting into the ballpark of a theoretical limit, but I'm not betting the farm that there won't be new tech developed to help get past what we perceive to be a limit right now. What I saw when working in nanoporous dielectrics was difficulty in achieving consistent porosity in the polymer. Not sure where that technology has gone since I left the biz. I sure don't miss it!
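
For intuition on why a lower dielectric constant helps: to first order, the coupling between neighbouring conductors behaves like a parallel-plate capacitor, C = k * eps0 * A / d, so the crosstalk capacitance scales directly with k. A sketch with invented geometry and typical textbook k values:

```python
# Parallel-plate sketch: C = k * eps0 * A / d. Coupling between adjacent
# wires scales with the dielectric constant k, so a low-k (e.g. nanoporous)
# insulator cuts crosstalk at the same spacing. Geometry here is invented.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(k: float, area_m2: float, gap_m: float) -> float:
    return k * EPS0 * area_m2 / gap_m

c_sio2 = capacitance(k=3.9, area_m2=1e-12, gap_m=100e-9)  # dense SiO2, k ~ 3.9
c_lowk = capacitance(k=2.0, area_m2=1e-12, gap_m=100e-9)  # porous low-k, k ~ 2.0
print(f"coupling drops ~{c_sio2 / c_lowk:.1f}x with the low-k film")
```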
March 2, 2006 5:49:25 AM

Quote:
does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending and performance gains


Prescotts at 90nm already had leakage issues (actually all processors do, but it got worse with Prescott)
March 2, 2006 11:27:26 AM

Quote:
A human hair is said to be about 50 micrometers wide.


Humans with excellent eyesight can resolve features on the order of 50 to 75 micrometers in size with the unaided eye. Most of us are limited to unaided-eye resolutions on the order of 150 micrometers or more. Thus, common features on your CPU are approximately 2000 times smaller than most of us can see with the unaided eye. Using a good quality conventional light microscope, visible light can resolve features on the order of half of a micrometer (~500 nanometers). Therefore traces on AMD CPUs are on the order of 6 times smaller than can be seen with a good conventional light microscope. My personal favorite unit for length is the Angstrom, which is equal to 0.1 nanometers. Then again, there's something humorous to the sound of a country mile.
March 2, 2006 12:19:55 PM

We're reaching the molecular limits of silicon with these sizes; that's the major hurdle they're researching to overcome. When getting smaller, the leakage problems go up, since things are crammed closer together and it's a shorter distance for the electron to travel when leaking. But that's also why speeds go up: the travel distances are shorter. It's a tough thing to balance.

Here's a grossly oversimplified explanation of the "nm" term in the technology, based on a course I had on VLSI chip design in engineering school (back in the late 80's, but the concepts are still good):

Chips are designed on a square grid, so if you sat down to design a chip, you'd start with a piece of graph paper. The size of each square on the graph paper is the basic unit, measured in nanometers. So if you're talking 90nm technology, the basic unit of measure is 90nm, and the geometry is based on 1 unit = 90nm. Obviously, the smaller the unit, the more things you can cram into a given area. An incredible amount of R&D goes into deciding how to construct a transistor using that basic unit size. The companies that manufacture the silicon foundry equipment do all that R&D; that's why it costs billions to build a new fab or to change your geometry from, say, 90nm to 65nm. It's all brand new equipment to make a size change; you can't "upgrade" the foundry equipment to a smaller size.

The first 2-3 months of the course dealt with how they determine this based on the material science of silicon, microelectrical properties, etc. Frankly I was lost after two classes. 8O But after all the theory was worked out, you were then free to design using the set of rules devised for that size without having to do all the ugly math and analysis. It's kind of like using Lego blocks at that point: you don't worry about how to make a Lego brick, you just know how it fits with the other bricks. The www.tweak3d.net article shows how the layers and shapes of a transistor are made, but the geometry in nm is what determines the units used for the features of every part of the chip.

Chips today aren't designed at that low level anymore; the designers have libraries of functional blocks in the CAD programs they use. The intellectual property in those functional blocks is what gives Intel or AMD their advantages, and it took decades to build up that knowledge, just like a software company's intellectual property value is its library of code routines. AMD is a generation behind Intel on their geometry size, since Intel has a far bigger R&D budget and they started earlier. But AMD makes up for that disadvantage by designing more efficient processor structures and architecture. Intel bet the farm on the NetBurst architecture, and they've now learned it has limitations, and Intel isn't very nimble at making major changes in thought like AMD is. Sometimes smaller is better for a company. :) 

The other problem with getting smaller and smaller is referred to as "lithography". The features are made on the chip optically, using masks that cast shadows. Well, at the sizes we're at, the actual wavelength of light is too big to make accurate shadows, so they've gone to X-rays and ultraviolet lasers to make the "light". Producing ever-smaller features is becoming more difficult because it requires ever-smaller wavelengths to cast ever-smaller shadows.
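
The usual back-of-the-envelope for this is the Rayleigh criterion, minimum feature ~ k1 * wavelength / NA; a sketch with assumed k1 and NA values (typical textbook figures, not any particular fab's numbers):

```python
# Rayleigh-criterion sketch: minimum printable feature ~ k1 * wavelength / NA.
# k1 (process factor) and NA (numerical aperture) are assumed values here.

def min_feature_nm(wavelength_nm: float, k1: float = 0.4, na: float = 0.9) -> float:
    return k1 * wavelength_nm / na

for name, wl in [("g-line (visible)", 436), ("KrF deep-UV", 248), ("ArF deep-UV", 193)]:
    print(f"{name}, {wl}nm light -> ~{min_feature_nm(wl):.0f}nm features")
```

With these assumptions, visible light bottoms out around 190nm features, which is roughly why deep-UV sources were needed for the 90nm and 65nm generations.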
March 2, 2006 12:38:33 PM

Quote:
does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending any performance gains


Yes, indeed there's an unavoidable limit, a function of quantum fluctuations (Heisenberg's Uncertainty Principle) and inductive/capacitive crosstalk (due to the proximity of [very] tiny electronic "features"). Although technological down-scaling is approaching those limits (they're different for current leakage, inductive/capacitive coupling and quantum tunneling effects), there is still a lot of room for current and near-future materials (high-k dielectrics, metal gates, ...) and techniques (EUV, immersion lithography, ...) to go further down, below the 11nm node.

Then, newer materials/technologies will be available, such as 3D structures like molecular films, nanotubes and nano-robots (which, despite the 'nano' prefix, can be smaller than that...); even when the ultimate limit is reached (quantum tunneling effects), quantum computing, like spintronics and others, might take over and profit from those currently undesirable effects...

It will be, perhaps, the "Angstronic" era...


Cheers!
March 3, 2006 11:39:59 PM

I had read before about a concept known as probabilistic computing: when the transistors get small enough and the movement of electrons gets harder to predict and control, there could be a move to predicting the most likely path an electron would take on a chip. I would imagine that this would require a lot of redundancy, considering you'd be dealing with what would only "likely happen." I'm a film student and a PC enthusiast, not an engineer or a physicist, so this sort of thing is by no means within my realm of expertise. It's simply a curiosity; does anyone know anything about this?
March 4, 2006 2:01:32 AM

The basic idea is not new (Sun has already proposed something slightly similar, called "Proximity Communication Technology"):

http://research.sun.com/spotlight/MITSunTR100AwardRelease.pdf


Although the approach cannot be considered Quantum Computing, "Probabilistic Computing" [also] deals with quantum effects and requires probabilistic algorithms; the whole concept rests upon Complexity theories and... it's one more [computing] approach.

http://www.crest.gatech.edu/low-energy/palem-ea.pdf

Actually, I would rely upon more straightforward approaches, directed at the mainstream market and available in the short-to-mid term (5 to 10 years). Spintronics, Molecular, Nano and Quantum Computing seem, to me, far closer to "Desktop Computing" than anything based upon Complexity theories and probabilistic algorithms.

Then again, I might be wrong, of course.


Cheers!
March 4, 2006 2:14:28 AM

this is kind of going out on a limb here, but here goes.

I know they are trying to get nano robots going, but they are also exploring (and are actually pretty far along with) getting carbon, especially graphite, to build itself. They can get the individual atoms to arrange themselves in different patterns for structures; they can make very strong, ultralight building material. So if this is possible, it seems to me that the problem of electromagnetic wavelengths being too big for smaller transistors and light shadows could be easily erased if they could use this process with silicon, or some other material that they may be forced to switch to. Then again it might not be practical, and computers might just stop upgrading. 8O 8)
March 4, 2006 2:22:22 AM

WTF is with all the images in people's sigs shrinking!!?!?!??!
March 4, 2006 2:59:00 AM

Quote:
this is kind of going out on a limb here, but here goes.

I know they are trying to get nano robots going, but they are also exploring (and are actually pretty far along with) getting carbon, especially graphite, to build itself. They can get the individual atoms to arrange themselves in different patterns for structures; they can make very strong, ultralight building material. So if this is possible, it seems to me that the problem of electromagnetic wavelengths being too big for smaller transistors and light shadows could be easily erased if they could use this process with silicon, or some other material that they may be forced to switch to. Then again it might not be practical, and computers might just stop upgrading. 8O 8)


Faster processors, mobos, memory, HDs and all are cool. But you know what I'm more anxious for? I want software to catch up. For most people, the only time their computer is working hard is when they are playing a game on their PC. Most of the time, modern processors sit and wait to be told what to do. Software certainly has improved, but I want to see vastly improved software - excellent voice recognition, artificial intelligence and other self-learning capabilities.
March 4, 2006 8:09:27 AM

Quote:
does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending and performance gains


yeah, i thought i read that the atoms will start falling apart at about 15nm, and at 10nm you have real problems... something like that anyway
March 4, 2006 8:12:27 AM

Quote:
Faster processors, mobos, memory, HDs and all are cool. But you know what I'm more anxious for? I want software to catch up. For most people, the only time their computer is working hard is when they are playing a game on their PC. Most of the time, modern processors sit and wait to be told what to do. Software certainly has improved, but I want to see vastly improved software - excellent voice recognition, artificial intelligence and other self-learning capabilities.


go buy a mac. that's some real artificial intelligence for ya
March 4, 2006 9:48:35 AM

I think it's the transistor's drain-source channel width... and the space between the conductors. :?: not sure....
March 4, 2006 11:24:35 AM

Quote:
Faster processors, mobos, memory, HDs and all are cool. But you know what I'm more anxious for? I want software to catch up. For most people, the only time their computer is working hard is when they are playing a game on their PC. Most of the time, modern processors sit and wait to be told what to do. Software certainly has improved, but I want to see vastly improved software - excellent voice recognition, artificial intelligence and other self-learning capabilities.


go buy a mac. that's some real artificial intelligence for ya

I used to work exclusively with Macs, on the job and at home. They were great and were much faster than PCs for what I did (image editing). Then all local ISPs except AOL quit supporting Macs, so I switched to the Dark Side. This was about 10 years ago.

Come with me, Luke, I'm your FAATHA.

Now if you wanna see the real dark side (like the shade of poop), check out AOL.
March 4, 2006 12:45:05 PM

1nm = 1/1,000,000,000m (one billionth of a metre). The smaller the features on a chip, the more components can be built per unit area and hence the smaller the chip. The smaller the chip, the more one can get onto a silicon wafer (20 or 30cm) and hence the cheaper each chip. Also, the smaller each chip, the less wastage there is as a consequence of defects in the silicon lattice. Smaller chips can also operate at a higher frequency, because the distance between components on the chip is shorter and hence the time for a signal to travel is shorter. Smaller components also require less power and hence operate cooler. So, smaller is cheaper, faster and cooler - seems too good to be true but is!
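
The chips-per-wafer arithmetic can be made concrete with the standard textbook approximation (gross dies = wafer area over die area, minus an edge-loss term); the die areas below are invented for illustration:

```python
# Dies-per-wafer sketch using a common textbook approximation:
#   dies ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# (the second term approximates edge wastage). Die areas are made-up examples.

import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

for die_area_mm2 in (200, 100):  # a full-node shrink roughly halves die area
    print(f"{die_area_mm2} mm^2 die on a 300mm wafer: ~{dies_per_wafer(300, die_area_mm2)} dies")
```

Under these assumptions, halving the die area more than doubles the dies per wafer (~306 vs ~640), because the smaller die also wastes less area at the wafer's edge.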
March 4, 2006 1:01:05 PM

Quote:
So, smaller is cheaper, faster and cooler-seems too good to be true but is!


I think you're confusing reality with propaganda. Otherwise, why aren't the new-process CPUs gonna be cheaper than current 90nm tech? And if what you say is true, then there will be no need for liquid cooling, etc.? And eventually, if you carry your logic to its conclusion, infinitely fast CPUs will be free and generate zero heat, right?
March 4, 2006 1:58:34 PM

The main importance is one of economics - for Intel/AMD. The smaller the transistors and spacing, the smaller the resulting die. The smaller the die, the more dies can be produced for a given diameter wafer, and the less wasted area at the edge of the wafers.

Smaller topology transistors also require less bias current and therefore require less energy to operate and produce less heat from [junction] operation. This is one factor that contributes to high CPU frequencies.

Of course in the real world, there is also current leakage to deal with. If the basic production technology is the same, then the smaller topologies will tend to leak MORE around the junction. Going smaller in this case means less operating current but more leaked current; hopefully the former more than offsets the latter. The classic example where it did not was the infamous [early] Prescott, which saw its drop in operating current more than offset by an increase in leaked current. (Given that Intel continuously improves its manufacturing processes, I would not make any generalizations about their current 90nm offerings based on their initial ones.)

The larger the volume of a company, the greater the incentive to be on the bleeding edge of die shrink. A smaller company can afford to make the transition a half-cycle later, when the transition costs are lower.
March 4, 2006 2:35:32 PM

I have a paper from an industry journal you all might find interesting reading. It's about a 5MB PDF. abeck 23 has it too now. Email requests to:

noctemcarpe@xmission.com

I'll attach it in the reply
DC
March 4, 2006 3:22:37 PM

The way they taught it to us was with transmission line equations (the concept goes back to telegraph wires).
Electricity doesn't travel at an infinite speed, so when you get up into the GHz range the wavelengths get really, really small, and across a long enough line the clock signal at one end can be different than at the other end.

That is why they have to make the processor smaller and smaller, to make sure that doesn't happen.


Quote:
High-frequency transmission lines can be defined as transmission lines that are designed to carry electromagnetic waves whose wavelengths are comparable to the length of the line. Under these conditions, the approximations useful for calculations at lower frequencies are no longer accurate. This often occurs with radio, microwave and optical signals, and with the signals found in high-speed digital circuits.


I took a class on transmission lines, and it was really boring.

You can read more here if you like:

http://en.wikipedia.org/wiki/Transmission_line
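
To see when transmission-line effects start to matter, compare the signal wavelength, lambda = v / f, with the die size; a rough sketch assuming signals propagate on-chip at about half the speed of light (an assumption, not a spec):

```python
# Wavelength-vs-die-size sketch: lambda = v / f. Once the wavelength shrinks
# toward the die dimensions, the chip must be treated as a transmission line.
# The 0.5c on-chip propagation speed is an assumed round figure.

C_LIGHT = 3.0e8          # speed of light, m/s
V_CHIP = 0.5 * C_LIGHT   # assumed on-chip signal speed

for f_ghz in (1, 3, 10):
    wavelength_mm = V_CHIP / (f_ghz * 1e9) * 1000
    print(f"{f_ghz} GHz: wavelength ~ {wavelength_mm:.0f} mm (a die is ~10-15 mm across)")
```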
March 4, 2006 3:31:36 PM

Quote:
I know they are trying to get nano robots going, but they are also exploring (and are actually pretty far along with) getting carbon, especially graphite, to build itself. They can get the individual atoms to arrange themselves in different patterns for structures; they can make very strong, ultralight building material. So if this is possible, it seems to me that the problem of electromagnetic wavelengths being too big for smaller transistors and light shadows could be easily erased if they could use this process with silicon, or some other material that they may be forced to switch to. Then again it might not be practical, and computers might just stop upgrading.


The type of carbon you're presumably referring to is Carbon-60, usually known as fullerene (after the American architect and inventor Buckminster Fuller). Of the three known forms of 'natural' carbon (diamond, graphite and fullerene), it's from the last one, fullerene, that nanotubes are made. Near-term nanotechnologies will use this amazing, almost alien, form of carbon. Some have already been implemented.


See, for instance:

http://www.dailytech.com/article.aspx?newsid=1058


Cheers!
March 7, 2006 9:06:03 PM

Clue69Less wrote: "And eventually, if you carry your logic to its conclusion, infinitely fast CPUs will be free and generate zero heat, right?"

WRONG!! Matter is composed of atoms, so this process has its limits. Where this limit lies is a matter (no pun intended) of debate - if some people were to be believed, we should have hit the limit a long time ago, but we still keep going. No one ever claimed this process could be carried on ad infinitum, so don't be too literal in the interpretation of words.

Regarding CPU pricing: just because a company can produce a product for lower cost does not mean that these savings will be passed on to the consumer - SURPRISE, SURPRISE - welcome to the real world.

Liquid cooling is not NEEDED at present; it just happens to be something some companies are making high-margin products from!! If there is propaganda involved, it is surely here!! You can even cool your CPU with liquid nitrogen, IF YOU WISH, but there is no NEED to do so!
March 7, 2006 9:29:38 PM

Quote:
> does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending any performance gains

I used to work in the dielectrics industry, but I'm no expert. If I remember correctly, 20nm was the approximate limit a few years ago when polymeric dielectrics were just beginning to replace vapor-deposited SiO2. There is an effort to make nanoporous polymeric dielectrics work in order to achieve a significantly lower dielectric constant - meaning that microcircuits can be made smaller, because if you use an insulator with a lower dielectric constant, then the conducting and semiconducting layers can be made thinner/narrower without excessive leakage or crosstalk.

Also: smaller means more transistors per unit area, as was mentioned earlier, but equally (or maybe even more) importantly, it means that the signal paths are shorter, which results in potentially higher processor speeds. This brings up the ultimate dilemma: cramming more circuits into smaller spaces leads to increased current density, and that means more heat production. For sure, we think we're getting into the ballpark of a theoretical limit, but I'm not betting the farm that there won't be new tech developed to help get past what we perceive to be a limit right now. What I saw when working in nanoporous dielectrics was difficulty in achieving consistent porosity in the polymer. Not sure where that technology has gone since I left the biz. I sure don't miss it!


On the back end of the process, we use good ol' air as our dielectric. It's as good as you can get.

All you need to contain an electron is 1 atom (duh =) so really that 20nm wall is incorrect. Conroe's gate is only 5 atomic layers thick, and the whole length of the gate is actually only 54nm, not 65. 90, 65, 45, 23, and 9nm are just the process generations that were determined long ago by a consortium of IC manufacturers. The actual measurements will probably not adhere strictly to those numbers.
March 8, 2006 5:00:55 AM

Quote:
Clue69Less wrote: "And eventually, if you carry your logic to its conclusion, infinitely fast CPUs will be free and generate zero heat, right?"

WRONG!! Matter is composed of atoms, so this process has its limits. Where this limit lies is a matter (no pun intended) of debate - if some people were to be believed, we should have hit the limit a long time ago, but we still keep going. No one ever claimed this process could be carried on ad infinitum, so don't be too literal in the interpretation of words.

Regarding CPU pricing: just because a company can produce a product for lower cost does not mean that these savings will be passed on to the consumer - SURPRISE, SURPRISE - welcome to the real world.

Liquid cooling is not NEEDED at present; it just happens to be something some companies are making high-margin products from!! If there is propaganda involved, it is surely here!! You can even cool your CPU with liquid nitrogen, IF YOU WISH, but there is no NEED to do so!


You can find seedlings to grow your own sense of humor on aisle 3 of your local grocery store. Take a large cart.
March 8, 2006 5:12:01 AM

Quote:
Quote:
> does anyone know the theoretical limit to how small it can get, i heard that 20nm is where the electrons can't be contained anymore, and begin to leak, thereby ending any performance gains

I used to work in the dielectrics industry, but I'm no expert. If I remember correctly, 20nm was the approximate limit a few years ago when polymeric dielectrics were just beginning to replace vapor-deposited SiO2. There is an effort to make nanoporous polymeric dielectrics work in order to achieve a significantly lower dielectric constant - meaning that microcircuits can be made smaller, because if you use an insulator with a lower dielectric constant, then the conducting and semiconducting layers can be made thinner/narrower without excessive leakage or crosstalk.


On the back end of the process, we use good ol' air as our dielectric. It's as good as you can get.

All you need to contain an electron is 1 atom (duh =) so really that 20nm wall is incorrect. Conroe's gate is only 5 atomic layers thick, and the whole length of the gate is actually only 54nm, not 65. 90, 65, 45, 23, and 9nm are just the process generations that were determined long ago by a consortium of IC manufacturers. The actual measurements will probably not adhere strictly to those numbers.


I don't think you quite understood me. I was talking about 20nm being the limit in practice a few years ago. Not a theoretical limit, not a modern limit and not a hypothetical limit. What's the smallest trace width in CPU production today? Why aren't we down at 9 now? Yes, I know (some of) the answers.

1 atom contains an electron, you say? Do you think one atomic layer is sufficient to eliminate crosstalk? Not yet, by any means. You talk about "good ol' air" - look at what I said about nanoporous polymeric dielectrics - air is what's in the pores, and it's there intentionally. At one time, Intel dumped research on such materials, but I hear they are looking into it again. But chip makers don't magically suspend transistor parts in air - they need some substance.
March 8, 2006 8:20:40 PM

Quote:
On the back end of the process, we use good ol' air as our dielectric. It's as good as you can get.

All you need to contain an electron is 1 atom (duh =) so really that 20nm wall is incorrect. Conroe's gate is only 5 atomic layers thick, and the whole length of the gate is actually only 54nm, not 65.



Below a certain dimensional threshold, "air" begins to lose one of its famous properties, «It's as good as you can get», as a dielectric (i.e., it begins to interact beyond its intended purpose);

«All you need to contain an electron is 1 atom», although you can also contain it within magnetic fields (not relevant for the subject, anyway); and Conroe's gate being only «5 atomic layers thick» doesn't really mean much; for instance, a silicon atom (which is on the order of tenths of a nanometre in diameter) can expand its electronic cloud, if embedded in a Gallium Arsenide semiconductor crystal, up to a few nanometres in diameter (see "Quantum phenomena in nanoscale structures", by P. Hadley et al); and Intel uses silicon-germanium strained silicon...
Even the "54nm" is an approximation.


My point is not to bluntly amend what you've posted; it's merely a caution about the linear and assertive assumptions we all tend to make...


Cheers!