MIT Advances E-Beam Lithography for Chips
Researchers at MIT see e-beam lithography as a potential future candidate for semiconductor mass production.
It is unclear how far in the future: current immersion lithography has proven far more durable than the industry expected a decade ago, and companies such as Intel have so far been able to postpone the extremely expensive transition to extreme ultraviolet (EUV) lithography.
MIT researchers now believe that e-beam lithography, which is commonly used for prototyping and remains a slow, low-volume production process for semiconductors, could become an option for chip manufacturers, as the technology can be scaled down to structures of 9 nm. Traditional photolithography shines light through a mask to expose the entire surface of a chip layer at once; e-beam lithography instead scans a beam of electrons across the resist (the material that covers each layer of a chip) row by row.
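To illustrate the difference in how the two techniques write a pattern, here is a minimal sketch; the grid, the pattern, and the function names are invented for illustration and are not from the article.

```python
# Illustrative sketch (not from the article): contrasting mask-based
# photolithography, which exposes the whole field at once, with e-beam
# lithography, which raster-scans the resist row by row.

# A tiny "mask": 1 = expose resist, 0 = leave unexposed.
mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 1],
]

def photolithography(mask):
    """Transfer the entire mask to the resist in a single flood exposure."""
    return [row[:] for row in mask]  # one step, whole field at once

def ebeam(mask):
    """Write the same pattern by scanning the beam position by position."""
    resist = [[0] * len(mask[0]) for _ in mask]
    steps = 0
    for y, row in enumerate(mask):       # row-by-row scan
        for x, pixel in enumerate(row):  # position-by-position within the row
            if pixel:
                resist[y][x] = 1         # beam dwells here to expose
            steps += 1
    return resist, steps

print(photolithography(mask))
pattern, steps = ebeam(mask)
print(pattern, "written in", steps, "beam positions")
```

The row-by-row loop is why e-beam writing is slow: its cost grows with the number of beam positions, while a flood exposure transfers the whole mask in one step. That is also why running many beams in parallel matters so much.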
The MIT researchers said that they were able to increase the efficiency of e-beam lithography by using a thinner mask, which requires less energy per beam and allows more electron beams to run in parallel, accelerating production. They also said they used a common table salt solution to "develop the resist, hardening the regions that received slightly more electrons but not those that received slightly less."
There is doubt that the MIT approach will find its way into production. One manufacturer of lithography systems, Mapper, said that the presented system was "a little bit too sensitive."
Time will prove Mapper wrong. If you can't imagine something, that doesn't mean it's impossible.
How will using an E-beam help with that?
I'm guessing that it would make it more accurate and, if done right, make the process faster than it currently is.
How long does it take now to process a chip? And how much of that time is spent waiting on crap to finish?
The best chance the semiconductor industry has is to build transistors atom by atom.
Of course shrinking to smaller scales is a problem; if it weren't, we would have reached the theoretical transistor size limit a decade ago. The difficulty may escape our notice, however, because chip makers have been able to consistently whittle down feature sizes since the first CPU.
MIT has revisited and possibly improved on a lithography method that, although less expensive than Intel's method, has throughput too low to be practical for mass production. It has nothing to do with the transistor size limit; there is a whole lot more to chip fabrication than a single theoretical boundary the industry has not yet run into.
Leakage is the enemy, and it's going to be exponentially worse at future nodes.
I may be wrong, but I believe phatboe is talking about quantum tunneling. To keep it simple: the CPU is made up of many circuits with paths for electrons to flow, and those paths are divided by barriers to electron flow. When the barrier width becomes too small, it cannot contain the electrons very well; even if the barrier is really, really high, the electron can sometimes just appear on the other side. This results in increased current leakage, which I believe is supposed to become a major problem at 9 nm. I imagine this technology would not help with that problem. Anyone have insight on this?
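For a feel of why leakage blows up as barriers shrink, here is a rough sketch using the standard WKB estimate for tunneling through a rectangular barrier; the 1 eV barrier height is an assumed, illustrative value, not a real process parameter.

```python
import math

# WKB estimate of the tunneling probability through a rectangular
# barrier: T ~ exp(-2 * L * sqrt(2 * m * (V - E)) / hbar).
# The barrier height below is an assumed example value.

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31   # electron mass, kg
EV = 1.6021766e-19    # one electronvolt in joules

def tunneling_probability(width_nm, barrier_ev=1.0):
    """WKB transmission estimate for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for width in (9, 5, 3, 2, 1):
    print(f"{width} nm barrier: T ~ {tunneling_probability(width):.3e}")
```

Under these assumptions the probability climbs by many orders of magnitude for every nanometre the barrier loses, which is exactly the "exponentially worse" leakage mentioned above.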
How long has 0/1 switch processing been around?
I think analogue computing might come back into fashion at some point soon. Their are some pretty cool things it can do that digital (binary) can't.
But they improved it and made it more feasible for mass production.
Electrons are basically always tunneling through all matter; it's just a very predictable movement... usually.
So yeah, to get down to 9 nm we will face a choice between insanely large amounts of unused die space (9 nm lanes but 15~20 nm spacing in between) and insanely low voltage (0.5~0.8 V). Low voltage will make high clock speeds difficult; wasted space will increase price. Pick your poison.
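As a back-of-the-envelope check of that trade-off, here is a quick calculation using the commenter's numbers (9 nm lanes, 15~20 nm spacing); these are assumed figures, not measured process parameters.

```python
# How much of the die actually carries wires if 9 nm lanes are
# separated by 15-20 nm of spacing? (Numbers from the comment above.)

LANE_NM = 9

for spacing_nm in (15, 20):
    pitch = LANE_NM + spacing_nm  # one lane plus its gap
    fill = LANE_NM / pitch        # fraction of the pitch that is lane
    print(f"{spacing_nm} nm spacing: pitch {pitch} nm, "
          f"{fill:.0%} used, {1 - fill:.0%} unused")
```

With those figures, roughly 60~70% of each wiring pitch would sit unused.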
There is nothing an analogue computer can do that a binary one can't.
Any analogue value that has a margin of error (i.e., any analogue value) can be represented in binary: as long as you use enough bits, any real value that would round to the value represented by the binary encoding falls within said margin of error (see the sketch after this comment).
It is possible some calculations can be done faster in analogue, but you build up error with each operation, error that is usually removed by regularly converting to and from binary.
Also, not every wire in a binary chip has only two states; many have three (positive, negative, and floating).
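Here is a minimal sketch of the enough-bits argument referenced above; the analogue reading, range, and tolerance are made-up example values.

```python
# Quantize an "analogue" value to n bits and check whether the rounding
# error stays inside a given margin. All values here are illustrative.

def quantize(value, lo, hi, bits):
    """Round a value in [lo, hi] to the nearest of 2**bits levels."""
    levels = (1 << bits) - 1
    code = round((value - lo) / (hi - lo) * levels)  # the binary value
    return code, lo + code * (hi - lo) / levels      # and what it decodes to

analogue = 0.7371  # an arbitrary "analogue" reading in [0, 1]
margin = 1e-4      # assumed tolerance of the analogue value

for bits in (4, 8, 14, 16):
    code, decoded = quantize(analogue, 0.0, 1.0, bits)
    err = abs(decoded - analogue)
    verdict = "within margin" if err <= margin else "too coarse"
    print(f"{bits:2d} bits -> code {code}, error {err:.2e} ({verdict})")
```

The worst-case rounding error is half a step, roughly (hi - lo) / 2**(bits + 1), so for any stated margin there is always a finite bit count that gets under it.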
Um.......yeah.
The question is:
A. Their are some pretty cool things...
B. They're are some pretty cool things...
C. There are some pretty cool things...
It's a 5th grade English question. Any takers?