SRC and Stanford Enable Chip Pattern Etching for 14nm
In a project sponsored by the Semiconductor Research Corporation (SRC), researchers at Stanford University claim to have solved one of the major semiconductor manufacturing problems standing in the way of further scaling.
Stanford scientists successfully demonstrated a new directed self-assembly (DSA) process not just for regular test patterns, but for the irregular patterns required to manufacture smaller semiconductors. It was the first time this next-generation process was applied to contact hole patterns at 22 nm, and the scientists claim the technique will enable pattern etching for next-generation chips down to 14 nm.
"This is the first time that the critical contact holes have been placed with DSA for standard cell libraries of VLSI chips. The result is a composed pattern of real circuits, not just test structures," said Philip Wong, the lead researcher at Stanford for the SRC-guided research. "This irregular solution for DSA also allows you to heal imperfections in the pattern and maintain higher resolution and finer features on the wafer than by any other viable alternative."
The research group also noted that the process is much more environmentally friendly, as a "healthier" solvent - propylene glycol monomethyl ether acetate (PGMEA) - is used for the coating and etching process.
Leveraging the new DSA process, the researchers manufactured chips by covering a wafer surface with a block copolymer film and using "common" lithographic techniques to carve structures into the wafer surface, creating a pattern of irregularly placed "indentations." These indentations are used as templates "to guide movement of molecules of the block copolymer into self-assembled configurations." According to the researchers, these templates can be modified in shape and size, which enables the distance between holes to be reduced beyond what current techniques allow.

The problem is the localized heat on the chips, which can't be dissipated.
1. Lower power consumption --> cheaper operation, lower operating heat.
2. Less material needed for production --> cheaper products.
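The power argument can be sketched with the classic first-order CMOS dynamic-power relation, P ≈ C·V²·f. The numbers below are purely illustrative assumptions, not measured chip data - the point is that a die shrink lowers both switched capacitance and supply voltage, and voltage enters squared:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# All values below are illustrative assumptions, not real chip specs.
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Estimate switching power in watts from capacitance (F),
    supply voltage (V), and clock frequency (Hz)."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical "old" vs "shrunk" node at the same 3 GHz clock:
# the shrink reduces switched capacitance and allows a lower voltage.
p_old = dynamic_power(1.0e-9, 1.2, 3.0e9)  # 4.32 W
p_new = dynamic_power(0.7e-9, 1.0, 3.0e9)  # 2.10 W
print(f"old node: {p_old:.2f} W, shrunk node: {p_new:.2f} W")
```

Because voltage is squared, even a modest drop (1.2 V to 1.0 V here) accounts for most of the saving, which is why shrinks cut both operating cost and heat.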
It's basically a no-brainer to choose the smaller die.
1) To compete with ARM on the low end, Intel NEEDS smaller parts in order to lower material, heat, and battery costs. Sure, they could cram a lot more transistors into a CPU on 32nm before hitting heat issues, but someone needs to pay for the cost of developing the small-node tech for Atom and the other extremely low-power CPU lines that compete with ARM. Intel decided long ago that desktop CPUs would pave the way, because desktop users don't mind paying more for the product, while devices that use Atom parts are extremely price sensitive. I mean, imagine how power efficient a 22nm Atom would be. On the 32nm process they are down to 3.5W TDP, and they run well below that under a normal load. But they are not on 22nm, because it is cheaper to build them on the old fabs.
2) More cores do not help 90+% of the people who use a computer. Two cores are enough for web browsing and media consumption (hell, you can even game pretty decently on a dual core). Consumer applications tend to use only 1-2 cores, and even heavy applications have a hard time using more than 4. If you need more than 4 cores, there are other solutions (SB-E, Xeon) that offer many more cores, plus dual-CPU configurations (I think the new Xeons can even do quad-socket). So if you need more cores, solutions exist - but all the cores in the world won't help you one bit until software takes advantage of them, so other approaches must be found.
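The "software has to take advantage of it" point is just Amdahl's law: speedup from n cores is capped by the fraction of the program that is serial. A minimal sketch, using illustrative parallel fractions rather than measurements of any real application:

```python
# Amdahl's law: overall speedup on n cores is limited by the serial part.
def amdahl_speedup(parallel_fraction, cores):
    """Speedup for a workload where `parallel_fraction` of the work
    scales perfectly across `cores` and the rest stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A workload that is only 50% parallelizable barely benefits past 2 cores:
print(amdahl_speedup(0.50, 2))   # ~1.33x
print(amdahl_speedup(0.50, 16))  # ~1.88x
# A highly parallel workload (95%) keeps scaling much longer:
print(amdahl_speedup(0.95, 16))  # ~9.14x
```

This is why piling on cores does little for typical desktop software: until the serial fraction shrinks, 16 cores deliver nowhere near 16x.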
3) It is cheaper and easier to shrink the die than it is to modify the instruction set (though that is always happening as well). Once we hit the 8-12nm wall of CPU die shrinks, we will begin to see major changes to code, to how code is processed, and a complete revolution in the x86 architecture and instruction set. We will also begin to see 3D/stacked CPU designs and other more creative approaches to streamlining things. But we are still several years away from that.
1nm is when it kicks in
Your cluelessness about computers and semiconductors is quite apparent. Start doing your homework on the subject before you make such an unintelligent statement.
Very quick (layman's terms) breakdown: smaller lithography (production) of semiconductors (computer chips) allows for lower power consumption. This leads to better battery life in mobile products and less heat production, and allows more transistors (on/off switches), which enables faster calculations per second. On the economy-of-scale side, you can cram more chips onto a single wafer, which allows for cheaper production and higher product yield, all lowering the cost for the consumer.
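The "more chips per wafer" part is easy to quantify with a common first-order dies-per-wafer approximation. The formula and numbers here are a rough sketch - it ignores defect density, scribe lines, and edge exclusion - but it shows how halving die area roughly doubles the candidate dies from one wafer:

```python
import math

# Common first-order dies-per-wafer approximation (illustrative only;
# ignores defects, scribe lines, and edge exclusion):
#   dies ~= pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A)
# where d = wafer diameter and A = die area.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    radius = wafer_diameter_mm / 2.0
    gross = math.pi * radius ** 2 / die_area_mm2          # area ratio
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

# Shrinking a hypothetical 100 mm^2 die to 50 mm^2 on a 300 mm wafer:
print(dies_per_wafer(300, 100))  # 640
print(dies_per_wafer(300, 50))   # 1319
```

More than twice the dies per wafer at half the area (the edge-loss term shrinks too), which is the economy-of-scale effect driving per-chip cost down.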
The temperature of a 16-core, 24MB-cache 32nm chip would run so high, given the voltage requirements, that the clock speed would have to be dropped so far it would make things worse. And programmers are so lazy about parallelism that I'm not going to hold my breath waiting for them to program for that many cores.