
SRC and Stanford Enable Chip Pattern Etching for 14nm

Source: SRC | 16 comments

In a project sponsored by the Semiconductor Research Corporation (SRC), researchers at Stanford University claim to have solved one of the major semiconductor manufacturing problems standing in the way of further scaling.

Stanford scientists successfully demonstrated a new directed self-assembly (DSA) process not just for regular test patterns, but for the irregular patterns required to manufacture smaller semiconductors. It was the first time this next-generation process was used to create contact hole patterns at 22 nm, and the scientists claim the technique will enable pattern etching for next-generation chips down to 14 nm.

"This is the first time that the critical contact holes have been placed with DSA for standard cell libraries of VLSI chips. The result is a composed pattern of real circuits, not just test structures," said Philip Wong, the lead researcher at Stanford for the SRC-guided research. "This irregular solution for DSA also allows you to heal imperfections in the pattern and maintain higher resolution and finer features on the wafer than by any other viable alternative."

The research group also noted that the process is much more environmentally friendly, as a "healthier" solvent - propylene glycol monomethyl ether acetate (PGMEA) - is used for the coating and etching process.

Leveraging the new DSA process, the researchers manufactured chips by covering a wafer surface with a block copolymer film and using "common" lithographic techniques to carve structures into the wafer surface, creating a pattern of irregularly placed "indentations." These indentations are used as templates "to guide movement of molecules of the block copolymer into self-assembled configurations." According to the researchers, these templates can be modified in shape and size, which enables the distance between holes to be reduced further than current techniques allow.

  • Anonymous, May 29, 2012 3:16 PM
    Hmmm... what do IBM and Intel think? They are at the forefront of 14nm.
  • IndignantSkeptic, May 29, 2012 3:57 PM
    It's just amazing how many times scientists can keep Moore's law going. It helps make me think that Dr. Aubrey de Grey may be correct about the unbelievable future of biotechnology.
  • Anonymous, May 29, 2012 3:59 PM
    IClass - the world is moving to smaller components due to smaller form factors and mobility. If everyone still did all their computing work on desktops, then I would totally agree with you. A well-done 32 nm process in a chassis with sufficient cooling would do well for a powerhouse system (at least 8 cores, dual graphics, etc.). To get that level of performance out of a mobile platform, they will need to go to 14nm. I have no problem with that, as long as they also build a 14nm system with 32-64 cores running at 5+ GHz :) .
  • bak0n, May 29, 2012 4:02 PM
    Well then hurry and get me my 14nm GPU that doesn't require an extra power connector!!!111221!
  • robot_army, May 29, 2012 4:07 PM
    IClass - smaller dies mean companies can add more transistors, for either additional cores or higher performance per clock. This allows companies to maintain profit margins and price points; more silicon would mean more cost to consumers!
  • zzz_b, May 29, 2012 4:13 PM
    @IClassStriker
    The problem is the localized heat on the chips, which can't be dissipated.
  • mpioca, May 29, 2012 4:15 PM
    IClassStriker: "I don't get it, they are making smaller and smaller chips even though they could improve on the current ones. I would be fine if they still make 32nm die, but add more cores and higher clock speeds. Most computers have sufficient cooling anyways."

    1. Lower power consumption --> cheaper operation, lower operating heat.
    2. Less material needed for production --> cheaper products.

    It's basically a no-brainer to choose the smaller die.
  • CaedenV, May 29, 2012 6:08 PM
    IClassStriker: "I don't get it, they are making smaller and smaller chips even though they could improve on the current ones. I would be fine if they still make 32nm die, but add more cores and higher clock speeds. Most computers have sufficient cooling anyways."

    1) To compete with ARM on the low end, Intel NEEDS smaller parts in order to lower material, heat, and battery costs. Sure, they could cram a lot more transistors into a CPU on 32nm before having heat issues, but someone needs to pay for the cost of developing the small tech for Atom and other extremely low-power CPU technologies that compete with ARM, and Intel decided long ago that desktop CPUs will pave the way, because desktop users do not mind paying more for the product, while devices that use Atom products are extremely price sensitive. I mean, imagine how power efficient a 22nm Atom would be. On a 32nm process they are down to 3.5W TDP, and they operate much lower than that under a normal load. But they are not on 22nm, because it is cheaper to make them on the old fabs.
    2) More cores do not help 90+% of the people who use a computer. Two cores are enough for web browsing and media consumption (hell, you can even game pretty decently on a dual core). Consumer applications tend to only use 1-2 cores, and heavy applications have a hard time using more than 4. If you need more than 4 cores, then there are other solutions (SB-E, Xeon) which can bring you many more cores, and dual-CPU configurations (I think the new Xeon CPUs can even do quad configurations). So if you need more cores, there are solutions for you, but all the cores in the world are not going to help you one bit until software takes advantage of them, so other solutions must be found.
    3) It is cheaper and easier to shrink the die than it is to modify the instruction set (though that is always happening as well). Once we hit the 8-12nm wall of CPU die shrinks, we will begin to see major changes to code, how code is processed, and a complete revolution of the x86 architecture and instructions. We will also begin to see 3D/stacked CPU designs and other more creative approaches to getting things more streamlined. But we are still several years away from that.
  • IndignantSkeptic, May 29, 2012 7:05 PM
    Where does graphene fit in this?
  • quangluu96, May 30, 2012 12:21 AM
    IndignantSkeptic: "Where does graphene fit in this?"

    1nm is when it kicks in :) 
  • ojas, May 30, 2012 4:02 AM
    Lol Intel's already building 14nm fabs...
  • Uberragen21, May 30, 2012 5:02 AM
    IClassStriker: "I don't get it, they are making smaller and smaller chips even though they could improve on the current ones. I would be fine if they still make 32nm die, but add more cores and higher clock speeds. Most computers have sufficient cooling anyways."

    Your cluelessness about computers and semiconductors is quite apparent. Start doing your homework on the subject before you make such an unintelligent statement.

    Very quick (layman's terms) breakdown: smaller lithography (production) of semiconductors (computer chips) allows for lower power consumption. This leads to better battery life in mobile products and less heat production, and allows more transistors (on/off switches), which allows for faster calculations per second. From the economy-of-scale aspect, you have the ability to cram more chips onto a single wafer, which allows for cheaper production and higher product yield, all lowering the cost for the consumer.
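The economy-of-scale point above can be sketched with a rough back-of-the-envelope calculation. The die sizes, the 300 mm wafer, and the classic edge-loss approximation used here are illustrative assumptions, not figures from the article:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_side_mm):
    """Rough upper-bound estimate of candidate dies on a round wafer.

    Uses a common approximation that subtracts partial dies lost along
    the wafer edge; real yields also depend on defect density, scribe
    lines, and edge exclusion, all ignored here.
    """
    die_area = die_side_mm ** 2
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Area term minus an edge-loss term.
    return int(wafer_area / die_area
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area))

# Hypothetical numbers: shrinking a 20 mm x 20 mm die to 14 mm x 14 mm
# (roughly a full-node shrink in linear dimensions) on a 300 mm wafer.
old = dies_per_wafer(300, 20)  # -> 143 candidate dies
new = dies_per_wafer(300, 14)  # -> 313 candidate dies
print(old, new)
```

With these made-up dimensions, the linear shrink more than doubles the number of candidate dies per wafer, which is the "cram more chips on a single wafer" effect the comment describes.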
  • gsxrme, July 31, 2012 3:20 PM
    IClassStriker: "I don't get it, they are making smaller and smaller chips even though they could improve on the current ones. I would be fine if they still make 32nm die, but add more cores and higher clock speeds. Most computers have sufficient cooling anyways."

    The temperature of a 16-core, 24MB-cache 32nm chip would be so high, because of the voltage requirements, that the clock speed would have to be so low it would make things worse. Programmers are so lazy that I'm not going to hold my breath waiting for them to write software that uses all those cores.
  • gsxrme, July 31, 2012 3:23 PM
    I'll take my 5.1GHz quad core over any 6 or 8 core running at or below 4GHz any day. I really want to see memory catch up; I want to see 6GHz DDR4/DDR5 chips now.
  • gsxrme, July 31, 2012 3:24 PM
    lower nm = lower voltage = lower heat = higher clock speeds
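The chain in the comment above follows from the textbook first-order model of dynamic switching power in CMOS logic, P = C * V^2 * f: lowering the supply voltage cuts power quadratically, leaving thermal headroom for higher clocks. The capacitance, voltages, and clock below are illustrative numbers, not measurements of any real chip:

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """First-order dynamic switching power of CMOS logic: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative (not measured) values: same effective switched capacitance,
# dropping the supply from 1.2 V to 1.0 V at a fixed 3 GHz clock.
p_old = dynamic_power(1e-9, 1.2, 3e9)  # 4.32 W
p_new = dynamic_power(1e-9, 1.0, 3e9)  # 3.00 W
print(p_new / p_old)  # ~0.69, i.e. ~31% less switching power from voltage alone
```

Because voltage enters squared, even a modest supply-voltage drop from a process shrink frees a disproportionate amount of power budget, which can then be spent on clock speed.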