Imec Reveals Sub-1nm Transistor Roadmap, 3D-Stacked CMOS 2.0 Plans

Imec, the world's most advanced semiconductor research institute, recently shared its sub-1nm silicon and transistor roadmap at its ITF World event in Antwerp, Belgium. The roadmap lays out the timelines through 2036 for the next major process nodes and transistor architectures the company will research and develop in its labs in cooperation with industry giants such as TSMC, Intel, Nvidia, AMD, Samsung, and ASML, among many others. The company also outlined a shift to what it dubs CMOS 2.0, which involves breaking down the functional units of a chip, like L1 and L2 caches, into 3D designs that are more advanced than today's chiplet-based approaches.

As a reminder, ten angstroms equal 1nm, so Imec's roadmap extends into sub-'1nm' process nodes. It shows standard FinFET transistors lasting through 3nm, then giving way to the new Gate-All-Around (GAA) nanosheet designs that will enter high-volume production in 2024. Imec charts the course to forksheet designs at 2nm and A7 (0.7nm), followed by breakthrough designs like CFETs and atomic channels at A5 and A2.
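To make the node labels concrete, here is a tiny helper that applies the ten-angstroms-per-nanometer conversion (the function name and label format are purely illustrative, not imec nomenclature tooling):

```python
def node_to_nm(label: str) -> float:
    """Convert an angstrom-era node label like 'A7' to nanometers.

    10 angstroms = 1 nm, so A7 is 0.7nm and A2 is 0.2nm.
    """
    if label.upper().startswith("A") and label[1:].isdigit():
        return int(label[1:]) / 10
    raise ValueError(f"unrecognized node label: {label!r}")

for node in ("A14", "A10", "A7", "A5", "A2"):
    print(f"{node} = {node_to_nm(node)} nm")
```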

(Image credit: imec)
Paul Alcorn
Editor-in-Chief

Paul Alcorn is the Editor-in-Chief for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.

  • InvalidError
    If PCB physics hold up at the nano-scale, signal layers will still require power/ground planes to carry high-speed return currents and mitigate crosstalk even if the bulk of power is distributed on the backside.
  • bit_user
    I'm just wondering if/when we'll reach a point that chips will wear out after months of intensive use, rather than years or decades. Either that, or we could see ECC and other redundancies starting to eat into some of the gains made by further density & efficiency improvements.

    This chart really needs an update through 2022, to include Zen 3, Zen 4, Sunny Cove, and Golden Cove. Not to mention Neoverse N1 and V1. Plus, they ought to clarify whether they're talking about server CPUs (which I assume).

    Also, I'd like to see projections of how many transistors per $, since the increasing cost of new nodes could ultimately be the limiting factor on chip complexity.
  • elforeign
    Thank you for this update, it's very interesting to see where the industry is headed in terms of design and the innovation therein.

    I've been following the industry closely since the CELL Processor, which is what initially inspired me to learn more about semiconductor production and design.

    I remember some of the early-2000s industry roadmaps estimating 1nm designs around 2020, and here we are, still marching towards that milestone.
  • elforeign
    bit_user said:
    I'm just wondering if/when we'll reach a point that chips will wear out after months of intensive use, rather than years or decades. Either that, or we could see ECC and other redundancies starting to eat into some of the gains made by further density & efficiency improvements.

    Also, I'd like to see projections of how many transistors per $, since the increasing cost of new nodes could ultimately be the limiting factor on chip complexity.
    I ran various Intel/AMD chips 24/7 at 100% utilization, overclocked, for years on end on BOINC. No chips ever failed or caused issues. I'm convinced it's pretty hard to "wear a chip out" unless the user really doesn't know what they're doing, or in those niche cases of overclockers pushing the physical limits of the chip design with exotic cooling.
  • InvalidError
    bit_user said:
    Also, I'd like to see projections of how many transistors per $, since the increasing cost of new nodes could ultimately be the limiting factor on chip complexity.
    With stacked transistor layers, I think we may see a revival of Moore's law as applied to transistors per dollar, though this may come at the expense of lower voltages and clocks to keep thermal density in check.
  • bit_user
    elforeign said:
    I ran various Intel/AMD chips 24/7 at 100% utilization, overclocked, for years on end on BOINC. No chips ever failed or caused issues.
    That's backwards-looking. If you look at the roadmaps in this article, they're talking about shrinking down to atomic structures. I think you can't assume CPUs and GPUs will always be so resilient. These chips could become much more of a "consumable resource" than how we're used to thinking of them.

    Just look at what's happened with NAND and now even DRAM! The higher the density gets, the more dependent they're becoming on error-correcting technologies to make them work reliably.
  • InvalidError
    bit_user said:
    That's backwards-looking. If you look at the roadmaps in this article, they're talking about shrinking down to atomic structures. I think you can't assume CPUs and GPUs will always be so resilient.
    I wouldn't worry too much about the wear aspect of it: as stuff gets smaller, voltages have to go lower to keep both conductive and electron-tunnelling leakage in check. The extreme purity requirements for all materials, and the process accuracy required, could become a challenge for manufacturing yields once you reach the point where you basically cannot afford to have any atoms out of place, or even the wrong isotope of a given element.

    If you meant resilience in terms of how easily bits can be flipped by radiation, that could certainly get problematic as activation energies get smaller. This could definitely dictate the practical limit of how small transistors can get in stuff where random errors aren't tolerable.
  • bit_user
    InvalidError said:
    I wouldn't worry too much about the wear aspect of it: as stuff gets smaller, voltages have to go lower to keep both conductive and electron tunnelling leakage in check.
    The article said they haven't been able to go below 0.7 V.

    InvalidError said:
    If you meant resilience in terms of how easily bits can be flipped by radiation, that could certainly get problematic as activation energies get smaller. This could definitely dictate the practical limit of how small transistors can get in stuff where random errors aren't tolerable.
    An old boss of mine once did a research project for a government lab, to design radiation-resistant logic. Basically, I think it included some form of multi-bit ECC at every stage of computation, and would repeat the operation as many times as necessary until the check succeeded. This made computation non-deterministic in time, but at least you eventually got the right answer.
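    That retry-until-verified idea can be sketched roughly like this (a toy model: simple parity stands in for the multi-bit ECC, and the single-bit-flip fault model and all names are invented for illustration):

    ```python
    import random

    def parity(value: int) -> int:
        """Toy check: even/odd parity of an 8-bit value (a stand-in for
        the multi-bit ECC the real design would have used)."""
        return bin(value & 0xFF).count("1") % 2

    def faulty_add(a: int, b: int, flip_chance: float = 0.3) -> tuple[int, int]:
        """Compute a + b alongside its check bits, but occasionally corrupt
        the result with a single bit flip (a crude radiation model)."""
        result = a + b
        check = parity(result)  # check bits derived before any corruption
        if random.random() < flip_chance:
            result ^= 1 << random.randrange(8)  # flip one random bit
        return result, check

    def checked_add(a: int, b: int) -> int:
        """Repeat the operation until the result matches its check bits.
        Non-deterministic in time, but eventually returns the right
        answer, since any single bit flip changes the parity."""
        while True:
            result, check = faulty_add(a, b)
            if parity(result) == check:
                return result
    ```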
  • Amdlova
    Don't worry about wear, some AMD chips do that now. My poor 5700G died after seven months (well, it didn't fully die, it just became unstable and kept locking up Windows). Maybe the new generations will wear out faster than before :)
  • bit_user
    Amdlova said:
    Don't worry about wear, some AMD chips do that now. My poor 5700G died after seven months (well, it didn't fully die, it just became unstable and kept locking up Windows).
    Somehow, I think you could manage to break just about anything with the letters "AMD" written on it.