AMD Explains Advantages of High Density (Thin) Libraries

We have already seen the performance-per-watt improvements that "Steamroller" will offer through its design changes. In addition to those, AMD will be using the "dense" or "thin" libraries employed by its GPU design teams, but now for CPU implementation.

AMD told us that products currently shipping at 32nm use a combination of automated place-and-route and hand-placed semi-custom design (top plot), which reduces power and area somewhat. To deliver more power-efficient computation, AMD has employed a high-density cell library that reduces area and power by 30 percent (bottom plot). The result is a more portable, energy-efficient CPU core built with industry-standard design methodologies well suited to a foundry model. According to AMD, these improvements yield 15 to 30 percent lower energy per operation in power-constrained designs, a gain comparable to a full process node improvement.
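
As a rough illustration of the claimed numbers, the sketch below applies a 30 percent area/power reduction and a 15 to 30 percent energy-per-operation reduction to a hypothetical 32nm-class baseline. The baseline values are assumptions for illustration only, not AMD figures.

```python
# Back-of-the-envelope sketch of AMD's claimed high-density library savings.
# NOTE: the baseline numbers below are hypothetical placeholders, not AMD data.

baseline_area_mm2 = 30.0    # assumed area of a 32nm semi-custom core
baseline_power_w = 20.0     # assumed core power at a fixed frequency
baseline_energy_pj = 100.0  # assumed energy per operation

DENSITY_SAVINGS = 0.30         # ~30% area and power reduction (claimed)
ENERGY_SAVINGS = (0.15, 0.30)  # 15-30% lower energy per operation (claimed)

dense_area = baseline_area_mm2 * (1 - DENSITY_SAVINGS)
dense_power = baseline_power_w * (1 - DENSITY_SAVINGS)
energy_low = baseline_energy_pj * (1 - ENERGY_SAVINGS[1])   # best case
energy_high = baseline_energy_pj * (1 - ENERGY_SAVINGS[0])  # worst case

print(f"Area:      {baseline_area_mm2:.1f} mm^2 -> {dense_area:.1f} mm^2")
print(f"Power:     {baseline_power_w:.1f} W    -> {dense_power:.1f} W")
print(f"Energy/op: {baseline_energy_pj:.0f} pJ -> {energy_low:.0f}-{energy_high:.0f} pJ")
```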

Look for more details from AMD on Surround Computing and Steamroller during the Hot Chips symposium.


  • wiyosaya
    In some respects, this sounds like the programmable gate array concept. It is interesting to see this adapted to non-programmable chip design.

    I am somewhat surprised, though, that this implies that such optimization was never before computerized. I would be really surprised if there were no computer optimization of chip layouts before this.

    So, is this just AMD's marketing engine at the helm again?
  • madooo12
    Read on AnandTech that they will only be used in Excavator.
  • Ragnar-Kon
    So more logic in a smaller area. Basically what chip designers have been doing since ICs were first invented. Nothing new...

    EDIT: My bad, more logic in a smaller area without a die shrink. So essentially just housecleaning on current libraries. Still clever marketing. It is a die shrink, myyyy baaddddd. Doesn't seem anything like the 3-D transistors used in Intel's 22nm process though. Not that that is necessarily a bad thing, I just thought it was similar to that originally.
  • ikefu
    It's also lower power consumption without a die shrink, which means more thermal headroom to up frequencies. Add a die shrink on top of this and you suddenly get LOTS more headroom.

    So no, not just good marketing. But I am confused why this didn't happen already.

    Doesn't fix their instruction per clock efficiency problem, but it will help increase CPU frequencies to cover for it while they work on that problem.
  • Shin-san
    Okay, what about the performance?!

    ikefu said: "It's also lower power consumption without a die shrink, which means more thermal headroom to up frequencies. Add a die shrink on top of this and you suddenly get LOTS more headroom. So no, not just good marketing. But I am confused why this didn't happen already. Doesn't fix their instruction per clock efficiency problem, but it will help increase CPU frequencies to cover for it while they work on that problem."

    I'm thinking that they are going for raw clocks.
  • acadia11
    What you talking bout Willis?
  • acadia11
    Ragnar-Kon said: "So more logic in a smaller area. Basically what chip designers have been doing since ICs were first invented. Nothing new... move along. But yeah... marketing at its finest (or worst?). EDIT: My bad, more logic in a smaller area without a die shrink. So essentially just housecleaning on current libraries. Still clever marketing."

    I thought it was going to be used in an asphalt paver?

    Ok, I just made that name up; there is no "asphalt paver" chip.
  • blazorthon
    This could be used in place of a die shrink or at least with a minor die shrink rather than a major die shrink. That's quite something even if it won't be used until Excavator.
  • pharoahhalfdead
    "Our next gen cpu's will feature..." coming soon... at the end of 2013.
  • dusk007
    I don't get AMD's focus on raw high clocks anyway.
    Today's CPUs are constrained by heat and power before the maximum clock is hit.
    That is like building an aircraft turbine that works well at Mach 2 while the airframe, efficiency requirements, and noise regulations won't let the plane go past 950 km/h anyway.