AMD Explains Advantages of High Density (Thin) Libraries

We have already seen the performance-per-watt improvements that "Steamroller" will offer through its design changes. On top of those, AMD will apply the high-density (or "thin") cell libraries employed by its GPU design teams to CPU implementation.

Look for more details from AMD during the Hot Chips Symposium on its Surround Computing initiative and Steamroller.


  • wiyosaya
    In some respects, this sounds like the programmable gate array concept. It is interesting to see this adapted to non-programmable chip design.

    I am somewhat surprised, though, that this implies that such optimization was never before computerized. I would be really surprised if there were no computer optimization of chip layouts before this.

So, is this just AMD's marketing engine at the helm again?
    Reply
  • madooo12
Read on AnandTech that they will only be used in Excavator.
    Reply
  • Ragnar-Kon
    So more logic in a smaller area. Basically what chip designers have been doing since ICs were first invented. Nothing new...

EDIT: My bad, more logic in a smaller area without a die shrink. So essentially just housecleaning on current libraries. Still clever marketing. It is a die shrink, myyyy baaddddd. Doesn't seem anything like the 3D transistors used in Intel's 22nm process though. Not that it is necessarily a bad thing, I just thought it was similar to that originally.
    Reply
  • ikefu
It's also lower power consumption without a die shrink, which means more thermal headroom to raise frequencies; add a die shrink on top of this and you suddenly get LOTS more headroom.

    So no, not just good marketing. But I am confused why this didn't happen already.

Doesn't fix their instructions-per-clock efficiency problem, but it will help increase CPU frequencies to cover for it while they work on that problem.
    Reply
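The headroom argument above can be roughly quantified with the standard CMOS dynamic-power relation P ≈ C·V²·f. This is a generic back-of-the-envelope sketch; the capacitance, voltage, frequency, and the 30% power-saving figure below are illustrative assumptions, not actual Steamroller numbers.

```python
def dynamic_power(c_eff, voltage, freq_ghz):
    """CMOS switching power: P = C_eff * V^2 * f (arbitrary units)."""
    return c_eff * voltage ** 2 * freq_ghz

# Hypothetical baseline chip: C_eff = 10, V = 1.2 V, f = 4.0 GHz.
base = dynamic_power(c_eff=10.0, voltage=1.2, freq_ghz=4.0)

# Assume denser libraries cut dynamic power by ~30% at the same clock.
dense = 0.7 * base

# At fixed voltage, power scales roughly linearly with frequency, so the
# saved power can be spent on clock speed within the same thermal budget.
headroom_ghz = 4.0 * (base / dense)
print(f"Same thermal budget allows roughly {headroom_ghz:.2f} GHz vs 4.0 GHz")
```

In practice the gain is smaller, since higher clocks usually also need higher voltage (and power grows with V²), but the sketch shows why lower power per unit area translates into frequency headroom.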
  • Shin-san
    Okay, what about the performance?!
    Replying to ikefu:
    I'm thinking that they are going for raw clocks.
    Reply
  • acadia11
    What you talking bout Willis?
    Reply
  • acadia11
    Ragnar-Kon said: "So more logic in a smaller area. Basically what chip designers have been doing since ICs were first invented. Nothing new... move along. But yeah... marketing at its finest (or worst?). EDIT: My bad, more logic in a smaller area without a die shrink. So essentially just housecleaning on current libraries. Still clever marketing."
    I thought it was going to be used in asphalt paver?

    Ok, I just made that name up; there is no "asphalt paver" chip.
    Reply
  • blazorthon
    This could be used in place of a die shrink or at least with a minor die shrink rather than a major die shrink. That's quite something even if it won't be used until Excavator.
    Reply
  • pharoahhalfdead
    "Our next-gen CPUs will feature..." coming soon... at the end of 2013.
    Reply
  • dusk007
    I don't get AMD's focus on raw high clocks anyway.
    Today's CPUs are constrained by heat and power before the maximum clock is ever hit.
    That is like building an aircraft turbine that works well at Mach 2 while the airframe, efficiency requirements, and noise regulations won't let the plane past 950 km/h anyway.
    Reply