Rumor: AMD Looking at 600W cTDP for 5th Gen EPYC 'Turin' CPUs

(Image credit: AMD)

According to a rumor, AMD is looking at a very high maximum configurable thermal design power (cTDP) for its 5th Generation EPYC processors. If the information is accurate, AMD's EPYC 7005-series CPUs, set to be introduced several years from now, may feature a cTDP of 600W.

On Thursday, hardware blogger ExecutableFix, who has a strong track record with the information they publish, said that AMD's EPYC 'Turin' processors in the SP5 form factor will have a maximum cTDP of 600W, more than double the 280W cTDP of the latest EPYC 7003-series 'Milan' processors. The information comes from an unofficial source, is not detailed, and cannot be verified, so take it with a grain of salt. Yet, there is a rationale behind it.

AMD's 5th Generation EPYC processors in the SP5 form factor are rumored to feature up to 256 Zen 5 cores. AMD is also rumored to be preparing hybrid processors with an integrated CDNA-based compute GPU for high-performance computing (HPC) and datacenter applications. CPUs with up to 256 'fat' cores are poised to consume quite a lot of power, though a 600W cTDP seems a bit high.

AMD's SP5 platform for 4th Generation and 5th Generation EPYC processors is designed to supply up to 700W of power for very short periods, according to the Gigabyte leak, so a 600W cTDP may well be true. Meanwhile, cooler makers already list 400W-capable cooling systems for AMD's 4th Generation EPYC 'Genoa' processors (which use the SP5 infrastructure), so it is evident that AMD's next-generation server platforms are designed to support power-hungry CPUs.

Nowadays operators of hyperscale cloud datacenters as well as enterprises with demanding workloads want to have the maximum performance they can get, so AMD and Intel have to offer them CPUs with unbeatable performance that often feature a high TDP. In fact, TDPs of server processors have been growing rapidly for about a decade, so it will not be a surprise if next-generation server CPUs consume more power than existing ones. Meanwhile, AMD and Intel at times have to offer select clients custom CPUs that consume significantly more power than regular models.

Server-grade platforms are meant to support not only standard processors but also custom versions optimized for certain workloads that may carry higher TDPs. Thus, while special versions of AMD's next-generation EPYC processors may indeed be able to boost to 600W, regular SKUs may have considerably lower TDPs, and it is too early to make assumptions about the TDP levels of standard CPUs.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • -Fran-
    Well, having a configurable TDP of 600W is not unheard of. IBM has been threading (geddit? lel) that territory for a good while. Also, that would be one monster SoC with the accelerators, dang.

    I guess the industry is just heading into high power designs, whether we like it or not, for everything. I hope they also introduce a solution to the energy problem around the world =/

    Regards.
  • TerryLaze
    Yuka said:
    I guess the industry is just heading into high power designs, whether we like it or not, for everything. I hope they also introduce a solution to the energy problem around the world =/
    It's going to be 256 (rumor) cores at 600W (rumor) replacing 64 cores at 280W.
    High power design doesn't mean it burns more power per unit of performance; if the rumors are true, you could get four times the cores for only about twice the power, and that IS how you tackle the energy problem.
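    A quick back-of-the-envelope check of those numbers (both the 256-core/600W and 64-core/280W figures are rumors or top-SKU specs from the article, so this is purely illustrative):

    ```python
    # Rough perf-per-watt comparison of the rumored 'Turin' ceiling
    # against the current 'Milan' top SKU. Turin figures are rumors.
    milan_cores, milan_tdp = 64, 280    # EPYC 7003 'Milan' (e.g., the 7763)
    turin_cores, turin_tdp = 256, 600   # rumored 'Turin' cores and cTDP

    print(f"Milan: {milan_tdp / milan_cores:.2f} W per core")  # ~4.38 W/core
    print(f"Turin: {turin_tdp / turin_cores:.2f} W per core")  # ~2.34 W/core
    print(f"{turin_cores / milan_cores:.0f}x the cores at "
          f"{turin_tdp / milan_tdp:.2f}x the power")           # 4x cores, ~2.14x power
    ```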
  • -Fran-
    TerryLaze said:
    It's going to be 256 (rumor) cores at 600W (rumor) replacing 64 cores at 280W.
    High power design doesn't mean it burns more power per unit of performance; if the rumors are true, you could get four times the cores for only about twice the power, and that IS how you tackle the energy problem.
    No, not really.

    You're still using more energy for more performance; sure, it's more efficient, but it is still more energy. Processing needs are growing, and to accommodate that growth, chip manufacturers can do so at the expense of even more energy consumption; that's another way of phrasing it. Thing is, can the energy generation side of things keep up? It has already been shown that most of the current infrastructure around the world is pretty tight, and more generating plants are needed.

    Regards.
  • TerryLaze
    Yuka said:
    No, not really.

    You're still using more energy for more performance; sure, it's more efficient, but it is still more energy. Processing needs are growing, and to accommodate that growth, chip manufacturers can do so at the expense of even more energy consumption; that's another way of phrasing it. Thing is, can the energy generation side of things keep up? It has already been shown that most of the current infrastructure around the world is pretty tight, and more generating plants are needed.

    Regards.
    Usually companies go with either a power budget or a performance budget; they don't just add as much as possible without any rhyme or reason.

    Yes, the need for energy due to computing will keep rising, but that's a different conversation; this is still more efficient, so it's still more better.
  • Umfriend
    Nowadays operators of hyperscale cloud datacenters as well as enterprises with demanding workloads want to have the maximum performance they can get
    Yeah, I remember 10 years ago they were all like "meh, I could do with about half the performance". LOL
  • chiperian
    TerryLaze said:
    Usually companies go with either a power budget or a performance budget; they don't just add as much as possible without any rhyme or reason.

    Yes, the need for energy due to computing will keep rising, but that's a different conversation; this is still more efficient, so it's still more better.
    Yeah, the power is one thing, but a lot of businesses are considering the density, as in how many CPUs/GPUs you can fit in a rack.

    So if you can fit 4x the amount of processing in your datacenter, that might work out to a net positive, cooling and power considered, instead of splitting it into several facilities.
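    Rough numbers, just to illustrate the density argument (every figure below is made up; a sketch, not real rack or node specs):

    ```python
    # Hypothetical rack-density sketch: compute per rack under a fixed
    # power budget. All node/rack figures are invented for illustration.
    rack_budget_kw = 15.0                  # assumed per-rack power budget

    # (cores per node, node power in kW): dual-socket old vs. rumored new
    old_node = (2 * 64, 2 * 0.280 + 0.4)   # 2x 64c/280W CPUs + ~0.4 kW rest of node
    new_node = (2 * 256, 2 * 0.600 + 0.4)  # 2x rumored 256c/600W CPUs + rest of node

    for label, (cores, node_kw) in (("old", old_node), ("new", new_node)):
        nodes = int(rack_budget_kw // node_kw)
        print(f"{label}: {nodes} nodes/rack, {nodes * cores} cores/rack")
    # old: 15 nodes/rack, 1920 cores/rack
    # new: 9 nodes/rack, 4608 cores/rack -> ~2.4x the cores per rack
    ```

    So even at much higher per-socket power, denser chips can mean fewer facilities for the same total compute.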
  • -Fran-
    TerryLaze said:
    Usually companies go with either a power budget or a performance budget; they don't just add as much as possible without any rhyme or reason.

    Yes, the need for energy due to computing will keep rising, but that's a different conversation; this is still more efficient, so it's still more better.
    Not all, and certainly not the type of company that has limited space and growing processing needs (datacenters galore, cloud or not). They will fit as many CPUs as they can and meet their performance goals. They are willing to modify entire buildings to accommodate whatever power and cooling requirements need to be met. If you can fit 100 of the new ones to replace the old 100, you will do it, even if they're netting you 10X more performance at 5X more power (just as an example). Very efficient, but still a net increase in power that has to come from somewhere.

    My original statement stands. And, again, this is not a matter of how efficient new CPUs or GPUs are. They are using more power per unit, period. That is where the industry is going now, for better or for worse. I just hope energy generation is also taken into account by the same giants pushing for it.

    EDIT: I guess I forgot to make a base assumption of mine explicit: performance per watt is not improving fast enough to keep up with the growing demand for raw performance in computing. That is to say, the rate at which efficiency improves is not enough to satisfy demand, so if you keep power at the same level, you'll fall short of that ongoing increase in performance. That is what the industry's direction is telling me, and it is the basis for my rant/comment.

    Regards.
  • Eximo
    Generally datacenters are built around an expected power output and cooling capacity. Big undertaking to expand. You have to get the power company to upgrade your capacity, gear, meters, etc. And sometimes they don't have the capacity on the line to do so and will just say no. That is when datacenters move to new facilities, or customers move and the data center finds smaller customers.

    More efficiency just means they can get more $ per square foot with potentially the same expenses. Though you might have to tack on more administrators with more cores/tasks/applications/services running. At least until you hit a bandwidth limit with the ISP.
  • -Fran-
    Eximo said:
    Generally datacenters are built around an expected power output and cooling capacity. Big undertaking to expand. You have to get the power company to upgrade your capacity, gear, meters, etc. And sometimes they don't have the capacity on the line to do so and will just say no. That is when datacenters move to new facilities, or customers move and the data center finds smaller customers.

    More efficiency just means they can get more $ per square foot with potentially the same expenses. Though you might have to tack on more administrators with more cores/tasks/applications/services running. At least until you hit a bandwidth limit with the ISP.
    Heh, a troll quoting me made me notice your reply.

    Anyway... Datacenters are built with a lot of extra capacity from the get-go. Sure, they can't use 10X the power, but they can comfortably use 50% to 90% more power overall (in terms of power line delivery). This also includes cooling capacity (with an asterisk, as that will depend on location even more than power). I would agree from a recovery perspective (power redundancy and generation), as you don't usually build with the same overcapacity in mind and usually add as required; then again, it's not hard to do. Same with ISPs, TBH. At least here in the UK, all the datacenters we own are specc'd for 300% overcapacity in bandwidth with 3 different ISPs in a redundancy setup.

    So, while I can't fully disagree with what you said, I can't say I'm wrong either. I can say that the datacenters I know of across the globe do build with overcapacity in mind (and plenty of it), so they can always build ahead of increasing power needs (up to a point, true). Most of the time, a datacenter will be restricted by physical space first and cooling second, but as it stands right now, not power (yet).

    It's quite an interesting topic (going on a hard tangent from the original article), and worth touching on in the context of this announcement. As I said, IBM has been doing this for ages.

    Regards.