
AMD Ryzen Threadripper & X399 MegaThread! FAQ & Resources

Welcome to the Official AMD Ryzen Threadripper & X399 MegaThread!

This thread will serve as the primary discussion thread for all things regarding Ryzen Threadripper and the X399 platform.



Ryzen Threadripper:

For the first time, AMD is launching its own HEDT platform, built around the all-new X399 chipset and powered by AMD's Ryzen Threadripper CPUs. AMD promises to bring more connectivity and more CPU cores to HEDT than ever before. Threadripper's flagship, the 1950X, sports an astounding 16 cores, 32 threads, and 60 usable PCIe lanes (technically 64, but 4 of those lanes are dedicated to the X399 chipset).

So far, AMD has announced three models of Ryzen Threadripper:

Threadripper 1950X:
16 Cores/32 Threads
3.4GHz (4GHz Turbo w/ 4.2GHz XFR)
32MB L3 Cache
Quad Channel Memory Support up to DDR4-2667
Max Temp 68C
Price Tag: $999

Threadripper 1920X:
12 Cores/24 Threads
3.5GHz (4GHz Turbo w/ 4.2GHz XFR)
32MB L3 Cache
Quad Channel Memory Support up to DDR4-2667
Max Temp 68C
Price Tag: $799

Threadripper 1900X:
8 Cores/16 Threads
3.8GHz (4.0GHz Turbo w/ 4.2GHz XFR)
16MB L3 Cache
Quad Channel Memory Support up to DDR4-2667
Price Tag: $549

Architecture:



Threadripper is still based on the Zen architecture and still uses the same CCXs as all Ryzen 3, 5, and 7 CPUs. What's significantly different about Threadripper is its multi-die design (no pun intended); it's been nearly a decade since we last saw multi-die CPUs on the market, the last being Intel's Core 2 Quad. In basic terms, Threadripper uses two Ryzen 7 dies to make up its total of 16 cores and 32 threads, clearly showing the full power and potential of AMD's Infinity Fabric technology.

However, it's not without its flaws. Because of the dual dies and the massive core count on the 1950X and 1920X, certain games will actually crash when too many cores are active. Also, because Threadripper is a multi-die design, when one or more cores have to communicate with a core or L3 cache on another die (not just another CCX), the latency penalty is much higher than on Ryzen 7, which only has to deal with latency between CCXs on a single die.
To combat this, AMD has created hardware-level CPU modes for Threadripper, something we've never seen before. AMD calls these two modes Creator Mode and Gaming Mode. Creator Mode is the default and enables all CPU cores on the chip, while Gaming Mode completely disables one of the two dies, effectively turning the 1950X into a higher-clocked R7 1800X. This improves gaming performance immensely, but because only half of the resources are enabled, it kills heavily threaded content-creation performance (like video rendering). You also have to restart each time you change modes.
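If you're curious whether your OS actually sees the two dies, here's a minimal sketch (Linux-only, and it assumes the platform exposes each die as a NUMA node, which depends on the memory mode; nothing here is Threadripper-specific):

```python
# List NUMA nodes and the CPUs attached to each, using the standard Linux
# sysfs topology files. On a dual-die Threadripper exposed as two NUMA
# nodes you should see two entries; in Gaming Mode, one die's worth of
# CPUs disappears from the listing.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        print(f"{os.path.basename(node)}: CPUs {f.read().strip()}")
```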


Threadripper Performance:




In gaming, Threadripper does indeed struggle with FPS in its default mode. Switch to Gaming Mode, however, and the improvement is substantial, yielding anywhere from 5% to almost 50% higher FPS depending on the title.



Content creation, as of right now, is more hit or miss. In programs designed specifically for rendering, like Blender, the 1950X beats the whole competition; in other applications like Adobe CC, which are still single-core heavy, the 7700K still beats out the 1950X.



The good news is that Threadripper is VERY new, and new architectures always need software optimization. I'd expect the 1950X to beat Intel's current offerings in almost everything (except gaming) by the end of next year (individual optimizations will land before then, but I mean the software maturing as a whole).


Memory Support:

All Threadripper CPUs run a quad-channel memory configuration with support for up to 1TB of RAM (you can thank EPYC for that). Official memory frequency maxes out at 2667MHz, but that's just the official spec; if Ryzen 7 is any indicator, Threadripper should be able to hit 3200MHz and above quite easily.
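For a rough idea of what quad channel buys you, peak theoretical bandwidth is just channels x transfer rate x 8 bytes (per 64-bit DDR4 channel). A quick back-of-the-envelope sketch -- theoretical peaks only, real-world throughput will land noticeably lower:

```python
# Theoretical peak memory bandwidth: channels * MT/s * 8 bytes per transfer
# on a 64-bit DDR4 channel. Real-world numbers land well below these.
def peak_bandwidth_gbps(channels: int, mega_transfers_per_s: int) -> float:
    return channels * mega_transfers_per_s * 8 / 1000  # MB/s -> GB/s

print(peak_bandwidth_gbps(4, 2667))  # quad channel DDR4-2667: ~85.3 GB/s
print(peak_bandwidth_gbps(4, 3200))  # quad channel DDR4-3200: ~102.4 GB/s
print(peak_bandwidth_gbps(2, 2667))  # dual channel, for comparison: ~42.7 GB/s
```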

X399 Chipset:


Threadripper seats itself in the all-new TR4 socket and connects via the X399 chipset. According to AMD's specs, X399 is an absolute behemoth of a chipset, with a monstrous amount of connectivity that Intel hasn't come close to offering since the good ole days of Intel licensing Nvidia to make chipsets for its high-end platforms.

X399 Specs:

Quad Channel DDR4 Memory
66 PCI Express Gen 3.0 Lanes (Including the 64 on Threadripper)
8 PCI Express Gen 2.0 Lanes
Nvidia SLI and AMD CrossFire Support
2 Native USB 3.1 Gen 2 Ports
14 Native USB 3.1 Gen 1 Ports
6 Native USB 2.0 Ports
12 SATA 3.0 Ports (With RAID 0, 1 and 10 support)

TR4 Socket:
You thought LGA 2011 was big? Think again: the TR4 socket puts all other CPU sockets to shame, easily the largest socket ever produced for the consumer market. TR4 by itself is about the size of two LGA 2011 sockets side by side and has over 4,000 pins -- almost twice as many as LGA 2011 -- for Threadripper to seat into.

While this might be cool as a wow factor, it's most certainly not for CPU cooler manufacturers: the TR4 socket is very challenging to cool with conventional CPU coolers, as most existing cold plates physically can't cover the entire IHS. Fortunately, companies like Noctua and Fractal Design are already on this and have made new coolers specifically for Threadripper.

X399 Motherboards:

Asus:
ASUS ROG ZENITH EXTREME X399
PRIME X399-A

Gigabyte:
X399 AORUS Gaming 7

ASRock:
X399 Taichi
Fatal1ty X399 Professional Gaming

MSI:
X399 GAMING PRO CARBON AC

************************************************************************************************
So that's Threadripper in a nutshell. If you have any questions or want to chat about Threadripper, feel free to post a comment below.
Reply to TechyInAZ
  1. Sure hope it takes off; just invested a bunch into AMD. Although I should have done it in February at $2.00 a share. On another note, will this be the answer for a high-end CAD workstation?
    Reply to LTVETTE2
  2. The specs of the 1900X are known. The OP can be edited



    Also, about this part: "Ryzen Threadripper 1950X [...] handilly beating out the 7900X in most of the benchmarks we can see from both rumors and official reviewers" -- I would remark that this only happens on workloads that scale up to 32 threads. The 1950X is slower in games and latency-sensitive benches.

    To me the more interesting review will be that comparing the 1900X and the 1800X. I expect the 1900X to be slower in a number of benches, due to the dual-die approach.
    Reply to juanrga
  3. juanrga said:
    The specs of the 1900X are known. The OP can be edited



    Also, about this part: "Ryzen Threadripper 1950X [...] handilly beating out the 7900X in most of the benchmarks we can see from both rumors and official reviewers" -- I would remark that this only happens on workloads that scale up to 32 threads. The 1950X is slower in games and latency-sensitive benches.

    To me the more interesting review will be that comparing the 1900X and the 1800X. I expect the 1900X to be slower in a number of benches, due to the dual-die approach.


    Can you give me the exact link to that picture? Thanks! I will add this as soon as I can.
    Reply to TechyInAZ
    I actually expect it to be faster due to quad-channel memory, but who knows. Ryzen's 16-core vs Intel's 16-core is going to be interesting, since Intel's 16-core has a 21% lower base frequency than the 1950X.
    Reply to jdwii
    I am investing in a Threadripper. I'm going with the MSI board because I don't see large performance-related differences between the mobos, so I went with the cheapest Amazon option. Looking forward to being one of the early adopters and working through the kinks with everyone else over the forums and YT.
    Reply to letsrun4it
  6. letsrun4it said:
    I am investing in a Threadripper. I'm going with the MSI board because I don't see large performance-related differences between the mobos, so I went with the cheapest Amazon option. Looking forward to being one of the early adopters and working through the kinks with everyone else over the forums and YT.


    Actually, I bet MSI is using NIKO VRMs, which aren't that great; your money would be better spent on something else. These CPUs are at a 180W TDP already, and that's at stock, not at 4.1GHz like this user:

    http://www.pcgamer.com/liquid-cooled-threadripper-1950x-cpu-gets-overclocked-to-41ghz/
    Reply to jdwii
    Amazon is quickly selling out of all the motherboards AND both Threadripper CPUs.
    Reply to letsrun4it
  8. jdwii said:
    letsrun4it said:
    I am investing in a Threadripper. I'm going with the MSI board because I don't see large performance-related differences between the mobos, so I went with the cheapest Amazon option. Looking forward to being one of the early adopters and working through the kinks with everyone else over the forums and YT.


    Actually, I bet MSI is using NIKO VRMs, which aren't that great; your money would be better spent on something else. These CPUs are at a 180W TDP already, and that's at stock, not at 4.1GHz like this user:

    http://www.pcgamer.com/liquid-cooled-threadripper-1950x-cpu-gets-overclocked-to-41ghz/


    We'll see. If I hate it, I'll buy a new one, but I'll get this one and see how it goes, see which board turns out to be the best, and see which cooler options end up working best.
    Reply to letsrun4it
  9. jdwii said:
    I actually expect it to be faster due to quad-channel memory, but who knows. Ryzen's 16-core vs Intel's 16-core is going to be interesting, since Intel's 16-core has a 21% lower base frequency than the 1950X.


    But at the same time, Intel's higher IPC makes up the difference. Per-core performance likely still has a slight (~10-20%) Intel edge even factoring in the slower clock. So at the same number of cores, assuming full CPU utilization, Intel should be ahead by roughly that percent.
    Reply to gamerk316
  10. gamerk316 said:
    But at the same time, Intel's higher IPC makes up the difference. Per-core performance likely still has a slight (~10-20%) Intel edge even factoring in the slower clock. So at the same number of cores, assuming full CPU utilization, Intel should be ahead by roughly that percent.


    ...which of course it would have to do to make up for the 70% higher pricetag. We'll know for sure in a couple months when both systems are out in the wild, but from a bang/buck perspective this battle is over.

    What will be of particular interest to me is how well each HEDT platform performs under extended full load per dollar spent on cooling. Just looking at the specs, the 165W TDP of the Sky-X chips looks great against the 180W TDP of TR...though I think we all know the 165W TDP rating of the Sky-X product is...well, let's just say it doesn't jibe with reality.
    Reply to Solarion
  11. TechyInAZ said:
    juanrga said:
    The specs of the 1900X are known. The OP can be edited



    Also, about this part: "Ryzen Threadripper 1950X [...] handilly beating out the 7900X in most of the benchmarks we can see from both rumors and official reviewers" -- I would remark that this only happens on workloads that scale up to 32 threads. The 1950X is slower in games and latency-sensitive benches.

    To me the more interesting review will be that comparing the 1900X and the 1800X. I expect the 1900X to be slower in a number of benches, due to the dual-die approach.


    Can you give me the exact link to that picture? Thanks! I will add this as soon as I can.


    https://www.hardocp.com/article/2017/07/30/amd_ryzen_threadripper_specs_pricing_revealed/
    Reply to juanrga
  12. jdwii said:
    Ryzen's 16-core vs Intel's 16-core is going to be interesting, since Intel's 16-core has a 21% lower base frequency than the 1950X.


    But higher single-core and all-core turbo.
    Reply to juanrga
  13. Solarion said:
    We'll know for sure in a couple months when both systems are out in the wild, but from a bang/buck perspective this battle is over.


    Since performance is not a linear function of cost, the company that targets lower performance will always win the "bang/buck" metric. But there are people who don't purchase based on that metric and simply want the best.

    Solarion said:
    What will be of particular interest to me is how well each HEDT platform performs under extended full load per dollar spent on cooling. Just looking at the specs, the 165W TDP of the Sky-X chips looks great against the 180W TDP of TR...though I think we all know the 165W TDP rating of the Sky-X product is...well, let's just say it doesn't jibe with reality.


    It is possible that the 16-core SKL breaks the official 165W TDP, although I would be surprised if it did, seeing as up to now all the SKL models released satisfy the official 140W TDP. I would expect reviews to measure something around 175--180W at the socket level (including losses from non-perfect efficiency).

    On the other hand the official TDP of RyZen products clearly disagrees with real TDP (e.g. 95W --> 128W) as RyZen reviews have demonstrated, and I have been advising for a while that the official 180W of TR doesn't correspond to reality either.
    Reply to juanrga
  14. Quote:
    Since performance is not a linear function of cost, the company that targets lower performance will always win the "bang/buck" metric. But there are people who don't purchase based on that metric and simply want the best.


    Except today the company with the fastest desktop processor is also the company that offers more bang for the buck as you move up the performance scale. Allow me to demonstrate...

    Company A:
    1900x = 8/16 = $549 = $68.63/core
    1920x = 12/24 = $799 = $66.58/core
    1950x = 16/32 = $999 = $62.44/core

    Company I:
    7820x = 8/16 = $599 = $74.88/core
    7900x = 10/20 = $999 = $99.90/core
    7920x = 12/24 = $1199= $99.92/core
    7940x = 14/28 = $1399= $99.93/core
    7960x = 16/32 = $1699= $106.2/core
    7980xe= 18/36= $1999= $111.1/core

    Company A encourages buyers to purchase more of their cores by offering more bang/buck as you go up the stack, while their competitor does something best described as...gouging. That works great without competition, but it will likely push customers away when there's a viable alternative...which there is at this time. I do not think this is a well-thought-out strategy on the part of company I.
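    If anyone wants to check or extend the math, it's a trivial calculation (launch MSRPs from the lists above):

```python
# Dollars per physical core at launch MSRP, prices as listed above.
lineup = {
    "1900X": (8, 549), "1920X": (12, 799), "1950X": (16, 999),
    "7820X": (8, 599), "7900X": (10, 999), "7920X": (12, 1199),
    "7940X": (14, 1399), "7960X": (16, 1699), "7980XE": (18, 1999),
}
for chip, (cores, price) in lineup.items():
    print(f"{chip}: ${price / cores:.2f}/core")
```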

    Quote:
    It is possible that the 16-core SKL breaks the official 165W TDP, although I would be surprised if it did, seeing as up to now all the SKL models released satisfy the official 140W TDP. I would expect reviews to measure something around 175--180W at the socket level (including losses from non-perfect efficiency).

    On the other hand the official TDP of RyZen products clearly disagrees with real TDP (e.g. 95W --> 128W) as RyZen reviews have demonstrated, and I have been advising for a while that the official 180W of TR doesn't correspond to reality either.


    Breaks the official 165W TDP? ...possibly? Sky-X doesn't just "possibly" break the official TDP, it stomps it into dust, and does so early and often when pushed. There's a review on this very site of an X299 board reporting a CPU TDP of 231W under load, and I think we've all seen TDP comparisons between the two architectures. Those comparisons are also not in company I's favor. All of this would be more palatable for company I's potential customers if the performance delta were there, but it simply is not. With few exceptions, it's pay more and get less with company I this product cycle for power users, content creators, heavy multitaskers, etc.
    Reply to Solarion
  15. Solarion said:
    gamerk316 said:
    But at the same time, Intel's higher IPC makes up the difference. Per-core performance likely still has a slight (~10-20%) Intel edge even factoring in the slower clock. So at the same number of cores, assuming full CPU utilization, Intel should be ahead by roughly that percent.


    ...which of course it would have to do to make up for the 70% higher pricetag. We'll know for sure in a couple months when both systems are out in the wild, but from a bang/buck perspective this battle is over.


    Unless your software license is several grand per physical core, in which case Intel wins price/performance by default.

    I note per-core licensing in this day and age is stupid, but it does exist and does factor into platform choice.
    Reply to gamerk316
  16. gamerk316 said:
    Solarion said:
    gamerk316 said:
    But at the same time, Intel's higher IPC makes up the difference. Per-core performance likely still has a slight (~10-20%) Intel edge even factoring in the slower clock. So at the same number of cores, assuming full CPU utilization, Intel should be ahead by roughly that percent.


    ...which of course it would have to do to make up for the 70% higher pricetag. We'll know for sure in a couple months when both systems are out in the wild, but from a bang/buck perspective this battle is over.


    Unless your software license is several grand per physical core, in which case Intel wins price/performance by default.

    I note per-core licensing in this day and age is stupid, but it does exist and does factor into platform choice.


    Not that it really applies to the HEDT market so much, but basically every Microsoft "2016" server product uses core-based licensing. I believe Oracle uses cores as a factor in licensing as well.
    Reply to uguv
  17. gamerk316 said:
    Solarion said:
    gamerk316 said:
    But at the same time, Intel's higher IPC makes up the difference. Per-core performance likely still has a slight (~10-20%) Intel edge even factoring in the slower clock. So at the same number of cores, assuming full CPU utilization, Intel should be ahead by roughly that percent.


    ...which of course it would have to do to make up for the 70% higher pricetag. We'll know for sure in a couple months when both systems are out in the wild, but from a bang/buck perspective this battle is over.


    Unless your software license is several grand per physical core, in which case Intel wins price/performance by default.

    I note per-core licensing in this day and age is stupid, but it does exist and does factor into platform choice.


    Plainly a topic more appropriately covered in a Xeon v Epyc thread...It's just not relevant here when comparing HEDT platforms.
    Reply to Solarion
  18. Well, basically, for $1000 Intel is outmatched in this market, like people figured it would be. I haven't read any reviews yet, but I've watched several videos so far. Very impressive. (If you're buying anything above a 7700K purely for gaming, you're wasting your money to begin with.)

    For this market, Threadripper has more PCIe lanes, even competes with Xeon, has 6 more full cores that can be used for virtual machines or render boxes, and, oh yeah, supports ECC memory, which the 7900X doesn't.

    If I were to buy either a 7900X or a 1950X for the Dolphin emulator or BF1, I'd be an idiot.

    Also, temps seem to be great compared to the 7900X, which is surprising since the 7900X has a lower TDP.

    Now I want to see how the boards do with VRM temps.

    Keep in mind only the top 5% of Ryzen dies are used for Threadripper.
    Reply to jdwii
  19. Solarion said:
    Quote:
    Since performance is not a linear function of cost, the company that targets lower performance will always win the "bang/buck" metric. But there are people who don't purchase based on that metric and simply want the best.


    Except today the company with the fastest desktop processor is also the company that offers more bang for the buck as you move up the performance scale. Allow me to demonstrate...

    Company A:
    1900x = 8/16 = $549 = $68.63/core
    1920x = 12/24 = $799 = $66.58/core
    1950x = 16/32 = $999 = $62.44/core

    Company I:
    7820x = 8/16 = $599 = $74.88/core
    7900x = 10/20 = $999 = $99.90/core
    7920x = 12/24 = $1199= $99.92/core
    7940x = 14/28 = $1399= $99.93/core
    7960x = 16/32 = $1699= $106.2/core
    7980xe= 18/36= $1999= $111.1/core

    Company A encourages buyers to purchase more of their cores by offering more bang/buck as you go up the stack, while their competitor does something best described as...gouging. That works great without competition, but it will likely push customers away when there's a viable alternative...which there is at this time. I do not think this is a well-thought-out strategy on the part of company I.

    Quote:
    It is possible that the 16-core SKL breaks the official 165W TDP, although I would be surprised if it did, seeing as up to now all the SKL models released satisfy the official 140W TDP. I would expect reviews to measure something around 175--180W at the socket level (including losses from non-perfect efficiency).

    On the other hand the official TDP of RyZen products clearly disagrees with real TDP (e.g. 95W --> 128W) as RyZen reviews have demonstrated, and I have been advising for a while that the official 180W of TR doesn't correspond to reality either.


    Breaks the official 165W TDP? ...possibly? Sky-X doesn't just "possibly" break the official TDP, it stomps it into dust, and does so early and often when pushed. There's a review on this very site of an X299 board reporting a CPU TDP of 231W under load, and I think we've all seen TDP comparisons between the two architectures. Those comparisons are also not in company I's favor. All of this would be more palatable for company I's potential customers if the performance delta were there, but it simply is not. With few exceptions, it's pay more and get less with company I this product cycle for power users, content creators, heavy multitaskers, etc.


    As mentioned above, cost is not a linear function of performance, which makes price per core irrelevant, because not all cores are the same.

    Also, the TR line uses the same MCM approach and the same ZP dies; the difference is how many cores are disabled in each die. The SKL line, on the other hand, uses different dies. There are three dies, if my memory doesn't fail me, and the cost of the larger die used in models like the 7960X is much higher than the cost of the smaller die used in models like the 7820X. This gap in cost is again the result of another nonlinearity, associated with yields. That is why comparing price per core means nothing.

    This site measured 250W for the i9 7900X overclocked at 4.5GHz and without the AVX offset disabled. Correcting for stock settings, one obtains something below 140W. And several reviews confirmed that the chip is within official specifications.
    Reply to juanrga
  20. As expected the 10C SKL beats the 12C TR and is very close to the 16C TR on performance

    https://www.reddit.com/r/intel/comments/6sti2h/hardwarefrs_i97900x_results_10_cores_delivering/
    Reply to juanrga
  21. juanrga said:
    As expected the 10C SKL beats the 12C TR and is very close to the 16C TR on performance

    https://www.reddit.com/r/intel/comments/6sti2h/hardwarefrs_i97900x_results_10_cores_delivering/


    Good thing that a $1K CPU can beat a $700 CPU and *get close* to another $1K CPU!

    And that is not even counting RAID restrictions, lack of ECC and less connectivity all around. But hey, it gets *close*!

    Cheers!
    Reply to Yuka
  22. Has anyone's Amazon Threadripper order shipped? Mine is still pending. Makes me nervous because they are sold out now.
    Reply to letsrun4it
  23. Threadripper is kinda disappointing... It seems more hype than anything else. The same folks that were upset about Intel selling CPUs at $1k are now trying to justify spending $1k for an AMD CPU...
    Reply to ROBNTHROB
  24. ROBNTHROB said:
    Threadripper is kinda disappointing... It seems more hype than anything else. The same folks that were upset about Intel selling CPUs at $1k are now trying to justify spending $1k for an AMD CPU...


    A lot of applications can't use 16 cores efficiently including Adobe.

    I'm not even sure Handbrake uses all 16 cores with great scaling; I think it tops out around 12.
    Reply to jdwii
  25. x264 should scale beyond 32 threads, but you get diminishing returns. A better option might be to run two encodes in parallel with a limit on how many threads each can use.
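    Something like this would do it; a rough sketch driving the x264 CLI from Python (the file names are placeholders, and it assumes x264 is on your PATH and can read the inputs):

```python
# Run two x264 encodes in parallel, each capped at half the machine's
# logical CPUs, instead of letting one encode fan out across everything.
import os
import subprocess

jobs = [("in1.y4m", "out1.mkv"), ("in2.y4m", "out2.mkv")]  # placeholders
threads_each = max(1, (os.cpu_count() or 2) // 2)

procs = [
    subprocess.Popen(["x264", "--threads", str(threads_each), "-o", dst, src])
    for src, dst in jobs
]
for p in procs:
    p.wait()  # block until both encodes finish
```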
    Reply to randomizer
  26. randomizer said:
    x264 should scale beyond 32 threads, but you get diminishing returns. A better option might be to run two encodes in parallel with a limit on how many threads each can use.


    That will be a *little* better performance wise, but you still have a ton of other issues to consider. For example:

    1: RAM/HDD starts to become a bottleneck, as you are much more likely to end up with a case where the data a particular thread needs isn't already loaded into main memory

    2: Core communication bottlenecks start to expose themselves

    3: The OS scheduler itself starts to become a bottleneck, especially if a lot of background tasks interrupt a running thread and force threads to start jumping between cores (in which case both #1 and #2 above will manifest themselves).

    As a rule: The more cores you use, the worse your scaling becomes.
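    That rule is basically Amdahl's law in action; a quick sketch of the math (the 90% parallel fraction is just an illustrative assumption -- real encoders vary):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the core count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} cores: {amdahl_speedup(0.90, n):.2f}x")
# 2 cores: 1.82x ... 16 cores: 6.40x, 32 cores: 7.80x -- far from linear
```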
    Reply to gamerk316
  27. Yuka said:
    juanrga said:
    As expected the 10C SKL beats the 12C TR and is very close to the 16C TR on performance

    https://www.reddit.com/r/intel/comments/6sti2h/hardwarefrs_i97900x_results_10_cores_delivering/


    Good thing that a $1K CPU can beat a $700 CPU and *get close* to another $1K CPU!

    And that is not even counting RAID restrictions, lack of ECC and less connectivity all around. But hey, it gets *close*!


    On the other hand, we have better efficiency, better OC headroom, better AVX support, the lack of the annoying Creator/gaming usermode selection on BIOS...
    Reply to juanrga
  28. jdwii said:
    ROBNTHROB said:
    Threadripper is kinda disappointing... It seems more hype than anything else. The same folks that were upset about Intel selling CPUs at $1k are now trying to justify spending $1k for an AMD CPU...


    A lot of applications can't use 16 cores efficiently including Adobe.

    I'm not even sure Handbrake uses all 16 cores with great scaling; I think it tops out around 12.


    Handbrake uses more than 12 cores; otherwise the 1950X couldn't be faster than the 1920X despite having a bit lower clocks



    Adobe Premiere Pro CC also scales up above 12 cores

    Reply to juanrga
  29. juanrga said:
    Yuka said:
    juanrga said:
    As expected the 10C SKL beats the 12C TR and is very close to the 16C TR on performance

    https://www.reddit.com/r/intel/comments/6sti2h/hardwarefrs_i97900x_results_10_cores_delivering/


    Good thing that a $1K CPU can beat a $700 CPU and *get close* to another $1K CPU!

    And that is not even counting RAID restrictions, lack of ECC and less connectivity all around. But hey, it gets *close*!


    On the other hand, we have better efficiency, better OC headroom, better AVX support, the lack of the annoying Creator/gaming usermode selection on BIOS...


    You mean, how a 10-core product has worse power consumption than a 16-core one under threaded workloads? Or how Intel supports AVX512 (that no one is really using) and shoots the power consumption through the roof? Or having a proper "NUMA" set of options for real power users in a supposedly HEDT CPU?

    Outrageous! Incredible how AMD can make those calls and get away with it!

    Cheers!
    Reply to Yuka
  30. Yuka said:
    juanrga said:
    Yuka said:
    juanrga said:
    As expected the 10C SKL beats the 12C TR and is very close to the 16C TR on performance

    https://www.reddit.com/r/intel/comments/6sti2h/hardwarefrs_i97900x_results_10_cores_delivering/


    Good thing that a $1K CPU can beat a $700 CPU and *get close* to another $1K CPU!

    And that is not even counting RAID restrictions, lack of ECC and less connectivity all around. But hey, it gets *close*!


    On the other hand, we have better efficiency, better OC headroom, better AVX support, the lack of the annoying Creator/gaming usermode selection on BIOS...


    You mean, how a 10-core product has worse power consumption than a 16-core one under threaded workloads? Or how Intel supports AVX512 (that no one is really using) and shoots the power consumption through the roof? Or having a proper "NUMA" set of options for real power users in a supposedly HEDT CPU?


    Not even close...

    On such workloads, the 10C SKL has better power consumption than the 16C TR: 150W vs 171W respectively, despite AMD using tricks to mask the huge power consumption. One of the tricks consists of clocking cores below base frequency when full loads push power consumption out of control. From the HFR review: "Moving under the base frequency is however something annoying, even if it is not the first time that we see this behavior at AMD." The other trick: reviewers noted that some watts go missing between the wall and the socket. They found a discrepancy, and their current hypothesis is that the CPUs are drawing the missing watts outside the ATX12 channel: "Which makes us wonder if these processors would not draw a portion of their power from the 24-pin ATX connector."

    AVX512 has been in use for many years in the HPC arena and is now a kind of standard (together with Nvidia CUDA). It has been in use for months in the server arena (for instance, Google's servers have been using AVX512 for months now), and it is now coming to desktop:

    http://www.sisoftware.eu/2017/06/23/intel-core-i9-skl-x-review-and-benchmarks-cpu-avx512-is-here/
    Reply to juanrga
  31. juanrga said:
    Not even close...

    On such workloads, the 10C SKL has better power consumption than the 16C TR: 150W vs 171W respectively, despite AMD using tricks to mask the huge power consumption. One of the tricks consists of clocking cores below base frequency when full loads push power consumption out of control. From the HFR review: "Moving under the base frequency is however something annoying, even if it is not the first time that we see this behavior at AMD." The other trick: reviewers noted that some watts go missing between the wall and the socket. They found a discrepancy, and their current hypothesis is that the CPUs are drawing the missing watts outside the ATX12 channel: "Which makes us wonder if these processors would not draw a portion of their power from the 24-pin ATX connector."

    AVX512 has been in use for many years in the HPC arena and is now a kind of standard (together with Nvidia CUDA). It has been in use for months in the server arena (for instance, Google's servers have been using AVX512 for months now), and it is now coming to desktop:

    http://www.sisoftware.eu/2017/06/23/intel-core-i9-skl-x-review-and-benchmarks-cpu-avx512-is-here/


    Right, the 140W part using ~150W and the 180W part using ~170W. Yes, of course. AMD is cheating by trying to stay inside their TDP, while Intel obviously knows you want the performance, and beats AMD in the single-threaded workloads that are the bread and butter of the server world.

    And yes, the AVX extension that is not used in 90% of the workloads (and price points) EPYC is aimed at is *just* starting to roll out onto the rest of the server workload spectrum. When MySQL (DB engines), WebLogic (app servers), Apache (web servers), PHP (script engines) or VirtualBox (VMs) actually get compiled to take advantage of it, let me know.

    Cheers!
    Reply to Yuka
  32. juanrga said:
    Handbrake uses more than 12 cores; otherwise the 1950X couldn't be faster than the 1920X despite having a bit lower clocks


    x264 (Handbrake) supports up to 128 threads, but I'm not sure you'd want to use that many anyway. My crude and unscientific testing indicates that very high thread counts (48 and above, which is where the 1950X is at) materially impact the output and so it's not entirely fair to compare the results on speed alone.
    Reply to randomizer
  33. randomizer said:
    gamerk316 said:
    Handbrake uses more than 12 cores; otherwise the 1950X couldn't be faster than the 1920X despite having a bit lower clocks


    x264 (Handbrake) supports up to 128 threads, but I'm not sure you'd want to use that many anyway. My crude and unscientific testing indicates that very high thread counts (48 and above, which is where the 1950X is at) materially impact the output and so it's not entirely fair to compare the results on speed alone.


    Wrong name on quote :p

    But remember, just because you can spawn 128 individual worker threads doesn't mean you'll scale anywhere close to that.
    Reply to gamerk316
  34. juanrga said:
    jdwii said:
    ROBNTHROB said:
    Threadripper is kinda disappointing... Seems more hype than anything else.. The same folks that were upset about Intel selling CPUs at $1k are now trying to justify sending $1k for an AMD CPU...


    A lot of applications can't use 16 cores efficiently including Adobe.

    I'm not even sure Handbrake uses 16 cores perfectly with great scaling I think it tops out around 12.


    Handbrake uses more than 12 cores; otherwise the 1950X couldn't be faster than the 1920X despite having a bit lower clocks



    Adobe Premiere Pro CC also scales up above 12 cores



    With Adobe it really depends on the settings you use; at times it can still be limited to a lower number of cores. For example, the picture you showed has the 1950X ahead in Adobe, but I can show cases where it loses.
    Reply to jdwii
  35. Yuka said:
    juanrga said:
    Not even close...

    On such workloads, the 10C SKL has better power consumption than the 16C TR: 150W vs 171W respectively, despite AMD using tricks to mask the huge power consumption. One of the tricks consists of clocking cores below base frequency when full loads push power consumption out of control. From the HFR review: "Moving under the base frequency is however something annoying, even if it is not the first time that we see this behavior at AMD." The other trick: reviewers noted that some watts go missing between the wall and the socket. They found a discrepancy, and their current hypothesis is that the CPUs are drawing the missing watts outside the ATX12 channel: "Which makes us wonder if these processors would not draw a portion of their power from the 24-pin ATX connector."

    AVX512 has been in use for many years in the HPC arena and is now a kind of standard (together with Nvidia CUDA). It has been in use for months in the server arena (for instance, Google's servers have been using AVX512 for months now), and it is now coming to desktop:

    http://www.sisoftware.eu/2017/06/23/intel-core-i9-skl-x-review-and-benchmarks-cpu-avx512-is-here/


    Right, the 140W part using ~150W and the 180W part using ~170W. Yes, of course. AMD is cheating by trying to stay inside their TDP, while Intel obviously knows you want the performance, and beats AMD in the single-threaded workloads that are the bread and butter of the server world.

    And yes, the AVX extension that is not used in 90% of the workloads (and price points) EPYC is aimed at is *just* starting to roll out onto the rest of the server workload spectrum. When MySQL (DB engines), WebLogic (app servers), Apache (web servers), PHP (script engines) or VirtualBox (VMs) actually get compiled to take advantage of it, let me know.


    The ~170W was measured on the ATX12 channel, and the reviewers claim there is a discrepancy in watts between the wall and the socket. That is why they introduce the hypothesis that the socket must be getting extra power from elsewhere, as quoted above: "Which makes us wonder if these processors would not draw a portion of their power from the 24-pin ATX connector." This means that the ~170W they measured with their method does not represent the real power used by the chip. And it agrees with what CanardPC said in the past when it mentioned that the 180W TR in reality draws >200W.

    AVX512 will not apply to everything, but the claim that "no one is really using" AVX512 was wrong.
    Reply to juanrga
  36. juanrga said:
    The ~170W was measured on the ATX12 channel, and the reviewers claim there is a discrepancy in watts between the wall and the socket. That is why they introduce the hypothesis that the socket must be getting extra power from elsewhere, as quoted above: "Which makes us wonder if these processors would not draw a portion of their power from the 24-pin ATX connector." This means that the ~170W they measured with their method does not represent the real power used by the chip. And it agrees with what CanardPC said in the past when it mentioned that the 180W TR in reality draws >200W.

    AVX512 will not apply to everything, but the claim that "no one is really using" AVX512 was wrong.


    I'm pretty sure AMD wired some black magic inside TR to increase the power consumption; a black hole, even. Maybe it's the work of the CPU fairy. I wonder why no other reviews I've read so far agree with you, and you only cite the one that mentions it. Although, to be fair, AMD does have a track record of going "off spec" on some designs (the RX 480 comes to mind, as do the old Athlon Thunderbirds).

    And in the context we're talking about (the TR server segment publicized by AMD in official presentations and slides), AVX512 is neither special nor relevant. I'll read a bit more on what using it actually entails in terms of calculations, but my intuition is that it won't really do anything for the average load that 90% of the server farms out there actually run for regular calculations. I'll post back later.

    Cheers!
    Reply to Yuka
  37. https://www.techspot.com/review/1465-amd-ryzen-threadripper-1950x-1920x/page7.html

    No point in continuing or caring more.

    https://www.pcper.com/image/view/84887?return=node%2F68269

    http://hexus.net/tech/reviews/cpu/108628-amd-ryzen-threadripper-1950x-1920x/?page=12

    From all of those, one can tell that max-load power consumption is not an issue; if anything, it's using far too much power at idle, and that probably matters more.

    Not to mention temps are lower on the platform anyway, so this all comes down to the TDP rating, which has very little to do with max power consumption in the first place; it's about total heat output.
    Reply to jdwii
  38. Obviously the Intel damage control crew is on patrol. LOL

    juanrga said:
    Solarion said:
    Quote:
    Since performance is not a linear function of cost, the company that targets lower performance will always win the "bang/buck" metric. But there are people who don't purchase based on that metric and simply want the best.


    Except today the company with the fastest desktop processor is also the company that offers more bang for the buck as you move up the performance scale. Allow me to demonstrate...

    Company A:
    1900x = 8/16 = $549 = $68.63/core
    1920x = 12/24 = $799 = $66.58/core
    1950x = 16/32 = $999 = $62.44/core

    Company I:
    7820x = 8/16 = $599 = $74.88/core
    7900x = 10/20 = $999 = $99.90/core
    7920x = 12/24 = $1199= $99.92/core
    7940x = 14/28 = $1399= $99.93/core
    7960x = 16/32 = $1699= $106.2/core
    7980xe= 18/36= $1999= $111.1/core

    Company A encourages buyers to purchase more of their cores by offering more bang/buck as you go up the stack, while their competitor does something best described as...gouging. That works great without competition, but it will likely push customers away when there's a viable alternative...which there is at this time. I do not think this is a well-thought-out strategy on the part of company I.

    Quote:
    It is possible that the 16-core SKL breaks the official 165W TDP, although I would be surprised if it did, seeing as up to now all the SKL models released satisfy the official 140W TDP. I would expect reviews to measure something around 175--180W at the socket level (including losses from non-perfect efficiency).

    On the other hand the official TDP of RyZen products clearly disagrees with real TDP (e.g. 95W --> 128W) as RyZen reviews have demonstrated, and I have been advising for a while that the official 180W of TR doesn't correspond to reality either.


    Breaks the official 165W TDP? ...possibly? Sky-X doesn't just "possibly" break the official TDP, it stomps it into dust, and does so early and often when pushed. There's a review on this very site of an X299 board reporting a CPU TDP of 231W under load, and I think we've all seen TDP comparisons between the two architectures. Those comparisons are also not in company I's favor. All of this would be more palatable for company I's potential customers if the performance delta were there, but it simply is not. With few exceptions, it's pay more and get less with company I this product cycle for power users, content creators, heavy multitaskers, etc.


    As mentioned above, cost is not a linear function of performance, which makes price per core irrelevant, because not all cores are the same.

    Also, the TR line uses the same MCM approach and the same ZP dies; the difference is how many cores are disabled in each die. The SKL line, on the other hand, uses different dies. There are three dies, if my memory doesn't fail me, and the cost of the larger die used in models like the 7960X is much higher than the cost of the smaller die used in models like the 7820X. This gap in cost is again the result of another nonlinearity, associated with yields. That is why comparing price per core means nothing.

    This site measured 250W for the i9 7900X overclocked at 4.5GHz and without the AVX offset disabled. Correcting for stock settings, one obtains something below 140W. And several reviews confirmed that the chip is within official specifications.


    Well good for those sites, would appreciate it if you could provide a link or three. Can I expect that sometime in the near future then? The information I've seen would seem to indicate that the 7900x fares poorly in the performance/watt department as well as the performance/dollar department when compared to the competition.

    BTW, what you're hemming and hawing about is the transition Intel has had to make from LCC to HCC wafers for the 12+ core chips they never intended to create...until they realized AMD was about to kick them in the ballz. This is why those chips are still vaporware and also why Intel keeps rushing things. Obviously none of this affects AMD, as they're using the same Zeppelin dies across all of their product lines. So yeah, cost *IS* linear...for one company, as I believe I've already demonstrated. That Intel cannot win a HEDT price war with AMD would seem to be self-evident at this point.

    Cost per core is irrelevant because...reasons. May I have your permission to use that when I go to buy a HEDT processor? "juanrga told me cost per core was irrelevant, so can haz 18 core processor for dual core price? puhlease?" LMAO

    Thanks for the laugh dude.
    Reply to Solarion
  39. jdwii said:
    https://www.techspot.com/review/1465-amd-ryzen-threadripper-1950x-1920x/page7.html

    No point in continuing or caring more.

    https://www.pcper.com/image/view/84887?return=node%2F68269

    http://hexus.net/tech/reviews/cpu/108628-amd-ryzen-threadripper-1950x-1920x/?page=12

    From all of those, one can tell that max-load power consumption is not an issue; if anything, it's using far too much power at idle, and that probably matters more.

    Not to mention temps are lower on the platform anyway, so this all comes down to the TDP rating, which has very little to do with max power consumption in the first place; it's about total heat output.


    The PcPer review is rather complete, and measured not only the impact of RAM speed on performance but also inter-die latencies. As expected, the MCM approach hurts latencies.

    Reply to juanrga
  40. gamerk316 said:
    Wrong name on quote :p

    But remember, just because you can spawn 128 individual worker threads doesn't mean you'll scale anywhere close to that.


    Fixed :)

    You won't scale that well at all, but there may still be some speed benefits. Even my i7 920 encodes in a little less time with 128 threads than the default of 12, despite the scheduling nightmare that creates. However, the bitrate is noticeably lower, so I wouldn't do it ordinarily.
    Reply to randomizer
  41. Solarion said:
    Obviously the Intel damage control crew is on patrol. LOL

    juanrga said:
    Solarion said:
    Quote:
    Since performance is not a linear function of cost, the company that targets lower performance will always win the "bang/buck" metric. But there are people who don't purchase based on that metric and simply want the best.


    Except today the company with the fastest desktop processor is also the company that offers more bang for the buck as you move up the performance scale. Allow me to demonstrate...

    Company A:
    1900x = 8/16 = $549 = $68.63/core
    1920x = 12/24 = $799 = $66.58/core
    1950x = 16/32 = $999 = $62.44/core

    Company I:
    7820x = 8/16 = $599 = $74.88/core
    7900x = 10/20 = $999 = $99.90/core
    7920x = 12/24 = $1199= $99.92/core
    7940x = 14/28 = $1399= $99.93/core
    7960x = 16/32 = $1699= $106.2/core
    7980xe= 18/36= $1999= $111.1/core

    Company A encourages buyers to purchase more of their cores by offering more bang/buck as you go up the stack, while their competitor does something best described as...gouging. That works great without competition, but it will likely push customers away when there's a viable alternative...which there is at this time. I do not think this is a well-thought-out strategy on the part of company I.

    Quote:
    It is possible that the 16-core SKL breaks the official 165W TDP, although I would be surprised if it did, seeing as up to now all the SKL models released satisfy the official 140W TDP. I would expect reviews to measure something around 175--180W at the socket level (including losses from non-perfect efficiency).

    On the other hand the official TDP of RyZen products clearly disagrees with real TDP (e.g. 95W --> 128W) as RyZen reviews have demonstrated, and I have been advising for a while that the official 180W of TR doesn't correspond to reality either.


    Breaks the official 165W TDP? ...possibly? Sky-X doesn't just "possibly" break the official TDP, it stomps it into dust, and does so early and often when pushed. There's a review on this very site of an X299 board reporting a CPU TDP of 231W under load, and I think we've all seen TDP comparisons between the two architectures. Those comparisons are also not in company I's favor. All of this would be more palatable for company I's potential customers if the performance delta were there, but it simply is not. With few exceptions, it's pay more and get less with company I this product cycle for power users, content creators, heavy multitaskers, etc.


    As mentioned above, cost is not a linear function of performance, which makes price per core irrelevant, because not all cores are the same.

    Also, the TR line uses the same MCM approach and the same ZP dies; the difference is how many cores are disabled in each die. The SKL line, on the other hand, uses different dies. There are three dies, if my memory doesn't fail me, and the cost of the larger die used in models like the 7960X is much higher than the cost of the smaller die used in models like the 7820X. This gap in cost is again the result of another nonlinearity, associated with yields. That is why comparing price per core means nothing.

    This site measured 250W for the i9 7900X overclocked at 4.5GHz and without the AVX offset disabled. Correcting for stock settings, one obtains something below 140W. And several reviews confirmed that the chip is within official specifications.


    Well good for those sites, would appreciate it if you could provide a link or three. Can I expect that sometime in the near future then? The information I've seen would seem to indicate that the 7900x fares poorly in the performance/watt department as well as the performance/dollar department when compared to the competition.

    BTW, what you're hemming and hawing about is the transition Intel has had to make from LCC to HCC wafers for the 12+ core chips they never intended to create...until they realized AMD was about to kick them in the ballz. This is why those chips are still vaporware and also why Intel keeps rushing things. Obviously none of this affects AMD, as they're using the same Zeppelin dies across all of their product lines. So yeah, cost *IS* linear...for one company, as I believe I've already demonstrated. That Intel cannot win a HEDT price war with AMD would seem to be self-evident at this point.

    Cost per core is irrelevant because...reasons. May I have your permission to use that when I go to buy a HEDT processor? "juanrga told me cost per core was irrelevant, so can haz 18 core processor for dual core price? puhlease?" LMAO

    Thanks for the laugh dude.



    Well, price per core is irrelevant; what matters in this market is multithreaded performance, as no one buys a 7900X or a 1950X to run games or other tasks that use 4 cores or fewer.

    I mean, a 64-core A53 CPU at 2GHz would still suck compared to a 7900X/1950X, so yes, simply having the most cores doesn't matter.
    Reply to jdwii
  42. Clearly it's not irrelevant, as both companies' mainstream HEDT processors are priced per core. ...unless you guys think it's just a coincidence that higher-core-count processors cost more as you go up a company's product stack. Name-dropping an ARM processor as a strawman while ignoring context doesn't change a thing.
    Reply to Solarion
  43. About threads and scaling: notice how the 7900X scales linearly up to 8 threads, not 10. A mesh effect, maybe? On the other hand, the 1900X scales linearly up to 16 threads.


    Frankly, no one should buy a $1,000, 16-core CPU just to play conventional games or run lightly threaded applications. It's the wrong tool for the job.
    Reply to aldaia
  44. Solarion said:
    Clearly it's not irrelevant, as both companies' mainstream HEDT processors are priced per core. ...unless you guys think it's just a coincidence that higher-core-count processors cost more as you go up a company's product stack. Name-dropping an ARM processor as a strawman while ignoring context doesn't change a thing.


    It is irrelevant because not all cores are identical (different performance) and because the relation performance/cost is not linear.
    Reply to juanrga
  45. At this point I think you're just being intentionally obtuse. More cores = a higher price in both companies' HEDT product stacks. That's a simple fact, and repeatedly saying the word "irrelevant" doesn't make it any less a fact. That performance doesn't scale perfectly with increased cores doesn't change a thing either, particularly as scaling is application-specific. The only thing "non-linear" about the pricing per core is on the Intel side...where they price gouge even moar than usual (per core) for their 16- and 18-core parts. It's pretty simple maths, but we can go over it if you're struggling.

    The bottom line is that spending more as you head up the Threadripper stack gives you more bang for the buck, whereas with Skylake-X, while you do get more cores, you also get to pay a higher "Intel" tax. That Intel has lower margins on their higher-end parts doesn't mean jack to me; I'm only interested in getting the most for my buck.

    An Intel 7900, 7920, or 7940 core carries an average premium of $37.48/core (+60%) over an AMD 1950X core.
    The 7960X carries a premium of $43.75/core (+70%).
    The 7980XE carries a premium of $48.62/core (+78%).

    While Intel cores do offer higher IPC, the advantage is nowhere near high enough to justify these premiums...imo. Part of Intel's advantage over Zen on the mainstream platform has come from higher clocks; on the HEDT platform, however, that advantage has diminished significantly, as there's simply not as great a disparity between AMD 19x0 and Intel 79x0 clock rates.
    Reply to Solarion
  46. Update 8/12/2017

    Added all current X399 motherboards to the original post.
    Reply to TechyInAZ
  47. Solarion said:
    More cores = a higher price in both companies' HEDT product stacks. That's a simple fact, and repeatedly saying the word "irrelevant" doesn't make it any less a fact.


    No one is disputing that trivial fact. What is being stated is that not all cores are the same. It doesn't cost the same to design and fabricate a Zen core as a SKL core. Getting 10% higher IPC atop an existing design doesn't cost 10% more. Getting 500MHz extra atop a 4GHz core doesn't cost 12% more. A core with 4.5GHz and 10% extra IPC will have 24% higher performance, but it could be 70% more costly to fabricate, because the functional relationships between cost and performance aren't linear. For instance, the relationship between the complexity T of a core and its IPC is given by

    T = b (IPC)^2

    where b is a parameter. You can see that doubling the IPC quadruples the complexity of the core, and with it the cost. Therefore the faster core will look worse from an (IPC/price) ratio perspective.

    Also, when many people talk about IPC they exclusively mean "IPC in serial x86 workloads" and ignore IPC in AVX workloads. Consider the 512-bit units in SKL cores. Those are 4x bigger than the units in RyZen (you need 4x more transistors), and those 4x bigger units require 4x wider datapaths and caches with 4x higher bandwidth, which again means 4x more transistors. All those extra transistors increase the cost of design, validation, and fabrication of the core, but you don't see any of that extra performance in action if all you run are legacy workloads like CineBench and C-Ray that don't use 512-bit AVX instructions. Again, the faster core looks worse from a (performance/price) ratio perspective.

    Also, AMD is using the same Zeppelin die for all its chips, from the lowest RyZen model to the top ThreadRipper model. This is not true for the X-series, where KBL-X uses one die, the SKL-X models up to 10C use another die, and the SKL-X models up to 18C use yet another die. Designing and fabricating an 18C die is different from designing and fabricating a 10C die. For instance, fabrication yields aren't linear: the bigger die has higher costs because it has more transistors, but on top of that there are extra costs because larger dies have worse yields. In the end, an 18C die may only provide 80% more performance than a 10C die, but it can cost twice as much. Therefore the faster CPU will look worse from a (price/core) ratio perspective.

    That is why using performance/cost or core/cost metrics to pretend that Intel is "gouging" whereas AMD is some kind of charity is invalid.
    Reply to juanrga
  48. Solarion said:
    Clearly it's not irrelevant as the mainstream HEDT processors are both priced/core. ...unless you guys think it's just a coincidence that higher core count processors cost more as you go up a company's product stack. Name dropping an arm processor as a strawman while ignoring context doesn't change a thing.



    The example does matter, because comparing who gives you the most cores is useless unless those cores are 100% the same. Ryzen is still a good 15-20% behind Skylake/Kaby Lake in IPC, and a lot of programs don't always use 16 cores / 32 threads, even programs like Adobe's.

    Not saying Ryzen isn't a better deal; it for sure is, and I think it basically makes X299 irrelevant except in a small number of cases.
    Reply to jdwii
  49. TechyInAZ said:
    Update 8/12/2017

    Added all current X399 motherboards to the original post.


    The OP says

    Quote:
    Memory Support:

    All Threadripper CPUs run a quad-channel memory configuration with support for up to 1TB of RAM (you can thank EPYC for that). Official memory frequency maxes out at 2667MHz, but that's just the official spec; if Ryzen 7 is any indicator, Threadripper should be able to hit 3200MHz and above quite easily.


    ThreadRipper is not RyZen. Overclocking RAM does very little.



    Ignoring the synthetic memory bandwidth measurement, because obviously this is going to be very sensitive to RAM speed, and taking the average of the remaining 22 benchmarks, the averages are

    2400 RAM: 1.01
    3200 RAM: 1.04

    Therefore the faster RAM provided only 3% higher performance on average, which is nothing. The reason? ThreadRipper has a power-throttling mechanism, and overclocking the RAM automatically reduces the turbo frequencies on the CPU to stay within the maximum power supported by the socket.
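    As a sanity check on those numbers, the relative gain works out like this (1.01 and 1.04 are the chart averages quoted above; the per-benchmark scores aren't reproduced here):

```python
# Average normalized scores across the 22 non-synthetic benchmarks
# (1.00 = the review's baseline configuration).
avg_2400 = 1.01
avg_3200 = 1.04
print(f"DDR4-3200 over DDR4-2400: {avg_3200 / avg_2400 - 1:.1%}")  # ~3.0%
```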
    Reply to juanrga