AMD Ryzen Threadripper 1920X Review

AMD’s Ryzen Threadripper 1950X put a feather in the cap of the company's high-end desktop aspirations, and the new line-up challenges Intel's best efforts. But, as usual, some of the best value in AMD's product stack is found in the mid-range models. Ryzen Threadripper 1920X comes armed with 12 physical cores and SMT, enabling 24 concurrent threads fed by 38MB of cache, a quad-channel memory controller, and 64 lanes of PCIe. All of that costs $800, dramatically undercutting the 10-core Core i9-7900X.

Based on the back-and-forth we've witnessed this year, it appears the Ryzen family of CPUs may have caught Intel off-balance. AMD's siren call to enthusiasts includes lower prices, more cores, less segmentation, soldered heat spreaders, less expensive motherboards, and a longer commitment to each platform.

Intel does have pricier Skylake-X options available, but they sag under the weight of deliberate segmentation that fuses off native features on the cheaper models. Don't count Intel out, though; its beefiest Skylake-X chips are still forthcoming, along with a salvo of mainstream Coffee Lake CPUs to rival Ryzen 7, 5, and 3.

AMD has an aggressive roadmap for improving the Zen architecture and transitioning to smaller nodes, so the company should remain a force to be reckoned with. Ryzen Threadripper 1920X is a great start: based on what we saw from the 1950X we already reviewed, this processor should perform well at a reasonable price point, and our sample delivered the highest overclocking ceiling we've seen from a Ryzen processor.

Meet Ryzen Threadripper 1920X

AMD designed its Threadripper processors for anyone able to utilize lots of cores and tons of PCIe connectivity. Think content creators, heavy multi-taskers, and software developers.

The 12C/24T Threadripper 1920X features a 3.5 GHz base clock, which is just 100 MHz higher than the 16C/32T 1950X's. Surprisingly, the two chips share the same 3.7 GHz boost frequency for heavily threaded workloads and a four-core 4 GHz setting for less taxing tasks. If your cooler is robust enough, both processors also enable a four-core 4.2 GHz XFR ceiling.
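That stepped clock behavior can be summarized as a small lookup, assuming only the rates quoted above (the exact core-count thresholds AMD uses are more nuanced than this sketch):

```python
# Approximate boost behavior for the 1920X/1950X, per the clocks quoted above.
# The 4-core threshold is a simplification for illustration only.
def expected_clock_ghz(active_cores: int, xfr_headroom: bool = False) -> float:
    """Return the rough clock (GHz) for a given number of busy cores."""
    if active_cores <= 4:
        # Light loads: 4 GHz boost, or 4.2 GHz when XFR headroom is available
        return 4.2 if xfr_headroom else 4.0
    # Heavily threaded loads fall back to the 3.7 GHz all-core boost
    return 3.7

print(expected_clock_ghz(2))        # -> 4.0
print(expected_clock_ghz(2, True))  # -> 4.2
print(expected_clock_ghz(24))       # -> 3.7
```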

Like all of AMD's Ryzen processors, the 1920X is built from quad-core complexes (CCXes), with two of them per Zeppelin die. Two Zeppelin dies, tied together by the Infinity Fabric interconnect into a multi-chip module, come together to create Threadripper CPUs wielding up to 16 physical cores. AMD creates the 12-core 1920X by disabling one core per CCX, leaving six cores per die (3+3).

The disabled cores serve as dark silicon, which absorbs heat dissipated by the active circuitry. This, coupled with AMD's use of a soldered heat spreader and aggressive binning (the company claims to use the top 5% of Zeppelin dies), leads to impressive overclocking headroom from our 1920X sample. We maintained a 4.1 GHz overclock, the highest achieved with any Ryzen CPU in our U.S. lab, using a relatively tame 1.42V. 

Model                 | Threadripper 1950X | Core i9-7900X      | Threadripper 1920X | Core i7-7820X      | Threadripper 1900X
Socket / Chipset      | TR4 / X399         | LGA2066 / X299     | TR4 / X399         | LGA2066 / X299     | TR4 / X399
Base Frequency (GHz)  | 3.4                | 3.3                | 3.5                | 3.6                | 3.8
Boost Frequency (GHz) | 4.0 (4.2 XFR)      | 4.3 / 4.5 (TB 3.0) | 4.0 (4.2 XFR)      | 4.3 / 4.5 (TB 3.0) | 4.0 (4.2 XFR)
Cache (L2+L3)         | 40MB               | 23.75MB            | 38MB               | 19MB               | 20MB
Memory Support        | DDR4-2666          | DDR4-2666          | DDR4-2666          | DDR4-2666          | DDR4-2666
Memory Controller     | Quad-Channel       | Quad-Channel       | Quad-Channel       | Quad-Channel       | Quad-Channel
Unlocked Multiplier   | Yes                | Yes                | Yes                | Yes                | Yes
PCIe Lanes            | 64                 | 44                 | 64                 | 28                 | 64

Ryzen Threadripper 1920X slots into the large price gap between Intel's Core i9-7900X and $600 Core i7-7820X. AMD's solution is all the more interesting because Intel cuts PCIe connectivity from 44 lanes to 28 as you drop down to the Core i7. In comparison, the Threadripper chip boasts 64 lanes, though four are reserved for AMD's chipset. The extra I/O comes in handy for multi-GPU configurations, large PCIe-based storage arrays, and streamers using dedicated capture cards.
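The lane budget works out as simple arithmetic. A quick sketch (the device lane counts below are an illustrative build of our own, not a fixed allocation):

```python
# Threadripper exposes 64 PCIe 3.0 lanes; 4 are reserved for the X399 chipset.
TOTAL_LANES = 64
CHIPSET_LANES = 4
usable = TOTAL_LANES - CHIPSET_LANES  # 60 lanes left for slots and storage

# Hypothetical build: two x16 GPUs, three x4 NVMe drives, one x4 capture card.
devices = {"gpu0": 16, "gpu1": 16, "nvme_x3": 4 * 3, "capture": 4}
remaining = usable - sum(devices.values())
print(usable, remaining)  # -> 60 12
```

Even a fully loaded workstation like this leaves lanes to spare, which is the point of the platform; the 28-lane Core i7-7820X could not host the same configuration natively.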

The 1920X and 1950X both feature 32MB of L3 cache sliced into 16MB per Zeppelin die. You do lose 2MB of L2 cache to the four disabled cores, leaving 512KB per core, or 6MB across the MCM, active. Despite the disabled cores and cache, AMD still rates its 1920X with a 180W TDP.
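The cache figures quoted above reconcile as straightforward arithmetic:

```python
# Reconciling the 1920X's 38MB advertised cache figure.
cores = 12
l2_per_core_kb = 512
l3_per_die_mb = 16
dies = 2

l2_total_mb = cores * l2_per_core_kb / 1024  # 6.0 MB of L2 across the MCM
l3_total_mb = dies * l3_per_die_mb           # 32 MB of L3 (16MB per die)
print(l2_total_mb + l3_total_mb)             # -> 38.0

# The four disabled cores take their private L2 with them:
disabled_l2_mb = 4 * l2_per_core_kb / 1024
print(disabled_l2_mb)                        # -> 2.0 MB lost vs. the 1950X
```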

Enthusiasts have to love that AMD uses indium solder instead of the thermal paste Intel employs. Threadripper's large IHS helps with heat dissipation, too, and the chip generally delivers solid thermal performance. We haven't encountered any serious heat concerns with the Threadripper models, which we can't say for Intel's Skylake-X CPUs.

Ryzen Threadripper Memory Support                        | MT/s
Quad-Channel / Dual-Rank / Two DIMMs Per Channel (8)     | 1866
Quad-Channel / Single-Rank / Two DIMMs Per Channel (8)   | 2133
Quad-Channel / Dual-Rank / One DIMM Per Channel (4)      | 2400
Quad-Channel / Single-Rank / One DIMM Per Channel (4)    | 2666

Threadripper features independent dual-channel memory controllers, one paired with each die, that combine to provide quad-channel support with varying data transfer rates based upon memory type and DIMMs per channel. The platform supports ECC memory and currently tops out at 256GB of DDR4, though the controller can address up to 2TB as higher-capacity DIMMs become available.

The distributed memory arrangement, along with the latency imposed by traversing the fabric between two separate dies, creates unique challenges for applications sensitive to timing. AMD has also discovered that certain games won't fire up with all of Threadripper's cores enabled. So the company implemented a pair of toggles: a switch between UMA and NUMA modes to mitigate memory latency concerns, and a Legacy Compatibility Mode that disables the cores on one die to solve those compatibility issues.
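On Linux, you can approximate the effect of confining a latency-sensitive program to one die from user space, without rebooting into a different BIOS mode, by setting CPU affinity. This is our own sketch, not AMD's mechanism, and it assumes a CPU-ID layout (physical cores 0-11 enumerated first, SMT siblings after) that varies by BIOS and OS:

```python
import os

def die0_cpus(cores_per_die: int = 6, smt: bool = True,
              total_cores: int = 12) -> set:
    """CPU IDs for die 0 of a 1920X, assuming the OS enumerates physical
    cores first (0..total_cores-1) and SMT siblings after them."""
    physical = set(range(cores_per_die))
    siblings = {c + total_cores for c in physical} if smt else set()
    return physical | siblings

cpus = die0_cpus()
print(sorted(cpus))  # -> [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]

# Pin the current process to die 0 (Linux-only call; only attempt it on a
# host that actually has all 24 logical CPUs):
if hasattr(os, "sched_setaffinity") and (os.cpu_count() or 0) >= 24:
    os.sched_setaffinity(0, cpus)
```

Tools like numactl or Process Lasso accomplish the same thing with more control over memory placement.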

Selectable Creator and Game modes provide users with performance profiles tailored for either content creation or gaming. We covered how the underlying architecture responds to these modes in our AMD Ryzen Threadripper 1950X review.

We are starting to see dedicated coolers trickle out from leading vendors for AMD's massive 4094-pin TR4 socket. In the interim, AMD also includes an Asetek bracket with all Threadripper models to provide widespread compatibility with existing closed-loop coolers from several vendors.


MORE: Intel & AMD Processor Hierarchy

MORE: All CPUs Content

Comment from the forums
  • Aldain
    Great review as always, but on the power consumption front: given that the 1950X has six more cores and the 1920X has two more, they are more power efficient than the 7900X in every regard relative to the high stock clocks of the TR
  • derekullo
    "Ryzen Threadripper 1920X comes arms with 12 physical cores and SMT"


    Judging from Threadripper 1950X versus the Threadripper 1900X we can infer that a difference of 400 megahertz is worth the tdp of 16 whole threads.

    I never realized HT / SMT was that efficient or is AMD holding something back with the Threadripper 1900x?
  • jeremyj_83
    Your sister site Anandtech did a retest of Threadripper a while back and found that their original form of game mode was more effective than the one supplied by AMD. What they had done is disable SMT and have a 16c/16t CPU instead of the 8c/16t that AMD's game mode does.
  • Wisecracker
    Hats-off to AMD and Intel. The quantity (and quality) of processing power is simply amazing these days. Long gone are the times of taking days off (literally) for "rasterizing and rendering" of work flows

    ...or is AMD holding something back with the Threadripper 1900x?

    I think the better question is, "Where is AMD going from here?"

    The first revision Socket SP3r2/TR4 mobos are simply amazing, and AMD has traditionally maintained (and improved!) their high-end stuff. I can't wait to see how they use those 4094 landings and massive bandwidth over the next few years. The next iteration of the 'Ripper already has me salivating :ouch:

    I'll take 4X Summit Ridge 'glued' together, please !!
  • RomeoReject
    This was a great article. While there's no way in hell I'll ever be able to afford something this high-end, it's cool to see AMD trading punches once again.
  • ibjeepr
    I'm confused.
    "We maintained a 4.1 GHz overclock"
    Per chart "Threadripper 1920X - Boost Frequency (GHz) 4.0 (4.2 XFR)"

    So you couldn't get the XFR to 4.2?
    If I understand correctly manually overclocking disables XFR.
    So your chip was just a lotto loser at 4.1 or am I missing something?

    EDIT: Oh, you mean 4.1 All core OC I bet.
  • sion126
    actually the view should be that you cannot afford not to go this way. You save a lot of time with gear like this; my two 1950X rigs are killing my workload like no tomorrow... pretty impressive... for just gaming, maybe... but then again... it's a solid investment that will run a long time...
  • redgarl
    Now this at 7nm...
  • AgentLozen
    redgarl said:

    Now this at 7nm...

    A big die shrink like that would be helpful but I think that Ryzen suffers from other architectural limitations.

    Ryzen has a clock speed ceiling of roughly 4.2 GHz. It's difficult to get it past there regardless of your cooling method.
    Also, Ryzen experiences nasty latency when data is being shared over the Infinity Fabric. Highly threaded work loads are being artificially limited when passing between dies.
    Lastly, the Ryzen's IPC lags behind Intel's a little bit. Coupled with the relatively low clock speed ceiling, Ryzen isn't the most ideal CPU for gaming (it holds up well in higher resolutions to be fair).

    Threadripper and Ryzen only look as good as they do because Intel hasn't focused on improving their desktop chips in the last few years. Imagine if Ivy Bridge wasn't a minor upgrade. If Haswell, Broadwell, Skylake, and Kabylake weren't tiny 5% improvements. What if Skylake X wasn't a concentrated fiery inferno? Zen wouldn't be a big deal if all of Intel's latest chips were as impressive as the Core 2 Duo was back in 2006.

    AMD has done an amazing job transitioning from crappy Bulldozer to Zen. They're in a position to really put the hurt on Intel but they can't lose the momentum they've built. If AMD were to address all of these problems in their next architecture update, they would really have a monster on their hands.
  • Billy Gates
    Mediocre gaming performance. Even worse than the already not-so-good mainstream Ryzen's. Welp you get what you pay for.
  • redgarl
    Sure, Billy Gates: at 1080p, with an $800 CPU and an $800 GPU made by a competitor... sure...

    At 1440p and 2160p the gaming performance is the same, while your multi-threading performance is still better than the overpriced Intel chips.
  • Ditt44
    <Moderator edit for rudeness>

    Who buys this class of CPU and bases that choice on how well it 'games'? The performance is not 'mediocre' either. It's competitive. And as for Ryzen, you do get what you pay for: Exceptional value to performance. Back under the bridge with you, please.
  • Billy Gates
    Who pays that much for a CPU only for it to be a one-trick-pony?
  • evarty
    Thank you for the review; however, I hate to be that guy, but it's 2017 and still no 1440p and 2160p benchmarks? I get that a large portion of people are still on 1080p, and I appreciate those results, but a large portion of people are also not gonna be using Threadripper at 1080p, yet you're bringing us those benchmarks.

    I would think with that same conclusion that 1440p(for sure) and 2160p(maybe) benchmarks should be included to say the least.
  • Solarion
    "Of course, switching into Game mode might enable higher performance in some situations, but we don't think professional users will tolerate constant reboots to toggle back and forth."

    A professional would simply find a more palatable solution. For instance one could simply use a program like Bitsum's process lasso to force a troublesome application to run on specific cores.
  • sion126
    Well, I would challenge anyone who says the multi-threading is limited on Threadripper. I compile a lot of code, and the more cores and threads I can spread over the CPU, the faster my stuff gets done. Even the Ryzen 1800X I was using before I moved to Threadripper was doing better than any i7 or i9 I tested with.

    There are for sure some annoying latency issues that seem to freeze out anything else while it loads the CPU and executes a job, but for me it is much more effective at multi core multi thread loads simply because I have more cores and more threads.

    And the thermal stuff is much better than Intel, I had MAJOR issues with the i9 packages I tried before I went this way.

    I am sure for most people's average use cases Intel will be better, but for mine it is not, and I get a price bonus to boot!! :)

    I mitigated by leaving one core for the rest of the system to process system requests, seems to be working well for me.
  • spdragoo
    Anonymous said:
    Thank you for the review; however, I hate to be that guy, but it's 2017 and still no 1440p and 2160p benchmarks? I get that a large portion of people are still on 1080p, and I appreciate those results, but a large portion of people are also not gonna be using Threadripper at 1080p, yet you're bringing us those benchmarks.

    I would think with that same conclusion that 1440p(for sure) and 2160p(maybe) benchmarks should be included to say the least.

    I know it's been said multiple times before, to multiple people, but I guess it needs to be said again:

    • At 2160p/4K (& even 1440p) resolutions, performance is limited by the GPU. Even with monsters like the GTX 1080TI & GTX Titan Xp, there are games out there that are still limited by the GPU's performance even for 60Hz monitors, let alone 144Hz monitors.
    • At 1080p & lower resolutions, you have the potential to be limited by either the GPU or the CPU. The former, however, only happens if your GPU just isn't powerful enough -- i.e. trying to use a GT 710 to play Grand Theft Auto V, or hoping that a Radeon HD 7750 will do well in The Witcher III.
    • The reason is because the amount of data that's processed by the CPU stays the same regardless of the monitor resolution/refresh rate. Whether you're rocking a 4K display or plinking along with an old 640x480 CRT screen, the same data has to be crunched on each Civilization VI turn, the same number of enemies have to be tracked on PUBG, the same number of potential runover targets -- I mean pedestrians -- have to be accounted for in GTA V, etc.

    If you'll consider it as proof, consider what happened when [H]ardOCP tested the Ryzen 7 & a Kaby Lake Core i7 against an old Sandy Bridge Core i7 with a GTX 1080TI. For the games they tested, while they were able to see small differences in performance at 4K between the 3 CPUs, the key word was "small"; we're talking 1-5FPS margins on average FPS for most of them, & (except for maybe 1 game) percentage margins of only 5%. Maybe some people might call that "definitive" differences; me, I call that "the performance is too close to call a real winner". And their margins at 1080p were the same or lower, because instead of using that powerful GTX 1080TI to limit any potential GPU bottlenecks, they dropped down to the GTX 1060 instead... a fine GPU for 1080p gaming, but with it you're unable to tell whether it's your CPU or GPU that's limiting your performance.
  • yyk71200
    "Who pays that much for a CPU only for it to be a one-trick-pony?"
    Answer: Intel fanboys who do nothing but gaming.
  • littleleo
    Love this stuff and the review was decent too.
  • caustin582
    So unless you're mainly concerned with Blender, Handbrake, or advanced math/science programs, just get a 7700K.