Chinese MTT S80 PCIe 5.0 GPU Closes in on GTX 1650

MTT S80
MTT S80 (Image credit: Moore Threads)

New driver optimizations have enabled Chinese vendor Moore Threads' MTT S80 gaming graphics card to rival Nvidia's GeForce GTX 1650 in 4K gaming. There may still be untapped performance, so don't be surprised if the MTT S80 eventually becomes a contender for the best graphics cards.

Armed with 4,096 MUSA (Moore Threads Unified System Architecture) cores and a 1.8 GHz boost clock, the MTT S80 is a PCIe 5.0 graphics card that pumps out 14.4 TFLOPS of FP32 performance. The 7nm graphics card also sports 128 Tensor cores and 16GB of 14 Gbps GDDR6 memory. Paired with a 256-bit memory interface, the MTT S80, which has a 255W TDP, delivers up to 448 GB/s of memory bandwidth. The specifications look decent on paper, minus the high TDP; the driver is what's holding the MTT S80's performance back. The MTT S80 reportedly leverages the Chunxiao silicon, based on the PowerVR architecture developed by Imagination Technologies, although the exact generation of the PowerVR GPU remains a mystery. Nonetheless, there's still a lot of optimization work required to get the graphics card to play nice with DirectX 11 titles.
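
Those headline figures follow directly from the raw specs. Below is a minimal sketch in Python of the arithmetic, assuming the usual two FP32 operations per core per clock (one fused multiply-add); under that assumption, a flat 1.8 GHz clock works out to roughly 14.7 TFLOPS, so the official 14.4 TFLOPS figure implies a boost clock slightly under 1.8 GHz.

```python
# Back-of-the-envelope check of the MTT S80's headline figures, using the
# specifications quoted above. The 2-FLOPs-per-core-per-clock factor (one
# fused multiply-add) is an assumption, not a confirmed detail of MUSA.

cores = 4096            # MUSA shader cores
boost_ghz = 1.8         # boost clock in GHz
bus_width_bits = 256    # memory interface width
vram_gbps = 14          # GDDR6 per-pin data rate in Gbps

tflops_fp32 = cores * 2 * boost_ghz / 1000
print(f"FP32 throughput: {tflops_fp32:.1f} TFLOPS")   # ~14.7 TFLOPS at a flat 1.8 GHz

bandwidth_gbs = bus_width_bits / 8 * vram_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 448 GB/s
```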

Early benchmarks of the MTT S80 weren't very compelling. The Chinese homebrew graphics card lagged behind archaic mainstream performers, such as the GeForce GT 1030 in DirectX 9 games and the GeForce GTX 1050 Ti in DirectX 11 games. Moore Threads recently deployed a new driver update, which the company claims improves the MTT S80's performance by up to 40% in some titles. Chinese news outlet Expreview has put the MTT S80 through its paces with the latest driver (230.40.0.2), and the progress is pretty remarkable if you look at it from an unbiased standpoint.

Expreview's testbed consists of a Core i7-13700K processor, an Asus ROG Maximus Z790 Dark Hero motherboard, and 32GB (2x16GB) of G.Skill Trident Z5 RGB DDR5-7200 C34 memory, running the 64-bit Windows 10 21H2 operating system. The GeForce GTX 1650 used in the comparison is EVGA's GeForce GTX 1650 XC, but Expreview didn't specify whether it was the Black Gaming or Ultra Gaming variant. The publication ran a mix of synthetic and real-world tests, but we'll concentrate only on the latter, which is more meaningful for gamers.

| Graphics Card | MTT S80 | GeForce GTX 1650 |
|---|---|---|
| Architecture | Chunxiao | TU117 |
| Process Technology | TSMC 7nm | TSMC 12nm |
| Transistors (Billion) | ? | 4.7 |
| Die size (mm^2) | ? | 200 |
| GPU Cores (Shaders) | 4,096 | 896 |
| Tensor / AI Cores | 128 | N/A |
| Boost Clock (MHz) | 1,800 | 1,665 |
| VRAM Speed (Gbps) | 14 | 8 |
| VRAM | 16GB GDDR6 | 4GB GDDR5 |
| VRAM Bus Width (bits) | 256 | 128 |
| TFLOPS FP32 (Boost) | 14.4 | 2.9 |
| Bandwidth (GB/s) | 448 | 128 |
| TDP (watts) | 255 | 75 |
| Chinese Pricing | $164 | $150 |

The overall picture is that the GeForce GTX 1650 was faster at 1080p (1920x1080) and 2K (2560x1440), while the MTT S80 excelled at 4K (3840x2160). That makes sense, since higher resolutions lean harder on VRAM capacity and bandwidth, giving the MTT S80 the upper hand: it has four times the memory of the GeForce GTX 1650 and 3.5 times the memory bandwidth. The GeForce GTX 1650 is, however, far more power-efficient than the MTT S80.
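
For reference, here is a trivial sketch in Python of how those capacity and bandwidth ratios fall out of the spec table above.

```python
# Memory capacity and bandwidth ratios between the two cards,
# taken straight from the spec table above.
s80_vram_gb, gtx1650_vram_gb = 16, 4
s80_bw_gbs, gtx1650_bw_gbs = 448, 128

print(s80_vram_gb / gtx1650_vram_gb)  # 4.0 -> four times the memory
print(s80_bw_gbs / gtx1650_bw_gbs)    # 3.5 -> 3.5 times the bandwidth
```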

In Final Fantasy XIV, the GeForce GTX 1650 outperformed the MTT S80 by 37% at 1080p and 9% at 2K. However, the MTT S80 got its revenge at 4K, beating the GeForce GTX 1650 by 15%. In League of Legends, the GeForce GTX 1650 delivered 49% and 45% higher frame rates at 1080p and 2K, respectively, while the MTT S80 pumped out 15% higher frame rates at 4K.

The pattern was similar in Valorant: the GeForce GTX 1650 exhibited an 80% lead at 1080p and 17% at 2K, while the MTT S80 was 27% faster at 4K. Finally, in Assetto Corsa the performance margin was 53% at 1080p and 24% at 2K in favor of the GeForce GTX 1650, with the MTT S80 leading at 4K by a 9% delta.

Moore Threads MTT S80 Benchmarks

| Game (Resolution) | GeForce GTX 1650 (FPS) | MTT S80 (FPS) |
|---|---|---|
| Final Fantasy XIV (1080p) | 66.4 | 48.5 |
| Final Fantasy XIV (2K) | 40.1 | 36.9 |
| Final Fantasy XIV (4K) | 18.4 | 21.1 |
| League of Legends (1080p) | 430.4 | 288.1 |
| League of Legends (2K) | 416.8 | 287.3 |
| League of Legends (4K) | 242.2 | 277.5 |
| Valorant (1080p) | 227.2 | 125.9 |
| Valorant (2K) | 138.3 | 118.6 |
| Valorant (4K) | 66.9 | 84.8 |
| Assetto Corsa (1080p) | 127 | 83 |
| Assetto Corsa (2K) | 89 | 72 |
| Assetto Corsa (4K) | 47 | 51 |
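
The percentage deltas quoted above follow directly from these frame rates. Here is a minimal sketch in Python of the calculation, using the Final Fantasy XIV results as an example; the same formula reproduces the figures for the other three games.

```python
# Percentage lead of the faster card over the slower one, per resolution.
# Frame rates are the Final Fantasy XIV numbers from the table above:
# (GeForce GTX 1650, MTT S80).
results = {
    "1080p": (66.4, 48.5),
    "2K":    (40.1, 36.9),
    "4K":    (18.4, 21.1),
}

for resolution, (gtx_1650, mtt_s80) in results.items():
    lead = (max(gtx_1650, mtt_s80) / min(gtx_1650, mtt_s80) - 1) * 100
    winner = "GTX 1650" if gtx_1650 > mtt_s80 else "MTT S80"
    print(f"{resolution}: {winner} leads by {lead:.0f}%")

# Output: GTX 1650 leads by 37% (1080p) and 9% (2K); MTT S80 leads by 15% (4K).
```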

The MTT S80 is another example of how a mediocre driver can hold back a good product. It happens even to more prominent manufacturers, such as Intel with its Arc Alchemist graphics cards. The chipmaker continues to improve performance with every new driver release, with the latest one claiming up to a 119% uplift.

It's been a year since the MTT S80 hit the Chinese retail market, and the 7nm gaming graphics card has gone through a lot in that time. Moore Threads has released 12 driver updates, nine of them in the critical category. The software engineers at Moore Threads aren't miracle workers, but their efforts have not been in vain: the improvement from the May driver (221.31) to the October driver (230.40.0.2) was up to 45% in some games.

Theoretically, the MTT S80 could compete with the GeForce RTX 3060. However, the Chinese graphics card still has a long way to go, since it can't even consistently beat the GeForce GTX 1650, and there's a sizeable gap between the GeForce GTX 1650 and the GeForce RTX 3060. The MTT S80 has recently dropped to $164, but it's still more expensive than the GeForce GTX 1650, which starts at $150 in the Chinese market. While we don't expect the GeForce GTX 1650's performance to improve any further, the big unknown with the MTT S80 is how much performance is still left in the tank.

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • TCA_ChinChin
    When you start from basically nothing, anything is a decent improvement. That's the feeling I get from these drivers. They're still sorely lacking compared to any modern AMD or NVIDIA arch in terms of raw silicon and hardware-to-software performance.

    They are using a more advanced TSMC node, with far more cores (even if they're not one to one), significantly higher memory performance in all metrics, and higher theoretical FLOPS, while using more than 3 times the power. How can it not surpass the 1650? Are drivers really the main culprit, or is some part of the hardware design really inefficient? I'm kind of confused here. They're based roughly off of the PowerVR architecture, so they're not even starting from scratch. Is it because it's built from a PowerVR arch?

    Indigenous Chinese CPUs/memory seem to be doing a lot better than what their GPUs are capable of. Huawei's and YMTC's latest chips are certainly more competitive with their global counterparts than this GPU. What's different in this case?
    Reply
  • cheesecake1116
    So, Moore Threads does use IMG IP.
    Their driver reports as the IMG proprietary driver in Linux.

    View: https://twitter.com/never_released/status/1593583734850215936

    VkPhysicalDeviceDriverProperties:
    ---------------------------------
    driverID = DRIVER_ID_IMAGINATION_PROPRIETARY
    driverName = PowerVR GEN1 Vulkan Driver
    driverInfo = 1.0@0
    conformanceVersion = 1.3.1.0


    So that is where it comes from, because if it were their own IP, there would be no reason to use IMG drivers.
    I suspect that it uses IMG B Series IP, specifically the BXT-32-1024 MC4 configuration.
    And before you ask, yes I do own one of these cards and it has been nothing but a pain to get working so that we can look at the microarchitecture and compare it to Intel, AMD, and Nvidia.
    Reply
  • bit_user
    TCA_ChinChin said:
    They are using a more advanced TSMC node, with far more cores (even if they're not one to one), significantly higher memory performance in all metrics, and higher theoretical FLOPS, while using more than 3 times the power. How can it not surpass the 1650? Are drivers really the main culprit, or is some part of the hardware design really inefficient? I'm kind of confused here.
    I suspect the limiting factor might be less the drivers than the hardware, at this point. You really shouldn't take the hardware's raw specs at face value, because there can be chip bugs that tank performance when the driver has to work around them.

    TCA_ChinChin said:
    Indigenous Chinese CPUs/memory seem to be doing a lot better than what their GPUs are capable of. Huawei's and YMTC's latest chips are certainly more competitive with their global counterparts than this GPU. What's different in this case?
    It's their first real generation of gaming GPUs, right? If you look at those other examples, none of them are 1st gen. I'll bet the next Moore Threads GPUs will be improved by leaps and bounds from this learning experience. Their biggest failing was probably overconfidence. Experienced engineers will know that it takes time to learn important lessons and iteratively refine designs & architectures.
    Reply
  • ManDaddio
    Yes, but does it support mesh shaders? Can it run Alan Wake 2? That's the real question. 🤠😎🤓🧐🤔😊
    Reply
  • bit_user
    ManDaddio said:
    Yes, but does it support mesh shaders?
    Maybe the hardware can. If anyone really cared, you'd want to look into whether/when Imagination's PowerVR GPUs added such support. Then, try to figure out how that generation of IP aligns with what's used in these GPUs.
    Reply
  • notmrdude1
    It is a Linux workstation card. So all the talk about gaming performance is totally meaningless.
    Reply
  • TJ Hooker
    notmrdude1 said:
    It is a Linux workstation card. So all the talk about gaming performance is totally meaningless.
    They explicitly advertise it as a gaming card with Windows support. Some snippets from the product page (via Google translate):

    "enjoy a smooth e-sports experience"
    "The MTT S80 gaming graphics card . MTT S80 not only provides gamers with powerful 3D rendering capabilities "
    "Play games smoothly in Windows DirectX games, bringing a smooth operating experience at 4K resolution"
    etc.

    https://www.mthreads.com/product/S80
    Reply
  • notmrdude1
    TJ Hooker said:
    They explicitly advertise it as a gaming card with Windows support. Some snippets from the product page (via Google translate):

    "enjoy a smooth e-sports experience"
    "The MTT S80 gaming graphics card . MTT S80 not only provides gamers with powerful 3D rendering capabilities "
    "Play games smoothly in Windows DirectX games, bringing a smooth operating experience at 4K resolution"
    etc.

    https://www.mthreads.com/product/S80
    The current price is $650 (not $164). It is competing with workstation cards such as the $1,400 Nvidia A4000, which has very similar specifications. It is not competing with low- or mid-range gaming cards.

    If you understand the hardware, you will see it is specifically optimised for Linux, video processing, and ML rather than Windows gaming.

    - compatible with Hygon or Kunmeng ARM server motherboards (Linux only)
    - Custom Linux distro
    - 3x Display Port outputs
    - 128 tensor cores
    - 256 bit data bus
    - CUDA compatibility.
    - PCIE 5
    - 8K video
    - 16GB of VRAM
    - High FP32 and INT8 performance

    The fact they claim it is a gaming GPU is totally irrelevant.
    Reply
  • bit_user
    notmrdude1 said:
    The current price is $650 (not $164). It is competing with workstation cards such as the $1,400 Nvidia A4000, which has very similar specifications. It is not competing with low- or mid-range gaming cards.

    If you understand the hardware, you will see it is specifically optimised for Linux, video processing, and ML rather than Windows gaming.
    Then show us some benchmarks of it performing those sorts of tasks.
    Reply