New driver optimizations have enabled Chinese vendor Moore Threads' MTT S80 gaming graphics card to rival Nvidia's GeForce GTX 1650 at 4K gaming. There may be some untapped performance, so don't be surprised if the MTT S80 eventually becomes a contender for the best graphics cards.
Armed with 4,096 MUSA (Moore Threads Unified System Architecture) cores with a 1.8 GHz boost clock, the MTT S80 is a PCIe 5.0 graphics card that pumps out 14.4 TFLOPS of FP32 performance. The 7nm graphics card also sports 128 Tensor cores and 16GB of 14 Gbps GDDR6 memory. With a 256-bit memory interface, the MTT S80, which has a 255W TDP, delivers up to 448 GB/s of memory bandwidth. The specifications look decent on paper, minus the high TDP; however, the driver is what's hindering the MTT S80's performance. The MTT S80 reportedly leverages the Chunxiao silicon, based on the PowerVR architecture developed by Imagination Technologies. The exact generation of the PowerVR GPU remains a mystery. Nonetheless, there's a lot of optimization work left to get the graphics card to play nice with DirectX 11 titles.
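The headline figures can be sanity-checked with quick arithmetic. A sketch of the usual back-of-envelope math, assuming the conventional two FP32 operations (one FMA) per core per clock:

```python
# Back-of-envelope check of the MTT S80's quoted specs.
# Assumption: 2 FP32 ops per core per clock, the common convention for FMA-capable shaders.

cores = 4096
boost_ghz = 1.8
fp32_tflops = cores * 2 * boost_ghz / 1000
print(f"FP32 throughput: {fp32_tflops:.2f} TFLOPS")  # ~14.75 TFLOPS

bus_width_bits = 256
mem_gbps = 14  # per-pin GDDR6 data rate
bandwidth_gbs = bus_width_bits / 8 * mem_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 448 GB/s
```

The bandwidth result matches the quoted 448 GB/s exactly, while the compute result lands slightly above the quoted 14.4 TFLOPS, suggesting the rated figure assumes a marginally lower sustained clock.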
Early benchmarks of the MTT S80 weren't very compelling. The Chinese homebrew graphics card lagged behind archaic mainstream performers, such as the GT 1030 in DirectX 9 games and the GTX 1050 Ti in DirectX 11 games. Moore Threads recently deployed a new driver update, which the company claims improves the MTT S80's performance by up to 40% in some titles. Chinese news outlet Expreview has put the MTT S80 through its paces with the latest driver (18.104.22.168), and the progress is pretty remarkable if you look at it from an unbiased standpoint.
Expreview's testbed consists of a Core i7-13700K processor, an Asus ROG Maximus Z790 Dark Hero motherboard, and 32GB (2x16GB) of G.Skill Trident Z5 RGB DDR5-7200 C34 memory, running Windows 10 21H2 64-bit. The GeForce GTX 1650 used in the comparison is the GeForce GTX 1650 XC from EVGA, but Expreview didn't specify whether it was the Black Gaming or Ultra Gaming variant. The publication used a mixed bag of synthetic and real-world testing, but we'll only concentrate on the latter, which is more meaningful for gamers.
| Graphics Card | MTT S80 | GeForce GTX 1650 |
| --- | --- | --- |
| Process Technology | TSMC 7nm | TSMC 12nm |
| Die Size (mm^2) | ? | 200 |
| GPU Cores (Shaders) | 4,096 | 896 |
| Tensor / AI Cores | 128 | N/A |
| Boost Clock (MHz) | 1,800 | 1,665 |
| VRAM Speed (Gbps) | 14 | 8 |
| VRAM | 16GB GDDR6 | 4GB GDDR5 |
| VRAM Bus Width (bits) | 256 | 128 |
| TFLOPS FP32 (Boost) | 14.4 | 2.9 |
The short version is that the GeForce GTX 1650 was faster at 1080p (1920x1080) and 2K (2560x1440) resolutions, while the MTT S80 excelled at 4K (3840x2160). It makes sense, since higher resolutions place greater demands on VRAM capacity and bandwidth, giving the MTT S80 the upper hand. The MTT S80 has 4X the memory of the GeForce GTX 1650 and 3.5X the memory bandwidth. The GeForce GTX 1650 is far more power-efficient than the MTT S80, however.
In Final Fantasy XIV, the GeForce GTX 1650 outperformed the MTT S80 by 37% in 1080p and 9% in 2K. However, the MTT S80 got its revenge at 4K, beating the GeForce GTX 1650 by 15%. The GeForce GTX 1650 delivered 49% and 45% higher frame rates at 1080p and 2K, respectively, in League of Legends. Meanwhile, the MTT S80 pumped 15% better frame rates at 4K.
We saw similar margins with Valorant. The GeForce GTX 1650 exhibited an 80% lead at 1080p and 17% at 2K. Conversely, the MTT S80 was 27% better at 4K. Finally, the performance margin was 53% at 1080p and 24% at 2K in favor of the GeForce GTX 1650 in Assetto Corsa. The MTT S80 led at 4K with a 9% performance delta.
Moore Threads MTT S80 Benchmarks
| Graphics Card | Final Fantasy XIV (1080p) | Final Fantasy XIV (2K) | Final Fantasy XIV (4K) | League of Legends (1080p) | League of Legends (2K) | League of Legends (4K) | Valorant (1080p) | Valorant (2K) | Valorant (4K) | Assetto Corsa (1080p) | Assetto Corsa (2K) | Assetto Corsa (4K) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 1650 | 66.4 | 40.1 | 18.4 | 430.4 | 416.8 | 242.2 | 227.2 | 138.3 | 66.9 | 127 | 89 | 47 |
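The table above preserves only the GeForce GTX 1650 row, but approximate MTT S80 frame rates can be back-computed from the percentage deltas quoted in the text. A rough reconstruction, not measured data (game labels are abbreviated for brevity):

```python
# Back-calculate approximate MTT S80 fps from the GTX 1650 results and the
# percentage deltas quoted in the article. Positive delta = GTX 1650 lead,
# negative delta = MTT S80 lead.
gtx1650_fps = {"FFXIV 1080p": 66.4, "FFXIV 4K": 18.4,
               "LoL 1080p": 430.4, "Valorant 1080p": 227.2}
deltas = {"FFXIV 1080p": 0.37,     # GTX 1650 37% faster
          "FFXIV 4K": -0.15,       # MTT S80 15% faster
          "LoL 1080p": 0.49,       # GTX 1650 49% faster
          "Valorant 1080p": 0.80}  # GTX 1650 80% faster

for game, fps in gtx1650_fps.items():
    d = deltas[game]
    # GTX 1650 lead: divide out its advantage; MTT S80 lead: scale up by its advantage.
    s80 = fps / (1 + d) if d >= 0 else fps * (1 - d)
    print(f"{game}: GTX 1650 {fps:.1f} fps, MTT S80 ~{s80:.1f} fps")
```

By this estimate the MTT S80 lands around 48 fps in Final Fantasy XIV at 1080p and roughly 21 fps at 4K, which puts the "win" at 4K in perspective: neither card is delivering playable 4K frame rates in that title.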
The MTT S80 is another example of how a mediocre driver can hold back a good product. It happens even to the more prominent manufacturers, such as Intel and its Arc Alchemist graphics cards. The chipmaker constantly improves performance with every new driver update, with the latest one claiming up to a 119% uplift.
It's been a year since the MTT S80 hit the Chinese retail market. The 7nm gaming graphics card has gone through a lot over the last year. Moore Threads has released 12 driver updates, nine in the critical category. The software engineers at Moore Threads aren't miracle workers, but their efforts haven't been in vain. The improvement from the May driver (221.31) to the October driver (22.214.171.124) was up to 45% in some games.
Theoretically, the MTT S80 could compete with the GeForce RTX 3060. However, the Chinese graphics card still has a long way to go since it can't even consistently beat the GeForce GTX 1650, and there's a sizeable gap between the GeForce GTX 1650 and GeForce RTX 3060. The MTT S80 has recently dropped to $164, but it's still more expensive than the GeForce GTX 1650, which starts at $150 in the Chinese market. While we don't expect the GeForce GTX 1650 performance to increase anymore, the biggest issue with the MTT S80 is that it's unknown how much performance is left in the tank.
Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.
When you start from basically nothing, anything is a decent improvement. That's the feeling I get from these drivers. They're still sorely lacking compared to any modern AMD or Nvidia architecture in terms of raw silicon and hardware-to-software performance.
They are using a more advanced TSMC node, with far more cores (even if they're not one to one), significantly higher memory performance in all metrics, higher theoretical FLOPS, while using more than 3 times the power. How can it not surpass the 1650? Are drivers really the main culprit, or is some part of the hardware design really inefficient? I'm kind of confused here. They're based roughly off of the PowerVR architecture, so they're not even starting from scratch. Is it because it's built from a PowerVR arch?
Indigenous Chinese CPUs/memory seem to be doing a lot better than what their GPUs are capable of. Huawei's and YMTC's latest chips are certainly more competitive with their global counterparts than this GPU. What's different in this case?
So, Moore Threads does use IMG IP.
Their driver reports as the IMG proprietary driver in Linux:
```
driverID           = DRIVER_ID_IMAGINATION_PROPRIETARY
driverName         = PowerVR GEN1 Vulkan Driver
driverInfo         = 1.0@0
conformanceVersion = 126.96.36.199
```
So that is where that comes from, because if it were their own IP, there would be no reason to use IMG drivers.
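Output like the dump above (from a tool such as `vulkaninfo`) is simple to pick apart programmatically. A minimal sketch parsing the key/value pairs; the field names mirror Vulkan's `VkPhysicalDeviceDriverProperties`:

```python
# Parse driver fields out of a vulkaninfo-style text dump, like the one quoted above.
dump = """\
driverID = DRIVER_ID_IMAGINATION_PROPRIETARY
driverName = PowerVR GEN1 Vulkan Driver
driverInfo = 1.0@0
"""

props = {}
for line in dump.splitlines():
    key, _, value = line.partition("=")
    props[key.strip()] = value.strip()

# A vendor shipping its own driver stack would report a different driverID here.
print(props["driverID"])   # DRIVER_ID_IMAGINATION_PROPRIETARY
print(props["driverName"])  # PowerVR GEN1 Vulkan Driver
```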
I suspect that it uses IMG B Series IP, specifically the BXT-32-1024 MC4 configuration.
And before you ask, yes I do own one of these cards and it has been nothing but a pain to get working so that we can look at the microarchitecture and compare it to Intel, AMD, and Nvidia.
TCA_ChinChin said:
> They are using a more advanced TSMC node, with far more cores (even if they're not one to one), significantly higher memory performance in all metrics, higher theoretical FLOPS, while using more than 3 times the power. How can it not surpass the 1650? Are drivers really the main culprit or is some part of the hardware design really inefficient? I'm kind of confused here.

I suspect the limiting factor might be less the drivers than the hardware, at this point. You really shouldn't take the hardware's raw specs at face value, because there can be chip bugs that tank performance and are hard to successfully circumvent.
TCA_ChinChin said:
> Indigenous Chinese CPUs/memory seem to be doing a lot better than what their GPUs are capable of. Huawei's and YMTC's latest chips are certainly more competitive with their global counterparts than this GPU. What's different in this case?

It's their first real generation of gaming GPUs, right? If you look at those other examples, none of them are 1st gen. I'll bet the next Moore Threads GPUs will be improved by leaps and bounds from this learning experience. Their biggest failing was probably overconfidence. Experienced engineers know that it takes time to learn important lessons and iteratively refine designs and architectures.
Yes, but does it support mesh shaders? Can it run Alan Wake 2? That's the real question. 🤠😎🤓🧐🤔😊
ManDaddio said:
> Yes, but does it support mesh shaders?

Maybe the hardware can. If anyone really cared, you'd want to look into whether/when Imagination's PowerVR GPUs added such support, then try to figure out how that generation of IP aligns with what's used in these GPUs.