China-Made Moore Threads GPU Can’t Match RTX 3060

Moore Threads
(Image credit: Moore Threads)

A review posted at EXPreview gives us our first look at Chinese GPU maker Moore Threads' new mid-range gaming competitor, the MTT S80 graphics card. This GPU is one of Moore Threads' most powerful graphics cards to date, packing a triple-fan cooler and theoretically competing with Nvidia's RTX 3060 and RTX 3060 Ti, some of the best graphics cards available even close to two years after they first launched.

For the uninitiated, Moore Threads is a Chinese GPU manufacturer that was established just two years ago, in 2020. The company has reportedly tapped some of the most experienced minds in the GPU industry, hiring experts and engineers from Nvidia, Microsoft, Intel, Arm, and others. Moore Threads' aim is to produce domestic (for China) GPU solutions completely independent from Western nations. These are supposed to be capable of 3D graphics, AI training, inference computing, and high-performance parallel computing, and will be used in China's consumer and government sectors.

The MTT S80 graphics card was developed with Moore Threads' 'Chunxiao' GPU architecture, which supports FP32, FP16, and INT8 (integer) precision compute, and is compatible with the company's MUSA computing platform. The architecture also employs a full video engine with H.264, H.265 (HEVC), and AV1 codec support, capable of handling video encoding and decoding at up to 8K.

The MTT S80 comes with a fully unlocked Chunxiao GPU, featuring 4096 MUSA cores and 128 tensor cores clocked at 1800 MHz. The memory subsystem uses 14 Gbps GDDR6 modules operating on a 256-bit wide bus, with a 16GB capacity. As far as specs go, it at least looks decent on paper.
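For context, those memory specs work out to a peak bandwidth of roughly 448GB/s (14 Gbps per pin across a 256-bit bus), which matches what Nvidia quotes for the RTX 3060 Ti. Here's a minimal sketch of that arithmetic; the helper name below is our own, not anything from Moore Threads' tooling.

```python
# Peak memory bandwidth from the cited specs:
# per-pin data rate x bus width / 8 bits per byte. A theoretical ceiling, not a measured figure.

def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

print(f"MTT S80 (14 Gbps GDDR6, 256-bit): {mem_bandwidth_gb_s(14, 256):.0f} GB/s")  # 448 GB/s
```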

The GPU operates with a target board power rating of 255W, powered by both a PCIe 5.0 x16 slot and a single 8-pin EPS12V power connector — yes, it's using a CPU EPS12V rather than an 8-pin PEG (PCI Express Graphics) connector. That's because the EPS12V can deliver up to 300W, and for users who lack a spare EPS12V connector, the card includes a dual 8-pin PEG to single 8-pin EPS12V adapter. And for the record, that's more power than even the RTX 3070 requires.

Display outputs consist of three DisplayPort 1.4a connectors and a single HDMI 2.1 port, the same as what you'll find on most Nvidia GeForce GPUs from the RTX 40- and 30-series families. The graphics card has a silver colored shroud, accented by matte black designs surrounding the right and left fans. The cooler uses a triple-fan design, with two larger outer fans flanking a smaller central fan. The card measures 286mm long and two slots wide.

Moore Threads MTT S80

(Image credit: EXPreview)

The RTX 3060 Beats the MTT S80 in Early Benchmarks

Earlier reports on the Chunxiao GPU suggest it can achieve FP32 performance similar to that of an RTX 3060 Ti. The 3060 Ti has a theoretical throughput of 16.2 teraflops, while the Chunxiao has a theoretical 14.7 TFLOPS. That's a bit lower than Nvidia's figure, but the S80 does have twice the VRAM capacity, so the hardware at least appears capable of competing with Nvidia's RTX 3060 Ti.
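Those theoretical figures follow from the usual peak-throughput formula: shader cores x boost clock x 2 FLOPs per clock (one fused multiply-add per core). The quick sketch below reproduces both numbers; the RTX 3060 Ti's 4864 cores and 1665 MHz boost clock are Nvidia's published specs rather than anything tested here.

```python
# Theoretical FP32 throughput = cores x clock x 2 FLOPs (one FMA per core per cycle).

def fp32_tflops(cores: int, clock_mhz: float) -> float:
    """Peak FP32 throughput in TFLOPS."""
    return cores * clock_mhz * 1e6 * 2 / 1e12

print(f"MTT S80:     {fp32_tflops(4096, 1800):.1f} TFLOPS")  # ~14.7
print(f"RTX 3060 Ti: {fp32_tflops(4864, 1665):.1f} TFLOPS")  # ~16.2
```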

Unfortunately, according to EXPreview, the MTT S80 suffers from very poor driver optimizations. While the MTT S80 is aimed at the RTX 3060 Ti in terms of raw compute performance, in gaming benchmarks Nvidia's RTX 3060 reportedly outpaces the MTT S80 by a significant margin. The problem with the testing is that no comparative charts pitting the MTT S80 against the RTX 3060 or other GPUs are provided for the games; only text descriptions of the actual gaming performance are given, while the charts cover an odd mix of titles.

EXPreview ran its benchmarks on an Intel test rig with a Core i7-12700K, Asus TUF B660M motherboard, RTX 3060 12GB Strix, 16GB of DDR4 memory, and an 850W PSU running Windows 10 21H2.

The best potential example of actual gaming performance was Unigine Valley, where the RTX 3060 12GB was anywhere between 2x and 7.6x faster than the MTT S80 in the DX9 and DX11 tests at 1080p and 4K resolutions. The MTT S80 averaged 26.1 FPS in the 4K DX9 test, while the RTX 3060 spat out a whopping 197.9 FPS. Note also that Unigine Valley is an aging benchmark, first released back in 2013.

3DMark06 — yes, another rather ancient application — showed similar results with the RTX 3060 being 2.5x faster than the MTT S80 on average, at 1080p and 4K resolutions.

The MTT S80 managed to come out on top in some of the synthetic tests, like PCIe bandwidth and pure fill rates. That's understandable on the PCIe link, as the S80's PCIe 5.0 x16 configuration should vastly outpace the RTX 3060's PCIe 4.0 x16 spec. In the OCL Bandwidth Test, the S80 averaged 28.7GB/s read and 42.8GB/s write speeds, while the RTX 3060 managed 18.3GB/s and 14.2GB/s, respectively. In 3DMark06's texturing tests, the MTT S80 hit 134.8 GTexels per second in the Single-Texturing Fill Rate test, and 168.5 GTexels/s in the Multi-Texturing test. The RTX 3060 was much slower in the Single-Texturing test, with 59.9 GTexels/s, but came back with 177.3 GTexels/s in the Multi-Texturing test.
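As a rough sanity check on that PCIe gap, the theoretical per-direction ceiling of an x16 link can be worked out from the per-lane transfer rate and the 128b/130b encoding both generations use. The measured OpenCL numbers above sit well below these ceilings, which is normal for host-to-device copies; the sketch below is only illustrative.

```python
# Theoretical per-direction bandwidth of a PCIe x16 link:
# transfer rate (GT/s per lane) x 16 lanes x 128/130 encoding overhead / 8 bits per byte.

def pcie_x16_gb_s(transfer_rate_gt_s: float, lanes: int = 16) -> float:
    """Usable per-direction bandwidth in GB/s."""
    return transfer_rate_gt_s * lanes * (128 / 130) / 8

print(f"PCIe 4.0 x16 (RTX 3060): {pcie_x16_gb_s(16):.1f} GB/s")  # ~31.5
print(f"PCIe 5.0 x16 (MTT S80):  {pcie_x16_gb_s(32):.1f} GB/s")  # ~63.0
```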

Besides the synthetic and graphics tests, EXPreview did run some actual games: League of Legends, Cross Fire, QQ Speed, QQ Dance, Fantasy Westward Journey, The Great Heroes, Audition, Running Kart, Diablo III, Ultra Street Fighter IV, Siege, My World, and Need for Speed: Hot Pursuit III. We'll forgive you if you don't recognize some of those, as we don't either. Some are Chinese variants of popular games like Counter-Strike and Final Fantasy.

Performance in a vacuum of course doesn't tell us much. 149 fps at 1080p in League of Legends with max settings, and 128 fps at 4K? Great! How did the RTX 3060 perform? We don't know. Other performance results are equally uninformative, like hitting the 40 FPS framerate limit in QQ Speed, a rather simplistic-looking game.

But the review does mention driver problems, texture corruption, and other issues. It concludes with, "Compatibility needs to be improved and the future can be expected," according to Google Translate. That's hardly surprising for a new GPU company. Hopefully future 'reviews' of the MTT S80 will actually show more real-world comparisons with modern games rather than old tests that have little meaning in today's market.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • BillyBuerger
    yes, it's using a CPU EPS12V rather than an 8-pin PEG (PCI Express Graphics) connector
    Is there a technical reason why GPUs use their own PCIe power cables (6-pin/8-pin) and not EPS power cables? I always found it stupid that GPUs use essentially the same connector (aside from the bit of plastic between the two pins) but different pinouts. A pair of EPS would give you 600W instead of having to do 4x 8-pin PCIe power.
  • JarredWaltonGPU
    BillyBuerger said:
    Is there a technical reason why GPUs use their own PCIe power cables (6-pin/8-pin) and not EPS power cables? I always found it stupid that GPUs use essentially the same connector (aside from the bit of plastic between the two pins) but different pinouts. A pair of EPS would give you 600W instead of having to do 4x 8-pin PCIe power.
    Best guess: Intel and others define the ATX specs, including EPS12V. PCI-SIG is in charge of PCIe-related things like the 6-pin and 8-pin connectors. I'm guessing back when EPS12V was created and PCIe was doing 8-pin, the idea of up to 300W for a GPU seemed absurd and so they opted for an alternative standard where dual 8-pin connectors could be used on a single harness. Note that combined, that means most PSUs can push 300W through the dual-connectors on a single harness approach.

    Perhaps just as important, it also means that PCIe can have PSU requirements where only 150W is needed on an 8-pin connector. Imagine if every PSU that supported even one 8-pin PEG connector needed to deliver 300W on that connector. It would mean all PSUs would have to be over 700W (one EPS12V for the CPU, one for the GPU, plus 24-pin and other connectors). Today most decent desktop PCs do have 700W and higher PSUs, but the first PCIe 8-pin connectors came out maybe ten years ago. Back then, 300W under load for an entire PC was pretty common.
  • InvalidError
    BillyBuerger said:
    Is there a technical reason why GPUs use their own PCIe power cables (6-pin/8-pin) and not EPS power cables? I always found it stupid that GPUs use essentially the same connector (aside from the bit of plastic between the two pins) but different pinouts. A pair of EPS would give you 600W instead of having to do 4x 8-pin PCIe power.
    I agree, the whole thing seems pretty dumb. As Jarred mentioned though, back when the PCIe aux connector was introduced, GPUs were barely over the 75W mark and often used floppy or HDD power connectors for extra power, until they switched to the PCIe plug and adapters during the transition to native PCIe cables. The X700 card in my P4 is one of those that used a floppy connector for extra power.
  • Eximo
    I still have an old Thermaltake PSU that proudly advertises the NEW 6-pin PCIe connector. Which I did actually need at the time. GPU was just shy of 150W.

    So many scary dual 4-pin to 6-pin PCIe adapters... I have piles of them, some still in their little bags.
  • BillyBuerger
    Thanks, good thoughts there. Compatibility is the reason for a lot of awkward workarounds we have to deal with. It would have been nice if they had designed it to be compatible with CPU power, though. Extending GPU power would then have been the same as extending CPU power, without having to keep up two separate standards that are very similar but completely incompatible. The 6-pin GPU power could have been a 4+2-pin connector that could also have been used for CPUs if you need more CPU power instead of GPU power.

    It also could have made PSUs easier: instead of every PSU brand having its own take on how its connectors work as they try to cover either CPU or GPU power, they all could have been 1-to-1 connectors, possibly making things more compatible than the mess that modular PSUs are now.
  • qwertymac93
    BillyBuerger said:
    Is there a technical reason why GPUs use their own PCIe power cables (6-pin/8-pin) and not EPS power cables? I always found it stupid that GPUs use essentially the same connector (aside from the bit of plastic between the two pins) but different pinouts. A pair of EPS would give you 600W instead of having to do 4x 8-pin PCIe power.

    There are several reasons, one of which (and possibly the biggest) being backwards compatibility. At the time the P4 connector (the progenitor of the modern 8-pin CPU connector) was introduced, graphics cards used 4-pin disk drive power connectors, either the larger "Molex" or smaller "Berg" power connectors. There was a clear and obvious distinction between the CPU and peripheral power connectors. When PCI-E was later introduced, a new power connector was made that could deliver more power, with a positive latching mechanism and more consistent insertion/removal. The existing P4 connector couldn't be used because that was for the CPU and the CPU only. Back then, the CPU would often have its own 12v rail on the power supply. A similarly shaped 4-pin connector would've only led to confusion.

    Instead, a new 6-pin connector was made with differently shaped pins so it couldn't (easily...) be inserted into anything but a PCI-E device. It had more 12v conductors (3 instead of 2), so in theory it was able to handle more power, too. Eventually, CPU and GPU power requirements kept growing; Intel decided to make a new 8-pin connector to replace P4 while offering backwards compatibility. Graphics cards started placing multiple 6-pin connectors on cards, then PCI-SIG released a new 8-pin connector that also provided backwards compatibility with its 6-pin predecessor.

    Long story short, while the two 8-pin connectors might look similar today, they have a different family history that keeps them incompatible.
  • umeng2002_2
    Clearly, nVidia is the only company to properly invest in software engineers.
  • digitalgriffin
    The end goal is not consumer apps. It's AI that is sanction-proof. So this is just a stepping stone.
  • Elusive Ruse
    The founder of the company was Jensen's man in China, and two years after founding the company they already have a product out! Another example of the Chinese "appropriating" tech secrets of the Western companies which operated in China?
  • InvalidError
    Elusive Ruse said:
    The founder of the company was Jensen's man in China, and two years after founding the company they already have a product out! Another example of the Chinese "appropriating" tech secrets of the Western companies which operated in China?
    Were it not for the patent insanity surrounding everything, I bet there would be quite a few companies looking to get into the GPU business, where AMD and Nvidia are getting 60+% gross margins. The fundamental math of 3D rendering isn't rocket science; piecing together something capable of rendering is something even undergrads have done. Getting the performance and compatibility up to market expectations is a whole other ball game though.

    The Chinese GPUs may seem impressive for being put together so quickly, though the review also mentions that game compatibility is limited to about a dozen games; broad and mostly glitch-free compatibility may take a while, if it is ever achieved.