First China-Designed Gaming GPU Besieged by Bugs

MTT S80 (Image credit: Moore Threads)

MTT S80 benchmarks have revealed that China's homemade graphics card cannot compete with today's best graphics cards. Not only is the performance lacking, but the PCIe 5.0 gaming graphics card also doesn't play nice with many games, some dating as far back as 2013.

The MTT S80 is the first Chinese homegrown graphics card to support the DirectX API, representing a giant leap for domestic gaming. When Moore Threads announced the MTT S80, the chipmaker claimed compatibility with up to 60 games. As it turned out, the catch is that only 11 of those 60 titles are on the official support list; the MTT S80 can run the remaining titles, but performance is an issue. The MTT S80 supports both DirectX 9 and DirectX 11; however, there's still a long road ahead, longer for the latter than the former.

PC enthusiast Löschzwerg, who kindly shared the MTT S80 benchmarks, recounted the struggles the graphics card went through with some of the titles. According to the reviewer, older titles, such as Dota 2 and Tomb Raider, were non-functional: the former crashed instantly when a match started, and the latter refused to launch at all.

Counter-Strike: Global Offensive (CS: GO) was playable on the MTT S80, though Löschzwerg noted that the graphics driver hampered the card's performance, as GPU utilization sometimes didn't even hit 50%. The China-made graphics card's performance in The Elder Scrolls V: Skyrim was mediocre at both the low and very high presets; the former fared a bit better, but it wasn't anything to brag about.

Löschzwerg highlighted that the geometry performance of the MTT S80 is abysmal. According to his observations, frame rates dipped substantially on CS: GO's Dust II map in scenes with lots of visible geometry. Tessellation was another of the MTT S80's flaws. Due to the immaturity of the driver (version 200.2), tessellation does more harm than good: benchmarks and games crashed when tessellation was active, so the only stopgap solution was to disable it altogether.

Driver performance and compatibility issues get the best of even the biggest chipmakers, such as Intel. Unfortunately, the first Arc driver made Alchemist graphics cards underperform on older APIs, such as DirectX 9. Fast-forward a few months, and DirectX 9 performance has improved by as much as 43% since the initial launch. It just goes to show that good hardware isn't enough without an equally good driver.

In comparison, Moore Threads is a rookie and doesn't have the same resources as Nvidia, AMD, or Intel. As a result, it'll probably take the Chinese chipmaker a long time to iron out the bugs and have a fully functional driver that doesn't hold the hardware back.

Zhiye Liu
News Editor and Memory Reviewer

Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • bit_user
    I still think they're doing very well, for such a young company. Only founded in 2020, right? Graphics are the most complex APIs I've ever seen. The effort to stand up a new product in this space is pretty monumental.

    This generation of hardware should probably be seen as a development vehicle, in order for them to flesh out and tune their drivers. As long as their investors are willing to keep backing them, I think they'll eventually become competitive with GPUs made on similar manufacturing tech.

    BTW, it was previously mentioned that they support CUDA? I'd be very curious to know how their performance is looking on some common compute benchmarks. That might actually be where most of their focus has been.
    Reply
  • Paul Basso
    No less than impressive for a new company in the market. Hope we can have another GPU manufacturer well established soon; GPU prices are crazy right now.
    Reply
  • Glock24
    Not much worse than Intel and their early GPUs.
    Reply
  • samopa
    I don't get it. When Intel released Arc Alchemist, almost everyone bashed it for underperforming, but with this S80 everyone (at least the first two commentators in this thread) is praising it.

    I'm not American or Chinese, and I don't have any prejudice or political favor towards the USA or China.
    To me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, so both of them should receive equal treatment.
    Reply
  • tennis2
    samopa said:
    I'm not American or Chinese, and I don't have any prejudice or political favor towards the USA or China.
    To me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, so both of them should receive equal treatment.
    Intel has had graphics on their CPUs for a long time. I also assume Intel has 10x the R&D budget to bring a dGPU to market. So no, they shouldn't be treated the same.
    Reply
  • samopa
    tennis2 said:
    Intel has had graphics on their CPUs for a long time. I also assume Intel has 10x the R&D budget to bring a dGPU to market. So no, they shouldn't be treated the same.

    Those are integrated GPUs, not discrete GPUs, a totally different animal. If anything, you should praise Intel for their willingness to spend much more money on the unknown journey of making a discrete GPU.
    It's a lot easier to convince others to invest less money than to convince them to invest much more into the unknown.
    Reply
  • bit_user
    samopa said:
    I don't get it. When Intel released Arc Alchemist, almost everyone bashed it for underperforming, but with this S80 everyone (at least the first two commentators in this thread) is praising it.
    Because Intel has been working on these dGPUs since at least 2018, if not before. Even the current ARC GPUs aren't a clean-sheet design - they build heavily on Intel's integrated GPUs, which they've now been making for 15-20 years. In particular, their software stack heavily leverages prior work, and their team is very experienced in developing GPU drivers and tools.

    samopa said:
    To me, Intel and MTT are both newbies in DirectX 9 (or above) capable discrete GPUs, so both of them should receive equal treatment.
    No, the comparison almost couldn't be more stark, in terms of the organizations and their respective starting points.

    samopa said:
    Those are integrated GPUs, not discrete GPUs, a totally different animal.
    Not nearly as different as you think, especially from a software point of view.
    Reply
  • hannibal
    wurkfur said:
    Anyone welcoming this as competition to the GPU market has a screw loose. Does anyone want to willingly install drivers on their personal computer that could have a possible backdoor in the code for the CCP? I have tremendous respect for the Chinese people, but their government? Not so much. Even if they fix their drivers, I'm going to take a hard pass.

    Most people in this place already use phones made in China…
    Nothing wrong with being careful, but the truth is that everything leaks already.
    Reply
  • samopa said:
    Those are integrated GPUs, not discrete GPUs, a totally different animal
    What makes them so different to render over a decade of experience in iGPUs meaningless?

    Does an iGPU not have cache for vertices and indices? Does an iGPU not handle the same textures a dGPU does? Does it apply them differently, perhaps? Does an iGPU not conform to the same DX9/10/11/12/OpenGL/Vulkan/Metal APIs the same way a dGPU does? Does an iGPU not perform the very same matrix multiplications that a dGPU would? Do iGPUs not run the same shaders as dGPUs? Is an iGPU not supposed to support swizzling, like a dGPU would? Does an iGPU not benefit from culling? Does an iGPU not draw lines and triangles the same way a dGPU does?

    What makes you so sure that Intel's experience in iGPUs couldn't possibly have benefited the development of their dGPU in any way whatsoever?
    Reply
  • sivaseemakurthi
    bit_user said:
    I still think they're doing very well, for such a young company. Only founded in 2020, right? Graphics are the most complex APIs I've ever seen. The effort to stand up a new product in this space is pretty monumental.

    This generation of hardware should probably be seen as a development vehicle, in order for them to flesh out and tune their drivers. As long as their investors are willing to keep backing them, I think they'll eventually become competitive with GPUs made on similar manufacturing tech.

    BTW, it was previously mentioned that they support CUDA? I'd be very curious to know how their performance is looking on some common compute benchmarks. That might actually be where most of their focus has been.
    They seem to be based on Imagination graphics; they must have licensed the IP. Nobody can develop GPUs from scratch in such a short span.
    Reply
  • bit_user
    sivaseemakurthi said:
    They seem to be based on Imagination graphics;
    I know some Chinese GPUs are, but I've yet to see any indication that Moore Threads' is. Can you cite any sources or specific reasons to think so?

    I actually think the current state of their drivers & overall performance is the best argument that the hardware is original. I know Imagination doesn't have the best reputation for driver quality, but I think they'd be working a lot better than we've so far heard, not to mention delivering higher benchmark scores.

    sivaseemakurthi said:
    they must have licensed the IP. Nobody can develop GPUs from scratch in such a short span.
    I'm not sure about that. AMD and Nvidia have had design centers in China for about 15 years now. There's enough experience with hardware design, and probably even some GPU software stack development, in country.

    That said, it's possible they've used parts of other IP floating around, like that of S3 or Vivante.
    Reply