MTT S80 benchmarks have revealed that China's homegrown graphics card cannot compete with today's best graphics cards. Not only is performance lacking, but the PCIe 5.0 gaming graphics card also doesn't play nice with many games, some dating as far back as 2013.
The MTT S80 is the first Chinese homegrown graphics card to support the DirectX API, representing a giant leap for domestic gaming. When Moore Threads announced the MTT S80, the chipmaker claimed compatibility with up to 60 games. The catch, as we would later find out, is that only 11 of the 60 titles are on the official support list; the MTT S80 can run the remaining titles, but performance is an issue. The MTT S80 supports both DirectX 9 and DirectX 11; however, there's still a long road ahead, and a longer one for the latter than the former.
PC enthusiast Löschzwerg, who kindly shared the MTT S80 benchmarks, recounted the struggles the graphics card went through with some of the titles. According to the reviewer, older titles, such as Dota 2 and Tomb Raider, were non-functional: the former crashed instantly when a match started, and the latter refused to launch at all.
Counter-Strike: Global Offensive (CS: GO) was playable on the MTT S80, though Löschzwerg noted that the graphics driver hampered the card's performance: GPU utilization sometimes didn't even hit 50%. The China-made graphics card's performance in The Elder Scrolls V: Skyrim was mediocre on both the low and very high presets; the former fared a bit better, but it wasn't anything to brag about.
Löschzwerg highlighted that the geometry performance of the MTT S80 is abysmal. According to his observations, frame rates dipped substantially on CS: GO's Dust II map in scenes with abundant visible geometry. Tessellation was another of the MTT S80's flaws. Due to the immaturity of the early driver (version 200.2), tessellation does more harm than good: benchmarks and games crashed when tessellation was active, so the only stopgap solution was to disable it altogether.
Driver performance and compatibility problems trip up even the biggest chipmakers, such as Intel. Unfortunately, the first Arc drivers made Alchemist graphics cards underperform with older APIs, such as DirectX 9. Fast-forward a few months, and DirectX 9 performance has improved by as much as 43% since the initial launch. It just goes to show that a good product isn't enough without an equally good driver.
In comparison, Moore Threads is a rookie and doesn't have the same resources as Nvidia, AMD, or Intel. As a result, it'll probably take the Chinese chipmaker a long time to iron out the bugs and have a fully functional driver that doesn't hold the hardware back.
This generation of hardware should probably be seen as a development vehicle that lets them flesh out and tune their drivers. As long as their investors are willing to keep backing them, I think they'll eventually become competitive with GPUs made on similar manufacturing tech.
BTW, it was previously mentioned that they support CUDA? I'd be very curious to know how their performance is looking on some common compute benchmarks. That might actually be where most of their focus has been.
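To make the question concrete, here's a minimal, vendor-neutral sketch of the kind of compute benchmark that could answer it: time a dense matrix multiply and report throughput. Pure Python on the CPU is used here only as a portable stand-in (it will be orders of magnitude slower than any GPU); on a CUDA-compatible stack the same timing pattern would wrap GPU GEMM calls instead. The function name and defaults are illustrative, not from any real benchmark suite.

```python
import random
import time

def gemm_gflops(n=128, repeats=3):
    """Time a naive n-by-n matrix multiply and report GFLOP/s.

    This illustrates the measurement shape of a compute benchmark,
    not a realistic absolute number.
    """
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    # Transpose b so the inner loop walks contiguous rows.
    bt = list(map(list, zip(*b)))
    start = time.perf_counter()
    for _ in range(repeats):
        c = [[sum(x * y for x, y in zip(row, col)) for col in bt]
             for row in a]
    elapsed = time.perf_counter() - start
    flops = 2 * n ** 3 * repeats  # one multiply + one add per inner step
    return flops / elapsed / 1e9

print(f"{gemm_gflops():.3f} GFLOP/s")
```

GEMM throughput is the classic first datapoint, but a serious look would also cover memory bandwidth and reductions, since drivers often optimize those paths separately.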
I'm not American or Chinese and I don't have any prejudice or political favor towards USA or China.
To me, Intel and MTT are both newbies at building DirectX 9 (or above) capable discrete GPUs, so both should receive equal treatment.
Those are integrated GPUs, not discrete GPUs; a totally different animal. If anything, you should praise Intel for its willingness to spend much more money on the unknown journey of making a discrete GPU.
It's a lot easier to convince others to invest a small amount than to convince them to spend much more on an unknown venture.
No, the comparison almost couldn't be more stark, in terms of the organizations and their respective starting points.
Not nearly as different as you think, especially from a software point of view.
Most people in this place already use phones made in China…
Nothing wrong with being careful, but the truth is that everything leaks already.
Does an iGPU not have cache for vertices and indices? Does an iGPU not handle the same textures a dGPU does? Does it apply them differently, perhaps? Does an iGPU not conform to the same DX9/10/11/12/OpenGL/Vulkan/Metal APIs the same way a dGPU does? Does an iGPU not perform the very same matrix multiplications that a dGPU would? Do iGPUs not run the same shaders as dGPUs? Is an iGPU not supposed to support swizzling, like a dGPU would? Does an iGPU not benefit from culling? Does an iGPU not draw lines and triangles the same way a dGPU does?
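To pick one item from that list: backface culling is the same arithmetic whether the GPU sits on the CPU die or on a discrete card. A minimal sketch of the signed-area winding test, with illustrative names that don't come from any vendor's API:

```python
def is_front_facing(v0, v1, v2):
    """Signed-area test on 2D screen-space vertices.

    A positive signed area means counter-clockwise winding, which is
    conventionally treated as front-facing.
    """
    (x0, y0), (x1, y1), (x2, y2) = v0, v1, v2
    signed_area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return signed_area > 0

# A culling pass simply drops triangles that fail the test.
triangles = [((0, 0), (1, 0), (0, 1)),   # counter-clockwise: kept
             ((0, 0), (0, 1), (1, 0))]   # clockwise: culled
visible = [t for t in triangles if is_front_facing(*t)]
print(len(visible))  # 1 of the 2 triangles survives
```

The hardware implementations differ in throughput, but the operation itself, and the driver code that sets it up, is shared between iGPUs and dGPUs.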
What makes you so sure that Intel's experience in iGPUs couldn't possibly have benefited the development of their dGPU in any way whatsoever?
I actually think the current state of their drivers & overall performance is the best argument that the hardware is original. I know Imagination doesn't have the best reputation for driver quality, but I think they'd be working a lot better than we've so far heard, not to mention delivering higher benchmark scores.
I'm not sure about that. AMD and Nvidia have had design centers in China for about 15 years now. There's plenty of in-country experience with hardware design and probably even some GPU software-stack development.
That said, it's possible they've used parts of other IP floating around, like that of S3 or Vivante.