Here's What Gaming on Centaur's Forgotten x86 CPU Looks Like

Centaur CNS (Image credit: Albert Thomas/Twitter)

Centaur may have disappeared for good, but remnants of the company's products remain if you look hard enough. Computer enthusiast Albert Thomas (aka Bizude) got his hands on Centaur's CHA chip and put it through its paces in various modern triple-A titles.

The CHA processor has a die size of 194 mm² and features eight CNS x86 cores without simultaneous multithreading (SMT). Depending on the bin, the clock speed varies between 2.2 GHz and 2.5 GHz. The 16nm chip has 16MB of L3 cache and supports AVX-512 instructions and quad-channel DDR4 memory. It even has an onboard AI co-processor, which Centaur named the NCore. The company envisioned the CHA processor for the server market, which explains the inclusion of the NCore machine-learning accelerator.

When it introduced the design in 2019, Centaur claimed that its CNS cores offered a similar level of performance to Intel's Haswell processors. Thomas overclocked his CHA sample to 2.5 GHz and obtained Cinebench R23 single- and multi-threaded scores of 552 and 4,141 points, respectively. According to AnandTech's benchmark database, the CHA chip's single-threaded performance is closest to a dual-core Pentium G3220T (580 points), whereas the multi-threaded performance is in the same ballpark as the quad-core Xeon E3-1231 (4,409 points). The reviewer paired the Centaur CHA with Nvidia's GeForce RTX 3060 Ti graphics card and 16GB (2x8GB) DDR4-3200 memory for his gaming tests.

Centaur CHA Gaming Benchmarks

| Game           | Framerates | Quality | Resolution  |
|----------------|------------|---------|-------------|
| Cyberpunk 2077 | 45 - 65    | Ultra   | 2560 x 1440 |
| Doom (2016)    | 80 - 187   | ?       | 2560 x 1440 |
| Crysis         | 35 - 101   | ?       | ?           |

In Cyberpunk 2077, with DLSS set to Quality, the Centaur chip delivered framerates between 45 FPS and 65 FPS. The results were pretty remarkable, considering that Thomas did his testing at 1440p (2560 x 1440) with the image quality preset on Ultra.

The CHA processor had no problems keeping framerates above 60 FPS in Doom (2016). The enthusiast didn't specify the image quality preset, but even at 1440p, the framerates varied between 80 FPS and 187 FPS, depending on the scene's complexity.

Crysis, a title that once represented the epitome of PC gaming, couldn't faze the Centaur CHA part, either. Although we're unsure of the resolution or quality setting, the octa-core chip pumped out framerates ranging from 35 FPS to 101 FPS.

You can argue that the GeForce RTX 3060 Ti did much of the heavy lifting. Still, despite its low clock speeds, the Centaur CHA processor kept up with one of the best graphics cards on the market without difficulty.

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • jkflipflop98
  • slash3
    jkflipflop98 said:

    CyrixInstead (Remember the 6x86?)
    TransmetaCPU (Remember the Crusoe?)
    NexGenDriven (Remember the Nx586?)
    RiseRiseRise (Remember the... uh... mP6? Ok, probably not)

    AMD used to hide fun stuff in their CPUID registers. Early K5 era chips had AMDisbetter! and they snuck this one in during the Piledriver generation.
    "Specific to AMD K7 and K8 CPUs, this returns the string "IT'S HAMMER TIME" in EAX, EBX, ECX and EDX, a reference to the MC Hammer song U Can't Touch This. "

  • abufrejoval
    Please, why do you keep beating the Centaur for being bad at a job it wasn't designed to do?

    The x86 cores on this chip only serve one function: feed the inference monster!

    The neural processing engine you mention as if it were a minor secondary trait of this SoC is actually the main reason for its existence.

    You're putting a light tractor designed to feed and clean a stable full of cows on a race track and have a laugh at how badly it performs, when it was optimized to keep the cows happy with minimal trouble.

    The Centaur design aimed to optimize production cost by using an older process, and to minimize energy cost for inference by using a highly optimized neural accelerator. The target clients, I believe, were Chinese Internet giants, which have large-scale demand for inference at the cloud's outer edge.

    So why did it fail? That's a question I've asked myself for years now, and that's where you could have contributed something valuable as a tech reporter, instead of just having fun at beating a tractor failing at being a race car.

    These are my guesses:
    One hit wonder: The Centaur might very well have been an ideal solution for a given point in time. But these days nobody adopts anything significantly new unless the vendor can demonstrate how several generations will continue to provide sufficient value to justify investing a user's engineering resources.

    Opposing trends: Part of the edge neural inference processing is going into the handsets, where the web giants have to pay neither the hardware investments nor the power consumption, both of which are actually paid by "the product" (aka the consumer). And the other part is now increasingly handled by "ordinary" CPUs, which are being augmented with low-precision, inference-optimized vector extensions that achieve a similar energy efficiency to the Centaur neural accelerator.