Lenovo unveils compact AI workstation equipped with Nvidia GB10 and 128 GB of system memory
This is Lenovo’s take on Nvidia’s Project Digits.

Lenovo just announced an AI mini supercomputer designed to work straight out of the box. The ThinkStation PGX, which Lenovo calls “a compact, personal AI developer workstation”, is powered by an Nvidia GB10 Grace Blackwell Superchip and has 128 GB of coherent unified system memory.
This delivers up to 1 petaFLOP (1,000 TOPS) of AI performance, allowing users to work on models with up to 200 billion parameters. If you need more headroom, Lenovo lets you link two PGX units together, raising the supported model size to 405 billion parameters.
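To put those numbers in context, here is a rough back-of-the-envelope sketch (our own arithmetic, not Lenovo’s sizing math) showing why 128 GB of unified memory lines up with a roughly 200-billion-parameter model at 4-bit precision, and why two linked units stretch to 405 billion:

```python
# Rough weight-memory arithmetic (an assumption-laden sketch, not Lenovo's math):
# at 4-bit precision each parameter takes 0.5 bytes, so a 200B-parameter model
# needs about 100 GB for its weights, leaving headroom within 128 GB for the
# KV cache and activations; 405B parameters (~203 GB) needs two linked units.

def weight_footprint_gb(params_billions: float, bits_per_param: float = 4) -> float:
    """Approximate weight memory in GB for a model at a given precision."""
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9

print(weight_footprint_gb(200))  # ~100.0 GB -> fits in one 128 GB PGX
print(weight_footprint_gb(405))  # ~202.5 GB -> needs two linked units (256 GB total)
```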
The mini workstation is powered by the Nvidia DGX operating system and comes with the standard Nvidia software stack for AI development. Lenovo also added common tools like PyTorch and Jupyter, allowing developers to work locally instead of relying on on-prem clusters or cloud computing environments. The company says that it will launch the PGX in 3Q25, but we don’t have pricing details for it yet.
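Since the machine is pitched as working straight out of the box with PyTorch preinstalled, the first step for most developers will be confirming that the GPU is visible and can run work locally. A minimal sanity check might look like the sketch below (generic PyTorch code; the exact device name and driver setup on the PGX are assumptions):

```python
# Quick local GPU sanity check with PyTorch (generic; not PGX-specific code).
import torch

print(torch.__version__)
print(torch.cuda.is_available())            # True once the Blackwell GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # reported device name
    x = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    y = x @ x.T                             # run a matmul locally on the GPU
    print(y.shape)                          # torch.Size([4096, 4096])
```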
The ThinkStation PGX appears to be Lenovo’s take on Nvidia’s Project Digits. The AI chip giant unveiled that desktop AI supercomputer at CES 2025 with a price of around $3,000 per unit, so we expect the PGX to land at roughly the same price.
This isn’t the first third-party AI mini supercomputer to hit the market. Asus announced the Ascent GX10 at GTC 2025 in March, while MSI has already teased its own version, which it will reveal at Computex 2025. These units got some enthusiasts excited, but others were more skeptical. Tiny Corp, the startup behind the TinyBox AI accelerator, said that users should “just buy a gaming PC” instead of dropping that amount of cash on a dedicated AI PC.
According to Tiny Corp, the mini AI PC’s 1 PFLOP figure is pegged at FP4 precision, which it argues is practically unusable. By that math, the PGX (and every other Project Digits-based system) would deliver only around 500 TFLOPS at FP8.
By comparison, tinybox green, which is now powered by four RTX 5090 GPUs, delivers 1,492 FP16 TFLOPS. That converts to 2,992 FP8 TFLOPS, or around 3 PFLOPS at FP8, making it roughly six times more powerful than Nvidia’s Project Digits. However, the tinybox green starts at $29,000, far above the roughly $3,000 asking price of these compact AI workstations.
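The precision math behind that comparison is simple enough to check yourself. The sketch below uses the figures quoted above and assumes the usual rule of thumb that halving precision roughly doubles peak throughput (real workloads rarely hit these peak rates):

```python
# Peak-throughput comparison using the quoted figures (a rough sketch; it assumes
# halving precision ~doubles peak throughput, which real kernels rarely reach).
pgx_fp4_tflops = 1000                       # 1 PFLOP at FP4, as quoted
pgx_fp8_tflops = pgx_fp4_tflops / 2         # ~500 TFLOPS at FP8

tinybox_fp8_tflops = 2992                   # Tiny Corp's figure for four RTX 5090s

print(tinybox_fp8_tflops / pgx_fp8_tflops)  # ~5.98, i.e. roughly six times faster
```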

Jowi Morales is a tech enthusiast with years of experience in the industry. He’s been writing for several tech publications since 2021, covering tech hardware and consumer electronics.
thaddeusk: If you’re using a 4-bit LLM quantized with AWQ, it’s actually very usable. I’ve even used a 4-bit Flux 1.D model that was quantized with the new SVDQuant, and it’s pretty much indistinguishable from the FP8 or FP16 versions.
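For readers wondering what the commenter means in practice, running a 4-bit AWQ-quantized LLM locally can be as simple as the sketch below (a generic Hugging Face Transformers example that assumes the autoawq backend is installed; the checkpoint name is just a publicly available example, not something tied to the PGX):

```python
# Loading and running a 4-bit AWQ-quantized LLM (generic sketch; requires
# `pip install transformers autoawq` and a CUDA GPU; the model ID is an
# example public AWQ checkpoint, not a PGX-specific one).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"   # example AWQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain FP4 quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```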